Determination of the chemical potential using energy-biased sampling R.  Delgado-Buscalioni r.delgado-buscalioni@ucl.ac.uk Depto. Ciencias y Técnicas Fisicoquímicas, Facultad de Ciencias, UNED, Paseo Senda del Rey 9, Madrid 28040, Spain.    G. De Fabritiis g.defabritiis@ucl.ac.uk Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, WC1H 0AJ London, U.K.    P. V. Coveney p.v.coveney@ucl.ac.uk Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, WC1H 0AJ London, U.K. (November 20, 2020) Abstract An energy-biased method to evaluate ensemble averages requiring test-particle insertion is presented. The method is based on biasing the sampling within the subdomains of the test-particle configurational space with energies smaller than a freely assigned value. These energy-wells are located via unbiased random insertion over the whole configurational space and are sampled using the so-called Hit&Run algorithm, which uniformly samples compact regions of any shape immersed in a space of arbitrary dimensions. Because the bias is defined in terms of the energy landscape, it can be exactly corrected to obtain the unbiased distribution. The test-particle energy distribution is then combined with the Bennett relation for the evaluation of the chemical potential. We apply this protocol to a system with a relatively small probability of low-energy test-particle insertion, liquid argon at high density and low temperature, and show that the energy-biased Bennett method is around five times more efficient than the standard Bennett method. A similar performance gain is observed in the reconstruction of the energy distribution. I Introduction The chemical potential is a central quantity underpinning many physical and chemical processes, such as phase equilibria, osmosis, thermodynamic stability, binding affinity and so on Lu et al. (2003). However, its evaluation by computer simulation is more complicated and time-consuming than for other intensive thermodynamic quantities, such as the pressure $P$ or temperature $T$. While $P$ and $T$ can be evaluated from averages over mechanical properties of molecules (forces, velocities and positions), the chemical potential is a thermal average and therefore requires sampling the phase space of the system. Indeed, computing the chemical potential is a special case of the more general problem of computing a free-energy difference $A_{1}-A_{0}$ between two states (labelled 0 and 1), a problem for which the inherent difficulty is well understood Allen and Tildesley (1987); Frenkel and Smith (2002); Kollman (1993); Lu et al. (2003). Free energy perturbation (FEP) is an important category of methods for free energy calculation; we refer to the recent works by Lu et al. Lu et al. (2003) and by Shirts and Pande Shirts and Pande (2005) for reviews and comparisons. As explained by Lu et al. Lu et al. (2003), the general working equation for FEP methods can be cast as $$\exp[-\beta(A_{1}-A_{0})]=\frac{\langle w(u)\exp[-\beta u/2]\rangle_{0}}{\langle w(u)\exp[-\beta u/2]\rangle_{1}},$$ (1) with $\beta=1/k_{B}T$ and $u\equiv U_{1}-U_{0}$ the energy difference between the two systems; $k_{B}$ is the Boltzmann constant. The angular brackets denote ensemble averages performed on the system labelled by the subscript “0” or “1”. The weighting function $w(u)$ is arbitrary and differs for each method introduced in the literature. 
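As an illustration of Eq. (1), a minimal sketch of the generic two-sided FEP estimate is given below; it assumes arrays u0 and u1 of energy differences $u=U_{1}-U_{0}$ sampled in systems 0 and 1, and a user-supplied weighting function w (the names are illustrative, not taken from any particular implementation).

```python
import numpy as np

def fep_free_energy(u0, u1, beta, w):
    """Estimate A1 - A0 from Eq. (1) for an arbitrary weighting function w(u).
    u0, u1: energy differences u = U1 - U0 sampled in systems 0 and 1."""
    num = np.mean(w(u0) * np.exp(-beta * u0 / 2.0))   # <w(u) exp(-beta u/2)>_0
    den = np.mean(w(u1) * np.exp(-beta * u1 / 2.0))   # <w(u) exp(-beta u/2)>_1
    return -np.log(num / den) / beta                  # A1 - A0
```

Specific methods then correspond to specific choices of $w(u)$, as discussed below.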
The chemical potential is the free energy difference between two thermodynamic states differing by the presence of a single molecule. In other words, the chemical potential is $A_{1}-A_{0}$ where $A_{1}=A(N+1,V,T)$ and $A_{0}=A(N,V,T)$. Here $A(N,V,T)$ is the Helmholtz free energy of the system, which depends on the number of molecules $N$, the volume $V$ and the temperature $T$ of the system. In order to express the averages of Eq. (1) in terms of one-dimensional integrals of the energy difference $u$ one can then introduce the following distribution functions Deitrick et al. (1989) $$f(u)=\int\langle\delta\left(u-U_{1}+U_{0}\right)\rangle_{0}\,V^{-1}d{\bf r},$$ (2) $$g(u)=\langle\delta\left(u-U_{1}+U_{0}\right)\rangle_{1},$$ (3) where $\delta(.)$ is the Dirac delta function. In Eq. (2), $U_{1}=U_{1}({\bf R}^{N},{\bf r})$, where ${\bf R}^{N}$ is the configuration of the first $N$ molecules and ${\bf r}$ denotes the configuration of the $(N+1)$th molecule. Note that in Eq. (2) the $(N+1)$th molecule acts as a “test-molecule” which probes the system “0” (i.e., the system with $N$ molecules), but does not interact with it. Therefore $f(u)$ is the probability density of the $N$-molecule ensemble increasing in potential energy by an amount $u$ if this test-molecule were randomly inserted into the ensemble. Conversely, $g(u)$ is the probability density of the $(N+1)$-molecule ensemble decreasing in potential energy by an amount $u$ if a randomly selected real molecule were removed from the ensemble. From Eqs. (1)-(3) an expression for the excess chemical potential $\mu=A_{1}-A_{0}-\mu_{id}$ (where $\mu_{id}$ is the ideal gas chemical potential Frenkel and Smith (2002)) can be derived in terms of the $f$ and $g$ distributions Shing and Gubbins (1982); Deitrick et al. (1989); Lu et al. (2003) $$\exp(\beta\mu)=\frac{\int w(u)g(u)du}{\int w(u)f(u)\exp(-\beta u)du}.$$ (4) A good choice of the weighting function $w(u)$ is key to the efficiency of the method. For instance, the Widom method Frenkel and Smith (2002); Allen and Tildesley (1987) ($w(u)=1$) is known to provide very poor convergence at high densities. The Widom method is a single-stage FEP, meaning that sampling is only performed in the reference system “0” (i.e., in the $f$ distribution, see Eq. (4)). As discussed by Lu et al. Lu et al. (2003), multiple staging provides much better efficiency. The efficiency is generally defined as the reciprocal of the product of the variance of the estimator and its cost $n_{cost}$ (that is, the total number of energy evaluations performed by the algorithm) $$\varepsilon=(n_{cost}\mathtt{Var}[\beta\mu])^{-1}.$$ (5) Bennett Bennett (1976) showed that the variance of Eq. (4) is minimised if the weighting function is $w(u)=\mathcal{F}[\beta(c-u)]$, where $\mathcal{F}(x)=1/(1+\exp(x))$ is the Fermi function and $c$ is an arbitrary constant. The Bennett estimator is then $$\beta\mu=\ln\left(\frac{\langle\mathcal{F}[-\beta(u-c)]\rangle_{g}}{\langle\mathcal{F}[\beta(u-c)]\rangle_{f}}\right)+\beta c,$$ (6) where the subscripts $g$ and $f$ indicate (simple) averages over the distributions $g(u)$ and $f(u)$. The value of $c$ providing the minimum variance and maximum overlap is $c=\mu$, and to evaluate $\mu$ using the optimum $c(=\mu)$ one needs to use a self-consistent procedure, iterating the value of $c$ in Eq. (6) and resetting $c=\mu$ until $\langle\mathcal{F}[-\beta(u-c)]\rangle_{g}=\langle\mathcal{F}[\beta(u-c)]\rangle_{f}$. 
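A minimal sketch of this self-consistent iteration is given below; it assumes arrays u_f (test-particle insertion energies sampled from $f$) and u_g (energies of randomly selected real particles, sampled from $g$), with names that are illustrative rather than taken from any particular code.

```python
import numpy as np

def fermi(x):
    # F(x) = 1 / (1 + exp(x)); np.exp may overflow for very large x, in which
    # case the function correctly returns 0 (the overflow warning can be ignored).
    return 1.0 / (1.0 + np.exp(x))

def bennett_beta_mu(u_f, u_g, beta, tol=1e-8, max_iter=100):
    """Self-consistent Bennett estimate of beta*mu, Eq. (6)."""
    c = 0.0
    for _ in range(max_iter):
        ratio = np.mean(fermi(-beta * (u_g - c))) / np.mean(fermi(beta * (u_f - c)))
        beta_mu = np.log(ratio) + beta * c
        if abs(np.log(ratio)) < tol:   # the two averages match: c has converged to mu
            break
        c = beta_mu / beta             # reset c = mu and iterate
    return beta_mu
```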
In practice, this step only requires a small number of iterations. Recent publications Lu et al. (2003); Shirts and Pande (2005) demonstrate that the Bennett method remains the best general method to compute the chemical potential for many applications. Note that the Bennett method is a two-stage FEP and therefore it also requires sampling of the system “1”. In the case of the determination of the chemical potential this system has $N+1$ molecules and $g(u)$ is obtained from its single-molecule energy distribution. However, this extra requirement is not really a drawback. Lu et al. Lu et al. (2003) showed that, provided $N>O(100)$, the $g$-average can be evaluated in the same simulation as is used to sample the $f$ distribution (system “0”) without any noticeable loss in accuracy. The $g$ distribution (constructed from the energy of the real particles) is thus a byproduct of the simulation, so the average $\langle\mathcal{F}\rangle_{g}$ does not demand any extra computational cost. Another group of methods for the determination of the chemical potential is based on biased rather than uniform sampling. In particular, cavity-biased methods first select spherical cavities of minimum radius $R_{c}$ (a free parameter) in which to insert the test-molecule. This accelerates the evaluation of the ensemble average in dense phases because the low-energy configurations of the test-molecule (with large Boltzmann factors) are usually located in larger cavities with less steric hindrance. Variations of this method have been proposed by several authors; these include the Cavity Insertion Widom method (CIW) due to Mezei and coworkers Jedlovszky and Mezei (2000), the Excluded Volume Map Sampling by Deitrick et al. Deitrick et al. (1989) and the method proposed by Pohorille and Wilson Pohorille and Wilson (1996). The cavities are located by a grid search over the whole simulation cell. A cavity centre is assigned at each grid point whose distance to the closest particle is greater than $R_{c}$. In order to correct the bias introduced by sampling only inside the cavities one also has to calculate the probability of finding a cavity, which is obtained in the same grid-search step. A drawback of the cavity-biased method is that the bias is only indirectly related to the test-particle energy, via the excluded volume. This introduces a certain inaccuracy in the estimate of the chemical potential, which can depend on the value of the cavity radius $R_{c}$ selected. For instance, the CIW has recently been used to calculate the chemical potential of several species across a lipid bilayer Jedlovszky and Mezei (2000). As a test calculation the authors estimated the chemical potential of water in water and reported variations of about 1 kcal/mol as $R_{c}$ was varied from $2.6\AA$ to $2.8\AA$. Also, using $R_{c}\in[2.6,2.9]\AA$ resulted in uncertainties of about 2 kcal/mol in estimates of the excess chemical potential of some species across the lipid layer. Note that the importance region of the cavity-biased method is constructed over the translational degrees of freedom of a “coarse-grained” spherical molecule with an effective radius. This means that it can only be applied to small solutes with spherical or roughly spherical shapes Deitrick et al. (1989). In this work we present an energy-biased method for the estimation of the chemical potential and the reconstruction of the energy distribution $f(u)$ in dense phases. 
The idea is to restrict the sampling to an importance region defined by the set of bounded domains in the configurational space of the test-molecule where the energy $u$ is smaller than a given free parameter $u_{w}$. We denote as an energy-well each compact subdomain within the test-molecule energy-landscape for which $u<u_{w}$. Note that the present approach retains the main benefit of the cavity-biased method, but provides an exact evaluation of the energy distribution $f(u)$ and the chemical potential, because the energy-wells are defined directly in terms of the energy landscape. Moreover, our energy-biased method does not assume any particular molecular shape and therefore it may be used for non-spherical molecules and can coherently sample over rotational degrees of freedom as well. We also note that the number of stages is not limited to two. When systems 0 and 1 are very different, it may be impossible within the simulation time to sample the importance region of the two systems. In this case it is more efficient to compute the total free energy difference by using a set of intermediate states. The energy-biased method can be applied to each of these intermediate-state transitions at the cost of performing independent simulations for each state. Other approaches include, for instance, slow and fast growth methods, where the system is changed from one state to another within a certain simulation time $\tau$ (large for slow growth). The fast growth method consists of sampling rapid transformations from many simulations, which are then combined by using the Jarzynski nonequilibrium work relation Jarzynski (1997) to obtain the total free energy difference. The rest of the paper proceeds as follows. The energy-biased method is explained in Sec. II, while in Sec. III we derive an analytical expression for the efficiency of the method and estimate the optimal parameter $u_{w}$ by maximising the efficiency. In Sec. IV the method is tested in liquid argon at high density (modelled as Lennard-Jones atoms), where it is used to reconstruct the test-particle energy distribution $f(u)$ and to evaluate the chemical potential. We also demonstrate the gain in efficiency obtained with energy-biased sampling with respect to uniform sampling. We conclude with a summary of our findings in Sec. V. Finally, in Appendix A we briefly explain the Hit&Run algorithm, which efficiently samples bounded regions of arbitrary shape immersed in an arbitrary number of dimensions. II Overview of the method  As stated in the introduction, energy-biased sampling consists of uniform sampling of the importance region defined by the set of subdomains in the test-molecule configurational space where its potential energy is less than $u_{w}$. The probability density is therefore given by $$h(u)=\left\{\begin{array}{cc}f(u)/F_{w}&u\leq u_{w}\\ 0&u>u_{w},\\ \end{array}\right.$$ (7) where the normalisation factor $F_{w}\equiv\int_{-\infty}^{u_{w}}f(u)du$ is the cumulative probability of the unbiased distribution $f(u)$ and $u_{w}$ is an arbitrary energy (free parameter). Note that the energy-biased distribution of Eq. (7) can be straightforwardly combined with any of the popular methods to calculate the chemical potential from Eq. (4). We shall use the Bennett method due to its excellent performance. Introducing the weighting function $w(u)=\mathcal{F}[\beta(c-u)]$ in Eq. (6) and using Eq. 
(7), one obtains the energy-biased Bennett estimator for $\beta\mu$, $$\beta\mu=\ln\left(\frac{\langle\mathcal{F}_{c}\rangle_{g}}{F_{w}\langle\mathcal{F}_{c}\rangle_{h}}\right)+\beta c,$$ (8) where we have introduced the notation $\mathcal{F}_{c}\equiv\mathcal{F}[\beta(u-c)]$ to indicate that after the ensemble average we still have a function of $c$. As before, the subscript $h$ indicates the average over the biased distribution of Eq. (7). Sampling from the energy probability distribution $h(u)$ requires a more careful consideration of the energy landscape of the system. We indicate by ${\bf r}$ a configuration of the $(N+1)$th molecule and by ${\bf R}$ the configuration of the remaining $N$ molecules. For a simple argon fluid ${\bf r}\in D$ where $D\subset R^{3}$, while for a three-site flexible water model like TIP3P $D\subset R^{9}$, which includes the three Euler angles determining the molecule orientation, the H-O-H angle and the two H-O distances. As shown in Fig. 1, the region $$A_{u_{w}}=\{{\bf r}\in D:u({\bf r,R})<u_{w}\}$$ (9) is composed of many disconnected bounded regions of different sizes such that $A_{u_{w}}=\cup_{\alpha}A_{u_{w}}^{\alpha}$, where each $A_{u_{w}}^{\alpha}$ is a connected region. Of course, for $u_{w}\rightarrow\infty$ all the regions $A_{u_{w}}^{\alpha}$ connect and $A_{\infty}=D$, the entire domain. The sampling algorithm must reproduce a uniform probability distribution $$p_{u_{w}}({\bf r})=\frac{1}{\Omega(A_{u_{w}})},$$ (10) where $\Omega(A_{u_{w}})$ is the volume of the region. For a given energy bias $u_{w}$, the algorithm for selecting configurations ${\bf r}$ according to Eq. (10) can be described in terms of two main steps which are applied iteratively: 1. Locate a compact energy-well $A_{u_{w}}^{\alpha}$ in the configurational space $D$, where $u<u_{w}$. 2. Sample the energy-well $A_{u_{w}}^{\alpha}$ with a uniform probability density. The simplest procedure for locating energy wells in step (1) is to perform a random search over the whole configurational space until a fixed number of cavities is found. This procedure, however, does not prevent the same well from being explored more than once, and we observed that it can easily lead to highly correlated data. Instead, we perform step (1) by choosing points on a grid within the whole configurational space of the test-molecule. In the case of the Lennard-Jones fluid, the three-dimensional configurational space is probed at the nodes of a Cartesian grid of size $n_{x}\times n_{y}\times n_{z}$, where $n_{\alpha}$ is the number of nodes along the coordinate $\alpha$. We observed that the minimum distance between nodes that guarantees statistically independent samples is around $0.5\sigma$. An energy well is found at each node where the energy of the test-molecule is $u<u_{w}$. Then, the locations of these nodes are used as starting configurations for independent well samplings. In this way we ensure that we are sampling different cavities for each explored configuration (snapshot) of the system. Note that with grid sampling the number of cavities found per snapshot is a fluctuating quantity. The search requires an average of $n_{0}=1/F_{w}$ energy evaluations to locate one well (i.e., one configuration with energy $u<u_{w}$). 
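A minimal sketch of this grid search (step (1)) for the Lennard-Jones case is given below; `insertion_energy` is a hypothetical callable returning the test-particle energy $u({\bf r},{\bf R})$ for the current snapshot, and the other names are illustrative.

```python
import numpy as np

def locate_energy_wells(insertion_energy, box_length, nodes_per_side, u_w):
    """Probe a Cartesian grid and return starting points for well sampling,
    together with the estimate of the cumulative probability F_w = m / n_0."""
    axis = np.linspace(0.0, box_length, nodes_per_side, endpoint=False)
    wells = []
    n_0 = nodes_per_side ** 3                    # total number of unbiased probes
    for x in axis:
        for y in axis:
            for z in axis:
                r = np.array([x, y, z])
                if insertion_energy(r) < u_w:    # node lies inside an energy-well
                    wells.append(r)
    F_w_estimate = len(wells) / n_0              # m / n_0
    return wells, F_w_estimate
```

Each returned point then seeds an independent Hit&Run chain (step (2), Appendix A).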
During this same step (1) one can calculate the cumulative probability $F_{w}$ from the estimator $m/n_{0}$, with $n_{0}$ being the total number of samples (Bernoulli trials) and $m$ the number of successful trials with $u<u_{w}$, i.e., the total number of energy-wells found. This number $m/n_{0}$ converges to $F_{w}$ as $n_{0}\rightarrow\infty$ and, for a finite number of statistically independent trials $n_{0}$, its variance is $(1-F_{w})F_{w}/n_{0}$. In practice, the estimation of $F_{w}$ requires the number of unbiased samples to be $n_{0}\gg 1/F_{w}$; this condition also ensures that a significant number of energy-wells ($m>0$) will be found. Step (2) of the loop mentioned above requires a procedure to sample in an unbiased way the interior of each energy well. This is a delicate step because any bias incurred in sampling the importance region will be transferred to the estimator for $\beta\mu$, resulting in inaccuracy of the method. To tackle this problem we use the so-called Hit&Run algorithm Smith (1984), which is explained in Appendix A. III Efficiency and optimal parameters of the method We now calculate the efficiency of the method and provide a way of choosing the optimal value of the parameter $u_{w}$ by maximising the efficiency. We also compare the efficiency of the estimator in Eq. (8), based on energy-biased sampling, with that of the standard Bennett algorithm of Eq. (6). III.1 Energy-biased Bennett method The variance of the Bennett method can be cast in terms of the probability densities $f(u)$ and $g(u)$. Starting from Eq. (6), after some algebra the variance of the Bennett method assumes the form $$\mathtt{Var}_{B}[\beta\mu]=\frac{1}{n_{0}\langle\mathcal{F}[\beta(u-c)]\rangle_{f}},$$ (11) where $n_{0}$ is the number of insertions used to sample the complete configurational space of the test-particle. Note that the computational cost of the standard Bennett method is $n_{0}$, so according to Eq. (5) and Eq. (11) its maximum efficiency is given by $$\varepsilon_{B}=\langle\mathcal{F}_{c}\rangle_{f}.$$ (12) Let us now consider the variance of the estimator in Eq. (8), which is the sum of the variance of the estimator for $F_{w}$ and the estimator for the ensemble average $$\mathtt{Var}_{EB}[\beta\mu]=\mathtt{Var}[\ln F_{w}]+\frac{1}{n_{w}\langle\mathcal{F}_{c}\rangle_{h}}\simeq\frac{1}{n_{0}F_{w}}+\frac{1}{n_{w}\langle\mathcal{F}_{c}\rangle_{h}},$$ (13) where we have used the relation $\mathtt{Var}[\ln(F_{w})]\simeq\mathtt{Var}[F_{w}]/F_{w}^{2}=(1-F_{w})/(n_{0}F_{w})\simeq 1/(n_{0}F_{w})$, for $F_{w}\ll 1$. Here $n_{0}$ is the number of random insertions in the entire configurational space and $n_{w}$ is the number of independent samples within the importance region $u<u_{w}$. The probability of finding an energy-well with $u<u_{w}$ using uniform sampling over the whole configurational space is $F_{w}$, so the number of cavities found after $n_{0}$ trials is $m=F_{w}n_{0}$. If the number of statistically independent samples per well is $s$, the total number of independent samples within the restricted configurational space $u<u_{w}$ is $$n_{w}=n_{0}sF_{w}.$$ (14) We note that the number of independent samples per well $s$ depends on the fluid considered and, of course, on the biasing energy $u_{w}$. In Appendix B we provide a way of estimating $s$ from the data obtained from Hit&Run sampling. Inserting Eq. (14) into Eq. 
(13) one obtains for the energy-biased algorithm $$\mathtt{Var}_{EB}[\beta\mu]=\frac{1}{n_{0}}\left(\frac{1}{F_{w}}+\frac{1}{s\langle\mathcal{F}_{c}\rangle_{f}}\right).$$ (15) In deriving Eq. (15) we used the fact that $\langle\mathcal{F}_{c}\rangle_{f}=F_{w}\langle\mathcal{F}_{c}\rangle_{h}$ up to a negligible correction. This can be seen by noticing that the function $\mathcal{F}[\beta(u-c)]$ in the integrand of $\langle\mathcal{F}_{c}\rangle_{f}=\int_{-\infty}^{\infty}f(u)\mathcal{F}[\beta(u-c)]du$ decays exponentially for $u>c$. Hence, in any practical case ($u_{w}>c$) most of the integral weight comes from $u<u_{w}$, for which the energy-biased reconstruction of the energy profile $f(u)$ is exact (see Fig. 3). We now evaluate the cost, which is given by the total number of energy evaluations of the test molecule needed to obtain $n_{w}$ samples: $$n_{cost}=n_{0}+n_{w}/{\tt a},$$ (16) where ${\tt a}<1$ is the acceptance ratio of the Hit&Run sampling algorithm, defined in Appendix A. Introducing Eq. (14) into Eq. (16) we obtain $$n_{cost}=n_{0}\left(1+\frac{sF_{w}}{{\tt a}}\right).$$ (17) For the energy-biased algorithm the efficiency is $\varepsilon=(n_{cost}\mathtt{Var}_{EB}[\beta\mu])^{-1}$. Using Eq. (15) and Eq. (17) one obtains $$\varepsilon_{EB}^{-1}=\frac{1}{F_{w}}+\frac{1}{s\langle\mathcal{F}_{c}\rangle_{f}}+\frac{s}{{\tt a}}+\frac{F_{w}}{{\tt a}\langle\mathcal{F}_{c}\rangle_{f}}.$$ (18) By maximising the efficiency $\varepsilon=\varepsilon(F_{w})$ in Eq. (18) with respect to $F_{w}$, one obtains the optimal value $F_{w}^{opt}$ and the maximum efficiency $\varepsilon_{EB_{\max}}=\varepsilon_{EB}(F_{w}^{opt})$: $$F_{w}^{opt}=\sqrt{{\tt a}\langle\mathcal{F}_{c}\rangle_{f}},$$ (19) $$\varepsilon_{EB_{\max}}^{-1}=\frac{2}{\sqrt{{\tt a}\langle\mathcal{F}_{c}\rangle_{f}}}+\frac{s}{{\tt a}}+\frac{1}{s\langle\mathcal{F}_{c}\rangle_{f}}.$$ (20) Finally, we compare the efficiency of the energy-biased algorithm with that provided by the Bennett algorithm, given by $\varepsilon_{B}=\langle\mathcal{F}_{c}\rangle_{f}$. According to Eq. (20) the ratio of efficiencies is given by $$\frac{\varepsilon_{B}}{\varepsilon_{EB_{\max}}}=2\sqrt{\frac{\langle\mathcal{F}_{c}\rangle_{f}}{{\tt a}}}+\frac{s\langle\mathcal{F}_{c}\rangle_{f}}{{\tt a}}+\frac{1}{s}.$$ (21) Equation (21) yields the range of values of $\langle\mathcal{F}_{c}\rangle_{f}$ for which the energy-biased Bennett estimator for $\beta\mu$ is more efficient than the standard (unbiased) Bennett algorithm. Note that for $s=\sqrt{{\tt a}/\langle\mathcal{F}_{c}\rangle_{f}}$ the efficiency ratio given by Eq. (21) reaches its minimum value, $\varepsilon_{B}/\varepsilon_{EB_{\max}}=4\sqrt{\langle\mathcal{F}_{c}\rangle_{f}/{\tt a}}$, and therefore $\varepsilon_{B}<\varepsilon_{EB}$ if $\langle\mathcal{F}_{c}\rangle_{f}<{\tt a}/16$. Hence the energy-biased method is suited to fluids at high densities or low temperatures, or to molecular fluids with a low insertion probability. In this regime $\langle\mathcal{F}_{c}\rangle_{f}\ll{\tt a}/16$ and the dominant term in Eq. (21) is $1/s$, hence $\varepsilon_{EB_{\max}}\simeq s\varepsilon_{B}$. In other words, the maximal efficiency of the present energy-biased method is limited by the average number $s$ of independent samples that can be obtained within one energy-well. 
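As a quick numerical check of Eqs. (19)-(21), one can plug in the values reported later for the dense Lennard-Jones liquid (Sec. IV and Appendix A): $\langle\mathcal{F}_{c}\rangle_{f}=8.9\times 10^{-6}$, ${\tt a}\simeq 0.17$ and $s\simeq 5$.

```python
import math

# Values quoted in Sec. IV and Appendix A for liquid argon at rho = 0.0236 A^-3, T = 84 K.
Fc_f = 8.9e-6   # <F_c>_f, average of the Fermi function over f(u)
a    = 0.17     # Hit&Run acceptance ratio
s    = 5        # independent samples per energy-well

Fw_opt = math.sqrt(a * Fc_f)                              # Eq. (19)
gain   = 1.0 / (2*math.sqrt(Fc_f/a) + s*Fc_f/a + 1.0/s)   # eps_EB_max / eps_B, from Eq. (21)
print(Fw_opt, gain)   # ~1.2e-3 and ~4.7: the F_w used in Sec. IV and a roughly five-fold gain
```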
As shown in Appendix B, for the Lennard-Jones fluid we have observed that in the most unfavourable case (high density and low temperature) $s\sim[5-10]$. III.2 Reconstruction of the energy distribution We now show that the reconstruction of $f(u)$ using the energy-biased procedure (EB) is faster and more efficient than that obtained using any unbiased sampler which uniformly explores the whole configurational space. To that end we consider the evaluation of the cumulative probability $F(u)=\int_{-\infty}^{u}f(u^{\prime})du^{\prime}$ for $u<u_{w}$ (i.e., for $F(u)<F_{w}$). We shall compare the variance of two estimators for $F$: one based on uniform insertion over the whole domain and the other based on the energy-biased procedure. The variance of the unbiased estimator is simply $\mathtt{Var}(F)=F(1-F)/n_{0}$ and for low energies ($F\ll 1$) its efficiency is $1/F$. The expected value of the energy-biased estimator is $HF_{w}$, where $H(u)=\int_{-\infty}^{u}h(u^{\prime})du^{\prime}$ is the cumulative probability of the biased distribution in Eq. (7). This estimator is constructed as a product of two statistically independent fluctuating variables and its variance is Goodman (1960) $$\mathtt{Var}_{EB}(F)=\mathtt{Var}(HF_{w})=F_{w}^{2}\mathtt{Var}(H)+H^{2}\mathtt{Var}(F_{w})+\mathtt{Var}(F_{w})\mathtt{Var}(H).$$ (22) Using $\mathtt{Var}(H)=H(1-H)/n_{w}$ and $\mathtt{Var}(F_{w})=F_{w}(1-F_{w})/n_{0}$ one obtains $$\mathtt{Var}_{EB}=\frac{F_{w}(1-F_{w})H^{2}}{n_{0}}+\frac{H(1-H)F_{w}^{2}}{n_{w}}+\frac{F_{w}H(1-H)}{n_{0}n_{w}}.$$ (23) Note that, as expected, for $H\simeq 1$ one recovers the variance of the unbiased insertion method. The interesting part of the energy distribution is the importance region, located in the low energy range, where $H\ll 1$. In this regime one can make the approximation $1-H\simeq 1$. Using $F=HF_{w}$ and $n_{w}=n_{0}sF_{w}$, one gets $$\mathtt{Var}_{EB}=\frac{F}{n_{0}}\left(\frac{F}{F_{w}}+\frac{1}{s}+\frac{1}{n_{0}sF_{w}}\right).$$ (24) Note that the term in brackets is the reduction in variance with respect to uniform unbiased sampling. Because $F_{w}$ is evaluated from $n_{0}$ probes, necessarily $n_{0}\gg 1/F_{w}$, so the third term inside the brackets is much smaller than unity. On the other hand, for the low energy range considered $F\ll F_{w}$ and one finally concludes that $\mathtt{Var}_{EB}\simeq\mathtt{Var}(F)/s$, where $\mathtt{Var}(F)\simeq F/n_{0}$ is the variance obtained in the unbiased uniform sampling of the whole domain. The cost associated with the energy-biased procedure is $n_{cost}=n_{0}(1+sF_{w}/{\tt a})$. In the case of a Lennard-Jones liquid we have found that ${\tt a}\simeq 0.17$ and $s\sim O(10)$, while the optimal cumulative probability is $F_{w}\lesssim 10^{-3}$. This means that, in practical situations, $sF_{w}/{\tt a}\lesssim 1$ and $n_{cost}\gtrsim n_{0}$. Thus, according to Eq. (24) the energy-biased sampling procedure is around $s$ times faster than a uniform unbiased (grid or random) sampler in reconstructing the low energy range of $f(u)$. As before, $s$ is the average number of independent samples taken per well. IV Results In order to confirm the foregoing theoretical relations about efficiency and variance reduction, we performed molecular dynamics simulations of a Lennard-Jones liquid at high density and low temperature ($\rho=0.0236\AA^{-3}$ and $T=84$ K). These simulations were performed in a cubic periodic box of side $L=10\sigma$. 
We used the standard Verlet method Allen and Tildesley (1987) to integrate Newton’s equations of motion, incorporating a Langevin thermostat Kremer and Grest (1990) to keep the system in the NVT ensemble. During the simulation, the iterative loop (1)+(2) explained in Sec. II was performed $m$ times per time interval $\delta t_{samp}=0.5\tau$, which corresponds to about three times the collision time. The search for wells performed in step (1) was done by probing at the nodes of a Cartesian grid comprising $15^{3}$ nodes. This ensured that the explored cavities were independent. All the cavities found in step (1) were sampled using the Hit&Run algorithm (see Appendix A). IV.1 Estimation of the chemical potential One way to measure the efficiency of the method is to evaluate the convergence of the estimated value of the chemical potential for an increasing number of test-particle probes $n_{cost}$. Convergence can be calculated from the difference between successive values of $\mu_{n}$, where $n(=n_{cost})$ indicates the total number of evaluations of the test-particle energy. Figure 2 shows how this difference decreases in calculations based on both the energy-biased and the unbiased samples. These calculations correspond to liquid argon with number density $\rho=0.0236\AA^{-3}$ and temperature $T=84$ K (these values correspond to $\rho=0.92\sigma^{-3}$ and $T=0.7$ in Lennard-Jones units), for which the average of the Fermi function is $\langle\mathcal{F}_{c}\rangle_{f}=8.9\times 10^{-6}$. According to Eq. (19) the optimum value of $F_{w}$ is $0.0012$, which corresponds to $u_{w}\simeq 14.19$ kcal/mol. We selected the predicted optimum parameter ($u_{w}=14.19$ kcal/mol) and performed $d=15$ samples per well. As can be seen in Fig. 2, for equal numbers of energy probes ($n=n_{cost}$), the average difference between successive estimates of the chemical potential via the energy-biased method is about five times smaller than that obtained with the unbiased sampler. As predicted by Eq. (21), such a gain in efficiency is consistent with the average number $s$ of independent samples per well (see Table 2), which for this simulation was $s\simeq 5$. Evaluations of the chemical potential for Lennard-Jones (LJ) fluids are shown in Table 1 together with the estimated efficiency of each calculation. For a LJ fluid with $\rho=0.02360\AA^{-3}$ and $T=84$ K the numerically obtained net gain is around 7, which coincides with the prediction of Eq. (21) using $s=7$. For illustrative purposes we also analysed a case for which the efficiency of our implementation of the energy-biased sampling is similar to that of the uniform (unbiased) Bennett method. For instance, $\langle\mathcal{F}_{c}\rangle_{f}=0.0102$ for $\rho=0.01755\AA^{-3}$ and $T=178.5$ K. Using ${\tt a}=0.165$ and the (optimum) number of samples $s=\sqrt{{\tt a}/\langle\mathcal{F}_{c}\rangle_{f}}\simeq 4$ in Eq. (21) one obtains $\varepsilon_{B}/\varepsilon_{EB_{\max}}\simeq 1$; our numerical calculations, with $u_{w}=7.33$ and $d=8$, confirmed this conclusion. We note that for any value of $u_{w}$ considered the energy-biased estimate of the chemical potential $\mu$ agrees to within about $0.01$ kcal/mol with the unbiased Bennett result. This is illustrated in Table 2, where we show the estimated $\mu$ for the higher density liquid, using several values of $u_{w}$. IV.2 Reconstruction of the energy distribution $f(u)$ In Fig. 
3 we compare the reconstructed energy distribution $f(u)$ at energies $u<u_{w}$ with that computed from an unbiased method, which consists of a large number of random insertions within the entire configurational space. Figure 3 clearly illustrates that the energy-biased method exactly reproduces the unbiased distribution $f(u)$ for energies smaller than $u_{w}$. This attractive feature is a consequence of the fact that the bias, being defined in terms of the cavity energies, can be corrected for exactly. This is not true when the bias is defined via the accessible volume of the molecule, as in cavity-biased procedures Jedlovszky and Mezei (2000); Deitrick et al. (1989). In order to illustrate the above conclusion we show in Fig. 4 the estimate of the cumulative probability $F(u)$ versus the total number of test-particle energy probes used for the evaluation. The particular case shown corresponds to $u=5$ kcal/mol, for a LJ liquid at $\rho=0.0236\AA^{-3}$ and $T=84$ K. The energy-biased sampling was done using $u_{w}=14.19$ kcal/mol and $d=15$ samples per well, and for this calculation we obtained $s\simeq 5$ (see Appendix B and Table 2). Compared with the unbiased procedure, the reduction of variance provided by the energy-biased sampler is immediately apparent on inspection of Fig. 4. A numerical evaluation of the variance of each data set in Fig. 4 yields $\mathtt{Var}_{EB}=4.14\times 10^{-5}/n_{0}$, while the (best) result for the algorithm based on uniform unbiased sampling is $\mathtt{Var}(F)=F/n_{0}=1.9\times 10^{-4}/n_{0}$. Hence the net gain in efficiency is about 4.6, in agreement with the value of $s=5$ obtained from the independent correlation analysis explained in Appendix B. As shown in Table 1, the estimated net gain in the evaluation of the chemical potential compared with the unbiased Bennett method is $7\pm 1$, which is close to the estimate $s\simeq 5$ obtained from the analysis of the cumulative probability. V Conclusion We have presented a new method for sampling the energy of a test-molecule in order to calculate single-particle ensemble averages and, in particular, the chemical potential. The method, called energy-biased sampling, restricts the sampling to the importance region formed by the bounded domains in the test-molecule energy-landscape where the test-molecule energy $u$ is smaller than a given free parameter $u_{w}$. This energy-biased sampling retains the principal benefit of cavity-biased methods Jedlovszky and Mezei (2000); Deitrick et al. (1989) in the sense that, by sampling only within regions with a significant Boltzmann factor, convergence is greatly accelerated with respect to uniform sampling. Furthermore, because the energy-biased sampling is accurately defined in terms of the test-particle energy it has some important benefits: first, it allows accurate reproduction of the test-particle energy distribution $f(u)$ and the chemical potential; second, it is possible to sample cavities of arbitrary shape (not only spherical ones) and to generalise the cavity dimensionality to include the rotational degrees of freedom in the energy-well reconstruction; finally, and rather importantly, it enables one to combine the sampling results with standard free energy perturbation (FEP) formulae. In particular, we combined it with the Bennett method Bennett (1976), which minimises the variance of the estimator and has proved to be the best method in the literature Lu et al. (2003); Shirts and Pande (2005). 
Energy-biased sampling is a general protocol which consists of two sequential steps: (1) searching for and (2) sampling the interior of energy-wells. In this work we have implemented these two steps using relatively simple algorithms: a uniform unbiased search and Hit&Run sampling. However, we note that other solutions are also possible. For instance, non-uniform sampling of the importance region could further increase the efficiency of the present method. In dense systems, the searching step becomes the most difficult one, and a more effective extension of this method could be to perform a biased search (using, for instance, some variation of the usher algorithm Delgado-Buscalioni and Coveney (2003); De Fabritiis et al. (2004)) so as to significantly increase the probability of finding favourable cavities for insertion of the test particle. These extensions are left for future studies. Acknowledgements. This research was supported by the EPSRC Integrative Biology project GR/S72023 and by the EPSRC RealityGrid project GR/67699. R.D-B acknowledges support from the European Commission via the MERG-CT-2004-006316 grant and from the Spanish research grants FIS2004-01934 and CTQ2004-05706/BQU. Appendix A Sampling bounded regions with the Hit&Run algorithm There exists a relatively large literature on sampling a bounded connected region (see for instance Ref. Liu (2001) and references therein). In this work we have used the so-called Hit&Run algorithm for its simplicity and good performance Liu (2001). The Hit&Run sampler is a Markov chain Monte Carlo method which draws samples from an assigned distribution Smith (1984); Liu (2001) $p({\bf r})$, where ${\bf r}\in A$ lies within a bounded connected region of an n-dimensional space $A\subset R^{n}$. In our case, $p({\bf r})$ is a uniform probability density over the region $A_{u_{w}}^{\alpha}$ such that $$p({\bf r})=\frac{1}{\Omega(A_{u_{w}}^{\alpha})}.$$ (25) The Hit&Run algorithm starts from a point ${\bf r_{0}}$ within the bounded region $A$ and performs the following steps: i. Choose a random direction ${\bf e}$ and find the intersections of the cavity border with the line ${\bf r}(\lambda)={\bf r}_{0}+\lambda{\bf e}$, where $\lambda$ is a real number. As the cavity $A$ is bounded, the intersection is composed of two points ${\bf r}(\lambda^{+})$ and ${\bf r}(\lambda^{-})$ (here $\lambda^{+}>0$ and $\lambda^{-}<0$). ii. Select a point ${\bf r}_{1}$ within the segment (${\bf r}(\lambda^{+})$, ${\bf r}(\lambda^{-})$), i.e., $${\bf r}_{1}={\bf r}(\lambda^{-})+\xi({\bf r}(\lambda^{+})-{\bf r}(\lambda^{-}))$$ (26) where $\xi\in(0,1)$ is a uniformly distributed random number. iii. Sample at ${\bf r}_{1}$, set ${\bf r}_{1}\rightarrow{\bf r}_{0}$ as the new starting point and go to (i). The above procedure is repeated to obtain the desired number of samples $d$. In our case the starting point for the sample chain, ${\bf r}_{0}$, is the test-particle configuration returned by the algorithm for energy-well searching ($U({\bf r}_{0},{\bf R})<u_{w}$). In order to locate the borders of the energy well, ${\bf r}(\lambda^{+})$ and ${\bf r}(\lambda^{-})$, we use the following procedure. Starting from ${\bf r}_{0}$ we cross the well along the line defined by the random unit vector ${\bf e}$, moving in steps of size $\delta s$, i.e., according to $${\bf r}(k)={\bf r}_{0}+\,k\,\delta s\,{\bf e},$$ (27) with $k$ being an integer starting from $k=\pm 1$. 
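A minimal sketch of one Hit&Run move is given below; it combines steps (i)-(iii) with the stepping procedure of Eq. (27) for locating the well borders (completed in the next paragraph). The callable `energy` and the other names are illustrative placeholders, not taken from the original code.

```python
import numpy as np

def hit_and_run_step(r0, energy, u_w, delta_s, rng):
    """One Hit&Run move inside the energy-well containing r0 (u(r0) < u_w)."""
    e = rng.normal(size=r0.shape)
    e /= np.linalg.norm(e)                         # random unit direction
    def first_outside(sign):                       # step along +/- e until u >= u_w
        k = sign
        while energy(r0 + k * delta_s * e) < u_w:
            k += sign
        return k                                   # approximate border, lambda^{+/-} = k^{+/-}
    k_plus, k_minus = first_outside(+1), first_outside(-1)
    lam = k_minus + rng.random() * (k_plus - k_minus)   # Eq. (26): uniform point on the chord
    return r0 + lam * delta_s * e

# Chain started from a configuration returned by the well search:
# rng = np.random.default_rng(); r = r0
# for _ in range(d): r = hit_and_run_step(r, energy, u_w, 0.3, rng)  # record energy(r) each step
```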
The energy is computed at each point ${\bf r}(k)$ until one crosses the edges of the well at $k=k^{+}$ and $k=k^{-}$ (for which $u({\bf r}(k^{\pm}),{\bf R})>u_{w}$). An approximate location of the cavity borders is provided by setting $\lambda^{\pm}=k^{\pm}$. We typically used $\delta s\simeq 0.3\AA$ and required, on average, about five steps to cross the well in one random direction (this value depends on the density and $u_{w}$). Note that the acceptance ratio is ${\tt a}=\langle k^{+}-k^{-}\rangle^{-1}$ and for the high density cases considered here ${\tt a}\simeq 0.17$. Appendix B Optimal number of sampling directions It is possible to reduce the cost without increasing the variance by setting the number of samples per cavity $d$ equal to or somewhat larger than $s$, the average number of independent samples per cavity. Note that the number of statistically independent samples within one cavity is $s=d/\tau_{c}$, where $\tau_{c}$ is an empirically estimated autocorrelation length of the whole chain of data. This number $\tau_{c}$ can be estimated from the large-$m$ limit of the quantity $m\,\mathtt{Var}[\mathcal{F}^{(m)}]/\mathtt{Var}[\mathcal{F}]$, where $\mathcal{F}_{c}=\mathcal{F}[\beta(u-c)]$ is the Fermi function evaluated at a single energy $u$ and $\mathcal{F}^{(m)}$ denotes the mean of $m$ consecutive $\mathcal{F}$ values. The value of $s$ can be estimated by performing several Hit&Run samplings with an increasing number of directions per cavity $d>s$, then computing $\tau_{c}$ for the chain of samples and evaluating $d/\tau_{c}$, which should be nearly independent of $d$. We carried out this evaluation of $s$ for varying values of $u_{w}$ within the same system, and for fixed $u_{w}$ and varying density. The results of this study, reported in Table 2, clearly indicate that $s$ does not vary greatly over a broad range of values of the cavity-border energy $u_{w}$. In fact, at low and moderate values of $u_{w}$ the energy-cavities are isolated and their average size (in $\AA$) grows quite slowly with $u_{w}$. This is due to the steepness of the hard-core part of the Lennard-Jones potential. Above a certain energy $u_{w}$ the cavities become connected and a steep rise in the average size of the energy-cavities is observed. This is reflected in the value of $s$. As shown in Table 2, for $u_{w}=14.19$ kcal/mol we obtained $s\simeq 4.5$ and $s\simeq 11$ for two calculations using $d=15$ and $d=100$, respectively. We obtained a relatively close value, $s\simeq 7$, for an energy limit twice as large, $u_{w}=28.38$ kcal/mol. However, using $u_{w}=165.53$ kcal/mol the average number of independent samples increased to about $25$, reflecting the more complex shape and larger volume of these energy cavities. In summary, for the optimum range of values of $u_{w}\sim[10-30]$ kcal/mol we find $s\simeq[5-10]$ in the case of the Lennard-Jones liquid. References Lu et al. (2003) N. Lu, J. K. Singh, and D. A. Kofke, J. Chem. Phys. 118, 2977 (2003). Allen and Tildesley (1987) M. Allen and D. Tildesley, Computer Simulation of Liquids (Oxford University Press, Oxford, 1987). Frenkel and Smith (2002) D. Frenkel and B. Smith, Understanding Molecular Simulation: From Algorithms to Applications (Academic Press, San Diego, 2nd edition, 2002). Kollman (1993) P. Kollman, Chem. Rev. 93, 2395 (1993). Shirts and Pande (2005) M. R. Shirts and V. S. Pande, J. Chem. Phys. 122, 144107 (2005). Deitrick et al. (1989) G. L. Deitrick, L. E. Scriven, and H. T. Davis, J. Chem. Phys. 90, 2370 (1989). Shing and Gubbins (1982) K. 
S. Shing and K. E. Gubbins, Mol. Phys. 46, 1109 (1982). Bennett (1976) C. H. Bennett, J. Comput. Phys. 22, 245 (1976). Jedlovszky and Mezei (2000) P. Jedlovszky and M. Mezei, J. Am. Chem. Soc. 122, 5125 (2000). Pohorille and Wilson (1996) A. Pohorille and M. A. Wilson, J. Chem. Phys. 104, 3760 (1996). Jarzynski (1997) C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997). Smith (1984) R. L. Smith, Operations Research 32, 1296 (1984). Goodman (1960) L. Goodman, J. Amer. Stat. Assoc. 55, 708 (1960). Kremer and Grest (1990) K. Kremer and G. Grest, J. Chem. Phys. 92, 5057 (1990). Delgado-Buscalioni and Coveney (2003) R. Delgado-Buscalioni and P. V. Coveney, J. Chem. Phys. 119, 978 (2003). De Fabritiis et al. (2004) G. De Fabritiis, R. Delgado-Buscalioni, and P. V. Coveney, J. Chem. Phys. 121, 12139 (2004). Liu (2001) J. S. Liu, Monte Carlo Strategies in Scientific Computing (Springer-Verlag, New York, 2001).
$\alpha$ Centauri A as a potential stellar model calibrator: establishing the nature of its core B. Nsamba,${}^{1,2}$ M. J. P. F. G. Monteiro, ${}^{1,2}$ T. L. Campante, ${}^{1,2}$ M. S. Cunha, ${}^{1,2}$ and S. G. Sousa, ${}^{1}$ ${}^{1}$Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, Rua das Estrelas, PT4150-762 Porto, Portugal ${}^{2}$Departamento de Física e Astronomia, Faculdade de Ciências da Universidade do Porto, PT4169-007 Porto, Portugal E-mail: benard.nsamba@astro.up.pt (Accepted 2018 May 23. Received 2018 May 10; in original form 2018 March 23) Abstract Understanding the physical process responsible for the transport of energy in the core of $\alpha$ Centauri A is of the utmost importance if this star is to be used in the calibration of stellar model physics. Adoption of different parallax measurements available in the literature results in differences in the interferometric radius constraints used in stellar modelling. Furthermore, this is at the origin of the different dynamical mass measurements reported for this star. With the goal of reproducing the revised dynamical mass derived by Pourbaix & Boffin, we modelled the star using two stellar grids differing in the adopted nuclear reaction rates. Asteroseismic and spectroscopic observables were complemented with different interferometric radius constraints during the optimisation procedure. Our findings show that best-fit models reproducing the revised dynamical mass favour the existence of a convective core ($\gtrsim$ 70% of best-fit models), a result that is robust against changes to the model physics. If this mass is accurate, then $\alpha$ Centauri A may be used to calibrate stellar model parameters in the presence of a convective core. keywords: $\alpha$ Centauri A – method: asteroseismology – stars: fundamental parameters – stars: convection and radiation 1 Introduction Stellar physicists have for decades yearned for a star more massive than the Sun with a range of precisely measured observables, namely, spectroscopic parameters, an interferometric radius, asteroseismic properties, and a dynamical mass measurement. This stems from the fact that stellar model physics (e.g. mixing length parameter, treatment of the initial helium mass fraction, surface element abundances etc.) is often calibrated based on the Sun and used in the modelling of other stars (e.g. Christensen-Dalsgaard 2009; Asplund et al. 2009; Bonaca et al. 2012; Vorontsov et al. 2013; Silva Aguirre et al. 2015, 2017). This is a reasonable approach for stars within the same mass range and with a metal content similar to that of the Sun. For more massive stars, however, this may not hold, since their internal structure differs significantly from that of the Sun. $\alpha$ Centauri A presents a unique opportunity to improve our understanding of the underlying physical processes taking place in stars slightly more massive than the Sun. This is for a number of reasons: (i) $\alpha$ Centauri A is one of the components of the closest binary system to the Sun, having well determined orbital parameters (Pourbaix & Boffin, 2016; Kervella et al., 2017); hereafter, we denote Pourbaix et al. (2002) as P02, Kervella et al. (2003) as K03, Kervella et al. (2016) as K16, Pourbaix & Boffin (2016) as P16, Kervella et al. 
(2017) as K17, and Söderhjelm (1999) as S99. A dynamical mass measurement is available for both components of the binary (P02; P16; K16). (ii) Precise parallax measurements are available and have been used to yield a distance to the star (S99; K16). This distance has been combined with an interferometric measurement of the star’s angular diameter to obtain its radius (K03; P16; K17). (iii) Spectroscopic parameters (e.g. effective temperature, metallicity etc.) are readily available. (iv) Several ground-based campaigns have been conducted in order to obtain asteroseismic data for this star (Bouchy & Carrier, 2002; Bedding et al., 2005; Bazot et al., 2007; de Meulenaer et al., 2010). The combination of the above set of observables thus has the potential to place tight constraints on the stellar modelling process and to help generate best-fit models (in this work, we refer to a set of models that reproduce a specific set of spectroscopic, seismic, and interferometric constraints as best-fit models) that can be used in understanding the internal structure of $\alpha$ Centauri A with unprecedented precision. The dynamical mass of $\alpha$ Centauri A is estimated to span the range [1.10, 1.13] M${}_{\odot}$. Stellar models constructed at solar metallicity within this mass range may display a convective core while on the main sequence, making core overshoot a crucial process to be included in stellar model grids. For this reason, efforts have been made throughout the years to unveil the nature and core properties of $\alpha$ Centauri A using the above set of observables (Miglio & Montalbán, 2005; Bazot et al., 2016). A radius measurement (with a precision of about 1%), when combined with spectroscopic and seismic constraints, has been shown to yield stellar masses with a precision of about 1% (Creevey et al., 2007). $\alpha$ Centauri A has been modelled by several teams, who adopted the interferometric radius of K03 (i.e., 1.224 $\pm$ 0.003 $\rm R_{\odot}$) as well as complementary spectroscopic and seismic data (Thoul et al., 2003; Miglio & Montalbán, 2005; Bazot et al., 2016). They were able to reproduce the dynamical mass derived by P02. However, differences in the parallax measurements available in the literature inevitably lead to differences in the interferometric radius measurements. This also yields different dynamical mass measurements for the star (see Table 1). P16 combined radial velocity data from HARPS (High Accuracy Radial velocity Planet Searcher) spanning a period of ten years with data obtained with the Coudé Echelle Spectrograph (CES), further complemented by visual observations (Pourbaix et al., 1999), to generate a revised parallax measurement. This revised parallax places the star at a slightly different distance compared to that measured by S99. This led to the revision of the dynamical mass of $\alpha$ Centauri A by P16. The interferometric radius was also revised by combining the new parallax with the angular diameter measurement from K03 (see Table 1). K16 also computed orbital parameters for $\alpha$ Centauri A by combining the same high precision radial velocity data set as P16 with their latest astrometric measurements. They found most of the orbital elements to be consistent with those found by P16. However, they found a smaller semi-major axis, $a$, emerging from the new astrometry (see table 1 in K16). 
When $a$ was combined with the high precision radial velocities, they obtained a parallax measurement similar to the one found by S99 but larger than that of P16 (see Table 1). Differences are also evident in the derived dynamical masses and interferometric radius measurements of P16 and K16. With regard to the nature of the core of $\alpha$ Centauri A, no definitive answer has been reached yet. Miglio & Montalbán (2005) attribute this to the quality of the seismic data available at the time (see Kjeldsen et al. 2004). Bazot et al. (2007) obtained a new set of seismic data using the HARPS spectrograph, and the nature of the core of $\alpha$ Centauri A was subsequently investigated in Bazot et al. (2016). They found that approximately 40% of their best-fit models, which reproduce the dynamical mass derived by P02, possess convective cores. However, the authors point out that this number depends sensitively on the nuclear reaction rates adopted in their models. We note that they used the interferometric radius derived by K03 in their optimisation procedure. de Meulenaer et al. (2010) have generated the state-of-the-art seismic data set for $\alpha$ Centauri A by combining the radial velocity time series obtained with three spectrographs in Chile and Australia (namely, CORALIE, UVES, and UCLES). Here, we adopt this data set and assess the occurrence of best-fit models with convective cores when trying to reproduce the dynamical masses derived by both P16 and K16. The paper is organised as follows. In Sect. 2, we describe our stellar models and the parameter ranges used in the construction of the model grids. In Sect. 3, we present the sets of observables used in the optimisation procedure, while the main results are discussed in Sect. 4. Section 5 contains our conclusions. 2 Stellar Model Grids We constructed two grids (A and B) of stellar models using MESA (Modules for Experiments in Stellar Astrophysics) version 9793 (Paxton et al., 2015). These grids differ only in the adopted nuclear reaction rates (see Table 2 for details). Bazot et al. (2016) found the occurrence of models of $\alpha$ Centauri A with convective cores to vary mainly due to the choice of nuclear reaction rates, in particular that of the ${}^{14}{\rm N}(p,\gamma)^{15}{\rm O}$ reaction. This reaction rate is crucial for the CNO (carbon-nitrogen-oxygen) cycle and its variation is expected to significantly affect the chances of a model developing a convective core. We therefore varied the nuclear reaction rates in order to test the robustness of the occurrence of best-fit models with convective cores when using different observational constraints (see Sect. 3). Grid A employs nuclear reaction rates from JINA REACLIB (Joint Institute for Nuclear Astrophysics Reaction Library) version 2.2 (Cyburt et al., 2010). It should be noted that grid A uses specific rates for ${}^{14}{\rm N}(p,\gamma)^{15}{\rm O}$ and ${}^{12}{\rm C}(\alpha,\gamma)^{16}{\rm O}$ described by Imbriani et al. (2005) and Kunz et al. (2002), respectively. Grid B employs nuclear reaction rates as obtained from tables provided by the NACRE (Nuclear Astrophysics Compilation of Reaction Rates) collaboration (Angulo et al., 1999). Furthermore, element diffusion is a relevant transport process in low-mass stars, i.e., below $\sim$1.2 $\rm M_{\odot}$ (e.g. Nsamba et al. 2018), and was therefore included in our model grids. Core overshoot becomes a vital process once a stellar model develops a convective core and was included in such models. 
The version of MESA used in this paper adopts the 2005 update of the OPAL equation of state (Rogers & Nayfonov, 2002). Opacities from OPAL tables (Iglesias & Rogers, 1996) were used at high temperatures, while tables from Ferguson et al. (2005) were adopted at lower temperatures. We used the surface chemical abundances of Grevesse et al. (1998) with a solar metal mass fraction value of 0.0169. The standard grey Eddington atmosphere was used to describe the surface boundary (it integrates the atmosphere structure from the photosphere down to an optical depth of $10^{-4}$). Convection was described using the mixing length theory (MLT; Böhm-Vitense, 1958), while element diffusion was implemented according to Thoul et al. (1994). Element diffusion includes gravitational settling and chemical diffusion. The helium-to-heavy metal enrichment relation was used to determine the helium mass fraction ($\rm Y$). The ratio $\Delta\rm Y/\Delta\rm Z$ = $\rm 2$ (Chiosi & Matteucci, 1982) was used, while $\rm Z_{0}$ = 0.0 and $\rm Y_{0}$ = 0.2484 were set based on big bang nucleosynthesis (Cyburt et al., 2003). Evolutionary tracks are varied in mass, $\rm M$, metal mass fraction, $\rm Z$, mixing length parameter, $\alpha_{\rm mlt}$, and core overshoot parameter, $\rm f$. We used the exponential diffusive overshoot recipe in MESA when describing core overshoot mixing. The diffusion coefficient (D${}_{\rm c}$) in the overshoot region is expressed as (Herwig, 2000): $$\rm D_{\rm c}=\rm A_{\rm 0}\exp\left(\frac{-2z}{\rm f\cdot H_{\rm p}}\right),$$ (1) where A${}_{\rm 0}$ is the diffusion coefficient in the convectively unstable region near the convective boundary determined using MLT, H${}_{\rm p}$ is the pressure scale height, and $\rm z$ is the distance from the edge of the convective zone (a minimal numerical sketch of this prescription is given below). Grid parameter ranges are: $\rm M\in$ [1.0, 1.2] M${}_{\odot}$ in steps of 0.01 M${}_{\odot}$, $\rm Z\in$ [0.023, 0.039] in steps of 0.001, $\alpha_{\rm mlt}\in$ [1.3, 2.5] in steps of 0.1, and $\rm f\in$ [0, 0.03] in steps of 0.005. We kept models starting from the ZAMS (zero-age main sequence; defined as the point along the evolutionary track where the nuclear luminosity is 99% of the total luminosity) to the end of the sub-giant evolution stage. Using GYRE (Townsend & Teitler, 2013), we generated adiabatic oscillation frequencies for spherical degrees $l$ = 0, 1, 2, and 3 for each model. 3 Observational Constraints and Optimisation Procedure We downloaded a few high S/N, individually reduced HARPS observations of $\alpha$ Centauri A, which were combined to generate a final spectrum for subsequent analysis. Spectroscopic parameters (i.e., effective temperature, $T_{\rm eff}$, and metallicity, $\rm[Fe/H]$) were derived based on the analysis of the equivalent widths of Fe I and Fe II lines measured with ARES (Sousa et al., 2007, 2015) and assuming LTE (Local Thermodynamic Equilibrium). We used the MOOG code (Sneden, 1973) and a set of plane-parallel ATLAS9 model atmospheres (Kurucz, 1993) in our analysis, as described in Sousa et al. (2011). For more details on the combined ARES+MOOG method, we refer the reader to Sousa (2014). We obtained $T_{\rm eff}=5832\pm 62$ $\rm K$ and $\rm[Fe/H]=0.23\pm 0.05$ dex. 
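For concreteness, a minimal sketch of the exponential overshoot prescription of Eq. (1) used in the grids of Sect. 2 is given below; the numerical values are illustrative placeholders, not the values used in grids A and B.

```python
import numpy as np

def overshoot_diffusion(z, A_0, f, H_p):
    """Eq. (1): diffusion coefficient a distance z beyond the convective boundary."""
    return A_0 * np.exp(-2.0 * z / (f * H_p))

# Illustrative placeholder values (cgs); only the overshoot parameters f match the grid range.
A_0 = 1.0e12            # MLT diffusion coefficient near the boundary (placeholder)
H_p = 5.0e9             # pressure scale height at the boundary (placeholder)
for f in (0.005, 0.015, 0.03):
    print(f, overshoot_diffusion(0.1 * H_p, A_0, f, H_p))
```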
Using the angular diameter measurement of K17 together with the parallax measurement of P16, we revised the interferometric radius of $\alpha$ Centauri A by means of the expression (Ligi et al., 2016): $$R\,(R_{\odot})=\frac{\theta_{\rm LD}\times d\,[{\rm pc}]}{9.305},$$ (2) where $d\,[{\rm pc}]$ is the distance to the star expressed in parsecs and $\theta_{\rm LD}$ is the limb-darkened angular diameter in milliarcseconds. We find the revised interferometric radius to be 1.230 $\pm$ 0.0056 $\rm R_{\odot}$. This is in agreement with the value obtained by P16 within 1$\sigma$. We consider two optimisation runs in this work (Run 1 and Run 2), depending on the set of observables adopted (see Table 3). The value of $T_{\rm eff}$ used in Run 1 was derived in this work, while the interferometric radius is from P16. For self-consistency, Run 2 uses the interferometric radius and $T_{\rm eff}$ from K16. The value of [Fe/H] used in both runs is the one derived here. The set of observables in Table 3 was complemented with seismic data (i.e., individual oscillation frequencies) from de Meulenaer et al. (2010). We treated modes exhibiting rotational splittings in the same way as de Meulenaer et al. (2010), i.e., by taking their average and summing the associated uncertainties in quadrature. This is based on the assumption that such splittings are symmetric. The combined-term, surface frequency correction method of Ball et al. (2014) was used to handle the offset between observed and model frequencies (Dziembowski et al., 1988). This method has been shown to yield the least internal systematics in stellar mass, radius, and age when compared to other methods (for details, see Nsamba et al. 2018). Finally, we used AIMS (Asteroseismic Inference on a Massive Scale; Lund & Reese 2018; http://bison.ph.bham.ac.uk/spaceinn/aims/) to generate a representative set of models reproducing the set of asteroseismic, spectroscopic, and interferometric constraints (as per above). The mean and standard deviation of the posterior probability distribution functions (PDFs) are taken as estimates of the modelled stellar parameters and their uncertainties, respectively. 4 Results Results obtained by combining both grids (A and B) and sets of observables (Run 1 and Run 2) are shown in Table 4 and Fig. 1. Only posterior PDFs showing significant differences are shown in Fig. 1. Results based on the set of observables in Run 2 are consistent within 1$\sigma$. The derived stellar mass is in agreement with the dynamical mass obtained by P02 and K16 (even if at the 2$\sigma$ level when considering grid A). These results are also consistent with those obtained by other modelling teams (Miglio & Montalbán, 2005; Bazot et al., 2012; Bazot et al., 2016). This is because these teams complemented seismic and spectroscopic constraints with the interferometric radius of K03, whose value is in close agreement (within 1$\sigma$) with that used in Run 2. Results based on the set of observables in Run 1 are also consistent within 1$\sigma$. The derived stellar mass is now in agreement with the revised dynamical mass of P16. We found similar results when replacing the interferometric radius of P16 with that derived in this work (cf. Sect. 3). This was expected as both values agree within 1$\sigma$. When adopting the set of observables in Run 1, grids A and B return similar yields of 70% and 77% of best-fit models with convective cores, respectively. A contrasting picture emerges when considering the set of observables in Run 2: 46% (grid A) versus 77% (grid B). 
This is mainly due to the different nuclear reaction rates used in the two grids. The reaction rate for ${}^{14}{\rm N}(p,\gamma)^{15}{\rm O}$ from Imbriani et al. (2005) used in grid A is lower than that from NACRE (Angulo et al., 1999) used in grid B. This reduces the chances of having convection as a means of energy transport in the core of the stellar models in grid A. In addition, Run 1 yields more models in the high-mass regime (see Fig. 1), which increases the chances of the CNO cycle being the main energy production chain, resulting in more models with convective cores. We also find models with convective cores to have, on average, a higher metallicity than those with radiative cores. This is consistent with the findings of Bazot et al. (2016). In the leftmost panel of Fig. 1, a shift (although still retaining a 1$\sigma$ agreement) in the posterior PDFs for the stellar mass can be seen (dashed lines; Run 2). This is again due to the change in the nuclear reaction rates. Results based on grid B yield a large fraction of models with convective cores and therefore higher masses compared to results based on grid A, for which a relatively large fraction of models with radiative cores is obtained, thus resulting in lower masses on average. The lower masses obtained in the latter case lead to a slightly higher age (middle panel of Fig. 1). The mass range of best-fit models obtained in Run 1 is shifted toward higher masses than that of Run 2. Also, changing the nuclear reaction rates has a weaker effect in this higher-mass range, since most models have developed convective cores. This explains the consistency of the results obtained with both grids when using Run 1. A noticeable difference in the mixing length parameter, $\alpha_{\rm mlt}$, can be seen between the results based on Run 1 and Run 2 (rightmost panel of Fig. 1). The most probable cause for this is the different interferometric radius constraint used. The percentage of best-fit models with convective cores that reproduce the revised dynamical mass derived by P16 (Run 1) is similar for both grids. However, when reproducing the dynamical mass derived by P02 and K16 (Run 2), the percentage of best-fit models with convective cores varies depending on which grid is used, indicating a strong sensitivity to the nuclear reaction rates adopted (cf. Bazot et al., 2016). We further compared the observed frequency ratios, $r_{10}$ (see Roxburgh et al. 2003), to those computed for a handful of representative best-fit models (for Run 1, grid A) having either convective or radiative cores (see Fig. 2). Frequency ratios are less sensitive to the outer layers of the star and are therefore reliable indicators of the conditions in the deep stellar interior. Our findings seem to indicate that models having a convective core lead to better agreement with the observed $r_{10}$, contrary to what was found by de Meulenaer et al. (2010). This is not surprising, as we complemented our seismic data with the interferometric radius of P16, which yields models that reproduce well the revised dynamical mass of P16. de Meulenaer et al. (2010), on the other hand, used models from Miglio & Montalbán (2005), which reproduce the dynamical mass of P02. 5 Conclusions In this study, we have successfully reproduced the revised dynamical mass of $\alpha$ Centauri A derived by P16 using a forward stellar modelling approach. 
Our findings show that best-fit models favour the presence of a convective core in $\alpha$ Centauri A, regardless of the nuclear reaction rates adopted in the modelling. We therefore conclude that, if the revised dynamical mass of P16 is accurate, then $\alpha$ Centauri A may be used to calibrate stellar model parameters in the presence of a convective core. Furthermore, the percentage of best-fit models having convective cores that reproduce the smaller dynamical mass published by P02 and K16 varies depending on the choice of nuclear reaction rates. Our findings further stress the importance of a precise interferometric radius (with a precision better than 1%) in complementing seismic data with the aim of tightly constraining stellar models when adopting a forward modelling approach (cf. Miglio & Montalbán, 2005; Creevey et al., 2007). Seismic diagnostics of the nature of stellar cores based on frequency combinations demand a relative uncertainty on the observed individual frequencies of about $10^{-4}$ (e.g., Cunha & Metcalfe, 2007; Brandão et al., 2014), commensurate with that obtained from multi-year, space-based photometry (Silva Aguirre et al., 2013; Lund et al., 2017). Our results reveal that, for $\alpha$ Centauri A, a median relative uncertainty on the observed individual frequencies of $2.5\times 10^{-4}$ is sufficient to allow the use of frequency ratios in drawing a distinction – even if merely qualitative – between best-fit models with different core properties. Acknowledgements This work was supported by Fundação para a Ciência e a Tecnologia (FCT, Portugal) through national funds (UID/FIS/04434/2013), by FEDER through COMPETE2020 (POCI-01-0145-FEDER-007672), (POCI-01-0145-FEDER-030389) and FCT/CNRS project PICS. BN is supported by FCT through Grant PD/BD/113744/2015 from PhD::SPACE, an FCT PhD programme. MSC is supported by FCT through an Investigador contract with reference IF/00894/2012 and POPH/FSE (EC) by FEDER funding through the program COMPETE. SGS acknowledges support from FCT through Investigador FCT contract No. IF/00028/2014/CP1215/CT0002 and from FEDER through COMPETE2020 (grants UID/FIS/04434/2013 & PTDC/FIS-AST/7073/2014 & POCI-01-0145 FEDER-016880). Based on data obtained from the ESO Science Archive Facility under request number SAF Alpha Cen A 86436. The authors also acknowledge the anonymous referee for the helpful and constructive remarks. References Angulo et al. (1999) Angulo C., et al., 1999, Nuclear Physics A, 656, 3 Asplund et al. (2009) Asplund M., et al., 2009, Annu Rev Astron Astrophys, 47, 481 Ball et al. (2014) Ball W. H., et al., 2014, A & A, 568, A123 Bazot et al. (2007) Bazot M., Bouchy F., et al., 2007, A & A, 470, 295 Bazot et al. (2012) Bazot M., Bourguignon S., et al., 2012, MNRAS, 427, 1847 Bazot et al. (2016) Bazot M., et al., 2016, MNRAS, 460, 1254 Bedding et al. (2005) Bedding T. R., et al., 2005, A & A, 432, L43 Böhm-Vitense (1958) Böhm-Vitense E., 1958, Zeit. Astrophys., 46, 108 Bonaca et al. (2012) Bonaca A., et al., 2012, ApJl, 755, L12 Bouchy & Carrier (2002) Bouchy F., Carrier F., 2002, A & A, 390, 205 Brandão et al. (2014) Brandão I. M., et al., 2014, MNRAS, 438, 1751 Chiosi & Matteucci (1982) Chiosi C., Matteucci F. M., 1982, A & A, 105, 140 Christensen-Dalsgaard (2009) Christensen-Dalsgaard J., 2009, in The Ages of Stars. pp 431–442 Creevey et al. (2007) Creevey O. L., et al., 2007, ApJ, 659, 616 Cunha & Metcalfe (2007) Cunha M. S., Metcalfe T. S., 2007, ApJ, 666, 413 Cyburt et al. (2003) Cyburt R. 
H., et al., 2003, Physics Letters B, 567, 227 Cyburt et al. (2010) Cyburt R. H., et al., 2010, ApJS, 189, 240 Dziembowski et al. (1988) Dziembowski W. A., Paterno L., et al., 1988, A & A, 200, 213 Ferguson et al. (2005) Ferguson J. W., Alexander D. R., et al., 2005, ApJ, 623, 585 Grevesse et al. (1998) Grevesse N., et al., 1998, Space Science Reviews, 85, 161 Herwig (2000) Herwig F., 2000, A & A, 360, 952 Iglesias & Rogers (1996) Iglesias C. A., Rogers F. J., 1996, ApJ, 464, 943 Imbriani et al. (2005) Imbriani G., Costantini H., et al., 2005, EPJ A, 25, 455 Kervella et al. (2003) Kervella P., et al., 2003, A & A, 404, 1087 Kervella et al. (2016) Kervella P., Mignard F., et al., 2016, A & A, 594, A107 Kervella et al. (2017) Kervella P., Bigot L., et al., 2017, A & A, 597, A137 Kjeldsen et al. (2004) Kjeldsen H., et al., 2004, ESA Special Publication, 559, 101 Kunz et al. (2002) Kunz R., Fey M., et al., 2002, ApJ, 567, 643 Kurucz (1993) Kurucz R. L., 1993, SYNTHE spectrum synthesis programs and line data, Cambridge Ligi et al. (2016) Ligi R., Creevey O., et al., 2016, A & A, 586, A94 Lund & Reese (2018) Lund M. N., Reese D. R., 2018, ASSSP, 49, 149 Lund et al. (2017) Lund M. N., et al., 2017, ApJ, 835, 172 Miglio & Montalbán (2005) Miglio A., Montalbán J., 2005, A & A, 441, 615 Nsamba et al. (2018) Nsamba B., et al., 2018, MNRAS, arXiv:1804.04935 Paxton et al. (2015) Paxton B., Marchant P., et al., 2015, ApJS, 220, 15 Pourbaix & Boffin (2016) Pourbaix D., Boffin H. M. J., 2016, A & A, 586, A90 Pourbaix et al. (1999) Pourbaix D., et al., 1999, A & A, 344, 172 Pourbaix et al. (2002) Pourbaix D., Nidever D., et al., 2002, A & A, 386, 280 Rogers & Nayfonov (2002) Rogers F. J., Nayfonov A., 2002, ApJ, 576, 1064 Roxburgh et al. (2003) Roxburgh I. W., et al., 2003, A & A, 411, 215 Silva Aguirre et al. (2013) Silva Aguirre V., et al., 2013, ApJ, 769, 141 Silva Aguirre et al. (2015) Silva Aguirre V., et al., 2015, MNRAS, 452, 2127 Silva Aguirre et al. (2017) Silva Aguirre V., et al., 2017, ApJ, 835, 173 Sneden (1973) Sneden C. A., 1973, PhD thesis, THE UNIVERSITY OF TEXAS Söderhjelm (1999) Söderhjelm S., 1999, A & A, 341, 121 Sousa (2014) Sousa S. G., 2014, ARES + MOOG: A Practical Overview of an Equivalent Width (EW) Method to Derive Stellar Parameters. pp 297–310 Sousa et al. (2007) Sousa S. G., Santos N. C., et al., 2007, A & A, 469, 783 Sousa et al. (2011) Sousa S. G., Santos N. C., et al., 2011, A & A, 533, A141 Sousa et al. (2015) Sousa S. G., Santos N. C., et al., 2015, A & A, 577, A67 Thoul et al. (1994) Thoul A. A., et al., 1994, ApJ, 421, 828 Thoul et al. (2003) Thoul A., Scuflaire R., et al., 2003, A & A, 402, 293 Townsend & Teitler (2013) Townsend R. H. D., Teitler S. A., 2013, MNRAS, 435, 3406 Vorontsov et al. (2013) Vorontsov S. V., Baturin V. A., et al., 2013, MNRAS, 430, 1636 de Meulenaer et al. (2010) de Meulenaer P., et al., 2010, A & A, 523, A54
Quantum folded string in $S^{5}$ and the Konishi multiplet at strong coupling Matteo Beccaria Dipartimento di Fisica, Universita’ del Salento & INFN, Via Arnesano, 73100 Lecce, Italy E-mail: matteo.beccaria$∙$le.infn.it    Guido Macorini Niels Bohr Institute (NBI), University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark E-mail: guido.macorini$∙$le.infn.it Abstract: The Konishi superconformal multiplet is an important theoretical laboratory where one can test AdS/CFT methods to compute strong coupling corrections to the spectrum of superstrings in $AdS_{5}\times S^{5}$ . In particular, one can exploit integrability for finite charge states/operators. The multiplet ground state is a singlet operator with two simple descendants in the rank-1 sectors $\mathfrak{sl}(2)$ and $\mathfrak{su}(2)$ of $\mathcal{N}=4$ super Yang-Mills theory. Recently, the next-to-leading quantum correction to the $\mathfrak{sl}(2)$ state has been computed. Here, we use the algebraic curve approach to determine the correction to the other state recovering universality of the correction inside the multiplet. 1 Introduction and result AdS/CFT correspondence [1, *Witten:1998qj, *Gubser:1998bc] relates the spectrum of conformal dimensions of the ${\cal N}=4$ SYM theory to the spectrum of $AdS_{5}\times S^{5}$ superstring. In the planar limit integrability emerges [4, *Faddeev:1994zg, *Minahan:2002ve, *Beisert:2003tq, *Bena:2003wd, *Kazakov:2004qf] and anomalous dimensions can be computed as eigenvalues of an integrable super spin chain by solving nested non-perturbative Bethe Ansatz equations [10, *Beisert:2005fw, *Beisert:2006ez]. These equations are asymptotic, i.e. valid for states with large enough charges. Finite charge states are more difficult and their anomalous dimensions, including the so-called wrapping corrections, are captured by the Y-system [13, *Bombardelli:2009ns, *Gromov:2009bc, *Arutyunov:2009ur, 17] successfully checked at strong coupling in the quasi-classical limit [18, *Gromov:2010vb]. At weak-coupling the leading order predictions from the Y-system [13] agree with standard field theoretical calculations [20, *Velizhanin:2008jd]. At next-to-leading order they are also in agreement [22, *Balog:2010xa, *Balog:2010vf] with the Lüscher corrections [25, *Bajnok:2009vm]. Beyond perturbation theory the Y-system can be treated numerically. The anomalous dimension of the states in the Konishi multiplet have been an important theoretical laboratory to test the method. In [13] the Y-system was combined with the vacuum TBA equations to produce an infinite set of integral equations for the $\mathfrak{sl}(2)$ sector of the spectrum. They were then solved numerically for the simplest state in the Konishi multiplet [27]. The numerical approach starts in the weak-coupling regime and pushes the ‘t Hooft coupling $\lambda$ to large values in order to extrapolate to the strong-coupling limit [27, 28]. The prediction obtained in [27] for the Konishi anomalous dimension $\gamma$ is $$\gamma+4=2.0004\lambda^{1/4}+1.99/\lambda^{1/4}+\cdots\,\,.$$ (1) The leading coefficient agrees with the prediction of [29] giving $2$. This was also confirmed in a recent paper [30] in the light-cone approach. The problem with an analytical proof of a relation like (1) is only technical, but very hard. In particular, it is expected that the analytical structure of the Y-system at finite coupling [17] becomes very complicated at strong coupling. A very interesting approach, pioneered by A. 
Tseytlin and collaborators, is based on the semiclassical quantization of spinning string solutions with large charges and recently systematically applied to the problem of the Konishi multiplet in [31, 32]. To explain the basic idea we can consider the simple case of the spinning folded string with two charges, the Lorentz spin $S$ and R-charge $J$. Let us introduce the ratios $\mathcal{S}=S/\sqrt{\lambda},\ \mathcal{J}=J/\sqrt{\lambda}$. If we expand at large $\lambda$ and fixed $\mathcal{S},\mathcal{J}$, the expansion of the energy is of the form $$E\equiv\gamma+S+J=\sqrt{\lambda}\,E_{0}(\mathcal{S},\mathcal{J})+E_{1}(% \mathcal{S},\mathcal{J})+\frac{1}{\sqrt{\lambda}}\,E_{2}(\mathcal{S},\mathcal{% J})+\ldots\ .$$ (2) If we now replace the ratios $\mathcal{S},\mathcal{J}$ by their definitions, fix $S$ and $J$, and re-expand at large $\lambda$ we find that the above expansion turns into a power series of the type [31] $$E=\lambda^{1/4}a_{0}+\frac{1}{\lambda^{1/4}}a_{2}+\cdots\,.$$ (3) Here, the classical energy $E_{0}$ contributes to the first coefficient $a_{0}$ while both $E_{1}$, the one-loop $\sigma$-model correction, and $E_{0}$ contribute to the coefficient $a_{2}$. Eq. (3) is indeed the expected near-flat space large $\lambda$ expansion for the energy of a finite charge state. Although the above result is obtained from a semiclassical calculation where $S,J$ are always large, it is tempting to identify $a_{0}$, $a_{2}$ with the coefficients of the expansion of the finite charge state. The advantage of this approach is that all calculations can be done by semiclassical methods in the string theory or, exploiting the integrability structures, by working with the simpler quasi-classical Y-system whose equivalence with the semiclassical computation has been established in [18]. The short string expansion of the energy for the $(S,J)$ folded string reads (see for instance [31, 32]) $$E^{\mathfrak{sl}(2)}(S,J)=\sqrt{2\,S}\,\lambda^{1/4}\,\left[1+\frac{1}{\sqrt{% \lambda}}\left(\underbrace{\frac{3\,S}{8}+\frac{J^{2}}{4\,S}}_{\rm classical}+% \underbrace{a_{01}^{\mathfrak{sl}(2)}}_{\rm quantum}\right)\right]+\dots\,.$$ (4) In this expression, the terms labeled classical come from the expansion of the classical energy. The one-loop corrections are fully encoded in the quantum term $a_{01}^{\mathfrak{sl}(2)}$. The algebraic curve quantization procedure for an arbitrary $\mathcal{S}$ and $\mathcal{J}$ [33, 34] leads to the result [35] $$a_{01}^{\mathfrak{sl}(2)}=-\frac{1}{4}.$$ (5) The Konishi state is associated with $S=J=2$ and we obtain an analytical prediction for the coefficients in Eq. (3) in full agreement with the numerical results of [27] (see also [32, 36]), $$a_{0}=a_{2}=2.$$ (6) It is very interesting to study the manifestation of superconformal invariance at the level of strong coupling corrections. The multiplet structure can be regarded as a consistency check of any method attempting to deal with such regime. This problem has been addressed in [31, 32] from the perspective of semiclassical string quantization. Here, we would like to test the algebraic curve approach from this point of view. 
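As a quick check of Eqs. (4)-(6), substituting the Konishi values $S=J=2$ together with $a_{01}^{\mathfrak{sl}(2)}=-\frac{1}{4}$ into the short string expansion (4), truncated at the order displayed, indeed returns $a_{0}=a_{2}=2$. A minimal sympy sketch of this substitution:

import sympy as sp

lam, S, J = sp.symbols('lambda S J', positive=True)
a01 = sp.Rational(-1, 4)      # one-loop coefficient in the sl(2) sector, Eq. (5)

# Short string expansion of the folded-string energy, Eq. (4), truncated at
# the order displayed in the text.
E = sp.sqrt(2 * S) * lam**sp.Rational(1, 4) * (
        1 + (sp.Rational(3, 8) * S + J**2 / (4 * S) + a01) / sp.sqrt(lam))

konishi = sp.expand(E.subs({S: 2, J: 2}))
print(konishi)   # 2*lambda**(1/4) + 2/lambda**(1/4), i.e. a0 = a2 = 2 as in Eq. (6)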
To this aim, we remind that quantum string states as well as dual gauge theory operators are highest weight states with Dynkin labels $$[p_{1},q,p_{2}]_{\left(s_{L},s_{R}\right)},$$ (7) where, in terms of the classical charges $S_{1,2},J_{1,2,3}$ , the $\mathfrak{so}(4)=\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ labels $(s_{L},s_{R})$ are given by $s_{L,R}=\frac{1}{2}(S_{1}\pm S_{2})$ and the Dynkin labels $[p_{1},q,p_{2}]$ of $\mathfrak{su}(4)$ are given by $p_{1,2}=J_{2}\mp J_{3}$, $q=J_{1}-J_{2}$. With this notation, the singlet operator $\mbox{Tr}(\overline{\Phi}^{i}\,\Phi_{i})$ with bare dimension 2 is the top state $[0,0,0]_{(0,0)}$ of the Konishi multiplet. It has two superconformal descendants in the $\mathfrak{sl}(2)$ and $\mathfrak{su}(2)$ sectors given by the following states with bare dimension 4 $$\begin{array}[]{ccc}{\rm sector}&{\rm state}&[p_{1},q,p_{2}]_{s_{L},s_{R}}\\ \hline\mathfrak{sl}(2)&\mbox{Tr}(\Phi_{1}\,D^{2}\,\Phi_{1})&[0,2,0]_{(1,1)}\\ \mathfrak{su}(2)&\mbox{Tr}([\Phi_{1},\Phi_{2}]^{2})&[2,0,2]_{(0,0)}\end{array}$$ (8) The state in the $\mathfrak{sl}(2)$ sector has been worked out in details in [35]. As is well known it is associated with a classical string solution represented by a string rotating in just one plane in $S^{5}$ with a spin in $AdS_{5}$ [37]. We shall denote is as the $(S,J)$ folded string. The second state has been discussed in details in [38, 39] and it is associated with a classical string rotating in two planes in $S^{5}$, the $(J_{1},J_{2})$ folded string [40]. The two (classical) solutions are related by an analytic continuation connecting the respective string profiles and conserved charges. From the point of view of the Bethe Ansatz description, at least in the gauge theory, they are quite different. The folded $(S,J)$ string is described by a 2-cut solution with symmetric cuts on the real axis. Instead, the folded $(J_{1},J_{2})$ string is associated (at least at weak coupling) with a 2-cut solution with two cuts symmetric around the imaginary axis and with a non-trivial geometry. The special role of these particular very symmetric 2-cut solutions has been investigated in details in [41]. It is very interesting to pursue the duality in the context of the algebraic curve approach (or the equivalent quasi-classical Y-system). In particular, one would like to check whether the multiplet structure is obeyed by the first non trivial strong coupling correction to the energy. A first analysis in this direction has been presented in [31, 32]. The one-loop corrected energy for the $(J_{1},J_{2})$ folded string takes a form similar to Eq. (4) $$\displaystyle E^{\mathfrak{su}(2)}(J_{1},J_{2})$$ $$\displaystyle=$$ $$\displaystyle\sqrt{2\,J_{2}}\,\lambda^{1/4}\,\left[1+\frac{1}{\sqrt{\lambda}}% \left(\underbrace{\frac{J_{2}}{8}+\frac{J_{1}^{2}}{4\,J_{2}}}_{\rm classical}+% \underbrace{a_{01}^{\mathfrak{su}(2)}}_{\rm quantum}\right)\right]+\dots\,.$$ (9) The authors of [31, 32] conjectured that $a_{01}^{\mathfrak{su}(2)}$ should be the same with an opposite sign , i.e. $+\frac{1}{4}$, reflecting the opposite sign of the curvature of $S^{3}$ as compared to $AdS_{3}$. This proposal is consistent with similar behaviour of the correction for circular spinning strings [31, 32]. 
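Before specialising to the Konishi representatives below, note that the label assignments in Eq. (8) follow mechanically from the charge dictionary just given; a short bookkeeping sketch (pure arithmetic, no dynamical input):

def dynkin_labels(S1, S2, J1, J2, J3):
    """[p1, q, p2]_(sL, sR) from the classical charges, as defined above Eq. (8)."""
    sL, sR = (S1 + S2) / 2, (S1 - S2) / 2
    return [J2 - J3, J1 - J2, J2 + J3], (sL, sR)

# sl(2) descendant Tr(Phi1 D^2 Phi1): spin S1 = 2 in AdS5, J1 = 2 on S^5
print(dynkin_labels(S1=2, S2=0, J1=2, J2=0, J3=0))   # ([0, 2, 0], (1.0, 1.0))
# su(2) descendant Tr([Phi1, Phi2]^2): J1 = J2 = 2, no AdS spin
print(dynkin_labels(S1=0, S2=0, J1=2, J2=2, J3=0))   # ([2, 0, 2], (0.0, 0.0))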
For the Konishi representative with $J_{1}=J_{2}=2$, the assignment $a_{01}^{\mathfrak{su}(2)}=\frac{1}{4}$ leads to the same strong coupling correction as for the $\mathfrak{sl}(2)$ Konishi descendant $$E^{S=2,\,J=2}=E^{J_{1}=2,\,J_{2}=2}=2\,\lambda^{1/4}+\frac{2}{\lambda^{1/4}}+\cdots.$$ (10) Beyond the Konishi state, this choice is also consistent with the superconformal degeneracy of the states $(S=2,J)$ and $(J_{1}=J,J_{2}=2)$ 111 It follows for instance by duality of the Bethe equations and adding roots at infinity to implement superconformal transformations. because it predicts the same correction 222The case $S=2,J=3$ has been confirmed by an independent TBA computation in [35]. $$E^{S=2,\,J}=E^{J_{1}=J,\,J_{2}=2}=2\,\lambda^{1/4}+\frac{J^{2}+4}{4}\,\frac{1}% {\lambda^{1/4}}+\cdots.$$ (11) Finally, as a further support of the conjecture $a_{01}^{\mathfrak{su}(2)}=\frac{1}{4}$, we recall that an argument in [32] 333We thank A. Tseytlin and R. Roiban for pointing out this issue. suggests that the independence of $a_{01}$ on the charge ratio is not accidental and has instead a deep origin being related to the continuity of observables with respect to the addition of a small charge to the principal one ($S$ or $J_{2}$ for the two folded strings). In this paper we perform an algebraic curve calculation of the correction and provide very convincing numerical evidence that the result $a_{01}^{\mathfrak{su}(2)}=\frac{1}{4}$ proposed in [31, 32] is indeed correct. 2 Algebraic curve method for the $AdS_{5}\times S^{5}$  superstring The general construction of the algebraic curve for the $AdS_{5}\times S^{5}$ superstring is discussed for instance in [42, 34]. Here, we summarize in a self-contained way the main results for the reader’s convenience. 2.1 Classical algebraic curve The monodromy matrix of the Lax connection for the integrable dynamics of the $AdS_{5}\times S^{5}$ superstring has eigenvalues $$\{e^{i\,\widehat{p}_{1}},e^{i\,\widehat{p}_{2}},e^{i\,\widehat{p}_{3}},e^{i\,% \widehat{p}_{4}}|e^{i\,\widetilde{p}_{1}},e^{i\,\widetilde{p}_{2}},e^{i\,% \widetilde{p}_{3}},e^{i\,\widetilde{p}_{4}}\}$$ (12) The eigenvalues are roots of the characteristic polynomial and define an 8-sheeted Riemann surface. The classical algebraic curve has macroscopic cuts connecting various pairs of sheets. Around each cut, we have $$p^{+}_{i}-p^{-}_{j}=2\,\pi\,n_{ij},\qquad x\in\mathcal{C}^{ij}_{n},$$ (13) where $n$ is an integer associated with the cut. The possible combinations of sheets (a.k.a. 
polarizations) that are relevant for $AdS_{5}\times S^{5}$ are $$i=\widetilde{1},\widetilde{2},\widehat{1},\widehat{2},\qquad j=\widetilde{3},% \widetilde{4},\widehat{3},\widehat{4}.$$ (14) The properties of the monodromy matrix implies (for folded configurations) the inversion properties $$\displaystyle\widetilde{p}_{1,2}(x)$$ $$\displaystyle=$$ $$\displaystyle-2\,\pi\,m-\widetilde{p}_{2,1}(1/x),\qquad m\in\mathbb{Z},$$ $$\displaystyle\widetilde{p}_{3,4}(x)$$ $$\displaystyle=$$ $$\displaystyle+2\,\pi\,m-\widetilde{p}_{4,3}(1/x),$$ (15) $$\displaystyle\widehat{p}_{1,2,3,4}(x)$$ $$\displaystyle=$$ $$\displaystyle-\widehat{p}_{2,1,4,3}(1/x).$$ The poles of the connection plus Virasoro constraints implies the pole structure around the special points $x=\pm 1$ 444At weak coupling, the two points collapse and we end with the usual pole at $x=0$ well known in the study of integrable spin chains., $$\{\widehat{p}_{1},\widehat{p}_{2},\widehat{p}_{3},\widehat{p}_{4}|\widetilde{p% }_{1},\widetilde{p}_{2},\widetilde{p}_{3},\widetilde{p}_{4}\}\sim\frac{\{% \alpha_{\pm},\alpha_{\pm},\beta_{\pm},\beta_{\pm}|\alpha_{\pm},\alpha_{\pm},% \beta_{\pm},\beta_{\pm}\}}{x\pm 1}.$$ (16) Also, the asymptotic value at $x\to\infty$ is related to the conserved charges as in ($\mathcal{Q}=\frac{Q}{\sqrt{\lambda}}$) $$\left(\begin{array}[]{c}\widehat{p}_{1}\\ \widehat{p}_{2}\\ \widehat{p}_{3}\\ \widehat{p}_{4}\\ \hline\widetilde{p}_{1}\\ \widetilde{p}_{2}\\ \widetilde{p}_{2}\\ \widetilde{p}_{4}\end{array}\right)=\frac{2\pi}{x}\left(\begin{array}[]{c}+% \mathcal{E}-\mathcal{S}_{1}+\mathcal{S}_{2}\\ +\mathcal{E}+\mathcal{S}_{1}-\mathcal{S}_{2}\\ -\mathcal{E}-\mathcal{S}_{1}-\mathcal{S}_{2}\\ -\mathcal{E}+\mathcal{S}_{1}+\mathcal{S}_{2}\\ \hline+\mathcal{J}_{1}+\mathcal{J}_{2}-\mathcal{J}_{3}\\ +\mathcal{J}_{1}-\mathcal{J}_{2}+\mathcal{J}_{3}\\ -\mathcal{J}_{1}+\mathcal{J}_{2}+\mathcal{J}_{3}\\ -\mathcal{J}_{1}-\mathcal{J}_{2}-\mathcal{J}_{3}\end{array}\right)+\mathcal{O}% (1/x^{2}),$$ (17) 3 Fluctuations frequencies from the algebraic curve The macroscopic cuts can be thought as the condensation of a large number of poles as it happens in semiclassical quantum mechanics for a large excitation number. We shall be interested in the effect of the addition of a single pole and in the shift $p\to p+\delta p$ of the quasi-momenta. This insertion will compute the quantum fluctuations around the classical solution. 
From the definition of the action-angle variables for the integrable string, we deduce that residue of $\delta p$ around such a pole has to be $$\delta p\sim\pm\frac{\alpha(x_{p})}{x-x_{p}},\qquad\alpha(x)=\frac{4\pi}{\sqrt% {\lambda}}\frac{x^{2}}{x^{2}-1}.$$ (18) The position of the poles can be found by solving (for generic $n$) the equation $$p_{i}(x_{n}^{ij})-p_{j}(x^{ij}_{n})=2\pi\ n,\qquad|x_{n}^{ij}|>1,$$ (19) for all polarizations $(i,j)$ with $i<j$ and the pairs $$\displaystyle S^{5}$$ $$\displaystyle:$$ $$\displaystyle(i,j)=(\widetilde{1},\widetilde{3}),(\widetilde{1},\widetilde{4})% ,(\widetilde{2},\widetilde{3}),(\widetilde{2},\widetilde{4}),$$ (20) $$\displaystyle AdS_{5}$$ $$\displaystyle:$$ $$\displaystyle(i,j)=(\widehat{1},\widehat{3}),(\widehat{1},\widehat{4}),(% \widehat{2},\widehat{3}),(\widehat{2},\widehat{4}),$$ (21) Fermions $$\displaystyle:$$ $$\displaystyle(i,j)=(\widetilde{1},\widehat{3}),(\widetilde{1},\widehat{4}),(% \widetilde{2},\widehat{3}),(\widetilde{2},\widehat{4}),$$ $$\displaystyle\phantom{(i,j)=}\ (\widehat{1},\widetilde{3}),(\widehat{1},% \widetilde{4}),(\widehat{2},\widetilde{3}),(\widehat{2},\widetilde{4}).$$ The correction to the quasi-momenta will be $\delta p_{i}$ with the pole structure (18), regularity across the macroscopic cuts, and asymptotic behaviour ($N_{ij}=\sum_{n}N_{n}^{ij}$ is the number of $(i,j)$ excitations) $$\delta\left(\begin{array}[]{c}\widehat{p}_{1}\\ \widehat{p}_{2}\\ \widehat{p}_{3}\\ \widehat{p}_{4}\\ \hline\widetilde{p}_{1}\\ \widetilde{p}_{2}\\ \widetilde{p}_{3}\\ \widetilde{p}_{4}\end{array}\right)=\frac{4\pi}{x\,\sqrt{\lambda}}\left(\begin% {array}[]{c}+\frac{1}{2}\delta\Delta+N_{\widehat{1}\,\widehat{4}}+N_{\widehat{% 1}\,\widehat{3}}+N_{\widehat{1}\,\widetilde{3}}+N_{\widehat{1}\,\widetilde{4}}% \\ +\frac{1}{2}\delta\Delta+N_{\widehat{2}\,\widehat{4}}+N_{\widehat{2}\,\widehat% {3}}+N_{\widehat{2}\,\widetilde{3}}+N_{\widehat{2}\,\widetilde{4}}\\ -\frac{1}{2}\delta\Delta-N_{\widehat{2}\,\widehat{3}}-N_{\widehat{1}\,\widehat% {3}}-N_{\widetilde{1}\,\widehat{3}}-N_{\widetilde{2}\,\widehat{3}}\\ -\frac{1}{2}\delta\Delta-N_{\widehat{1}\,\widehat{4}}-N_{\widehat{2}\,\widehat% {3}}-N_{\widetilde{2}\,\widehat{4}}-N_{\widetilde{1}\,\widehat{4}}\\ \hline\ \ \ \ \ \ \ \ \ \ \ \ -N_{\widetilde{1}\,\widetilde{4}}-N_{\widetilde{% 1}\,\widetilde{3}}-N_{\widetilde{1}\,\widehat{3}}-N_{\widetilde{1}\,\widehat{4% }}\\ \ \ \ \ \ \ \ \ \ \ \ \ -N_{\widetilde{2}\,\widetilde{3}}-N_{\widetilde{2}\,% \widetilde{4}}-N_{\widetilde{2}\,\widehat{4}}-N_{\widetilde{2}\,\widehat{3}}\\ \ \ \ \ \ \ \ \ \ \ \ \ +N_{\widetilde{2}\,\widetilde{3}}+N_{\widetilde{1}\,% \widetilde{3}}+N_{\widehat{1}\,\widetilde{3}}+N_{\widehat{2}\,\widetilde{3}}\\ \ \ \ \ \ \ \ \ \ \ \ \ +N_{\widetilde{1}\,\widetilde{4}}+N_{\widetilde{2}\,% \widetilde{4}}+N_{\widehat{2}\,\widetilde{4}}+N_{\widehat{1}\,\widetilde{4}}% \end{array}\right)+\mathcal{O}(1/x^{2}),$$ (23) The precise values of the residues can be read off the definition of the action-angle variables and are $$\mathop{\mbox{res}}_{x=x^{ij}_{n}}\widehat{p}_{k}=(\delta_{i\,\widehat{k}}-% \delta_{j\,\widehat{k}})\,\alpha(x^{ij}_{n})\,N_{n}^{ij},\qquad\mathop{\mbox{% res}}_{x=x^{ij}_{n}}\widetilde{p}_{k}=(\delta_{i\,\widetilde{k}}-\delta_{j\,% \widetilde{k}})\,\alpha(x^{ij}_{n})\,N_{n}^{ij},$$ (24) where $k=1,2,3,4$, and $i<j$ taking values $\widehat{1},\widehat{2},\widehat{3},\widehat{4},\widetilde{1},\widetilde{2},% \widetilde{3},\widetilde{4}$. 
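In practice, Eq. (19) is solved numerically for each polarization and mode number. A minimal sketch of such a solver follows; the quasi-momenta fed to it are a BMN-like toy profile with $p_{j}=-p_{i}$, used only to exercise the routine, and are not the folded-string quasi-momenta constructed below:

import numpy as np
from scipy.optimize import brentq

def fluctuation_point(p_i, p_j, n, bracket):
    """Solve p_i(x) - p_j(x) = 2*pi*n on the real axis, Eq. (19).

    p_i, p_j : callables returning the two quasi-momenta
    bracket  : (x_lo, x_hi), with |x| > 1, bracketing the root
    """
    return brentq(lambda x: p_i(x) - p_j(x) - 2.0 * np.pi * n, *bracket)

def alpha(x, lam):
    """Residue normalisation of Eq. (18)."""
    return 4.0 * np.pi / np.sqrt(lam) * x**2 / (x**2 - 1.0)

# Toy example: rank-1, single-charge (BMN-like) quasi-momenta with p_j = -p_i.
lam, J = 100.0, 2.0
p = lambda x: 2.0 * np.pi * (J / np.sqrt(lam)) * x / (x**2 - 1.0)
x1 = fluctuation_point(p, lambda x: -p(x), n=1, bracket=(1.0 + 1e-6, 50.0))
print(x1, alpha(x1, lam))    # position of the n = 1 pole and its residue factor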
The anomalous shift $\delta\Delta$ can be written as a linear combination of the $N^{ij}$ numbers $$\delta\Delta=\sum_{n,(ij)}N^{ij}_{n}\,\Omega^{ij}_{n}.$$ (25) This formula for $\delta\Delta$ exhibits the classical frequencies $\Omega^{ij}$ around the classical solution. These frequencies can be thought as normal mode frequencies. After quantization, and taking into account statistics, the one loop correction to the energy can be written as a sum over zero point energies $$\delta E=\frac{1}{2}\sum_{n,(ij)}(-1)^{F}\,\Omega_{n}^{ij}.$$ (26) 3.1 Inversion symmetry and linear combinations of frequencies for rank-1 solutions The inversion symmetry (2.1) implies the two important relations $$\displaystyle\Omega^{\widetilde{1}\,\widetilde{4}}(x)$$ $$\displaystyle=$$ $$\displaystyle-\Omega^{\widetilde{2}\,\widetilde{3}}(1/x)+\Omega^{\widetilde{2}% \,\widetilde{3}}(0),$$ (27) $$\displaystyle\Omega^{\widehat{1}\,\widehat{4}}(x)$$ $$\displaystyle=$$ $$\displaystyle-\Omega^{\widehat{2}\,\widehat{3}}(1/x)-2.$$ (28) In addition, we have linear relations between the various $\Omega^{ij}$ which can be easily read by representing a particular frequency connecting two sheets as the sum of the intermediate frequencies connecting an intermediate sheet. Assuming the top-down symmetry (valid for rank-1 solutions) $$p_{\widehat{1},\ \widehat{2},\ \widetilde{1},\ \widetilde{2}}=-p_{\widehat{4},% \ \widehat{3},\ \widetilde{4},\ \widetilde{3}},$$ (29) one can prove that all the $8+8$ physical frequencies can be written in terms of the two basic ones $$\Omega_{S}(x)=\Omega^{\widetilde{2}\,\widetilde{3}}(x),\qquad\Omega_{A}(x)=% \Omega^{\widehat{2}\,\widehat{3}}(x).$$ (30) The final result is $$\displaystyle\Omega^{\widetilde{1}\,\widetilde{4}}=-\Omega_{S}(1/x)+\Omega_{S}% (0),$$ (31) $$\displaystyle\Omega^{\widetilde{2}\,\widetilde{4}}=\Omega^{\widetilde{1}\,% \widetilde{3}}=\frac{1}{2}[\Omega_{S}(x)-\Omega_{S}(1/x)+\Omega_{S}(0)],$$ (32) $$\displaystyle\Omega^{\widehat{1}\,\widehat{4}}=-\Omega_{A}(1/x)-2,$$ (33) $$\displaystyle\Omega^{\widehat{2}\,\widehat{4}}=\Omega^{\widehat{1}\,\widehat{3% }}=\frac{1}{2}[\Omega_{A}(x)-\Omega_{A}(1/x)]-1,$$ (34) $$\displaystyle\Omega^{\widehat{2}\,\widetilde{4}}=\Omega^{\widetilde{1}\,% \widehat{3}}=\frac{1}{2}[\Omega_{A}(x)-\Omega_{S}(1/x)+\Omega_{S}(0)],$$ (35) $$\displaystyle\Omega^{\widetilde{2}\,\widehat{4}}=\Omega^{\widehat{1}\,% \widetilde{3}}=\frac{1}{2}[\Omega_{S}(x)-\Omega_{A}(1/x)]-1,$$ (36) $$\displaystyle\Omega^{\widetilde{1}\,\widehat{4}}=\Omega^{\widehat{1}\,% \widetilde{4}}=\frac{1}{2}[-\Omega_{S}(1/x)-\Omega_{A}(1/x)+\Omega_{S}(0)]-1,$$ (37) $$\displaystyle\Omega^{\widehat{2}\,\widetilde{3}}=\Omega^{\widetilde{2}\,% \widehat{3}}=\frac{1}{2}[\Omega_{S}(x)+\Omega_{A}(x)].$$ (38) 4 Algebraic curve computation for the $(J_{1},J_{2})$ folded strings 4.1 Classical $(S,J)$ folded string in the short string limit According to [39], the folded string rotating in $AdS_{5}$ and $S^{5}$ with angular momenta $S$ and $J$ can be analitically continued to the folded string rotating in $S^{5}$ with two angular momenta $J_{1}$, $J_{2}$ according to the replacement rule $$(E,J_{1},J_{2})\leftrightarrow(-J,-E,S).$$ (39) In the $(S,J)$ folded string, the two cuts of the elliptic curve are symmetrically placed along the real axis, $(a,b)$, $(-a,-b)$, where $1<a<b$. 
The conserved quantities are given by the expressions [35] $$\displaystyle S$$ $$\displaystyle=$$ $$\displaystyle 2\,n\,g\,\frac{ab+1}{ab}\,\left[b\,\mathbb{E}\left(1-\frac{a^{2}% }{b^{2}}\right)-a\,\mathbb{K}\left(1-\frac{a^{2}}{b^{2}}\right)\right],$$ $$\displaystyle J$$ $$\displaystyle=$$ $$\displaystyle\frac{4\,n\,g}{b}\,\sqrt{(a^{2}-1)(b^{2}-1)}\,\mathbb{K}\left(1-% \frac{a^{2}}{b^{2}}\right),$$ (40) $$\displaystyle E$$ $$\displaystyle=$$ $$\displaystyle 2\,n\,g\frac{ab-1}{ab}\,\left[b\,\mathbb{E}\left(1-\frac{a^{2}}{% b^{2}}\right)+a\,\mathbb{K}\left(1-\frac{a^{2}}{b^{2}}\right)\right].$$ The branch points can be expanded for small $S$ and $J$ according to $$\displaystyle a$$ $$\displaystyle=$$ $$\displaystyle 1+\frac{\rho^{2}s^{3}}{8}+\frac{1}{128}\left(\rho^{2}-\rho^{4}% \right)s^{5}+\frac{\rho^{4}s^{6}}{128}+\frac{\rho^{2}\left(4\rho^{4}-22\rho^{2% }-9\right)s^{7}}{4096}+\mathcal{O}\left(s^{8}\right),$$ (41) $$\displaystyle b$$ $$\displaystyle=$$ $$\displaystyle 1+2s+2s^{2}+\frac{1}{8}\left(\rho^{2}+7\right)s^{3}+\frac{1}{4}% \left(\rho^{2}-1\right)s^{4}+\frac{1}{256}\left(-2\rho^{4}+34\rho^{2}-85\right% )s^{5}+\mathcal{O}\left(s^{6}\right).$$ Indeed, the associated charges are $$\displaystyle S$$ $$\displaystyle=$$ $$\displaystyle 2\,n\,\pi\,g\,s^{2}+\mathcal{O}(s^{6}),$$ $$\displaystyle J$$ $$\displaystyle=$$ $$\displaystyle 2\,n\,\pi\,g\,\rho\,s^{2}+\mathcal{O}(s^{7}),$$ (42) $$\displaystyle E$$ $$\displaystyle=$$ $$\displaystyle 4\,n\,\pi\,g\,s+\frac{1}{4}\pi gn\left(2\rho^{2}+3\right)s^{3}-% \frac{1}{128}s^{5}\left(\pi gn\left(4\rho^{4}-20\rho^{2}+21\right)\right)+% \mathcal{O}\left(s^{6}\right).$$ This $s\sim\sqrt{S}$ and $\rho=\frac{J}{S}$. More precisely, from $\sqrt{\lambda}=4\,\pi\,g$, we have $$s=\sqrt{\frac{S}{2\,n\,\pi\,g}}=\frac{\sqrt{2S/n}}{\lambda^{1/4}}.$$ (43) The short string expansion of the energy is $$\frac{E}{n\sqrt{\lambda}}=s+\frac{1}{16}\left(2\rho^{2}+3\right)s^{3}+\frac{1}% {512}\left(-4\rho^{4}+20\rho^{2}-21\right)s^{5}+\mathcal{O}\left(s^{6}\right).$$ (44) 4.2 Analytic continuation to the $(J_{1},J_{2})$ folded string In order to describe the $(J_{1},J_{2})$ string, we apply the continuation (39) and are now led to study $$\displaystyle J_{1}$$ $$\displaystyle=$$ $$\displaystyle-2\,n\,g\,\frac{ab-1}{ab}\,\left[b\,\mathbb{E}\left(1-\frac{a^{2}% }{b^{2}}\right)+a\,\mathbb{K}\left(1-\frac{a^{2}}{b^{2}}\right)\right],$$ $$\displaystyle J_{2}$$ $$\displaystyle=$$ $$\displaystyle 2\,n\,g\,\frac{ab+1}{ab}\,\left[b\,\mathbb{E}\left(1-\frac{a^{2}% }{b^{2}}\right)-a\,\mathbb{K}\left(1-\frac{a^{2}}{b^{2}}\right)\right],$$ (45) $$\displaystyle E$$ $$\displaystyle=$$ $$\displaystyle-\frac{4\,n\,g}{b}\,\sqrt{(a^{2}-1)(b^{2}-1)}\,\mathbb{K}\left(1-% \frac{a^{2}}{b^{2}}\right).$$ We expand $a$, $b$ around the point $-1$. Introducing the small parameter $s$, we find the expansion $$\displaystyle a$$ $$\displaystyle=$$ $$\displaystyle-1+is+\left(\frac{1}{2}-\frac{\rho}{2}\right)s^{2}+\frac{1}{16}i(% 8\rho-3)s^{3}+\frac{1}{16}\left(-2\rho^{2}+2\rho-1\right)s^{4}+$$ $$\displaystyle+\frac{1}{512}i\left(32\rho^{2}+16\rho+3\right)s^{5}+\mathcal{O}% \left(s^{6}\right),$$ $$\displaystyle b$$ $$\displaystyle=$$ $$\displaystyle\overline{a}.$$ (46) Notice that this is precisely the short string limit of the double contour discussed in [38]. 
These branch points give $$\displaystyle J_{2}$$ $$\displaystyle=$$ $$\displaystyle 2\,n\,\pi\,g\,s^{2}+\mathcal{O}(s^{6}),$$ $$\displaystyle J_{1}$$ $$\displaystyle=$$ $$\displaystyle 2\,n\,\pi\,g\,\rho\,s^{2}+\mathcal{O}(s^{7}),$$ (47) $$\displaystyle E$$ $$\displaystyle=$$ $$\displaystyle 4\,n\,\pi\,g\,s+\frac{1}{4}gn\left(2\pi\rho^{2}+\pi\right)s^{3}-% \frac{1}{128}s^{5}\left(\pi gn\left(4\rho^{4}-28\rho^{2}-3\right)\right)+% \mathcal{O}\left(s^{6}\right).$$ Using again the relation (43) and identifying $\rho=\frac{J_{1}}{J_{2}}$, we find the following expansion $$\frac{E}{n\sqrt{\lambda}}=s+\frac{1}{16}\left(2\rho^{2}+1\right)s^{3}+\frac{1}% {512}\left(-4\rho^{4}+28\rho^{2}+3\right)s^{5}+\mathcal{O}\left(s^{6}\right).$$ (48) It can be easily shown that this result is in full agreement with the general treatment in [41]. 4.3 Construction of the $p_{\widetilde{2}}$ quasi-momentum In [35], the reader can find the explicit non-trivial quasimomentum $p_{\widehat{2}}$ for the $(S,J)$ folded string. Following our approach based on the analitic continuation, we can look for a suitable continuation of it. As we can verify a posteriori (see the Appendix), this procedure gives the sphere quasi-momentum $p_{\widetilde{2}}$. The result is (written here with the standard branch line assignment for the square root) $$\displaystyle p_{\widetilde{2}}$$ $$\displaystyle=$$ $$\displaystyle\pi\,n-i\,\frac{\Delta}{2\,g}\left(\frac{a}{a^{2}-1}-\frac{x}{x^{% 2}-1}\right)\,\sqrt{\frac{b}{a}\,\frac{a^{2}-1}{b^{2}-1}}\sqrt{\frac{|a|-i\,a}% {|a|-i\,\overline{a}}\,\frac{\overline{a}-x}{a-x}}\,\sqrt{\frac{a}{\overline{a% }}\frac{|a|-i\,a}{|a|-i\,\overline{a}}\,\frac{\overline{a}+x}{a+x}}+$$ (49) $$\displaystyle-\frac{2abJ_{2}}{g\,(b-a)(ab+1)}\,F_{1}(x)-\frac{\Delta\,(a-b)}{2% \,g\,\sqrt{(a^{2}-1)(b^{2}-1)}}\,F_{2}(x),$$ $$\displaystyle F_{1}(x)$$ $$\displaystyle=$$ $$\displaystyle i\,\mathbb{F}\left(i\,\sinh^{-1}\sqrt{-\frac{a-b}{a+b}\,\frac{a-% x}{a+x}},\frac{(a-b)^{2}}{(a+b)^{2}}\right),$$ (50) $$\displaystyle F_{2}(x)$$ $$\displaystyle=$$ $$\displaystyle i\,\mathbb{E}\left(i\,\sinh^{-1}\sqrt{-\frac{a-b}{a+b}\,\frac{a-% x}{a+x}},\frac{(a-b)^{2}}{(a+b)^{2}}\right),$$ (51) where $$\displaystyle J_{1}$$ $$\displaystyle=$$ $$\displaystyle+2\,n\,g\,\frac{ab-1}{ab}\,\left[b\,\mathbb{E}\left(1-\frac{a^{2}% }{b^{2}}\right)+a\,\mathbb{K}\left(1-\frac{a^{2}}{b^{2}}\right)\right],$$ $$\displaystyle J_{2}$$ $$\displaystyle=$$ $$\displaystyle-2\,n\,g\,\frac{ab+1}{ab}\,\left[b\,\mathbb{E}\left(1-\frac{a^{2}% }{b^{2}}\right)-a\,\mathbb{K}\left(1-\frac{a^{2}}{b^{2}}\right)\right],$$ (52) $$\displaystyle\Delta$$ $$\displaystyle=$$ $$\displaystyle-\frac{4\,n\,g}{b}\,\sqrt{(a^{2}-1)(b^{2}-1)}\,\mathbb{K}\left(1-% \frac{a^{2}}{b^{2}}\right).$$ This expression is valid provided $$\mbox{Re}(a),\mbox{Im}(a)>0,\qquad b=-\overline{a}.$$ (53) It can be checked that $$p(a)=p(\overline{a})=n\,\pi,\qquad p(-a)=p(-\overline{a})=-n\,\pi,\qquad p(% \infty)=0.$$ (54) Setting $$a=1+is+\frac{1}{2}(\rho-1)s^{2}+\frac{1}{16}i(8\rho-3)s^{3}+\frac{1}{16}\left(% 2\rho^{2}-2\rho+1\right)s^{4}+\frac{1}{512}i\left(32\rho^{2}+16\rho+3\right)s^% {5}+O\left(s^{6}\right),$$ (55) we recover the expansion (4.2) and (48). Notice again that in this section $b=-\overline{a}$ and not $b=\overline{a}$ as in the previous section. This is necessary to have the correct cut structure. 
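A direct numerical sanity check of the short string expansions above is straightforward. The sketch below evaluates the exact $(S,J)$ charges (40) on the truncated branch-point expansion (41) and compares them with the leading terms of Eqs. (42) and (44). The complete elliptic integrals are taken in the parameter convention, $\mathbb{K}(m)$ with $m=1-a^{2}/b^{2}$, which we assume matches the convention of the text; checking the continued expressions (45)-(48), with complex branch points, is not attempted here.

import numpy as np
from scipy.special import ellipk, ellipe   # complete elliptic integrals, parameter convention

def charges_SJ(a, b, n, g):
    """Conserved charges of the (S, J) folded string, Eq. (40)."""
    m = 1.0 - a**2 / b**2
    K, E = ellipk(m), ellipe(m)
    S  = 2 * n * g * (a * b + 1) / (a * b) * (b * E - a * K)
    J  = 4 * n * g / b * np.sqrt((a**2 - 1) * (b**2 - 1)) * K
    En = 2 * n * g * (a * b - 1) / (a * b) * (b * E + a * K)
    return S, J, En

n, g, rho = 1, 10.0, 2.0
for s in (0.05, 0.02, 0.01):
    # branch points from the short string expansion, Eq. (41), truncated
    a = 1 + rho**2 * s**3 / 8 + (rho**2 - rho**4) * s**5 / 128 + rho**4 * s**6 / 128
    b = 1 + 2*s + 2*s**2 + (rho**2 + 7) * s**3 / 8 + (rho**2 - 1) * s**4 / 4
    S, J, En = charges_SJ(a, b, n, g)
    # relative deviations from the leading terms of Eqs. (42) and (44);
    # they should shrink as s -> 0 if the expansions are consistent
    print(s,
          S / (2 * n * np.pi * g * s**2) - 1,
          J / (2 * n * np.pi * g * rho * s**2) - 1,
          En / (4 * n * np.pi * g * (s + (2 * rho**2 + 3) * s**3 / 16)) - 1)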
The full set of quasi-momentum is obtained by completing the sphere quasi-momenta with the relations $$p_{\widetilde{2}}(x)=-p_{\widetilde{3}}(x)=-p_{\widetilde{1}}(1/x)=p_{% \widetilde{4}}(1/x),$$ (56) and by assigning the following AdS quasi-momenta (following from the absence of cuts in the AdS sheets) $$p_{\widehat{1},\widehat{2}}=-p_{\widehat{3},\widehat{4}}=\frac{\Delta}{2g}\,% \frac{x}{x^{2}-1}=\frac{2\,\pi\,\mathcal{E}\,x}{x^{2}-1}.$$ (57) The sphere quasi-momentum $p_{\widetilde{2}}$ defined in (49) has branch cuts along small arcs of circumference with radius $|a|$. A typical plot of it has the form shown in Fig. (1) where we can see the cuts and the singularity around $x=\pm 1$. Actually, these are not the physical branch cuts  [43]. 4.4 Fluctuation energies for the $(J_{1},J_{2})$ folded string The general structure of quantum fluctuations around symmetric 2-cut $\mathfrak{su}(2)$ solutions has been investigated in detail in [34]. The fluctuations of quasi momenta with excitation of type $(\widehat{2},\widehat{3})$ (with $N^{\widehat{2}\,\widehat{3}}=1$) at $z$ and excitation of type $(\widetilde{2},\widetilde{3})$ (with $N^{\widetilde{2}\,\widetilde{3}}=1$) at $y$ have the general form $$\displaystyle\delta p_{\widehat{2}}$$ $$\displaystyle=$$ $$\displaystyle\frac{\alpha(z)}{x-z}+\frac{\delta\alpha_{-}}{x-1}+\frac{\delta% \alpha_{+}}{x+1},$$ (58) $$\displaystyle\delta p_{\widetilde{2}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{f(x)}\left[-\frac{f(y)\,\alpha(y)}{x-y}+\frac{\delta% \alpha_{-}\,f(1)}{x-1}+\frac{\delta\alpha_{+}\,f(-1)}{x+1}-\frac{4\pi\,x}{% \sqrt{\lambda}}+A\right],$$ (59) where $\delta\alpha_{\pm}$ and $A$ are constants and $f(x)^{2}=(x-a)(x-\overline{a})(x-b)(x-\overline{b})$. Using the inversion relations and replacing in the asymptotic condition we easily find $$\displaystyle\delta\Delta$$ $$\displaystyle=$$ $$\displaystyle\Omega_{S}(y)+\Omega_{A}(z),$$ (60) $$\displaystyle\Omega_{A}(x)$$ $$\displaystyle=$$ $$\displaystyle\frac{2}{x^{2}-1}\left(1+x\,\frac{f(1)-f(-1)}{f(1)+f(-1)}\right),$$ (61) $$\displaystyle\Omega_{S}(x)$$ $$\displaystyle=$$ $$\displaystyle\frac{4}{f(1)+f(-1)}\left(\frac{f(x)}{x^{2}-1}-1\right).$$ (62) The considered solutions have the additional symmetry $$p_{\widetilde{2}}=-p_{\widetilde{3}},\qquad p_{\widetilde{1}}=-p_{\widetilde{4% }},\qquad p_{\widehat{1}}=p_{\widehat{2}}=-p_{\widehat{3}}=-p_{\widehat{4}}.$$ (63) This we can identify all frequencies with the above pairing of indices. 
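The explicit frequencies (61)-(62) are simple to evaluate numerically; the sketch below does so for a branch point close to the short string regime of Eq. (46) and, as a by-product, checks the relation $\Omega_{A}(x)+\Omega_{A}(1/x)+2=0$ implied by the inversion property (28) together with the identifications above. The branch of $f$ is fixed here by taking the positive square root of the (real and positive) product on the real axis; this is an assumption about the intended branch, not something fixed by the formulas quoted here.

import numpy as np

# Sample evaluation of the fluctuation frequencies (61)-(62).  The branch
# point is taken close to the short string regime of Eq. (46); the branch of
# f is fixed by taking the positive square root of the (real, positive)
# product on the real axis (an assumption, not taken from the text).

a = 1.0 + 0.1j            # sample branch point; b = -conj(a) as in Sect. 4.2
b = -np.conj(a)

def f(x):
    prod = (x - a) * (x - np.conj(a)) * (x - b) * (x - np.conj(b))
    return np.sqrt(prod.real)          # prod is real and positive on the real axis

def Omega_A(x):
    return 2.0 / (x**2 - 1.0) * (1.0 + x * (f(1.0) - f(-1.0)) / (f(1.0) + f(-1.0)))

def Omega_S(x):
    return 4.0 / (f(1.0) + f(-1.0)) * (f(x) / (x**2 - 1.0) - 1.0)

for x in (1.5, 2.0, 3.0):
    # last column: Omega_A(x) + Omega_A(1/x) + 2, which should vanish
    print(x, Omega_S(x), Omega_A(x), Omega_A(x) + Omega_A(1.0 / x) + 2.0)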
Consistency requires the following relation which indeed is true for the above expressions $$\Omega_{A}(x)+\Omega_{A}(1/x)+2=0.$$ (64) We end with the following simple expressions for the six independent frequencies Bosonic fluctuations $$\displaystyle\Omega_{S}$$ $$\displaystyle=$$ $$\displaystyle\Omega^{\widetilde{2}\,\widetilde{3}},$$ (65) $$\displaystyle\Omega_{\overline{S}}$$ $$\displaystyle=$$ $$\displaystyle\Omega^{\widetilde{1}\,\widetilde{4}}=-\Omega_{S}(1/x)+\Omega_{S}% (0),$$ (66) $$\displaystyle 2\times\Omega_{S_{\perp}}$$ $$\displaystyle=$$ $$\displaystyle\Omega^{\widetilde{2}\,\widetilde{4}}=\Omega^{\widetilde{1}\,% \widetilde{3}}=\frac{1}{2}[\Omega_{S}(x)-\Omega_{S}(1/x)+\Omega_{S}(0)],$$ (67) $$\displaystyle 4\times\Omega_{A}$$ $$\displaystyle=$$ $$\displaystyle\Omega^{\widehat{1}\,\widehat{4}}=\Omega^{\widehat{2}\,\widehat{4% }}=\Omega^{\widehat{1}\,\widehat{3}}=\Omega^{\widehat{2}\,\widehat{3}},$$ (68) Fermionic fluctuations $$\displaystyle 4\times\Omega_{\overline{F}}$$ $$\displaystyle=$$ $$\displaystyle\Omega^{\widehat{2}\,\widetilde{4}}=\Omega^{\widetilde{1}\,% \widehat{3}}=\Omega^{\widetilde{1}\,\widehat{4}}=\Omega^{\widehat{1}\,% \widetilde{4}}=\frac{1}{2}[\Omega_{A}(x)-\Omega_{S}(1/x)+\Omega_{S}(0)],$$ (69) $$\displaystyle 4\times\Omega_{F}$$ $$\displaystyle=$$ $$\displaystyle\Omega^{\widetilde{2}\,\widehat{4}}=\Omega^{\widehat{1}\,% \widetilde{3}}=\Omega^{\widehat{2}\,\widetilde{3}}=\Omega^{\widetilde{2}\,% \widehat{3}}=\frac{1}{2}[\Omega_{S}(x)+\Omega_{A}(x)].$$ (70) 5 Evaluation of the one-loop correction The standard way to compute the one-loop energy (26) is to write the sum over the mode number $n$ as a contour integral $$\delta E=\frac{1}{2}\sum_{ij}(-1)^{F_{ij}}\oint\frac{dx}{2\,\pi\,i}\left(% \Omega^{ij}(x)\,\partial_{x}\,\log\,\sin\frac{p_{i}-p_{j}}{2}.\right)$$ (71) The integral is conveniently computed by deforming the contour in two pieces: a) The unit circumference $|x|=1$, b) a contour surrounding the cut in the $(\widetilde{2},\widetilde{3})$ plane. The (a) contribution is rather easy. All singularities cancel as a consequence of the ultraviolet finiteness of the correction. The (b) contribution is less trivial since it requires some insight about how to deform the contour integration around the cut. In the simplest case with mode number $n=1$, the one relevant for Konishi 555Notice that for $n>1$, the structure of fluctuations becomes more complicated. In particular, there are $n-1$ additional fluctuations near each branch cut endpoint. We shall not discuss configurations with $n>1$ here. For these states one has to identify the precise contour integration around the cut. , we find the structure of excitations for the $(\widetilde{2},\widetilde{3})$ polarization shown in Fig. (2). Apart from the cut endpoints, we only find poles on the real axis. All but one of them can be grouped in an infinite sequence $\{x_{k}\}$ that accumulates at $x=1$ with $|x_{k}-1|\sim 1/k$ for large $k$. Then, there is a somewhat different pole at $x=\xi$ whose position depends on $\rho$ and tends to infinity $\xi\to\infty$ for $\rho\to 1$. A specific example of this structure is shown in Fig. (3), where we show the plot of $\mbox{Re}\,\partial_{x}\log\sin p_{\widetilde{2}}$ at $s=1/10$ and $\rho=2$. The left and right panels differ by the range of $x$. The left plot focuses on the region near $x=1$ and shows the regular infinite sequence of poles $\{x_{k}\}$. The right plot shows that there is a pole at $x=\xi=1.74078$ well separated from the other poles. 
Its contribution is non-zero and must be included. In order to evaluate the cut integral we continue the quasi-momentum to the right in the complex plane. We suitably deform the integration contour and compute the discontinuity, taking into account the jump in sign of $f(x)$ across the physical cut. From the computational point of view it is convenient to compute the integral along the dashed polygonal contour $\Gamma$ in Fig. (2) and to evaluate separately the contribution of the special pole $\xi$. This is particularly important for values of $\rho$ near 1, the Konishi case, when $\xi$ is large. We collect in Fig. (4) a few sample numerical values of the one-loop correction evaluated at the special values of the ratio of the two spins $\rho=1,\frac{3}{2},2$ as a function of the spin parameter $s$. The independence of the $s\to 0$ limit with respect to $\rho$ is clear. A simple polynomial fit to a similar, larger set of points provides the following estimate of the $s\to 0$ limit $$\lim_{s\to 0}\frac{\delta E}{s}=0.24999999(1),$$ (72) where the error represents the dependence on $\rho$. Our computations clearly provide strong evidence for the correctness of the exact result $a_{01}^{\mathfrak{su}(2)}=\frac{1}{4}$. 6 Conclusions In this paper we have computed the strong coupling next-to-leading correction to the energy of the quantum folded string with two angular momenta $J_{1,2}$ in $S^{5}$ in the limit where $J_{2}$ is large with fixed ratio $J_{1}/J_{2}$ and small $J_{1,2}/\sqrt{\lambda}$ (semiclassical short string limit). This state is expected to capture, at the semiclassical level, the properties of the $\mathfrak{su}(2)$ descendant of the Konishi state. It belongs to the same multiplet as the analogous state dual to the folded string with spin in $AdS$ and angular momentum in $S^{5}$. The correction should be the same as a consequence of superconformal symmetry. We performed the computation by exploiting the algebraic curve method proposed in [34]. We computed the one-loop correction numerically with high precision and confirmed the conjecture proposed in [31, 32]. A natural continuation of this work is of course to perform a similar analysis for the small circular string solutions considered in [31, 32], in order to prove universality of the next-to-leading strong coupling correction for more states with bare dimension 4 in the Konishi multiplet. This analysis is in progress. Finally, the structure of multiplets beyond Konishi seems unclear at the moment, so it is important to collect as much data as possible to see whether there are degeneracies in energy among various other semiclassical states with different quantum numbers. The algebraic curve approach is clearly a powerful tool in this respect. Acknowledgments We thank Nikolay Gromov for many clarifications about the algebraic curve approach to one-loop corrections and for help in correcting a mistake in the first version of this paper. We thank Arkady Tseytlin and Radu Roiban for important comments and for suggesting non-trivial consistency checks. We also thank Fedor Levkovich-Maslyuk for helpful discussions. 
Appendix A Asymptotics of $p_{\widetilde{2}}$ Evaluating $p_{\widetilde{2}}$ at large $x$ we find $$\displaystyle p_{\widetilde{2}}(x)$$ $$\displaystyle\stackrel{{\scriptstyle x\to\infty}}{{\longrightarrow}}$$ $$\displaystyle\frac{J_{2}-J_{1}}{2\,g\,x}=\frac{2\pi}{x}(\mathcal{J}_{2}-% \mathcal{J}_{1}),$$ (73) $$\displaystyle p_{\widetilde{2}}(1/x)$$ $$\displaystyle\stackrel{{\scriptstyle x\to 0}}{{\longrightarrow}}$$ $$\displaystyle-\frac{J_{2}+J_{1}}{2\,g}\,x.$$ (74) Comparing with the general asymptotic behaviour $$\left(\begin{array}[]{c}\widehat{p}_{1}\\ \widehat{p}_{2}\\ \widehat{p}_{3}\\ \widehat{p}_{4}\\ \hline\widetilde{p}_{1}\\ \widetilde{p}_{2}\\ \widetilde{p}_{2}\\ \widetilde{p}_{4}\end{array}\right)=\frac{2\pi}{x}\left(\begin{array}[]{c}+% \mathcal{E}-\mathcal{S}_{1}+\mathcal{S}_{2}\\ +\mathcal{E}+\mathcal{S}_{1}-\mathcal{S}_{2}\\ -\mathcal{E}-\mathcal{S}_{1}-\mathcal{S}_{2}\\ -\mathcal{E}+\mathcal{S}_{1}+\mathcal{S}_{2}\\ \hline+\mathcal{J}_{1}+\mathcal{J}_{2}-\mathcal{J}_{3}\\ +\mathcal{J}_{1}-\mathcal{J}_{2}+\mathcal{J}_{3}\\ -\mathcal{J}_{1}+\mathcal{J}_{2}+\mathcal{J}_{3}\\ -\mathcal{J}_{1}-\mathcal{J}_{2}-\mathcal{J}_{3}\end{array}\right)+\mathcal{O}% (1/x^{2}),$$ (75) and with the inversion properties $$\displaystyle\widetilde{p}_{1,2}(x)$$ $$\displaystyle=$$ $$\displaystyle-\widetilde{p}_{2,1}(1/x)-2\,\pi\,m,$$ (76) $$\displaystyle\widetilde{p}_{3,4}(x)$$ $$\displaystyle=$$ $$\displaystyle-\widetilde{p}_{4,3}(1/x)+2\,\pi\,m,$$ (77) $$\displaystyle\widehat{p}_{1,2,3,4}(x)$$ $$\displaystyle=$$ $$\displaystyle-\widehat{p}_{2,1,4,3}(1/x),$$ (78) we identify the correct asymptotic behaviour of $\widetilde{p}_{2}=-\widetilde{p}_{3}$ after exchanging $J_{1}\leftrightarrow J_{2}$. References [1] J. M. Maldacena, The Large N limit of superconformal field theories and supergravity, Adv.Theor.Math.Phys. 2 (1998) 231–252, [hep-th/9711200]. [2] E. Witten, Anti-de Sitter space and holography, Adv.Theor.Math.Phys. 2 (1998) 253–291, [hep-th/9802150]. [3] S. Gubser, I. R. Klebanov, and A. M. Polyakov, Gauge theory correlators from noncritical string theory, Phys.Lett. B428 (1998) 105–114, [hep-th/9802109]. [4] L. Lipatov, High-energy asymptotics of multicolor QCD and exactly solvable lattice models, hep-th/9311037. [5] L. Faddeev and G. Korchemsky, High-energy QCD as a completely integrable model, Phys.Lett. B342 (1995) 311–322, [hep-th/9404173]. [6] J. Minahan and K. Zarembo, The Bethe ansatz for N=4 superYang-Mills, JHEP 0303 (2003) 013, [hep-th/0212208]. [7] N. Beisert, C. Kristjansen, and M. Staudacher, The Dilatation operator of conformal N=4 superYang-Mills theory, Nucl.Phys. B664 (2003) 131–184, [hep-th/0303060]. [8] I. Bena, J. Polchinski, and R. Roiban, Hidden symmetries of the $AdS_{5}\times S^{5}$ superstring, Phys.Rev. D69 (2004) 046002, [hep-th/0305116]. [9] V. Kazakov, A. Marshakov, J. Minahan, and K. Zarembo, Classical/quantum integrability in AdS/CFT, JHEP 0405 (2004) 024, [hep-th/0402207]. [10] N. Beisert and M. Staudacher, The N=4 SYM integrable super spin chain, Nucl.Phys. B670 (2003) 439–463, [hep-th/0307042]. [11] N. Beisert and M. Staudacher, Long-range $\mathfrak{psu}(2,2|4)$ Bethe Ansatze for gauge theory and strings, Nucl.Phys. B727 (2005) 1–62, [hep-th/0504190]. In honor of Hans Bethe. [12] N. Beisert, B. Eden, and M. Staudacher, Transcendentality and Crossing, J.Stat.Mech. 0701 (2007) P01021, [hep-th/0610251]. [13] N. Gromov, V. Kazakov, and P. Vieira, Exact Spectrum of Anomalous Dimensions of Planar N=4 Supersymmetric Yang-Mills Theory, Phys.Rev.Lett. 
103 (2009) 131601, [arXiv:0901.3753]. [14] D. Bombardelli, D. Fioravanti, and R. Tateo, Thermodynamic Bethe Ansatz for planar AdS/CFT: A Proposal, J.Phys.A A42 (2009) 375401, [arXiv:0902.3930]. [15] N. Gromov, V. Kazakov, A. Kozak, and P. Vieira, Exact Spectrum of Anomalous Dimensions of Planar N = 4 Supersymmetric Yang-Mills Theory: TBA and excited states, Lett.Math.Phys. 91 (2010) 265–287, [arXiv:0902.4458]. [16] G. Arutyunov and S. Frolov, Thermodynamic Bethe Ansatz for the $AdS_{5}\times S^{5}$ Mirror Model, JHEP 0905 (2009) 068, [arXiv:0903.0141]. [17] A. Cavaglia, D. Fioravanti, and R. Tateo, Extended Y-system for the $AdS_{5}/CFT_{4}$ correspondence, Nucl.Phys. B843 (2011) 302–343, [arXiv:1005.3016]. [18] N. Gromov, Y-system and Quasi-Classical Strings, JHEP 1001 (2010) 112, [arXiv:0910.3608]. [19] N. Gromov, V. Kazakov, and Z. Tsuboi, $PSU(2,2|4)$ Character of Quasiclassical AdS/CFT, JHEP 1007 (2010) 097, [arXiv:1002.3981]. [20] F. Fiamberti, A. Santambrogio, C. Sieg, and D. Zanon, Wrapping at four loops in N=4 SYM, Phys.Lett. B666 (2008) 100–105, [arXiv:0712.3522]. [21] V. Velizhanin, The four-loop anomalous dimension of the Konishi operator in N=4 supersymmetric Yang-Mills theory, JETP Lett. 89 (2009) 6–9, [arXiv:0808.3832]. [22] G. Arutyunov, S. Frolov, and R. Suzuki, Five-loop Konishi from the Mirror TBA, JHEP 1004 (2010) 069, [arXiv:1002.1711]. [23] J. Balog and A. Hegedus, 5-loop Konishi from linearized TBA and the XXX magnet, JHEP 1006 (2010) 080, [arXiv:1002.4142]. [24] J. Balog and A. Hegedus, The Bajnok-Janik formula and wrapping corrections, JHEP 1009 (2010) 107, [arXiv:1003.4303]. [25] Z. Bajnok and R. A. Janik, Four-loop perturbative Konishi from strings and finite size effects for multiparticle states, Nucl.Phys. B807 (2009) 625–650, [arXiv:0807.0399]. [26] Z. Bajnok, A. Hegedus, R. A. Janik, and T. Lukowski, Five loop Konishi from AdS/CFT, Nucl.Phys. B827 (2010) 426–456, [arXiv:0906.4062]. [27] N. Gromov, V. Kazakov, and P. Vieira, Exact Spectrum of Planar ${\cal N}=4$ Supersymmetric Yang-Mills Theory: Konishi Dimension at Any Coupling, Phys.Rev.Lett. 104 (2010) 211601, [arXiv:0906.4240]. [28] S. Frolov, Konishi operator at intermediate coupling, J.Phys.A A44 (2011) 065401, [arXiv:1006.5032]. [29] S. Gubser, I. Klebanov, and A. M. Polyakov, A Semiclassical limit of the gauge / string correspondence, Nucl.Phys. B636 (2002) 99–114, [hep-th/0204051]. [30] F. Passerini, J. Plefka, G. W. Semenoff, and D. Young, On the Spectrum of the $AdS_{5}\times S^{5}$ String at large lambda, JHEP 1103 (2011) 046, [arXiv:1012.4471]. [31] R. Roiban and A. A. Tseytlin, Quantum strings in $AdS_{5}\times S^{5}$: Strong-coupling corrections to dimension of Konishi operator, JHEP 0911 (2009) 013, [arXiv:0906.4294]. [32] R. Roiban and A. Tseytlin, Semiclassical string computation of strong-coupling corrections to dimensions of operators in Konishi multiplet, Nucl.Phys. B848 (2011) 251–267, [arXiv:1102.1209]. [33] S. Schafer-Nameki, Review of AdS/CFT Integrability, Chapter II.4: The Spectral Curve, arXiv:1012.3989. [34] N. Gromov, S. Schafer-Nameki, and P. Vieira, Efficient precision quantization in AdS/CFT, JHEP 0812 (2008) 013, [arXiv:0807.4752]. [35] N. Gromov, D. Serban, I. Shenderovich, and D. Volin, Quantum folded string and integrability: From finite size effects to Konishi dimension, arXiv:1102.1040. [36] B. C. Vallilo and L. Mazzucato, The Konishi multiplet at strong coupling, arXiv:1102.1219. [37] S. Frolov and A. A. 
Tseytlin, Semiclassical quantization of rotating superstring in $AdS_{5}\times S^{5}$, JHEP 0206 (2002) 007, [hep-th/0204226]. [38] N. Beisert, J. Minahan, M. Staudacher, and K. Zarembo, Stringing spins and spinning strings, JHEP 0309 (2003) 010, [hep-th/0306139]. [39] N. Beisert, S. Frolov, M. Staudacher, and A. A. Tseytlin, Precision spectroscopy of AdS / CFT, JHEP 0310 (2003) 037, [hep-th/0308117]. [40] S. Frolov and A. A. Tseytlin, Rotating string solutions: AdS / CFT duality in nonsupersymmetric sectors, Phys.Lett. B570 (2003) 96–104, [hep-th/0306143]. [41] B. Vicedo, Giant magnons and singular curves, JHEP 0712 (2007) 078, [hep-th/0703180]. [42] N. Gromov and P. Vieira, The $AdS_{5}\times S^{5}$ superstring quantum spectrum from the algebraic curve, Nucl.Phys. B789 (2008) 175–208, [hep-th/0703191]. [43] T. Bargheer, N. Beisert, and N. Gromov, Quantum Stability for the Heisenberg Ferromagnet, New J.Phys. 10 (2008) 103023, [arXiv:0804.0324].
Neutrino-Nuclear Coherent Scattering and the Effective Neutrino Charge Radius Departamento de Física Teòrica and IFIC Centro Mixto, Universidad de Valencia–CSIC, E-46100, Burjassot, Valencia, Spain E-mail:    Jose Bernabéu Departamento de Física Teòrica and IFIC Centro Mixto, Universidad de Valencia–CSIC, E-46100, Burjassot, Valencia, Spain E-mail: Jose.Bernabeu@uv.es    Massimo Passera Dipartimento di Fisica “G. Galilei”, Università di Padova and INFN, Sezione di Padova, I-35131, Padova, Italy E-mail: massimo.passera@pd.infn.it Abstract: We propose to extract the value of the effective neutrino charge radius from the coherent scattering of a neutrino against a heavy nucleus. In such an experiment the relevant quantity to measure is the kinetic energy distribution of the recoiling nucleus, which, in turn, may be directly related to the shift in the value of the effective weak mixing angle produced by the neutrino charge radius. This type of experiment has been proposed in order to observe the coherent elastic neutrino-nuclear scattering for the first time. If interpreted in the way suggested in this work, such an experiment would constitute the first terrestrial attempt to measure this intrinsic electromagnetic property of the neutrino. It is well-established by now that the difficulties associated with the definition [1] of the neutrino charge radius (NCR) have been conclusively settled in a series of papers [2, 3, 4], by resorting to the well-defined electroweak gauge-invariant separation of physical amplitudes into effective self-energy, vertex and box sub-amplitudes, implemented by the pinch technique formalism [5]. Thus, within the Standard Model, at one-loop order, the NCR, to be denoted by $\big{<}r^{2}_{\nu_{i}}\,\big{>}$, is given by $$\big{<}r^{2}_{\nu_{i}}\,\big{>}=\,\frac{G_{{\scriptscriptstyle F}}}{4\,{\sqrt{% 2}}\,\pi^{2}}\Bigg{[}3-2\log\Bigg{(}\frac{m_{{\scriptscriptstyle i}}^{2}}{M_{{% \scriptscriptstyle W}}^{2}}\Bigg{)}\Bigg{]}\,,$$ (1) where $i=e,\mu,\tau$, the $m_{i}$ denotes the mass of the charged iso-doublet partner of the neutrino under consideration, and $G_{{\scriptscriptstyle F}}$ is the Fermi constant. In addition, as has been demonstrated in [3], the NCR so defined can be expressed in terms of a judicious combination of physical cross-sections, a fact which promotes it into a genuine physical observable. This possibility has revived the interest in this quantity [6], and makes the issue of its actual experimental measurement all the more interesting. In this talk we will argue that upcoming experiments involving coherent neutrino-nuclear scattering [7, 8] may provide the first opportunity for measuring the NCR (or, at least, for placing bounds on its value). The notion of coherent nuclear scattering is well-known from electron scattering. In the neutrino case it was developed in connection with the discovery of weak neutral currents, with a component proportional to the number operator [9]. When a projectile (e.g. a neutrino) scatters elastically from a composite system (e.g. 
a nucleus), the amplitude $F({\bf p^{\prime}},{\bf p})$ for scattering from an incoming momentum ${\bf p}$ to an outgoing momentum ${\bf p^{\prime}}$ is given as the sum of the contributions from each constituent, $$F({\bf p^{\prime}},{\bf p})=\sum_{j=1}^{A}f_{j}({\bf p^{\prime}},{\bf p})\,e^{i{\bf q}\cdot{\bf x}_{j}}\,,$$ (2) where ${\bf q}={\bf p^{\prime}}-{\bf p}$ is the momentum transfer and the individual amplitudes $f_{j}({\bf p^{\prime}},{\bf p})$ are added with a relative phase factor determined by the corresponding wave function. The differential cross-section is then $$\frac{d\sigma}{d\Omega}=|F({\bf p^{\prime}},{\bf p})|^{2}=\sum_{j=1}^{A}|f_{j}({\bf p^{\prime}},{\bf p})|^{2}+\sum_{i\neq j}f_{i}({\bf p^{\prime}},{\bf p})f_{j}^{\dagger}({\bf p^{\prime}},{\bf p})\,e^{i{\bf q}\cdot({\bf x}_{j}-{\bf x}_{i})}\,.$$ (3) In principle, due to the presence of the phase factors, major cancellations may take place among the $A(A-1)$ terms in the second (non-diagonal) sum. This happens for $qR\gg 1$, where $R$ is the size of the composite system, and the scattering is then incoherent. Conversely, if $qR\ll 1$, all phase factors may be approximated by unity, and the terms in (3) add coherently. If there were only one type of constituent, i.e. $f_{j}({\bf p^{\prime}},{\bf p})=f({\bf p^{\prime}},{\bf p})$ for all $j$, then (3) would reduce to $$\frac{d\sigma}{d\Omega}=A^{2}\left|f({\bf p^{\prime}},{\bf p})\right|^{2}.$$ (4) Evidently, in that case, the coherent scattering cross-section would be enhanced by a factor of $A^{2}$ compared to that of a single constituent. In the realistic case of a nucleus with $Z$ protons and $N$ neutrons, and assuming zero nuclear spin, the corresponding differential cross-section reads [9] $$\frac{d\sigma}{d\Omega}=\frac{G_{F}^{2}}{4(2\pi)^{2}}E^{2}(1+\cos\theta)\left[(1-4s_{W}^{2})Z-N\right]^{2},$$ (5) where $s_{W}$ is the sine of the weak mixing angle, $d\Omega=d\phi\,d(\cos\theta)$, and $\theta$ is the scattering angle. For elastic scattering, the scattering angle is related to the nuclear recoil, so that the kinetic energy distribution of the recoiling nucleus is written as [9] $$\frac{d\sigma}{dy}=\frac{G_{F}^{2}}{2\pi}\frac{M(M+2E)^{2}}{[M+2E(1-y)]^{3}}E^{2}(1-y)\left[(1-4s_{W}^{2})Z-N\right]^{2},$$ (6) where $M$ is the mass of the nucleus, $y=T/T_{\rm max}$, $y\in[0,1]$, and $T_{\rm max}={2E^{2}}/{(M+2E)}$. For $2E\ll M$, $T_{\rm max}\simeq 2E^{2}/M$, and, to an excellent approximation, (6) simplifies to the rather compact expression $$\frac{d\sigma}{dy}\simeq\frac{G_{F}^{2}}{2\pi}E^{2}(1-y)\left[(1-4s_{W}^{2})Z-N\right]^{2}.$$ (7) The one-loop interactions between a nucleon $N$ and a neutrino $\nu$ are shown in Fig. 1; of course, when the nucleon is a neutron, both (c) and (d) vanish. It is important to realize that, in the kinematic limit considered (i.e. $|q^{2}|\ll M^{2}$), all one-loop contributions may be absorbed into shifts of the original parameters appearing in the Born amplitude (7), giving rise to a Born-improved amplitude.
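To give a feeling for the magnitudes involved, the following sketch (in Python, not part of the original analysis) evaluates the charge radius of Eq. (1) and the recoil spectrum of Eq. (7), and propagates the shift of $s_{W}^{2}$ quoted in the next paragraph; the target nucleus ($^{76}$Ge), the neutrino energy, and the input values of $G_{F}$, $M_{W}$ and $s_{W}^{2}$ are illustrative assumptions.

```python
import numpy as np

# Illustrative inputs (natural units, GeV); HBARC2 converts GeV^-2 to cm^2
G_F = 1.1664e-5          # Fermi constant [GeV^-2]
M_W = 80.4               # W-boson mass [GeV]
S2W = 0.231              # sin^2 of the weak mixing angle
HBARC2 = 3.894e-28       # (hbar c)^2 [cm^2 GeV^2]

def ncr(m_lepton):
    """Effective neutrino charge radius of Eq. (1), in GeV^-2."""
    return G_F / (4.0 * np.sqrt(2.0) * np.pi**2) * (
        3.0 - 2.0 * np.log(m_lepton**2 / M_W**2))

def dsigma_dy(E, Z, N, s2w, y):
    """Coherent recoil spectrum of Eq. (7), in cm^2 per unit y."""
    Qw = (1.0 - 4.0 * s2w) * Z - N           # weak charge of the nucleus
    return G_F**2 / (2.0 * np.pi) * E**2 * (1.0 - y) * Qw**2 * HBARC2

# Example: nu_e scattering coherently on 76Ge (Z = 32, N = 44) at E = 30 MeV
E, Z, N = 0.030, 32, 44
r2_e = ncr(0.511e-3)                          # charged-lepton partner mass in GeV
s2w_shifted = S2W * (1.0 - 2.0 / 3.0 * M_W**2 * r2_e)   # shift quoted below
print("<r^2_nu_e> ~ %.1e cm^2" % (r2_e * HBARC2))
print("relative change of dsigma/dy at y=0: %.1f%%"
      % (100.0 * (dsigma_dy(E, Z, N, s2w_shifted, 0.0)
                  / dsigma_dy(E, Z, N, S2W, 0.0) - 1.0)))
```

With these inputs the charge radius comes out at the level of a few times $10^{-33}$ cm$^{2}$ and modifies the coherent rate at the few-percent level, which is precisely the size of the effect discussed below.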
In particular, as has been explained in detail in [3], diagrams (a), (b), and (c) combine to form two renormalization-group-invariant quantities, $$\bar{R}_{Z}(q^{2})=\frac{\alpha_{W}}{c_{W}^{2}}\left[q^{2}-M_{Z}^{2}+\Re e\,\{\widehat{\Sigma}_{ZZ}(q^{2})\}\right]^{-1},\qquad\bar{s}_{W}^{2}(q^{2})=s_{W}^{2}\left(1-\frac{c_{W}}{s_{W}}\,\Re e\,\{\widehat{\Pi}_{AZ}(q^{2})\}\right)\,,$$ (8) where $\alpha_{W}=g_{W}^{2}/4\pi$, $\Re e\,\{...\}$ denotes the real part, and $\widehat{\Sigma}_{AZ}(q^{2})=q^{2}\widehat{\Pi}_{AZ}(q^{2})$. $\bar{R}_{Z}(q^{2})$ can be directly related to the effective electroweak (running) coupling, and eventually be interpreted as a shift to $G_{F}^{2}$. Similarly, $\bar{s}_{W}^{2}(q^{2})$ defines the effective (running) weak mixing angle. It turns out that the UV-finite contribution from the NCR, contained in diagram (d), may also be absorbed into a shift of $s_{W}^{2}$. In fact, a detailed analysis based on the methodology developed in [10] reveals that, in the kinematic range of interest, the numerical impact of $\bar{R}_{Z}(q^{2})$ and $\bar{s}_{W}^{2}(q^{2})$ is negligible, i.e. these quantities do not run appreciably. Instead, the contribution from the NCR amounts to a correction of a few percent to $s_{W}^{2}$, given by an expression of the form $s_{W}^{2}\longrightarrow s_{W}^{2}\big(1-\frac{2}{3}\,M_{W}^{2}\,\big{<}r^{2}_{\nu_{i}}\,\big{>}\big)$. Finally, the contributions of the boxes are to be included. One can show that the sum of (e) and (f) vanishes in the relevant kinematic limit, whereas graph (g) gives a contribution proportional to $g_{W}^{4}/M_{W}^{2}$, whose impact is currently under investigation. Lastly, we point out that if one were to consider the differences in the cross-sections between two different neutrino species scattering coherently off the same nucleus, as proposed by Sehgal two decades ago [11], one would eliminate all unwanted contributions, such as boxes, thus measuring the difference between the two corresponding charge radii. Such a difference would also contribute to a difference in the neutrino index of refraction in nuclear matter [12]. Acknowledgments This work was supported by the MCyT grant FPA2002-00612 and by the European Program MRTN-CT-2004-503369. References [1] J. L. Lucio, A. Rosado and A. Zepeda, Phys. Rev. D 29, 1539 (1984); N. M. Monyonko and J. H. Reid, Prog. Theor. Phys. 73, 734 (1985); A. Grau and J. A. Grifols, Phys. Lett. B 166, 233 (1986); P. Vogel and J. Engel, Phys. Rev. D 39, 3378 (1989); M. J. Musolf and B. R. Holstein, Phys. Rev. D 43, 2956 (1991); G. Degrassi, A. Sirlin and W. J. Marciano, Phys. Rev. D 39, 287 (1989). [2] J. Bernabeu, L. G. Cabral-Rosetti, J. Papavassiliou and J. Vidal, Phys. Rev. D 62, 113012 (2000). [3] J. Bernabeu, J. Papavassiliou and J. Vidal, Phys. Rev. Lett. 89, 101802 (2002) [Erratum-ibid. 89, 229902 (2002)]; Nucl. Phys. B 680, 450 (2004). [4] J.
Bernabeu, J. Papavassiliou and D. Binosi, Nucl. Phys. B 716, 352 (2005). [5] J. M. Cornwall, Phys. Rev. D 26, 1453 (1982); J. M. Cornwall and J. Papavassiliou, Phys. Rev. D 40, 3474 (1989); J. Papavassiliou, Phys. Rev. D 41, 3179 (1990); G. Degrassi and A. Sirlin, Phys. Rev. D 46, 3104 (1992); M. Passera and K. Sasaki, Phys. Rev. D 54, 5763 (1996); A. Pilaftsis, Nucl. Phys. B 487, 467 (1997); D. Binosi and J. Papavassiliou, Phys. Rev. D 66, 111901 (2002). [6] See, for example, S. Eidelman et al. [Particle Data Group], Phys. Lett. B 592 (2004), page 441. [7] I. Giomataris et al., arXiv:hep-ex/0502033; Y. Giomataris and J. D. Vergados, arXiv:hep-ex/0503029. [8] J. Barranco, O. G. Miranda and T. I. Rashba, arXiv:hep-ph/0508299. [9] J. Bernabeu, Lett. Nuovo  Cimento 10, 329 (1974); Astron. Astrophys. 47, 375 (1976); D. Z. Freedman, Phys. Rev. D 9, 1389 (1974); D. Z. Freedman, D. N. Schramm and D. L. Tubbs, Ann. Rev. Nucl. Part. Sci.  27, 167 (1977); A. Drukier and L. Stodolsky, Phys. Rev. D 30, 2295 (1984). [10] K. Hagiwara, S. Matsumoto, D. Haidt and C. S. Kim, Z. Phys. C 64, 559 (1994). [11] L. M. Sehgal, Phys. Lett. B 162, 370 (1985). [12] F. J. Botella, C. S. Lim and W. J. Marciano, Phys. Rev. D 35, 896 (1987); E. K. Akhmedov, C. Lunardini and A. Y. Smirnov, Nucl. Phys. B 643, 339 (2002)
Viscous-like forces control the impact response of dense suspensions

Marc-Andre Brassard¹, Neil Causley¹, Nasser Krizou¹, Joshua A. Dijksman² and Abram H. Clark¹ (corresponding author: abe.clark@nps.edu)
¹Department of Physics, Naval Postgraduate School, Monterey, CA USA
²Physical Chemistry and Soft Matter, Wageningen University & Research, Wageningen, The Netherlands
(December 9, 2020)

Abstract We experimentally and theoretically study impacts into dense cornstarch and water suspensions. We vary impact speed as well as intruder size, shape, and mass, and we characterize the resulting dynamics using high-speed video and an onboard accelerometer. We numerically solve previously proposed models, most notably the added-mass model as well as a class of models where the viscous forces at the boundary of the jammed front are dominant. We find that our experimental data are inconsistent with the added-mass model, but are consistent with the viscous model. Our results strongly suggest that the added-mass model, which is the dominant model for understanding the dynamics of impact into dense suspensions, should be updated to include these viscous-like forces.

1 Introduction

A dense suspension consists of solid particles, with sizes on the scale of 1–100 $\mu$m, placed into a Newtonian fluid such that the particles are crowded but not making solid-solid contact with each other. Such systems are common in a variety of engineering and geophysical contexts. The rheology of the suspension varies strongly with the volume fraction $\phi$ occupied by the particles. For small particle volume fraction $\phi$ (typically $\phi<0.4$), the suspension behaves as a Newtonian fluid, with a constant viscosity $\eta$ that increases with $\phi$. For $\phi>\phi_{J}$ (typically $\phi_{J}\approx 0.6$), the particles are jammed (van Hecke, 2009), and the material behaves as a yield-stress solid (Brown & Jaeger, 2014). In between these two limits (typically $0.4<\phi<0.6$), $\eta$ increases dramatically if the shear rate $\dot{\gamma}$ exceeds some critical shear rate $\dot{\gamma}_{c}$, the value of which depends on $\phi$ (Hoffman, 1972; Barnes, 1989; Brown & Jaeger, 2009; Fall et al., 2010; Brown & Jaeger, 2014) and other microscopic features. This behavior, called shear thickening or sometimes discontinuous shear thickening (DST), arises from some combination of granular effects, like jamming (Waitukaitis et al., 2013; van Hecke, 2009; Brown & Jaeger, 2014) and Reynolds dilatancy (Reynolds, 1885; Jerome et al., 2016); fluid-related phenomena, like Darcy flow (Darcy, 2012; Jerome et al., 2016) and lubrication (Wyart & Cates, 2014; Seto et al., 2013); and possibly surface chemistry of the particles (Oyarte Gálvez et al., 2017). Impact into a dense suspension by a foreign intruder can be similarly dramatic (Lee et al., 2003; Waitukaitis & Jaeger, 2012; Peters & Jaeger, 2014; Han et al., 2016; Mukhopadhyay et al., 2018; Han et al., 2019b). Yet, a simple application of DST cannot explain the impact response, as the stresses predicted by DST are far too small to, e.g., support a person running across a cornstarch-water suspension; see the Introduction of Ref. (Mukhopadhyay et al., 2018) for a complete discussion.
This is likely due to two aspects of impact that make it distinct from DST: it is not a steady-state process, and it involves both compression (varying $\phi$) and shear. Experiments have repeatedly shown that the sudden compression beneath the intruder causes an increase in $\phi$, leading to a dynamically jammed region that grows rapidly away from the impact point (Waitukaitis & Jaeger, 2012; Peters & Jaeger, 2014; Han et al., 2016; Mukhopadhyay et al., 2018). Thus, DST is still likely relevant, but it must be considered within the context of this inherently transient, compression-induced jamming that occurs during impact. However, previous work on explaining the impact response has been seemingly decoupled from DST or from any discussion of the large viscosity of dense suspensions for rapid driving. Instead, the dominant theory to explain the impact response assumes that the intruder deceleration is dominated by momentum conservation due to the growing “added mass” of this dynamically jammed region (Waitukaitis & Jaeger, 2012). The added-mass model however neglects any viscous-like forces at the boundary of the jammed region, where DST may be most relevant due to large shear rates (Han et al., 2016). Added mass alone was sufficient to explain experimental data in Peters & Jaeger (2014), but these experiments were two-dimensional (2D), meaning that viscous forces would only act over a thin, quasi-1D boundary. In 3D, the relative role of viscous drag acting on a surface and inertial effects from changing volumes is significantly different. We return to this discussion in our conclusions, Sec. 4. Here we show via theoretical analysis and impact experiments that viscous-like forces at the boundary of the growing mass likely play a dominant role in the dynamics of the intruder. Drawing from previous work, we theoretically analyze the case of an intruder impact into a suspension, where the dynamics include added-mass forces as well as viscous forces at the boundary of the jammed region. We find that the original added-mass model as well as modified versions robustly predict that the maximum force $F_{\rm max}$ achieved during impact scales with the impact velocity $v_{0}$ as $F_{\rm max}\propto{v_{0}}^{2}$ and the time $t_{\rm max}$ that the maximum force is reached scales with the impact velocity as $t_{\rm max}\propto{v_{0}}^{-1}$. These predictions are inconsistent with the data from our experiments as well as those from Waitukaitis & Jaeger (2012). These data show $F_{\rm max}\propto{v_{0}}^{\alpha}$ and $t_{\rm max}\propto{v_{0}}^{\beta}$, but with $\alpha\approx 1.5$ and $\beta\approx-0.5$ (instead of $2$ and $-1$, respectively). We find that we can better predict these observed scalings by assuming that viscous-like forces at the boundary of the dynamically jammed region are dominant. In addition, we consider how $F_{\rm max}$ and $t_{\rm max}$ depend on intruder size, mass, and shape, and we again find that models dominated by viscous forces at the boundary perform better than models based on added-mass. Our results suggest that the added-mass model is incomplete and should be updated to include viscous-like forces at the boundary of the dynamically jammed region. Such a theory also has the advantage of conceptually unifying the impact dynamics with steady-state rheological descriptions, like DST. 
2 Theoretical Analysis

Prior experiments have repeatedly demonstrated that impact into a dense suspension results in a dynamically jammed, solid-like region that propagates away from the impact point. The formation and dynamics of the propagating front are primarily related to volume conservation: the volume swept out by the intruder must be compensated for by compaction of the particle phase. Additionally, shear can induce the formation of a solid region (Han et al., 2018, 2019a).

2.1 Quasi one-dimensional front development

If the front propagation process is quasi-one-dimensional, meaning the compacted region grows only downward in a column with depth $z_{f}$ and not laterally, then the results of Waitukaitis et al. (2013) can be easily applied. The volume occupied by particles in the compacted region is $\phi_{J}\pi D^{2}(z_{f}-z)/4$, with $D$ the impactor diameter and $z$ the penetration depth. Before compaction, these same particles occupied volume $\phi_{0}\pi D^{2}z_{f}/4$, where $\phi_{0}$ is the initial packing fraction. Conservation of particle volume means that these two expressions must be equal, which yields $z_{f}/z=\phi_{J}/(\phi_{J}-\phi_{0})$, or $$k\equiv\frac{v_{f}}{v}=\frac{\phi_{J}}{\phi_{J}-\phi_{0}},$$ (1) where $v_{f}=dz_{f}/dt$ is the characteristic speed of the front and $v=dz/dt$ is the intruder’s speed. For a schematic, see Fig. 1a. This dependence on $\phi_{0}$ and $\phi_{J}$ was corroborated by Peters & Jaeger (2014) and Han et al. (2016) for 2D and 3D impacts, where $\phi_{J}\approx 0.51$ is the jamming packing fraction for the cornstarch particles, low due to swelling (Chen et al., 2019). This implies $k\approx 10$ when $\phi_{0}=0.46$.

2.2 Including lateral front growth

However, several key observations suggest that the jammed region below the intruder does not grow strictly downward in a quasi-1D column but spreads out laterally as well, albeit at a smaller speed. Experiments with small impacting objects (Peters & Jaeger, 2014; Han et al., 2016) typically find that the transverse dimension is about half of $z_{f}$, meaning that the volume of the jammed region still scales as ${z_{f}}^{d}$, where $d$ is the dimensionality of the system (2 or 3). Additionally, experimental data show that the front slows down as it moves; see Fig. 4 of Peters & Jaeger (2014) and Fig. 4 of Han et al. (2016). The data from these papers appear consistent with $z_{f}\propto z^{\gamma}$ with $\gamma<1$. Such behavior could arise if the lateral expansion of the solidified region were caused by a combination of jamming due to compression and due to shear (Han et al., 2018). If compression-induced jamming were the sole cause, the volume of the jammed region scales as ${z_{f}}^{d}$ while the volume swept out by the intruder is linear in $z$, meaning that $z_{f}$ scales as $z^{1/d}$. If shear jamming were the sole cause, then $z_{f}\propto z$ still, since the downward growth is the same as before and the lateral growth arises from a new mechanism. In reality, both effects likely contribute, which could explain the apparent experimental observation that $z_{f}\propto z^{\gamma}$ with $1/d<\gamma<1$.

2.3 General equation of motion

To understand how the front growth, including its shape, affects the resulting dynamics, we consider a generic equation of motion that describes the dynamics of the impactor.
Assuming that the growing solidlike region is rigidly connected to the intruder, then the total momentum of both is $p=[m+m_{a}(t)]v(t)$, where $m$ is the constant intruder mass, $m_{a}(t)$ is the added mass, and $v(t)$ is the velocity of the intruder and solidlike region. The shape of the jammed region and its growth rate will set $m_{a}(t)$. To complete the equation of motion, there are three external forces to consider. Two relate to gravity: the weight of the intruder, $F_{g}=mg$, and a buoyant force $F_{b}$ from the displaced suspension (this term is typically negligible). The third, which is not included in the added-mass model, is any viscous-like forces $F_{v}$ that act at the boundary of the jammed region. Newton’s second law can then be written as $$(m+m_{a})\frac{dv}{dt}+v\frac{dm_{a}}{dt}=F_{b}+F_{g}+F_{v}.$$ (2) Before Eq. (2) can be solved for $z(t)$, $v(t)$, and $a(t)$, assumptions must be made about the mathematical form of $m_{a}$, $F_{b}$, and $F_{v}$. Based on our front dynamics discussion, we now consider a few scenarios, shown in Fig. 1, and solve Eq. (2) numerically or, if possible, exactly. The original added mass model, shown in panel (a), assumed that a solid, cylindrical plug grows straight down, but that the total added mass is some proportion of an inverted cone-shaped region that grows downward and outward at the same rate. An alternative, shown in panel (b) is to consider drag force $F_{v}$ on the growing solid plug through shear in a boundary layer with thickness $\delta$. A version of this model was proposed in Appendix D of Waitukaitis (2014). Finally, several 2D and 3D imaging experiments suggest that the solid-like region grows laterally. In this case, the solid region experiences a drag force that grows with its surface area, as depicted in panel (c). We first describe the first two cases in Sections 2.4 and 2.5. We then discuss scaling laws predicted by these models in Sections 2.6 and 2.7. Finally, we discuss the third case and its scaling laws in Section 2.8. 2.4 Case 1: added-mass model, no viscous drag First, we consider the original added-mass model (Waitukaitis & Jaeger, 2012), which assumed that the solidified region is an inverted cone whose height and radius grow at the same rate $v_{f}=kv$, yielding $m_{a}=C_{m}\rho(1/3)\pi(D/2+kz)^{2}kz$, where $C_{m}$ is an added mass coefficient found experimentally to be $C_{m}\approx 0.37$. The fact that $C_{m}<1$ means that the entirety of the added mass region is not perfectly rigidly connected to the intruder. They assumed that viscous drag was negligible or absent and set $F_{v}=0$ and that $F_{b}$ comes from displaced fluid in a conical depression near the intruder, $F_{b}=1/3\pi\rho gz(D/2+kz)^{2}$. Numerical solutions to this model are qualitatively similar to experimental trajectories, as shown in Fig. 2 (thick black dashed line) and in Waitukaitis & Jaeger (2012). 2.5 Case 2: cylindrical jammed region with only viscous drag While the added-mass model provides qualitative features that can be matched to experiments, it is not unique in this respect: other choices for $F_{v}$ and $m_{a}$ yield similar results and can also be calibrated to match experimental trajectories, as also shown in Fig. 2 (thinner black dot-dashed line). In particular, models where $F_{v}$ is dominant yield similar results and have the advantage of matching other features of the dynamics, as discussed below. As shown in Fig. 2 of Han et al. 
(2016), the growing jammed region moves at approximately the same speed $v$ as the intruder, and it is surrounded by a thin layer of thickness $\delta\approx 5$ mm where the shear rate is $v/\delta$. Thus, on dimensional grounds, we can approximate the viscous force as $$F_{v}=-C_{v}\eta_{s}S\frac{v}{\delta}.$$ (3) Here $C_{v}$ is a dimensionless drag coefficient, $\eta_{s}$ is the effective (constant) viscosity of the suspension, and $S$ is the surface area of the jammed region. This is a generalized form of a model appearing in Appendix D of Waitukaitis’ Ph.D. thesis (Waitukaitis, 2014), which involves a columnar, solid-like front growing beneath the intruder with height $h_{f}=kz$ and thus $S=\pi Dkz+\pi D^{2}/4$. We note that their dimensional analysis used $D$ in place of $S/\delta$, which then required $\eta_{s}$ to be unphysically large, $\eta_{s}\approx 2000$ Pa$\cdot$s. This situation is sketched in Fig. 1(b). If the second term is dropped on the grounds that $D$ is much smaller than $kz$ or that the viscous-like forces only act on the sides of the growing cylinder, then the resulting dynamics are exactly solvable. Equation (3) becomes $$F_{v}=-C_{v}\pi D\eta_{s}kz\frac{v}{\delta}.$$ (4) Assuming other forces can be neglected and $m_{a}$ is negligible, Eq. (2) can be solved exactly, yielding: $$v(t)=v_{0}\,{\rm sech}^{2}(t/\tau),$$ (5) $$a(t)=-\sqrt{\frac{2C_{v}\pi Dk\eta_{s}}{m\delta}}\,{v_{0}}^{3/2}\,{\rm sech}^{2}(t/\tau)\,{\rm tanh}(t/\tau),$$ (6) where $\tau=\sqrt{2m\delta/(C_{v}\pi Dk\eta_{s}v_{0})}$. These functions are plotted in Fig. 2 and agree well with experiments. The viscous model solution in Fig. 2 uses $C_{v}=0.42$, $\delta=0.5$ cm, and $\eta_{s}=100$ Pa$\cdot$s. This agrees well with the viscosity of cornstarch and water-CsCl suspensions in the shear-thickening regime with similar values of $\phi$, as shown in Fig. 11 of Fall et al. (2012). This comparison demonstrates that reasonable parameter values can be used in matching to experiments, although there is some flexibility and therefore uncertainty in the values of these parameters.

2.6 Scaling laws for the added mass model

Since both the added mass and the viscous drag model can be reasonably matched to experimentally observed intruder trajectories, some further validation can come from comparing how $F_{\rm max}$ and $t_{\rm max}$ scale with $v_{0}$, $m$, and $D$. Such scalings have been previously used in the case of impact to connect macroscale dynamics with the microscale mechanisms that give rise to them (Walsh et al., 2003; Uehara et al., 2003; Goldman & Umbanhowar, 2008; Clark et al., 2014; Zhao et al., 2015; Krizou & Clark, 2020). For all experiments and theoretical models, we find that these scalings can be well approximated by $$F_{\rm max}=A{v_{0}}^{\alpha},$$ (7) $$t_{\rm max}=B{v_{0}}^{\beta}.$$ (8) The prefactors $A$ and $B$ can vary with intruder properties, and we will examine how they depend on $m$, $D$, and, for conical intruders, cone angle $\theta$. The added-mass model was solved numerically by Mukhopadhyay et al. (2018), who found that $F_{\rm max}\propto{v_{0}}^{2}m^{2/3}$. We also numerically solve the added-mass model and find the same result, along with $t_{\rm max}\propto{v_{0}}^{-1}m^{1/3}$. We find $F_{\rm max}$ and $t_{\rm max}$ to be nearly independent of $D$ in the range of parameters studied here, in agreement with Mukhopadhyay et al. (2018).
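As an illustration of how these exponents can be extracted numerically, the sketch below integrates Eq. (2) for the two limiting cases (pure added mass with the inverted-cone $m_{a}$ of case 1, and the pure boundary-drag force of Eq. (4)) and fits power laws to $F_{\rm max}(v_{0})$ and $t_{\rm max}(v_{0})$. Gravity and buoyancy are dropped and the parameter values are only representative, so this is a sketch of the procedure rather than the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho = 1630.0   # suspension density [kg/m^3], representative value only

def impact(v0, m, D, C_m, C_v, k=10.0, eta_s=100.0, delta=5e-3):
    """Integrate Eq. (2) for penetration depth z and intruder speed v,
    using the inverted-cone added mass of case 1 and the cylindrical-plug
    drag of Eq. (4); gravity and buoyancy are neglected here.
    Returns (t_max, F_max) with F_max = m * |peak deceleration|."""
    def m_a(z):
        return C_m * rho * np.pi / 3.0 * (D / 2.0 + k * z) ** 2 * k * z
    def dma_dz(z):
        return C_m * rho * np.pi / 3.0 * k * ((D / 2.0 + k * z) ** 2
                                              + 2.0 * k * z * (D / 2.0 + k * z))
    def rhs(t, y):
        z, v = y
        F_v = -C_v * np.pi * D * eta_s * k * z * v / delta
        dvdt = (F_v - dma_dz(z) * v ** 2) / (m + m_a(z))
        return [v, dvdt]
    sol = solve_ivp(rhs, [0.0, 0.1], [0.0, v0], max_step=1e-5)
    acc = np.gradient(sol.y[1], sol.t)
    i = np.argmin(acc)                          # strongest deceleration
    return sol.t[i], -m * acc[i]

v0s = np.array([0.5, 1.0, 2.0, 4.0])
for label, C_m, C_v in [("added mass", 0.37, 0.0), ("viscous", 0.0, 0.42)]:
    res = np.array([impact(v, m=0.1, D=0.025, C_m=C_m, C_v=C_v) for v in v0s])
    alpha = np.polyfit(np.log(v0s), np.log(res[:, 1]), 1)[0]
    beta = np.polyfit(np.log(v0s), np.log(res[:, 0]), 1)[0]
    print(f"{label}: alpha ~ {alpha:.2f}, beta ~ {beta:.2f}")
```

With these representative parameters the fitted exponents come out close to $(\alpha,\beta)\approx(2,-1)$ for the pure added-mass case and $(1.5,-0.5)$ for the pure viscous case, mirroring the analytic results quoted below.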
Thus, for the added-mass model, $F_{\rm max}\propto{v_{0}}^{2}m^{2/3}D^{0}$ and $t_{\rm max}\propto{v_{0}}^{-1}m^{1/3}D^{0}$.

2.7 Scaling law for viscous models

In the viscous model, the time at which the acceleration peaks can be calculated directly by differentiating $a(t)$ in Eq. (6), setting the result to zero, and solving for $t$. This time $t_{\rm max}$ can be substituted back into Eq. (6) to calculate $a_{\rm max}=a(t_{\rm max})$. By this method, the peak force $F_{\rm max}=-ma_{\rm max}$ and $t_{\rm max}$ are found to be: $$F_{\rm max}=\sqrt{2C_{v}Dk\eta_{s}m_{r}/\delta}\;{v_{0}}^{3/2}\,{\rm sech}^{2}(\beta)\tanh(\beta),$$ (9) $$t_{\rm max}=\sqrt{\frac{2m_{r}\delta}{C_{v}Dk\eta_{s}v_{0}}}\,\beta,$$ (10) where $\beta=\frac{1}{2}{\rm log}(2+\sqrt{3})$. We also solve the viscous model numerically and find the same result. Thus, for the viscous model where the jammed region is a cylindrical column, $F_{\rm max}\propto{v_{0}}^{3/2}m^{1/2}D^{1/2}$ and $t_{\rm max}\propto{v_{0}}^{-1/2}m^{1/2}D^{-1/2}$.

2.8 Case 3: hybrid models and scaling laws

The simple viscous model discussed above does not include several features that may make a comparison with experimental data more complicated. First, the solidified region grows in all three dimensions, not just straight down in a cylindrical column. This is particularly relevant when the contact point for the impacting object is point-like. Second, momentum is being transferred to the solidified region, so added-mass terms must be included generally. Third, the rate at which the front is growing tends to decrease with time (i.e., $\gamma<1$). Finally, many of the physical parameters in these models have been previously studied, so there are physical bounds on, e.g., $\eta_{s}$ based on experimental measurements. To check how sensitive the scaling laws are to varying shapes of the growing jammed region, we study a case where the volume of the solidified region grows as ${z_{f}}^{3}$ and the surface area grows as ${z_{f}}^{2}$, where $z_{f}=kz^{\gamma}$. For simplicity, we approximate the growing jammed region as a hemisphere whose volume and surface area are $2\pi{z_{f}}^{3}/3$ and $2\pi{z_{f}}^{2}$, respectively. This situation is sketched in Fig. 1(c). We note that this overestimates the volume (and thus the added mass) in particular, since Han et al. (2016) showed that the mass is better approximated as a half-ellipsoid with semi-minor axes of $z_{f}$, $z_{f}/2$, and $z_{f}/2$, meaning that the volume is $1/4$ of the hemisphere. This yields $$m_{a}=C_{m}\rho\frac{2\pi}{3}(kz^{\gamma})^{3},$$ (11) $$F_{v}=-2C_{v}\eta_{s}\pi(kz^{\gamma})^{2}\frac{v}{\delta}.$$ (12) We first consider the case where the added-mass term is dominant. We set $C_{m}=0.2$, $C_{v}=0$, $F_{b}=0$, and, by numerically solving Eq. (2), we find that $F_{\rm max}\propto{v_{0}}^{2}$ and $t_{\rm max}\propto{v_{0}}^{-1}$ persist for all $1/d<\gamma<1$, which is inconsistent with experimental data shown below. Next, we consider the case where the viscous term is dominant, setting $C_{v}=0.5$, $C_{m}=0$, and $F_{b}=0$. For $\gamma=1$, corresponding to the front moving at a constant speed in all directions, we find $F_{\rm max}\propto{v_{0}}^{1.63}m^{0.69}$ and $t_{\rm max}\propto{v_{0}}^{-0.67}m^{0.33}$.
For $\gamma=1/3$, corresponding to the case where compression-induced jamming is dominant, we find $F_{\rm max}\propto{v_{0}}^{1.40}m^{0.40}$ and $t_{\rm max}\propto{v_{0}}^{-0.40}m^{0.60}$. The behavior varies smoothly between these limits as $\gamma$ is varied. For example, when $\gamma=0.7$, we find $F_{\rm max}\propto{v_{0}}^{1.58}m^{0.59}$ and $t_{\rm max}\propto{v_{0}}^{-0.60}m^{0.42}$. We also find that, if added-mass and viscous terms are both present, then the values of the exponents fall in between the predictions of each model, depending on the relative strength of each term. To illustrate this, Fig. 4 shows numerical solutions using Eqs. (11) and (12). We choose $C_{m}=0.1$, $\rho=1630$ kg/m${}^{3}$, $k=4$, $\gamma=0.7$, $C_{v}=0.5$, $\eta_{s}=20$ Pa$\cdot$s, and $\delta=2$ mm, all values that are based on previous experiments or within reasonable physical bounds. This yields $F_{\rm max}\propto{v_{0}}^{1.61}m^{0.59}$ and $t_{\rm max}\propto{v_{0}}^{-0.90}m^{0.46}$.

2.9 Summary of theoretical considerations

Table 1 shows a summary of all the theoretical predictions from these models, assuming the forms $F_{\rm max}\propto{v_{0}}^{\alpha}m^{\zeta}D^{\lambda}$ and $t_{\rm max}\propto{v_{0}}^{\beta}m^{\kappa}D^{\mu}$. These scalings can be derived analytically only for case 2, as shown above and in Waitukaitis (2014), as well as case 1 in the limit of small $D$ (Mukhopadhyay et al., 2018). We also solve these models numerically to confirm the analytical scalings, including the nonzero values of $D$ we use in experiments for case 1. All results from case 3 are obtained numerically. All added-mass models (case 1 and case 3 with $C_{v}=0$) predict $\alpha=2$ and $\beta=-1$. Viscous models (case 2 and case 3 with $C_{m}=0$) predict $1.4<\alpha<1.63$ and $-0.67<\beta<-0.4$. The exponents associated with $m$, $\zeta$ and $\kappa$, are similar for all models. The exponents associated with $D$, $\lambda$ and $\mu$, are zero for all added-mass models, but are nonzero for the viscous model, case 2, with $\lambda=1/2$ and $\mu=-1/2$. This is due to the fact that the size of the jammed region (and therefore the surface area experiencing viscous-like drag) scales with the intruder diameter in this model.

3 Experiments

To compare to the scaling laws summarized in Section 2.9, we perform experiments of intruders falling under gravity to impact a free surface of a suspension of food-grade cornstarch particles in tap water. The cornstarch volume fraction was 46%. We also tested impacts at 49% by volume and found only a very slight upward shift in the forces, in agreement with Waitukaitis & Jaeger (2012). The packing fraction of cornstarch was inferred by weighing both the water and cornstarch added and assuming a specific gravity of 1.6 for the cornstarch (Han et al., 2017). Intruders of varying shapes (cylinders, spheres, and cones) and diameters $D$ were attached to threaded rods and held by an electromagnet. They were held at variable heights and then released, yielding impact speeds of up to $v_{0}\approx 4$ m/s. The mass $m$ of the intruder was varied by adding additional weights on the rod. The impacts were recorded by high-speed video using a Phantom V711 at frame rates between 175,000 and 230,000 frames per second. A ball was attached to the threaded rod and tracked using MATLAB, yielding the position of the intruder at each frame. Discrete differentiation and a lowpass filter (Clark et al., 2012) were used to obtain the velocity and acceleration.
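A sketch of this processing step is shown below (illustrative only: the filter type, order, and cutoff are assumptions rather than the settings of Clark et al. (2012), and the synthetic trace merely stands in for the tracked data).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def velocity_acceleration(z, fps, cutoff_hz=2000.0):
    """Differentiate a tracked position trace z (metres, one sample per
    frame) and smooth with a zero-phase low-pass Butterworth filter."""
    b, a = butter(2, cutoff_hz / (fps / 2.0))    # normalised cutoff frequency
    v = filtfilt(b, a, np.gradient(z) * fps)     # dz/dt, filtered
    acc = filtfilt(b, a, np.gradient(v) * fps)   # dv/dt, filtered again
    return v, acc

# usage with a synthetic constant-deceleration trace at 200,000 frames/s
fps = 200_000
t = np.arange(0, 0.01, 1.0 / fps)
z = 1.0 * t - 0.5 * 200.0 * t**2                 # v0 = 1 m/s, a = -200 m/s^2
v, acc = velocity_acceleration(z, fps)
print(v[len(v) // 2], acc[len(acc) // 2])        # ~0 m/s and ~-200 m/s^2 at mid-trace
```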
An accelerometer with a sample rate of 5000 Hz (Sparkfun ADXL377) was connected to the rod, showing good agreement with the acceleration obtained from video tracking. The accelerometer data had better time resolution, since two lowpass filters were applied to the video acceleration data. Therefore, all velocity data shown are from video tracking and all acceleration data are from the accelerometer.

3.1 Experimental Results

Figure 5 shows $F_{\rm max}=-ma_{\rm max}$ and $t_{\rm max}$ plotted as a function of $v_{0}$ for four representative sets of experiments of intruders impacting cornstarch suspensions: three from our experiments (one cylinder, one sphere, and one cone) as well as the experimental data from Waitukaitis & Jaeger (2012). These quantities appear to scale with $v_{0}$ according to power-law relations, Eqs. (7) and (8). Comparison with the fit line that is shown strongly suggests that $\alpha\approx 1.5$; linear fits to the data from our experiments confirm this, returning $1.3<\alpha<1.6$ for all intruders we study. This is consistent with the range predicted by the viscous models discussed in Sec. 2. The data for $t_{\rm max}$ are more scattered, making a clear determination of $\beta$ more difficult. Additionally, for $v_{0}>3$ m/s, the $t_{\rm max}$ data from all intruders appear to flatten out and even curve upward slightly, which is not predicted by any of the theoretical models. However, best fits for impact velocities $0.5<v_{0}<3$ m/s give $\beta\approx-0.5$. These values, $\alpha=1.5$ and $\beta=-0.5$, agree well with the viscous model, but do not agree with the predictions of the added-mass models discussed above, $\alpha=2$ and $\beta=-1$. This strongly suggests that viscous terms at the boundary of the dynamically jammed region play an important, and likely dominant, role in the deceleration of the intruder. To further examine the consistency of these models with the experimental data, Fig. 6 shows how the prefactors $A$ and $B$ scale with $m$, $D$, and cone angle $\theta$. We measure $A$ and $B$ as the mean of $F_{\rm max}/{v_{0}}^{3/2}$ and $t_{\rm max}/{v_{0}}^{-1/2}$. Figure 6(a) and (b) show $A$ and $B$ versus $m$ for three cylindrical intruders with the same diameter $D=25$ mm but with varied mass, $m\approx 80$, 150, and 230 g. Power-law fit lines are shown in black for the predictions of the original added-mass model, $A\propto m^{2/3}$ and $B\propto m^{1/3}$, and the viscous model involving a quasi-1D cylindrical dynamically jammed region, $A\propto m^{1/2}$ and $B\propto m^{1/2}$. The data appear more consistent with the viscous model predictions, especially for $B$. However, we note that the details of the shape of the added mass, as well as how dramatically the propagating front slows down as it moves, can cause these exponents to vary somewhat.

3.2 Intruder size scaling

Figure 6(c) and (d) show data from cylindrical and spherical intruders of similar $m$ but varying $D$. The cylinders have $m\approx 190$ g and $D=12.5$, 25, and 50 mm, and the spheres have $m\approx 200$ g and $D=20$, 30 and 50 mm. The added-mass model predicts that, for these sizes and weights, there is very little dependence of $A$ or $B$ on $D$, i.e., $A\propto D^{0}$ and $B\propto D^{0}$. The first viscous model predicts $A\propto D^{1/2}$ and $B\propto D^{-1/2}$. Overall, the experimental data show that $A$ increases with $D$ and $B$ decreases with $D$, which is inconsistent with the added-mass model.
We note that the increase is clearer for the cylindrical intruders (square symbols) than for the spherical intruders (circular symbols). The cylindrical intruders appear to follow the predictions of the first viscous model, $A\propto D^{1/2}$ and $B\propto D^{-1/2}$. For spherical intruders, the hybrid model from Sec. 2.8 may be more relevant, where the impact is more point-like instead of a circular surface that makes simultaneous contact with the fluid. These models still predicted $\alpha\approx 1.5$ and $\beta\approx-0.5$, but they had no $D$ dependence.

3.3 Cone shaped intruders

Intruder shape affects $A$ and $B$ somewhat, as can be observed from the dynamics of cone-shaped intruders. Figure 6(e) and (f) show data for conical intruders whose mass and diameter are constant, $m\approx 195$ g and $D=30$ mm, but with varied angles $\theta=0^{\circ}$, 20${}^{\circ}$, 30${}^{\circ}$, 45${}^{\circ}$, 55${}^{\circ}$, and 70${}^{\circ}$. Here, $\theta=0^{\circ}$ corresponds to a flat cylinder and $\theta=90^{\circ}$ is the maximum possible value. We observe $F_{\rm max}\propto{v_{0}}^{1.5}$ and $t_{\rm max}\propto{v_{0}}^{-0.5}$ for all cone angles, suggesting that viscous-like forces are again dominant. However, $A$ decreases with increasing $\theta$, while $B$ increases with increasing $\theta$. One hypothesis for this behavior is that larger $\theta$ corresponds to a smaller contact area, equivalent to smaller $D$. Another explanation could come from the fact that increasing $\theta$ means that the dynamically jammed region transitions from being generated primarily by normal compression (for $\theta=0$) to being generated primarily through shear jamming. As shown by Han et al. (2018), the value of $k$ is smaller for fronts created by shear jamming. Our data are inconclusive on this question, except for the fact that we consistently find $\alpha\approx 1.5$ and $\beta\approx-0.5$ for all values of $\theta$, which is consistent with the class of viscous models discussed in Sec. 2.

3.4 Relaxation after peak deceleration

Finally, we consider the intruder dynamics after the peak deceleration, which also provides information on the microscopic physics of suspension dynamics. While the dynamics of impact before peak deceleration are highly sensitive to various experimental control parameters, the post-peak dynamics are not. This is shown in Fig. 7. Figure 7(a) and (b) show impacts with $1<v_{0}<1.5$ m/s and $3<v_{0}<3.5$ m/s, respectively, both with a wide variety of intruder properties. $t_{\rm max}$ varies dramatically with $v_{0}$ and intruder properties, which has been the subject of our analysis so far. However, Fig. 7(c) and (d) show that forces decay quasi-exponentially with a time scale between 2 and 3 ms; this behavior is largely independent of speed and intruder properties. This implies that, e.g., the time scale $\tau=\sqrt{2m\delta/(C_{v}\pi Dk\eta_{s}v_{0})}$ in Eqs. (5) and (6) might capture the buildup to peak force but not the decay. This suggests that the relaxation dynamics are dominated by the more microscopic material composition in a way that is not sensitive to the intruder. Thus, these dynamics appear to lie outside the description of either the added-mass or viscous models. Our observations are consistent with Peters & Jaeger (2014), who found that changing the viscosity of the suspending fluid affected the dynamics of the relaxation of the jammed front but not of its growth.
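A minimal sketch of how such a decay constant can be read off a post-peak force trace is given below; the data are synthetic and the fitting window is an assumption, so this is an illustration rather than the authors' analysis script.

```python
import numpy as np

def decay_time(t, F, t_peak):
    """Estimate tau_d from F(t) ~ F_peak * exp(-(t - t_peak)/tau_d) on the
    decaying part of the trace (times in seconds, forces in newtons)."""
    sel = (t > t_peak) & (F > 0.05 * F.max())    # keep the decaying part only
    slope = np.polyfit(t[sel], np.log(F[sel]), 1)[0]
    return -1.0 / slope

# synthetic trace: flat up to the peak, then a 2.5 ms exponential decay
t = np.linspace(0, 0.02, 2000)
F = 50.0 * np.exp(-np.maximum(t - 0.004, 0.0) / 2.5e-3)
print(decay_time(t, F, 0.004))                   # ~0.0025 s
```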
4 Conclusion Here we have theoretically and experimentally studied the problem of impact of an intruder into a dense suspension. In agreement with previous authors such as Mukhopadhyay et al. (2018), we find that the added-mass model (Waitukaitis & Jaeger, 2012), which has been the dominant model used to explain the dynamics of impacts into dense suspensions, predicts $F_{\rm max}\propto{v_{0}}^{2}$ and $t_{\rm max}\propto{v_{0}}^{-1}$. In contrast, the experimental data show $F_{\rm max}=A{v_{0}}^{1.5}$ and $t_{\rm max}=B{v_{0}}^{-0.5}$. These exponents are consistent with a class of models where the dominant force is not added mass but viscous-like forces at the boundary of the jammed suspension. We have also studied how the prefactors $A$ and $B$ depend on intruder mass, size, and shape. These results are either consistent with both added-mass and viscous models (e.g., in the case of varying intruder mass) or more consistent with viscous models (e.g., in the case of cylindrical intruders with varying diameter). Our results suggest that the added-mass model should be revised to include viscous-like terms at the boundary, since these forces may play a dominant role. These results do not change certain aspects of the underlying physical picture for impact into dense suspensions: a solid-like region grows outward from the point of impact and dominates the intruder dynamics. If large, viscous-like forces are dominant, this has the advantage of conceptually unifying impact with steady-state rheology descriptions like DST. Finally, as mentioned in Sec. 1, we note that Fig. 6 of Peters & Jaeger (2014) shows that added mass is sufficient to explain the forces measured by an external sensor for velocity controlled impact into a 2D layer of dense suspension. However, in a 2D experiment, the viscous-like forces that we propose would act over a 1D boundary between the jammed region and the uncompressed suspension; in a 3D experiment, the surface area of the 3D jammed solid is much bigger, leading to significantly larger viscous forces. Thus, their findings (that added mass was sufficient to explain the resisting force on the intruding object in a 2D situation) are consistent with the results we have shown for 3D impacts. Future work is needed to better characterize the relative roles of added-mass and viscous forces, as well as to better characterize the magnitude of and the length scale over which the viscous forces act and to understand how the solidified mass relaxes back into a fluid-like state. Acknowledgements.This article was made possible by the Office of Naval Research under Grant No. N0001419WX01519 and by the Office of Naval Research Global Visiting Scientist Program VSP 19-7-001. We thank Scott Waitukaitis for sharing his data and for helpful discussions on his PhD thesis. Declaration of Interests: the authors report no conflict of interest. References Barnes (1989) Barnes, H. A. 1989 Shear‐thickening (“dilatancy”) in suspensions of nonaggregating solid particles dispersed in newtonian liquids. Journal of Rheology 33 (2), 329–366, arXiv: https://doi.org/10.1122/1.550017. Brown & Jaeger (2009) Brown, Eric & Jaeger, Heinrich M. 2009 Dynamic jamming point for shear thickening suspensions. Phys. Rev. Lett. 103, 086001. Brown & Jaeger (2014) Brown, Eric & Jaeger, Heinrich M 2014 Shear thickening in concentrated suspensions: phenomenology, mechanisms and relations to jamming. Reports on Progress in Physics 77 (4), 046602. Chen et al. 
(2019) Chen, David Z, Zheng, Hu, Wang, Dong & Behringer, Robert P 2019 Discontinuous rate-stiffening in a granular composite modeled after cornstarch and water. Nature communications 10 (1), 1–6. Clark et al. (2012) Clark, Abram H., Kondic, Lou & Behringer, Robert P. 2012 Particle scale dynamics in granular impact. Phys. Rev. Lett. 109, 238302. Clark et al. (2014) Clark, A. H., Petersen, A. J. & Behringer, R. P. 2014 Collisional model for granular impact dynamics. Phys. Rev. E 89, 012201. Darcy (2012) Darcy, Henry 2012 Les fontaines publiques de dijon ed 1856. Hachette Livre-Bnf. Fall et al. (2012) Fall, A., Bertrand, F., Ovarlez, G. & Bonn, D. 2012 Shear thickening of cornstarch suspensions. J. Rheol. 56 (3), 575–591. Fall et al. (2010) Fall, A., Lemaître, A., Bertrand, F., Bonn, D. & Ovarlez, G. 2010 Shear thickening and migration in granular suspensions. Phys. Rev. Lett. 105, 268303. Goldman & Umbanhowar (2008) Goldman, D. I. & Umbanhowar, P. 2008 Scaling and dynamics of sphere and disk impact into granular media. Phys. Rev. E 77, 021308. Han et al. (2019a) Han, Endao, James, Nicole M. & Jaeger, Heinrich M. 2019a Stress controlled rheology of dense suspensions using transient flows. Phys. Rev. Lett. 123, 248002. Han et al. (2016) Han, E., Peters, I. R. & Jaeger, H. M. 2016 High-speed ultrasound imaging in dense suspensions reveals impact-activated solidification due to dynamic shear jamming. Nature communications 7 (1), 1–8. Han et al. (2017) Han, Endao, Van Ha, Nigel & Jaeger, Heinrich M 2017 Measuring the porosity and compressibility of liquid-suspended porous particles using ultrasound. Soft Matter 13 (19), 3506–3513. Han et al. (2018) Han, Endao, Wyart, Matthieu, Peters, Ivo R. & Jaeger, Heinrich M. 2018 Shear fronts in shear-thickening suspensions. Phys. Rev. Fluids 3, 073301. Han et al. (2019b) Han, Endao, Zhao, Liang, Van Ha, Nigel, Hsieh, S. Tonia, Szyld, Daniel B. & Jaeger, Heinrich M. 2019b Dynamic jamming of dense suspensions under tilted impact. Phys. Rev. Fluids 4, 063304. van Hecke (2009) van Hecke, M. 2009 Jamming of soft particles: geometry, mechanics, scaling and isostaticity. Journal of Physics: Condensed Matter 22 (3), 033101. Hoffman (1972) Hoffman, R. L. 1972 Discontinuous and dilatant viscosity behavior in concentrated suspensions. i. observation of a flow instability. Transactions of the Society of Rheology 16 (1), 155–173, arXiv: https://doi.org/10.1122/1.549250. Jerome et al. (2016) Jerome, J. J. S., Vandenberghe, N. & Forterre, Y. 2016 Unifying impacts in granular matter from quicksand to cornstarch. Phys. Rev. Lett. 117 (9), 098003. Krizou & Clark (2020) Krizou, N. & Clark, A. H. 2020 Power-law scaling of early-stage forces during granular impact. Phys. Rev. Lett. 124, 178002. Lee et al. (2003) Lee, Y. S., Wetzel, E. D. & Wagner, N. J. 2003 The ballistic impact characteristics of kevlar® woven fabrics impregnated with a colloidal shear thickening fluid. Journal of Materials Science 38 (13), 2825–2833. Mukhopadhyay et al. (2018) Mukhopadhyay, Shomeek, Allen, Benjamin & Brown, Eric 2018 Testing constitutive relations by running and walking on cornstarch and water suspensions. Phys. Rev. E 97, 052604. Oyarte Gálvez et al. (2017) Oyarte Gálvez, Loreto, de Beer, Sissi, van der Meer, Devaraj & Pons, Adeline 2017 Dramatic effect of fluid chemistry on cornstarch suspensions: Linking particle interactions to macroscopic rheology. Phys. Rev. E 95, 030602. Peters & Jaeger (2014) Peters, I. R. & Jaeger, H. M. 
2014 Quasi-2d dynamic jamming in cornstarch suspensions: visualization and force measurements. Soft Matter 10 (34), 6564–6570. Reynolds (1885) Reynolds, Osborne 1885 Lvii. on the dilatancy of media composed of rigid particles in contact. with experimental illustrations. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 20 (127), 469–481, arXiv: https://doi.org/10.1080/14786448508627791. Seto et al. (2013) Seto, R., Mari, R., Morris, J. F & Denn, M. M. 2013 Discontinuous shear thickening of frictional hard-sphere suspensions. Phys. Rev. Lett. 111 (21), 218301. Uehara et al. (2003) Uehara, J. S., Ambroso, M. A., Ojha, R. P. & Durian, D. J. 2003 Low-speed impact craters in loose granular media. Phys. Rev. Lett. 90, 194301. Waitukaitis (2014) Waitukaitis, Scott R 2014 Impact-activated solidification of cornstarch and water suspensions. Springer. Waitukaitis & Jaeger (2012) Waitukaitis, S. R. & Jaeger, H. M. 2012 Impact-activated solidification of dense suspensions via dynamic jamming fronts. Nature 487 (7406), 205. Waitukaitis et al. (2013) Waitukaitis, S. R., Roth, L. K., Vitelli, V. & Jaeger, H. M. 2013 Dynamic jamming fronts. EPL (Europhysics Letters) 102 (4), 44001. Walsh et al. (2003) Walsh, A. M., Holloway, K. E., Habdas, P. & de Bruyn, J. R. 2003 Morphology and scaling of impact craters in granular media. Phys. Rev. Lett. 91, 104301. Wyart & Cates (2014) Wyart, M. & Cates, M. E. 2014 Discontinuous shear thickening without inertia in dense non-brownian suspensions. Phys. Rev. Lett. 112, 098302. Zhao et al. (2015) Zhao, R., Zhang, Q., Tjugito, H. & Cheng, X. 2015 Granular impact cratering by liquid drops: Understanding raindrop imprints through an analogy to asteroid strikes. Proc. Natl. Acad. Sci. 112 (2), 342–347.
BI-TP-96/50
November 1996

TOPOLOGY IN 4D SIMPLICIAL QUANTUM GRAVITY

S. Bilke¹, Z. Burda² and B. Petersson¹
¹Fakultät für Physik, Universität Bielefeld, Postfach 10 01 31, Bielefeld 33501, Germany
²Institute of Physics, Jagellonian University, ul. Reymonta 4, PL-30 059, Kraków 16, Poland

Abstract

We simulate $4d$ simplicial gravity for three topologies $S^{4}$, $S^{3}\times S^{1}$ and $S^{1}\times S^{1}\times S^{1}\times S^{1}$ and show that the free energy for these three fixed topology ensembles is the same in the thermodynamic limit $N_{4}\rightarrow\infty$. We show that the next-to-leading order corrections, at least away from the critical point, can be described by kinematic sources.

1 Introduction

It is not a priori clear whether in a path-integral formulation of quantum gravity the sum over metrics should also run over topologies. In a theory containing topology fluctuations, only those topologies which maximize the extensive part of the free energy contribute to the sum in the thermodynamic limit. Other contributions are exponentially suppressed. If one believes that all topology excitations should be present in the continuum theory, the bulk volume contribution to the free energy should be independent of topology. An explicit example of such a theory is provided by two dimensional quantum gravity. Here the Einstein-Hilbert action is of purely topological nature. It can be shown explicitly that the coefficient $\mu_{c}$ of the leading (extensive) part of the entropy does not depend on the topology. The coefficient of the logarithmic correction is $\gamma-3$, where $\gamma$ is the so-called surface susceptibility exponent. The exponent $\gamma$ does, however, depend linearly on the genus of the surface. The linear dependence leads to the double scaling limit, in which one can reduce the number of non-perturbative modes of the theory with fluctuating topology to the solutions of the Painlevé II equation. In four dimensions the situation is more complicated. At present, no classification of topologies is known. Therefore the topological part in the action is unknown. Simplicial quantum gravity allows us to sum over geometries with fixed topology. We investigate numerically three different topologies and show that in these cases, up to the leading order, the free energy does not depend on the topology. We observe a topology dependence in the next-to-leading order. We study those volume corrections and analyze their sources at the kinematic bounds. Some of our results have already been presented at Lattice 96 [1].

2 Definitions

The partition function of simplicial quantum gravity in the grand canonical ensemble is $$Z(\kappa_{2},\kappa_{4})=\sum_{\cal T}\frac{1}{C(T)}e^{-\kappa_{4}N_{4}+\kappa_{2}N_{2}},$$ (1) where the first summation is over all $4d$-simplicial manifolds $\cal T$ with fixed topology [2]. The parameter $\kappa_{4}$ is proportional to the cosmological constant and $\kappa_{2}$ is a linear combination of the inverse of the Newton constant and the cosmological constant in the naive continuum limit. The prefactor $1/C(T)$ is a remnant of the invariance group and divides out the internal symmetry factor of the triangulation. The free energy $F$ in the canonical ensemble is defined as: $$e^{F(\kappa_{2},N_{4})}=\sum_{T\in{\cal T}(N_{4})}\frac{1}{C(T)}e^{\kappa_{2}N_{2}(T)}$$ (2) The sum runs over fixed-topology $4d$ simplicial manifolds with a fixed number $N_{4}$ of $4$-simplices.
In the large volume limit $N_{4}\rightarrow\infty$ the free energy is assumed to have the form: $$F(\kappa_{2},N_{4})=N_{4}f(\kappa_{2})+\delta(\kappa_{2},N_{4}),$$ (3) where the function $\delta$ is a finite size correction, i.e. for any $\kappa_{2}$ $$\lim_{N_{4}\rightarrow\infty}\frac{\delta(\kappa_{2},N_{4})}{N_{4}}=0.$$ (4) To recover some basic properties of the theory it is convenient to study the derivatives of the free energy. The action density $$r(\kappa_{2},N_{4})=\frac{1}{N_{4}}\frac{\partial F}{\partial\kappa_{2}}=\frac{\langle N_{2}\rangle}{N_{4}}$$ (5) is normalized such that it becomes an intensive quantity in the thermodynamic limit $N_{4}\rightarrow\infty$. It is related to the average bare Regge curvature $R_{av}$ by $r=R_{av}+\alpha$, where $\alpha=\frac{10}{2\pi}\mbox{arccos}(1/4)\approx 2.09784$. The average is taken in the canonical ensemble (2). The derivative with respect to $N_{4}$, $$\tilde{\kappa}_{4}(\kappa_{2},N_{4})=\frac{\partial F}{\partial N_{4}},$$ (6) will be called the critical value of the parameter $\kappa_{4}$. Our definition differs slightly from the one proposed in [3]. This quantity is a measure of the finite size dependence of the free energy, i.e. how fast it approaches the thermodynamic limit $\tilde{\kappa}_{4}\rightarrow\tilde{\kappa}_{4}(\kappa_{2})$ for $N_{4}\rightarrow\infty$. In the thermodynamic limit $N_{4}\rightarrow\infty$ the value $\tilde{\kappa}_{4}(\kappa_{2})$ defines a critical line of the model corresponding to the radius of convergence of the series (2). Taking the second derivative, we see that $\tilde{\kappa}_{4}$ is related to the action density $r$ in the following way: $$\frac{\partial\tilde{\kappa}_{4}}{\partial\kappa_{2}}=r+N_{4}\frac{\partial r}{\partial N_{4}}$$ (7) It is important to note that in the large $N_{4}$ limit the second term on the right hand side of (7) goes to zero, and the critical parameter $\tilde{\kappa}_{4}$ becomes an integral of $r$. Thus, if $r$ is independent of topology in the thermodynamic limit, so is $\tilde{\kappa}_{4}$, unless the integration constant depends on the topology. To fix the integration constant, one just has to measure $\tilde{\kappa}_{4}$ for one particular value of $\kappa_{2}$.

3 Methods

The average action density (5) can easily be measured in canonical simulations. The quantity $\tilde{\kappa}_{4}$ (6) requires non-conventional methods. Some of them, like those based on the sum rules [4] or on an analysis of the baby universe distributions [5], permit one to extract the next-to-leading volume corrections directly. These methods are unfortunately limited to the elongated phase. Following [3], we adopt here a more general method based on multicanonical simulations, which works equally well in the entire range of the coupling $\kappa_{2}$. To learn how the free energy depends on $N_{4}$ one lets the volume fluctuate in the external potential $U(N_{4})$. Measuring the resulting $N_{4}$ distribution and combining it with the known form of $U$, one gets the dependence of the free energy on $N_{4}$. The freedom one has in choosing the potential $U$ should be used to minimize the error for the quantity one wants to measure. In this particular case one wants to measure $\partial F/\partial N_{4}$ for the function $F$, which is expected to smoothly approach a function $N_{4}f(\kappa_{2})$ linear in $N_{4}$ for large $N_{4}$, as given in eqs. (3) and (4).
A Gaussian term controlling fluctuations around a fixed volume $V_{4}$ is well suited for this problem [3]: $$U=-\kappa_{4}N_{4}-\frac{\gamma}{2}(N_{4}-V_{4})^{2},$$ (8) but other terms can be used as well [2]. The multicanonical partition function for this potential reads: $$Z=\sum_{T}e^{\kappa_{2}N_{2}+U(N_{4})}=\sum_{N_{4}}e^{F(\kappa_{2},N_{4})-\kappa_{4}N_{4}-\frac{\gamma}{2}(N_{4}-V_{4})^{2}}$$ (9) Expanding $F(\kappa_{2},N_{4})$ around $N_{4}=V_{4}$ one sees that the $N_{4}$ distribution becomes Gaussian: $$P(x=N_{4}-V_{4})\sim\exp(-\frac{\Gamma}{2}(x-x_{0})^{2}+...)$$ (10) when $\kappa_{4}$ is tuned to be close to the derivative $\partial F/\partial V_{4}$ and $\gamma$ is much larger than the second derivative $\partial^{2}F/\partial V_{4}^{2}$. This means that the range of the distribution is much smaller than the typical scale for changes in $F$. The parameters $\Gamma$ and $x_{0}$ are related to $F$ and $U$: $$\Gamma=\gamma-\partial^{2}F/\partial V_{4}^{2},\quad x_{0}=\big{(}\partial F/\partial V_{4}-\kappa_{4}\big{)}/\Gamma.$$ (11) Both these quantities can be measured in the simulations, namely $\Gamma$ from the width of the distribution and $x_{0}$ from the shift of the maximum from $N_{4}=V_{4}$. From the numerical results one gets an estimator for the critical coupling at $V_{4}$: $$\tilde{\kappa}_{4}=\partial F/\partial V_{4}=\Gamma x_{0}+\kappa_{4}.$$ (12) In this formula, $\kappa_{4}$ is the coupling used in the simulation, from which one extracted $x_{0}$ and $\Gamma$. The estimator can be improved by minimizing $x_{0}$. This can be done recursively by setting $\kappa_{4}=\tilde{\kappa}_{4}$ in the potential. After performing the measurements one can check the validity of the assumptions used to write the Gaussian approximation (10). The value of $\gamma$ should not be too large, because suppressing volume fluctuations can spoil the mobility of the algorithm. On the other hand, it should not be too small either, because the average number of sweeps between two configurations with canonical volume $V_{4}$ grows with decreasing $\gamma$. In particular we checked that for $\gamma=0.0001$, $\Gamma$ computed from the width of the distribution, $\Gamma=1/\sigma^{2}(N_{4})$, i.e. $\Delta N_{4}=\sqrt{\sigma^{2}(N_{4})}=100$, was equal to $\gamma$ within the error bars, meaning that $\gamma\gg\partial^{2}F/\partial V_{4}^{2}$. Therefore the free energy changes very slowly in the range of the distribution width, as needed for the approximation (10). In this investigation we have performed simulations with $V_{4}=$ 4000, 8000, 16000, 32000, 64000. Because the local moves [6] used in the Monte Carlo simulation preserve the topology $t$ of the manifold, $t$ is given by the topology of the starting configuration. In our simulation we choose to simulate a spherical $S^{4}$ topology and two different tori, namely $S^{3}\times S^{1}$ and $S^{1}\times S^{1}\times S^{1}\times S^{1}$. The starting configuration for the sphere is, as usual, the $4d$-boundary of a 5-simplex. The manifold $S^{3}\times S^{1}$ can be produced from a spherical manifold by taking away two separate 4-simplices and gluing together the boundaries $b_{1},b_{2}$ which are created when removing the two 4-simplices. However, two vertices in a 4d simplicial manifold may have at most one common link. To avoid creation of a double connection in the gluing procedure, the distance between the vertices on $b_{1}$ and those on $b_{2}$ has to be at least three links.
One can ensure this without going through a tedious check, by gluing cyclically together three copies of the original (spherical) manifold, i.e. $b_{1}/b_{2}^{\prime},b_{1}^{\prime}/b_{2}^{\prime\prime},b_{1}^{\prime\prime}/% b_{2}$. The (double-) prime is used to distinguish the different copies. The boundaries $b_{1},b_{2}$ are chosen such that they have no common vertex. The $S^{1}\times S^{1}\times S^{1}\times S^{1}$ manifold can be built out of the regular $3^{4}$ square torus by dividing each elementary $4d$-cube into 4-simplices in the following way. For each cube we mark two points $p_{0}=(0,0,0,0)$ and $p_{4}=(1,1,1,1)$ lying on the opposite ends of the main diagonal and connect them by one of the shortest paths going along the edges of the cube. The shortest path goes through $4$ edges, each in a different direction, and through three points, say $p_{1},p_{2},p_{3}$. There are $24$ such paths. Each set of points $p_{0},p_{1},p_{2},p_{3},p_{4}$ forms a 4-simplex. 4 Finite Size Analysis and Results The elongated phase of simplicial gravity is well described as an ensemble of branched polymers [7], [8], [9]. This means that in the large volume limit one can use the Ansatz $$F(\kappa_{2},N_{4})=N_{4}f_{0}(\kappa_{2})+(\gamma-3)\log N_{4}+f_{1}(\kappa_{% 2})$$ (13) for the free energy. The corrections are expected to be of order ${\cal O}(1/N_{4})$. The correction coefficient $\gamma$ is assumed to depend only on the genus $g$ of the underlying branched polymer structure. Differentiating (13) with respect to $\kappa_{2}$ one sees that for the action density $r$ (5) $$r=f_{0}^{\prime}(\kappa_{2})+\frac{f_{1}^{\prime}(\kappa_{2})}{N_{4}}$$ (14) the logarithmic corrections disappear. Therefore one should take the next corrections, namely $1/N_{4}$, into account. They can appear in $r$ for purely kinematic reasons. To understand their origin consider the limit of large positive $\kappa_{2}$, in which only triangulations maximizing $N_{2}$ contribute to the sum (2). Such triangulations can be obtained from barycentric subdivisions of 4-simplices applied successively to a minimal starting configuration, possibly mixed with micro-canonical transformation, which do not change $N_{2}$ and $N_{4}$. By the minimal configuration we mean the minimal volume triangulation which maximizes $N_{2}$. For the barycentric subdivisions one gets the relation $N_{2}=5/2N_{4}+c^{0}$, where the constant $$c^{0}=N_{2}^{0}-5/2N_{4}^{0}$$ (15) characterizes the initial minimal configuration (indicated by the index 0). The number $N_{2}$ of triangles is related to the action density $r=N_{2}/N_{4}=5/2+c^{0}/N_{4}$. This means, the constant $c^{0}$ leads to $1/N_{4}$ corrections of the action density. The contributions to the sum (2) from triangulations built from non-minimal ones (ie smaller $N_{2}$) are suppressed exponentially by $\exp-\kappa_{2}(c^{0}-c)$, where $c$ characterizes the non-minimal start configuration with $c(\kappa_{2})<c^{0}$. For the sphere, the minimal configuration is the surface of a $5$-simplex, and therefore $c^{0}$ is known. For the other topologies we extracted the number listed below by a numerical experiment. With the standard topology preserving Monte Carlo process we use a cooling procedure, in which we increase $\kappa_{4}$, to decrease $N_{4}$, and $\kappa_{2}$ to maximize $N_{2}$. In fact we have to increase $\kappa_{2}$ slowly compared to $\kappa_{4}$, because increasing $\kappa_{2}$ also increases the pseudo-critical coupling $\tilde{\kappa}_{4}(\kappa_{2})$. 
For the topologies studied, we found that the following configurations are minimal: $$\begin{array}[]{rllll}S^{4}&:&N_{4}^{0}=6,&N_{2}^{0}=20&\Rightarrow c^{0}=5,\\ &&&&\\ S^{3}\times S^{1}&:&N_{4}^{0}=110,&N_{2}^{0}=44&\Rightarrow c^{0}=0,\\ &&&&\\ S^{1}\times S^{1}\times S^{1}\times S^{1}&:&N_{4}^{0}=704,&N_{2}^{0}=1472&\Rightarrow c^{0}=288.\\ \end{array}$$ (16) For the sphere $S^{4}$ the $1/N_{4}$ effect is already very difficult to detect for volumes in the range of a few thousand 4-simplices, and it would require extremely long runs to reduce the error bars below it. For the manifold $S^{3}\times S^{1}$ the effect is not present at all. The corrections are, however, two orders of magnitude larger for $S^{1}\times S^{1}\times S^{1}\times S^{1}$ and are measurable in the volume range used in the simulations. This estimate of the $1/N_{4}$ effect is exact for infinite positive $\kappa_{2}$, but one expects it to hold, albeit with a slowly varying coefficient $c(\kappa_{2})$, in the entire elongated phase. In fig. 1 we show the action density $r$ measured in the elongated phase for $\kappa_{2}=2.0$ against $1/N_{4}$. On the same figure we display the curve $r_{\infty}+c^{0}/N_{4}$ with $c^{0}=288$ and $r_{\infty}=2.482$, which fits the data points very well. We note that $r_{\infty}$ does not depend on topology, at least for those used in the simulation. We find, with the statistics available, no volume or topology dependence of $\tilde{\kappa}_{4}$. This is compatible with the Ansatz $$\tilde{\kappa}_{4}(\kappa_{2},N_{4})=f_{0}(\kappa_{2})+\frac{\gamma-3}{N_{4}},$$ (17) because $\gamma$ is known to be ${\cal O}(1)$. With the method used, the correction could only be separated from the statistical noise with a disproportionate amount of computer time. One could instead use the methods of [5] to determine $\gamma$. We find $\tilde{\kappa}_{4}^{\infty}=5.659(4)$ for $\kappa_{2}=2.0$ in the infinite volume limit. In the crumpled phase, we use the power-law Ansatz $$F(\kappa_{2},N_{4})=N_{4}f_{0}(\kappa_{2})+f_{1}(\kappa_{2})N_{4}^{\delta}.$$ (18) Taking the derivative with respect to $N_{4}$ one gets $$\tilde{\kappa}_{4}(\kappa_{2},N_{4})=f_{0}(\kappa_{2})+\delta f_{1}(\kappa_{2})N_{4}^{\delta-1}.$$ (19) For the action density (5) one finds $$r=f_{0}^{\prime}(\kappa_{2})+f_{1}^{\prime}(\kappa_{2})N_{4}^{\delta-1}.$$ (20) We checked the Ansatz by fitting our numerical data for $r(N_{4})$ to (20): $$\begin{array}[]{lcccc}\mbox{Topology}&\delta&r^{\infty}=f_{0}^{\prime}&\log(f_{1}^{\prime})&\chi^{2}/\mbox{dof}\\ S^{4}&0.5\,(2)&2.039\,(+0.010/-0.013)&1.9\,(1.3)&0.78\\ (S^{1})^{4}&0.6\,(2)&2.028\,(+0.008/-0.016)&1.0\,(0.9)&0.87\\ S^{1}\times S^{3}&0.5\,(2)&2.038\,(+0.010/-0.021)&1.8\,(1.3)&0.31\\ \end{array}$$ and for $\tilde{\kappa}_{4}(N_{4})$ to (19): $$\begin{array}[]{lcccc}\mbox{Topology}&\delta&\kappa_{4}^{\infty}=f_{0}&\log(\delta f_{1})&\chi^{2}/\mbox{dof}\\ S^{4}&0.6\,(2)&1.20\,(2)&1.4\,(1.3)&0.22\\ (S^{1})^{4}&0.6\,(3)&1.20\,(2)&1.4\,(1.5)&0.10\\ S^{1}\times S^{3}&0.6\,(2)&1.20\,(2)&1.4\,(1.3)&0.18\\ \end{array}$$ In the tables we give the logarithms of $f_{1}^{\prime}$ and $\delta f_{1}$, since these have approximately symmetric errors. One can see that the values do not depend, within errors, on topology. In figure 2 we show the numerical data for $\kappa_{4}$ for the three topologies and $\delta=0.5$. The value $\delta=0.5$ can be understood by looking at the typical configurations that dominate the ensemble for large negative $\kappa_{2}$. These are configurations which minimize the free energy, i.e. 
which have, for a fixed volume, the minimal number of vertices and thereby the minimal number $N_{2}$ of triangles. For the three-dimensional case such configurations were constructed in [10]. This construction can easily be extended to the four-dimensional case. One starts with a $2d$ triangulation of the sphere with $t$ triangles. At each triangle one builds a $4d$ pancake neighborhood from $q$ 4-simplices lying around the triangle in such a way that the links opposite to this triangle form a circle. An opposite link is defined to be a link which does not share a common vertex with the triangle. The next step is to put neighboring pancakes together by identifying these circles with the $3d$ faces of the neighboring pancakes. Each pancake has three such faces, which lie between the circle and an edge of the basic triangle. It also has three neighboring pancakes. After this step one gets an $S^{4}$ sphere with $tq$ 4-simplices and $2+t/2+q$ vertices. The highest connectivity, $N_{4}\sim(N_{0}-2)^{2}/2$, is reached when $t=2q$. The number $N_{2}$ of triangles is a linear combination of $N_{0}$ and $N_{4}$; since $N_{0}-2\sim\sqrt{2N_{4}}$ for these configurations, equation (5) shows that the finite-volume correction to $r$ falls off as $1/\sqrt{N_{4}}$, i.e. $\delta=0.5$. We note that in [11] an argument in favor of $\delta=0.75$ was given; this value cannot be excluded by our numerical data. In [12] evidence was given that, for spherical topology, the phase transition is of first order. This was confirmed in [13], [14]. We could observe flip-flops in the action density also for the two tori under investigation. We interpret this as a hint that the transition is of first order in these cases as well. Finally we want to comment on the behavior of the algorithm in the crumpled phase. The typical configuration of simplicial gravity in this regime has one so-called singular link. The local volume of a singular link, i.e. the number of 4-simplices which contain this link, diverges when the volume of the entire configuration goes to infinity. The relaxation time, i.e. the number of Monte Carlo sweeps required until the singular link appears in the configuration, was extremely long for all topologies. This effect is even more pronounced if one starts with a branched-polymer-like configuration. For the $(S^{1})^{4}$ torus and small volumes (less than 64000 4-simplices) the situation is even worse. It was, at least for the run lengths used in our numerical experiments, impossible to relax to such singular configurations. On the other hand, we know that they exist, because they could be reached by shrinking down larger configurations containing a singular link. It seems it is difficult for the algorithm to deal with two different defects, the singular link and the hole, at the same time. We will discuss this point, which is directly related to the question of practical ergodicity, more carefully in a forthcoming publication. 5 Discussion and Conclusions In this paper we have investigated the behavior of the entropy density and the curvature for three different topologies in four-dimensional simplicial gravity. We employed manifolds consisting of between 4000 and 64000 simplices. We concentrated on two values of the gravitational coupling constant $\kappa_{2}$, one in the crumpled and one in the branched-polymer phase. We found that in both cases the values of the entropy density and the curvature in the infinite volume limit are equal for these three topologies. 
This gives further support to the conjecture that these limits exist, a question which was discussed in the literature some time ago [15], [11], [16]. Furthermore, it supports the hypothesis that all topologies contribute to a sum over topologies, as in two dimensions. The value of the entropy exponent can be compared with the estimates given in [17], based on a summation over all distributions of curvature. This estimate, which surprisingly enough is exact for the leading term in two dimensions, does not, however, directly give results in agreement with our numerical data for $\tilde{\kappa}_{4}$. We further analyzed the finite-size effects in detail and compared them to estimates from simple kinematic arguments. This approach explains the finite-size corrections very well. It may, of course, still be possible that these effects are of a more complicated nature near the transition. Finally, as also seen in previous investigations, we observe that the algorithm has very long relaxation times in the crumpled phase. We even found that for the $(S^{1})^{4}$ torus the ground state seemed not to be accessible using the approximately fixed-$N_{4}$ algorithm, but only by passing through states with much larger $N_{4}$ values. This breakdown of practical ergodicity is still under investigation. 6 Acknowledgments We are grateful to J. Ambjørn, P. Białas, J. Jurkiewicz, and A. Krzywicki for discussions. The work was supported by the Deutsche Forschungsgemeinschaft under grant Pe340/3. We thank the HLRZ Jülich for the computer time on the Paragon on which the work was done. References [1] S. Bilke, Z. Burda, A. Krzywicki, B. Petersson, Phase transition and topology in 4-d simplicial quantum gravity, hep-lat/9608027 [2] J. Ambjørn and J. Jurkiewicz, Phys. Lett. B278 (1992) 42. [3] S. Catterall, J. Kogut and R. Renken, Phys. Rev. Lett. 72 (1994) 4062. [4] D.V. Boulatov and V.A. Kazakov, Phys. Lett. B214 (1988) 581. [5] J. Ambjørn, J. Jain and G. Thorleifsson, Phys. Lett. B307 (1993) 34. [6] M. Gross, S. Varsted, Nucl. Phys. B378 (1992) 367. [7] J. Ambjørn and J. Jurkiewicz, Nucl. Phys. B451 (1995) 643. [8] P. Bialas, Correlations in fluctuating geometries, hep-lat/9608029 [9] P. Bialas, Z. Burda, B. Petersson, J. Tabaczek, Appearance of Mother Universe and Singular Vertices in Random Geometries, hep-lat/9608030 [10] F. David, Simplicial Quantum Gravity and Random Lattices, Les Houches Sum. Sch. 92 (1992) 679. [11] J. Ambjørn and J. Jurkiewicz, Phys. Lett. B335 (1994) 355. [12] P. Bialas, Z. Burda, A. Krzywicki, B. Petersson, Nucl. Phys. B472 (1996) 293. [13] B. V. de Bakker, Further evidence that the transition of 4-D dynamical triangulation is first order, hep-lat/9603024 [14] S. Catterall, G. Thorleifsson, J. Kogut, R. Renken, Simplicial Gravity In Dimensions Greater Than Two, hep-lat/9608042 [15] S. Catterall, J. Kogut, R. Renken, Phys. Rev. Lett. 72 (1994) 4062. [16] B. Brügmann and E. Marinari, Phys. Lett. B349 (1995) 35. [17] C. Bartocci, U. Bruzzo, M. Carfora, A. Marzuoli, Entropy of random coverings and 4-D quantum gravity, hep-th/9412097; J. Ambjørn, Private Communication.
‘‘Quantization’’ of higher Hamiltonian analogues of the Painlevé I and Painlevé II equations with two degrees of freedom B. I. Suleimanov (This work was supported by the Federal Target Programme, contract 02.740.11.0612.) Abstract We construct a solution of an analog of the Schrödinger equation for the Hamiltonian $H_{I}(z,t,q_{1},q_{2},p_{1},p_{2})$ corresponding to the second member $P_{1}^{2}$ of the Painlevé I hierarchy. This solution is produced by an explicit change of variables from a solution of the linear systems whose compatibility condition is the nonlinear ordinary differential equation $P_{1}^{2}$ with respect to the independent variable $z$. The same solution also satisfies an analog of the Schrödinger equation corresponding to the Hamiltonian $H_{II}(z,t,q_{1},q_{2},p_{1},p_{2})$ of the Hamiltonian system with respect to $t$ which is compatible with $P_{1}^{2}$. A similar situation occurs for the member $P_{2}^{2}$ of the Painlevé II hierarchy. For all six Painlevé ordinary differential equations (ODEs) $q_{zz}=f(z,q,q_{z})$, the solutions of the corresponding pairs of linear systems of the isomonodromic deformation method (IDM) from R. Garnier's paper [1] yield, after an explicit change of variables [2], solutions of the equations $$\frac{\partial}{\partial z}\Psi=H(z,x,-\frac{\partial}{\partial x})\Psi$$ (1) (see also the beginning of the concluding section of the present paper). These equations are determined by the Hamiltonians $H=H(z,q,p)$, quadratic in the momentum $p$, of the Hamiltonian systems $q^{\prime}_{z}=H^{\prime}_{p}(z,q,p)$, $p^{\prime}_{z}=-H^{\prime}_{q}(z,q,p)$, the elimination of $p$ from which gives the six Painlevé ODEs. These six evolution equations (1) are obtained from the Schrödinger equations $$\varepsilon\frac{\partial}{\partial z}\Psi=H(z,x,-\varepsilon\frac{\partial}{\partial x})\Psi,$$ (2) which depend on the Planck constant $h=2\pi\hbar=-2\pi i\varepsilon$, by the formal substitution $\varepsilon=1$. Note that for the ODEs obtained from the Painlevé equations by the substitutions $z_{new}=\varepsilon z$, $q_{new}=\varepsilon q$, the corresponding IDM equations, after one more substitution $x_{new}=\varepsilon x$, turn into compatible systems of linear ODEs which define exact solutions of equations (2) determined by Hamiltonians $H$ depending on the parameter $\varepsilon$. Following the terminology of [3], evolution equations of this kind (with constants $\varepsilon$ not related to $h$) are called ‘‘quantizations’’ in this paper. (For $\varepsilon=1$ they are used, for example, in the description of the filtering of diffusion processes [4].) The results of [2] were further developed in [3], [5]–[8]. However, the question of such ‘‘quantizations’’ for the higher analogues of the Painlevé equations has not been studied until now. 
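As a minimal illustration of the construction just described, namely the elimination of the momentum from a Hamiltonian system whose Hamiltonian is quadratic in $p$, the following Python/SymPy sketch performs this elimination for the classical Okamoto Hamiltonian of the Painlevé II equation. That particular Hamiltonian is not taken from the present paper; it serves only as the simplest explicit example of the quadratic structure shared by the higher analogues $H_{I}$, $H_{II}$ considered below.

```python
import sympy as sp

z, alpha, qs, p = sp.symbols('z alpha q p')

# Okamoto Hamiltonian of Painleve II, quadratic in the momentum p; it is used here
# only to illustrate how eliminating p from Hamilton's equations yields the ODE.
H = p**2 / 2 - (qs**2 + z / 2) * p - (alpha + sp.Rational(1, 2)) * qs

dH_dp = sp.diff(H, p)      # right-hand side of q'_z = H'_p
dH_dq = sp.diff(H, qs)     # minus the right-hand side of p'_z = -H'_q

# Promote q to a function of z, solve q'_z = H'_p for p and substitute it into
# p'_z = -H'_q, thereby eliminating the momentum.
q = sp.Function('q')(z)
p_expr = sp.solve(sp.Eq(q.diff(z), dH_dp.subs(qs, q)), p)[0]
residual = p_expr.diff(z) + dH_dq.subs({qs: q, p: p_expr})
print(sp.expand(residual))   # q'' - 2*q**3 - z*q - alpha  ->  Painleve II when set to zero
```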
Ниже такие ‘‘квантования’’ приводятся для совместных решений уравнений Кортевега — де Вриза (КдВ) $$u_{t}=-uu_{z}-u_{zzz}$$ (3) и Нелинейного уравнения Шредингера (НУШ) $$-iu_{t}=u_{zz}+2\delta|u|^{2}u\qquad(\delta=const\in R)$$ (4) с ОДУ, определяемых суммами стационарных частей первых высших автономной симметрий уравнений (3), (4) и их симметрий Галилея. (Эти высшие аналоги, соответственно, первого и второго уравнений Пенлеве, эквивалентны двум парам совместных гамильтоновых систем ОДУ с двумя степенями свободы по независимым переменным $z$ и $t$.) 1 Член $P_{1}^{2}$ иерархии уравнения Пенлеве I 1.1. Первым высшим представителем $P_{1}^{2}$ иерархии изомонодромных нелинейных ОДУ $P_{1}^{n}$ уравнения Пенлеве I, называемых также массивными $(2n+1,2)$ струнными уравнениями [9], [10], является уравнение $$u_{zzzz}+\frac{5}{3}uu_{zz}+\frac{5}{6}u^{2}_{z}+\frac{5}{18}(z-tu+u^{3})=0,% \qquad t=const.$$ (5) Интерес к ОДУ (5) в первую очередь связан с тем, что оно обладает специальным решением $u(z,t)$, возникающим при исследовании самых разных задач математической физики. В частности [11], это решение cовпадает с решением Гуревича—Питаевского (ГП) уравнения КдВ (3), введенным в рассмотрение в [12] в качестве функции, универсальным образом описывающая влияние малых дисперсионных добавок на опрокидывание простых волн в нелинейной гидродинамике. (В главном порядке асимптотика специального решения ГП при $t\to\infty$ описана в известной работе [13]. Результаты [13] уточнены в [11], [14]—[18], в [19] доказана его гладкость, в [17] поведение решения ГП промоделировано численно.) Подобную же роль эта специальная функция играет не только в случае простых волн, но и [14], [15], [20]—[22] для решений общего положения двухкомпонентных гидродинамических систем с малой дисперсией. Ряд конкретных двухкомпонентных систем (большей частью не интегрируемых), решения которых подтверждают справедливость последнего утверждения, описан в [15]— см. также [17], [21], [22]. А согласно гипотезе, сформулированной Б. А. Дубровиным [21], [22], решение ГП для задач с малой дисперсией должно играть еще более универсальную роль. Кроме того [11], [17], эта же нелинейная специальная функция изучалась в работах [23], [24] в связи с задачами квантовой теории гравитации. 1.2. Совместные решения уравнений (3) и (5) относятся к классу измононодромных решений уравнений нулевой кривизны [25]. Эти уравнения есть условие совместности линейных систем ( всюду далее нижний индекс при $u$ означает порядок производной $u$ по переменной $z$) $$\Phi_{z}=\left(\begin{array}[]{cc}0&1\\ \zeta-u/6&0\end{array}\right)\Phi,\Phi_{t}=\left(\begin{array}[]{cc}u_{1}/6&-u% /3-4\zeta\\ u_{2}/6+u^{2}/18+\zeta u/3-4\zeta^{2}&-u_{1}/6\end{array}\right)\Phi,$$ (6) $$\frac{5}{1728}\Phi_{\zeta}=(\frac{4\zeta u_{1}+u_{3}+uu_{1}}{96}\left(\begin{% array}[]{cc}-1&0\\ 0&1\end{array}\right)+(\zeta^{2}+\frac{\zeta u}{12}+\frac{u_{2}}{48}+\frac{u^{% 2}}{96}-\frac{5t}{288})\left(\begin{array}[]{cc}0&1\\ 0&0\end{array}\right)+$$ $$+(\zeta^{3}-\frac{\zeta^{2}}{12}-\zeta\frac{6u_{2}+u^{2}+5t}{288}+\frac{u_{2}u% }{288}-\frac{u_{1}^{2}}{576}+\frac{u^{3}}{864}+\frac{5z}{1728})\left(\begin{% array}[]{cc}0&0\\ 1&0\end{array}\right))\Phi.$$ (7) З а м е ч а н и е 1. Системы (6), (1), взятые из статьи [19], простыми заменами сводятся к трем линейных системам, выписанным ранее в [11]. 
При этом имеет место система ОДУ, состоящая из (3) и уравнений $$(u_{1})_{t}=\frac{2}{3}uu_{2}-\frac{1}{6}u_{1}^{2}+\frac{5}{18}(z-tu+u^{3}),$$ $$(u_{2})_{t}=\frac{2}{3}uu_{3}+\frac{1}{3}u_{1}u_{2}+\frac{5}{18}(1-tu_{1}+3u_{% 1}u^{2}),$$ (8) $$(u_{3})_{t}=u_{3}u_{1}+\frac{1}{3}(u_{2})^{2}-\frac{5}{18}u^{2}u_{2}+\frac{10}% {9}u(u_{1})^{2}-\frac{5}{27}(u^{4}-tu^{2}+zu)-\frac{5}{18}tu_{2}.$$ ОДУ (5) и данная система ОДУ эквивалентны гамильтоновым системам с двумя степенями свободы — соответственно, системам $$(q_{j})^{\prime}_{z}=(H_{I})^{\prime}_{p_{j}},\qquad(p_{j})^{\prime}_{z}=-(H_{% I})^{\prime}_{q_{j}}\qquad(j=1,2),$$ (9) $$(q_{j})^{\prime}_{t}=(H_{II})^{\prime}_{p_{j}},\qquad(p_{j})^{\prime}_{t}=-(H_% {II})^{\prime}_{q_{j}}\qquad(j=1,2),$$ (10) c квадратичными по импульсам $p_{j}$ гамильтонианами $$H_{I}(z,t,q_{1},q_{2},p_{1},p_{2})=p_{1}p_{2}-\frac{5q_{1}^{4}}{8}+\frac{5q_{2% }q_{1}^{2}}{2}-\frac{q_{2}^{2}}{2}-\frac{5tq_{1}^{2}}{36}+\frac{5zq_{1}}{108},$$ (11) $$H_{II}(z,t,q_{1},q_{2},p_{1},p_{2})=-\frac{p_{1}^{2}}{2}-q_{1}p_{1}p_{2}+(q_{2% }-\frac{5t}{36})p_{2}^{2}+\frac{5p_{2}}{108}+$$ $$+\frac{q_{1}^{5}}{2}-2q_{1}q_{2}^{2}-\frac{5tq_{1}^{3}}{36}+\frac{5tq_{1}q_{2}% }{18}-\frac{5zq_{1}^{2}}{216}-\frac{5zq_{2}}{108},$$ (12) где $q_{1}=u/6$, $q_{2}=u_{2}/6+5u^{2}/72$ $p_{1}=u_{3}/6+5uu_{1}/36$, $p_{2}=u_{1}/6$ (эти формулы получены, исходя из аналогий с формулами (5.1.7) — (5.1.10) статьи [26] для двухзонных решений уравнения КдВ). Ниже показано, что каждое решение (6), (1) задает решение уравнений ($\varepsilon=5/54$) $$\varepsilon\Psi_{z}=\varepsilon^{2}\Psi_{xy}+[-\frac{5x^{4}}{8}+\frac{5x^{2}y}% {2}-\frac{y^{2}}{2}-\frac{5tx^{2}}{36}+\frac{5zx}{108}]\Psi,$$ (13) $$\varepsilon\Psi_{t}=\varepsilon^{2}[-\frac{\Psi_{xx}}{2}-x\Psi_{xy}+\frac{1}{2% }\frac{\partial}{\partial y}((y-\frac{5t}{36})\Psi_{y})+(y-\frac{5t}{36})\frac% {\Psi_{yy}}{2}]-\varepsilon\frac{5\Psi_{y}}{108}+$$ $$+[\frac{x^{5}}{2}-2xy^{2}-\frac{15tx^{3}}{36}+\frac{5txy}{18}-\frac{5zx^{2}}{2% 16}-\frac{5zy}{108}]\Psi,$$ (14) являющихся ‘‘квантованиями’’, определяемыми гамильтонианами (11) и (1) — после формальных замен $q_{1}\to x,$ $q_{2}\to y$ и $p_{1}\to-\varepsilon\frac{\partial}{\partial x}$, $p_{2}\to-\varepsilon\frac{\partial}{\partial y}$ их можно символически записать в виде: $$\varepsilon\Psi_{z}=H_{I}(z,t,,x,y,-\varepsilon\frac{\partial}{\partial x},-% \varepsilon\frac{\partial}{\partial y})\Psi,\quad\varepsilon\Psi_{t}=H_{II}(z,% t,x,y,-\varepsilon\frac{\partial}{\partial x},-\varepsilon\frac{\partial}{% \partial y})\Psi.$$ (15) 1.3. 
Фундаментальные решения ОДУ (6), (1) задают $2\times 2$ матрицы $$M(z,t,\eta,\zeta)=\Phi^{-1}(z,t,\eta)\Phi(z,t,\zeta),$$ (16) которые удовлетворяют двум скалярным линейным уравнениям $$\frac{1728}{5}(\zeta-\eta)M_{z}=M_{\zeta\zeta}-M_{\eta\eta}-2\frac{M_{\zeta}+M% _{\eta}}{\zeta-\eta}-$$ $$-(\frac{1728}{5})^{2}[(\zeta^{5}-\eta^{5})-\frac{5t(\zeta^{3}-\eta^{3})}{144}+% \frac{5z(\zeta^{2}-\eta^{2})}{1728}+r_{1}(t,z)(\zeta-\eta)]M,$$ (17) $$\frac{1728}{20}(\zeta-\eta)M_{t}=\eta M_{\zeta\zeta}-\zeta M_{\eta\eta}-(\zeta% +\eta)\frac{M_{\zeta}+M_{\eta}}{\zeta-\eta}+$$ $$+(\frac{1728}{5})^{2}[\zeta(\eta^{5}-\frac{5t\eta^{3}}{144}+\frac{5z\eta^{2}}{% 1728}+r_{0}(t,z))-\eta(\zeta^{5}-\frac{5t\zeta^{3}}{144}+\frac{5z\zeta^{2}}{17% 28}+r_{0}(t,z))]M,$$ (18) зависимость которых от $u(z,t)$ содержится лишь в коэффициентах $$r_{1}(z,t)=\frac{1}{48^{2}}(2u_{3}u_{1}-u_{2}^{2}+\frac{5u_{1}^{2}u}{3}+\frac{% 5zu}{9}+\frac{5u^{4}}{36}-\frac{5tu^{2}}{18}+\frac{25t^{2}}{36}),$$ $$r_{0}(z,t)=\frac{1}{96^{2}}(u_{3}^{2}+2u_{3}u_{1}u+\frac{2u_{2}^{2}u}{3}-\frac% {u_{2}u_{1}^{2}}{3}+\frac{5u_{2}u^{3}}{3}+\frac{5u_{2}(z-tu)}{9}+\frac{5u_{1}^% {2}u^{2}}{6}+$$ $$+\frac{5tu_{1}^{2}}{36}+\frac{u^{5}}{9}-\frac{5tu^{3}}{27}+\frac{5zu^{2}}{18}-% \frac{25zt}{54}).$$ Заменой $$M=(\zeta-\eta)\exp{S(t,z)}W,$$ (19) где функция $S$ удовлетворяет непротиворечивым равенствам $$S_{z}=-\frac{1728}{5}[r_{1}(t,z)-\frac{25t^{2}}{(288)^{2}}],\qquad S_{t}=\frac% {1728}{5}[4r_{0}(t,z)+\frac{50zt}{(288)^{2}}],$$ уравнения (17), (18) переводятся в независящие от $u(z,t)$ уравнения $$\frac{1728}{5}W_{z}=\frac{W_{\zeta\zeta}-W_{\eta\eta}}{\zeta-\eta}-$$ $$-(\frac{1728}{5})^{2}[\frac{\zeta^{5}-\eta^{5}}{\zeta-\eta}-\frac{5t(\zeta^{3}% -\eta^{3})}{144(\zeta-\eta)}+\frac{5z(\zeta^{2}-\eta^{2})}{1728(\zeta-\eta)}+% \frac{25t^{2}}{(288)^{2}}]W,$$ (20) $$\frac{1728}{20}W_{t}=\frac{\eta W_{\zeta\zeta}-\zeta W_{\eta\eta}}{\zeta-\eta}% -\frac{W_{\zeta}-W_{\eta}}{\zeta-\eta}+(\frac{1728}{5})^{2}[\zeta(\eta^{5}-% \frac{5t\eta^{3}}{144}+\frac{5z\eta^{2}}{1728}-$$ $$-\frac{25zt}{(288)^{2}6})-\eta(\zeta^{5}-\frac{5t\zeta^{3}}{144}+\frac{5z\zeta% ^{2}}{1728}+\frac{25zt}{(288)^{2}6})]\frac{W}{\zeta-\eta}.$$ (21) После перехода от $\zeta$ и $\eta$ к независимым переменным $$x=-\frac{\zeta+\eta}{2},\quad y=-\frac{(\zeta-\eta)^{2}}{2}+\frac{5t}{144}$$ (22) уравнения (1), (21) принимают вид ‘‘квантований’’ (13) и (14). 2 Член $P_{2}^{2}$ иерархии уравнения Пенлеве II 2.1. 
Для вещественных значений $z$ и $t$ ниже описываются ‘‘квантования’’ совместных решений НУШ (4) и ОДУ $$\beta u_{3}-4tu_{1}+6\beta\delta|u|^{2}u_{1}+2izu=0.$$ (23) Наряду с решением ГП уравнения КдВ, такие решения также играют важную роль для задач математической физики с малым параметром: 1) они задают [27], [28] специальные решения Хабермана — Сана НУШ (4) из работы [29], которые в пределе при $\delta\to 0$ переходят в интеграл Пирси $Q=const\int\limits_{-\infty}^{\infty}\exp[-2i(\beta\lambda^{4}+2t\lambda^{2}+x% \lambda)]d\lambda$, и которые для довольно широкого ряда cитуаций описывают влияние малых нелинейностей на высокочастотные асимптотики около острия (клюва) каустики; 2) при $\delta>0$ с помощью других решений уравнений (4) и (23) описывается [30] влияние малой дисперсии на процессы провального самообострения импульса, которые характерны для приближений нелинейной геометрической оптики с нелинейностью общего положения; 3) вообще, ОДУ (23) есть первый высший представитель $P_{2}^{2}$ иерархии $P_{2}^{n}$ уравнения Пенлеве II, важность роли которой (наряду с важностью роли иерархии $P_{1}^{n}$) для широкого класса задач с малым параметром была предсказана А. В. Китаевым в [10], исходя из аналогий с иерархиями интегралов Фурье канонического вида, называемых [31, Гл. VI, §4] специальными функциями волновых катастроф. По поводу аналогий с уравнением Пенлеве $II$ после просмотра раздела 2.4 данной статьи cм. еще раздел 6.1 работы К. Окамото [32]. 2.2. Совместные решения уравнений (4) и (23) также изомонодромны. Они есть [27] условие совместности трех линейных уравнений $$\Phi_{z}=i\left(\begin{array}[]{cc}-\zeta&u\\ \delta u^{*}&\zeta\end{array}\right)\Phi,\quad\Phi_{t}=\left(\begin{array}[]{% cc}-i(2\zeta^{2}-\delta|u|^{2})&2i\zeta u-u_{1}\\ \delta(2i\zeta u^{*}+u^{*}_{1})&i(2\zeta^{2}-\delta|u|^{2})\end{array}\right)\Phi,$$ (24) $$\Phi_{\zeta}=(-[i(4\beta\zeta^{3}+4\zeta t-2\delta\beta|u|^{2}\zeta+z)+\delta% \beta(u_{1}u^{*}-u^{*}_{1}u)]\left(\begin{array}[]{cc}1&0\\ 0&-1\end{array}\right)+$$ $$+\left(\begin{array}[]{cc}0&4i\beta\zeta^{2}u-2\beta\zeta u_{1}+4itu-\beta u_{% t}\\ \delta(4i\beta\zeta^{2}u^{*}+2\beta\zeta u_{1}^{*}+4itu^{*}+\beta u^{*}_{t})&0% \end{array}\right))\Phi,$$ (25) где $u^{*}$ — комплексное сопряжение $u$. При этом $u$ задает [27] решения системы ОДУ по переменной $t$, определяемой НУШ (4) и уравнениями $$\beta u_{tt}=4itu_{t}+2i\beta\delta|u|^{2}u_{t}+(2i+8t\delta|u|^{2})u+2izu_{1}% +2\beta\delta u_{1}(u_{1}u^{*}-u_{1}^{*}u),$$ $$\beta(u_{1})_{t}=2zu+4itu_{1}-2i\beta\delta u(u_{1}u^{*}-u_{1}^{*}u).$$ (26) Из уравнений (4), (23)и (26) cледует постоянство по $z$ и $t$ выражения $$\frac{c}{4\delta\beta^{2}}=-u_{2}^{*}u-u_{2}u^{*}+|u_{1}|^{2}-3\delta|u|^{4}+% \frac{4t}{\beta}|u|^{2}=i(u_{t}u^{*}-u_{t}^{*}u)+|u_{1}|^{2}+\delta|u|^{4}+% \frac{4t}{\beta}|u|^{2}.$$ (27) 2.3. 
Формулой (16) фундаментальные решения ОДУ (24), (25) определяют $2\times 2$ матрицы $M(t,z,\zeta,\eta)$, которые удовлетворяют уравнениям $$4\beta(\zeta-\eta)M_{z}-M_{\zeta\zeta}+M_{\eta\eta}+2\frac{M_{\zeta}+M_{\eta}}% {\zeta-\eta}=[16\beta^{2}(\zeta^{6}-\eta^{6})+$$ $$+32\beta t(\zeta^{4}-\eta^{4})+8\beta z(\zeta^{3}-\eta^{3})+(16t^{2}+c)(\zeta-% \eta)^{2}+r_{1}(z,t)(\zeta-\eta)]M,$$ (28) $$2\beta(\zeta-\eta)M_{t}-\eta M_{\zeta\zeta}+\zeta M_{\eta\eta}+\frac{(\zeta+% \eta)(M_{\zeta}+M_{\eta})}{\zeta-\eta}=[\zeta(16\beta^{2}\eta^{6}+32\beta t% \eta^{4}+8\beta z\eta^{3}+$$ $$+(16t^{2}+c)\eta^{2}+r_{0}(z,t))-\eta(16\beta^{2}\zeta^{6}+32\beta t\zeta^{4}+% 8\beta z\zeta^{3}+(16t^{2}+c)\zeta^{2}+r_{0}(z,t))]M,$$ (29) где $c$ — постоянная, определяемая формулой (27), $$r_{1}(z,t)=2i\delta\beta^{2}(u_{2}u_{1}^{*}-u_{2}^{*}u_{1})-4\delta z|u|^{2}+8zt,$$ $$r_{0}(z,t)=16t^{2}\delta|u|^{2}-16\delta^{2}\beta t|u|^{4}-4\delta\beta t(u_{2% }^{*}u+u_{2}u^{*})+\delta\beta^{2}|u_{2}|^{2}+4\delta^{3}\beta^{2}|u|^{6}+$$ $$+2\delta^{2}\beta^{2}|u|^{2}(u_{2}^{*}u+u_{2}u^{*})-\delta^{2}\beta^{2}(u_{1}u% ^{*}-u_{1}^{*}u)^{2}-2i\delta\beta z(u_{1}u^{*}-u_{1}^{*}u)+z^{2}$$ Функции $r_{1}$ и $r_{2}$ связаны соотношением $(r_{1}(z,t))^{\prime}_{t}=2(r_{0}(z,t))^{\prime}_{z}+4z.$ Заменой (19), где функция $S$ такова, что $$4\beta S_{z}=r_{1}(z,t),\quad 2\beta S_{t}=r_{0}(t,z)+z^{2},$$ уравнения (28), (29) переводятся в пару уравнений $$4\beta W_{z}-\frac{W_{\zeta\zeta}-W_{\eta\eta}}{\zeta-\eta}=$$ $$=[16\beta^{2}(\zeta^{6}-\eta^{6})+32\beta t(\zeta^{4}-\eta^{4})+8\beta z(\zeta% ^{3}-\eta^{3})+(16t^{2}+c)(\zeta^{2}-\eta^{2})]\frac{W}{\zeta-\eta},$$ (30) $$2\beta W_{t}-\frac{\zeta W_{\eta\eta}-\eta W_{\zeta\zeta}}{\zeta-\eta}-\frac{W% _{\zeta}-W_{\eta}}{\zeta-\eta}=[\zeta(16\beta^{2}\eta^{6}+32\beta t\eta^{4}+8% \beta z\eta^{3}+(16t^{2}+c)\eta^{2}-$$ $$-z^{2})-\eta(16\beta^{2}\zeta^{6}+32\beta t\zeta^{4}+8\beta z\zeta^{3}+(16t^{2% }+c)\zeta^{2}-z^{2})]\frac{W}{\zeta-\eta},$$ (31) не содержащих зависимости от $u$. 2.4. H. Kimura в [33] привел список совместных пар гамильтоновых изомонодромных систем с двумя степенями свободы, получающийся в результате последовательных вырождений системы Р.Гарнье из статьи [1]. В их числе содержится пара, нумеруемая в [33] как $H_{5}$: $$(\lambda_{j})^{\prime}_{t_{1}}=(K_{I})^{\prime}_{\mu_{j}},\quad(\mu_{j})^{% \prime}_{t_{1}}=-(K_{I})^{\prime}_{\lambda_{j}}\qquad(j=1,2),$$ (32) $$(\lambda_{j})^{\prime}_{t_{2}}=(K_{II})^{\prime}_{p_{j}},\quad(\mu_{j})^{% \prime}_{t_{2}}=-(K_{II})^{\prime}_{\lambda_{j}}\qquad(j=1,2),$$ (33) c квадратичными по импульсам $\mu_{j}$ гамильтонианами $$K_{I}(t_{1},t_{2},\lambda_{1},\lambda_{2},\mu_{1},\mu_{2})=\frac{1}{2}\sum_{k=% 1}^{2}\frac{1}{\Lambda^{\prime}(\lambda_{k})}[\mu_{k}^{2}-P(t_{1},t_{2},% \lambda_{k})\mu_{k}-2\nu\lambda_{k}^{2}],$$ (34) $$K_{II}(t_{1},t_{2},\lambda_{1},\lambda_{2},\mu_{1},\mu_{2})=\frac{1}{2}\sum_{k% =1}^{2}\frac{Q(\lambda_{k})}{\Lambda^{\prime}(\lambda_{k})}[\mu_{k}^{2}-(P(t_{% 1},t_{2},\lambda_{k})+\frac{1}{Q(\lambda_{k})})\mu_{k}-2\nu\lambda_{k}^{2}],$$ (35) где $\nu$ — постоянная, $P(t_{1},t_{2},q)=2q^{3}+2t_{2}q+t_{1}$, $\Lambda(q)=(q-\lambda_{1})(q-\ \lambda_{2})$, $\Lambda^{\prime}(q)$ — производная $\Lambda(q)$ по $q$, $Q(q)=q-\lambda_{1}-\lambda_{2}$. (В [33] имеется опечатка, повторенная в [32]: гамильтониан $K_{II}$ приведен без множителя 1/2.) 
ОДУ (23) по переменной $z$ и система ОДУ по переменной $t$, определяемой НУШ (4) и уравнениями (26) после замен $$z=\frac{i\alpha t_{1}}{2},\quad t=\frac{\beta t_{2}}{\alpha},\quad\alpha=(4% \beta)^{1/4}\exp{(i\pi/8)},$$ (36) эквивалентны гамильтоновым системам (32) и, cоответственно, (33), определяемых гамильтонианами (34) и (35). При этом $$\lambda_{1}=\alpha(\frac{iu_{z}}{4u}-\sqrt{-\frac{(u_{z})^{2}}{16u^{2}}-\frac{% t}{\beta}+\frac{u_{zz}}{4u}+\frac{\delta|u|^{2}}{2}}),$$ $$\lambda_{2}=\alpha(\frac{iu_{z}}{4u}+\sqrt{-\frac{(u_{z})^{2}}{16u^{2}}-\frac{% t}{\beta}+\frac{u_{zz}}{4u}-\frac{\delta|u|^{2}}{2}}),$$ $$\mu_{1}=\frac{\delta\alpha^{3}}{2}(\frac{i}{4}(2u_{z}u^{*}-u^{*}_{z}u)-|u|^{2}% \sqrt{-\frac{(u_{z})^{2}}{16u^{2}}-\frac{t}{\beta}+\frac{u_{zz}}{4u}+\frac{% \delta|u|^{2}}{2}}),$$ $$\mu_{2}=\frac{\delta\alpha^{3}}{2}(\frac{i}{4}(2u_{z}u^{*}-u^{*}_{z}u)+|u|^{2}% \sqrt{-\frac{(u_{z})^{2}}{16u^{2}}-\frac{t}{\beta}+\frac{u_{zz}}{4u}+\frac{% \delta|u|^{2}}{2}}),$$ где постоянная $\alpha$ определена заменами (36), $\nu=ic/(8\beta)$. (Последние формулы были получены автором, исходя из уравнений ИДМ (24), (25) и формул (6.3), (6.4) работы [32]—cм. также [33]. ) 2.5. Совместна следующая пара эволюционных уравнений $$2(X-Y)\Gamma_{t_{1}}=\Gamma_{XX}-\Gamma_{YY}+2X^{2}(X\Gamma)_{X}+(2t_{2}X+t_{1% })\Gamma_{X}-2Y^{2}(Y\Gamma)_{Y}-$$ $$-(2t_{2}Y+t_{1})\Gamma_{Y}-2\nu(X^{2}-Y^{2})\Gamma,$$ (37) $$2(X-Y)\Gamma_{t_{2}}=X[\Gamma_{YY}+2Y^{2}(Y\Gamma)_{Y}+(2t_{2}Y+t_{1})\Gamma_{% Y}-2\nu Y^{2}\Gamma]-$$ $$-Y[\Gamma_{XX}+(2X^{2})(X\Gamma)_{X}+(2t_{2}X+t_{1})\Gamma_{X}-2\nu X^{2}% \Gamma]+\Gamma_{X}-\Gamma_{Y},$$ (38) которые есть ‘‘квантования’’ вида (15) с $\varepsilon=1$ гамильтонианами (34) и (35) — символически (37) и (38) можно записать в виде уравнений $$\Gamma_{t_{1}}=K_{I}(t_{1},t_{2},X,Y,-\frac{\partial}{\partial X},-\frac{% \partial}{\partial Y})\Gamma,\quad\Gamma_{t_{2}}=K_{II}(t_{1},t_{2},X,Y,-\frac% {\partial}{\partial X},-\frac{\partial}{\partial Y})\Gamma.$$ С помощью формулы $$\Gamma=\exp{(-\frac{X^{4}+Y^{4}}{4}-\frac{t_{2}^{2}+t_{2}(X^{2}+Y^{2})+t_{1}(X% +Y)}{2}-\frac{t_{1}^{2}t_{2}}{4})}W$$ ‘‘квантования’’ (37) и (38) сводятся к совместной паре уравнений $$2(X-Y)W_{t_{1}}=W_{XX}-W_{YY}-[X^{6}-Y^{6}+2t_{2}(X^{4}-Y^{4})+$$ $$+t_{1}(X^{3}-Y^{3})+(t_{2}^{2}+2\nu))(X^{2}-Y^{2})]W,$$ $$2(X-Y)W_{t_{2}}=XW_{YY}-YW_{XX}+W_{X}-W_{Y}-[X(Y^{6}+2t_{2}Y^{4}+t_{1}Y^{3}+$$ $$+(t_{2}^{2}+2\nu)Y^{2}-\frac{t_{1}^{2}}{4})-Y(X^{6}+2t_{2}X^{4}+t_{1}X^{3}+(t_% {2}^{2}+2\nu)X^{2}-\frac{t_{1}^{2}}{4})]W.$$ После перехода к независимым переменным (36), замен $$X=-(4\beta)^{1/4}\exp(i\pi/8)\zeta,\quad Y=-(4\beta)^{1/4}\exp(i\pi/8)\eta$$ и $c=-8i\beta\nu$ последняя пара сводится к паре (30), (31). З а м е ч а н и е 2. Скорее всего, аналогичным образом может быть описана связь между парой уравнений (1), (21) и ‘‘квантованиями’’, определяемыми совместной парой $H_{9/2}$ гамильтоновых систем статьи [33]. 3 Заключительные замечания 4.1. При $\varepsilon=1$ ‘‘квантования’’ (1) из [2] получаются не только из уравнений (2), но и из уравнений $\varepsilon\Psi_{z}=H(z,x,\varepsilon\frac{\partial}{\partial x})\Psi,$ для которых при произвольных значениях $\varepsilon$ и определенных редукциях уравнений Пенлеве в [7] были построены серии явных решений. 
In particular, in connection with this result of [7], we emphasize the following circumstance: for $\varepsilon=5/54$, (14) can be rewritten as the equation $$\varepsilon\Psi_{t}=\varepsilon^{2}[-\frac{\Psi_{xx}}{2}-\frac{1}{2}\frac{\partial}{\partial x}(x\Psi_{y})-\frac{1}{2}x\Psi_{xy}+(y-\frac{5t}{36})\Psi_{yy}]+\varepsilon\frac{5\Psi_{y}}{108}+$$ $$+[\frac{x^{5}}{2}-2xy^{2}-\frac{15tx^{3}}{36}+\frac{5txy}{18}-\frac{5zx^{2}}{216}-\frac{5zy}{108}]\Psi.$$ This means that, along with the representation as the ‘‘quantizations’’ (15), the pair (13), (14) can also be written symbolically as the equations $$\varepsilon\Psi_{z}=H_{I}(z,t,x,y,\varepsilon\frac{\partial}{\partial x},\varepsilon\frac{\partial}{\partial y})\Psi,\quad\varepsilon\Psi_{t}=H_{II}(z,t,x,y,\varepsilon\frac{\partial}{\partial x},\varepsilon\frac{\partial}{\partial y})\Psi.$$ Possibly, some ‘‘quantizations’’ defined by Hamiltonians equivalent to the Hamiltonians (34) and (35) admit such a symbolic form as well. 4.2. In the description given above of the ‘‘quantizations’’ of the two higher analogues of the Painlevé equations, the key step was the substitution (16), which had been used earlier, in similar situations and for somewhat different purposes, by D. P. Novikov in [5] (see also formula (2.3.36) of [34]). It is not clear that this substitution alone will just as easily settle the validity of ‘‘quantizations’’ of the form (15) for all higher Hamiltonian isomonodromic analogues of the Painlevé equations with two degrees of freedom that correspond to IDM equations in $2\times 2$ matrices (in particular, for all those considered in [33]). It is also not yet clear how the results of the present paper could be generalized to isomonodromic Hamiltonian ODEs with more than two degrees of freedom, even in situations corresponding to IDM equations in $2\times 2$ matrices (for example, for the members of the hierarchies $P_{1}^{n}$, $P_{2}^{n}$ with $n>2$). In connection with what has just been said, we note that in the case of the Schlesinger systems [35] corresponding to IDM equations in $2\times 2$ matrices, these linear IDM equations define [5] solutions of equations of Belavin–Polyakov–Zamolodchikov type [36] of the minimal conformal field theory with the central charge of the Virasoro algebra equal to one. But might these IDM equations not also be related to some ‘‘quantizations’’ defined by the Hamiltonian structures of the corresponding Schlesinger systems? The question of ‘‘quantizations’’ of isomonodromic Hamiltonian ODEs corresponding to IDM equations in $m\times m$ matrices with $m>2$ remains completely unexplored. References [1] R. Garnier, ‘‘Sur des equations differentielles du troisieme ordre dont l’integrale generale est uniforme et sur une classe d’equations nouvelles d’ordre superieur dont l’integrale generale a ses points critiques fixes’’, Ann. Sci. Ecole Normale Sup. (3), 29 (1912), 1–126 [2] Б. И. Сулейманов, ‘‘Гамильтоновocть уравнений Пенлеве и метод изомонодромных деформаций’’, Дифф. уравн., 30:5 (1994), 791–796 (translation: B. I. Suleimanov, ‘‘ The Hamilton property of Painleve’ equations and the method of isomonodromic deformations. Differential Equations’’, Differential equations, 30:5 1994, 726–732); Б. И. Сулейманов, ‘‘Гамильтонова структура уравнений Пенлеве и метод изомонодромных деформаций’’, Асимптотические свойства решений дифференциальных уравнений, И-т мат., Уфа, 1988, 93–102 [3] Б. И. Сулейманов, ‘‘"Квантования’’ второго уравнения Пенлеве и проблема эквивалентности его $L,A$ пар’’, ТМФ, 156:3 (2008), 364–378. (translation: B. I. Suleimanov. 
‘‘ "Quantizations"of the second Painleve equation and the problem of the equivalence of its $L$-$A$ pairs’’, Theoretical and Mathematical Physics, 156:3 2008, 1280–1291) [4] А. И. Овсеевич, ‘‘Фильтр Калмана и квантование’’, ДАН, 414:6, (2007), 732–735 (translation: A. I. Ovseevich, ‘‘The Kalman filter and quantization’’, Dokl. Math., 75:3 2007, 436–-439) [5] Д. П. Новиков, ‘‘О системе Шлезингера с матрицами размера $2\times 2$ и уравнении Белавина — Полякова — Замолодчикова’’, ТМФ, 161:2 (2009), 191–203 (translation: D. P. Novikov, ‘‘The $2\times 2$ matrix Schlesinger system and the Belavin-Polyakov-Zamolodchikov system’’, Theoretical and Mathematical Physics, 161:2 (2009), 1485–1496) [6] D. P. Novikov, ‘‘A monodromy problem and some function connected with Painleve VI’’, Painleve Equations and Related Topic. Proceediings of the Internatinal Conference, Euler International Mathematical Institute Saint Peteburg, , St.-Peteburg, 2011, 118–121. [7] H. Nagoya, ‘‘Hypergeometric solutions to Schrödinger equation for the quantum Painleve equations’’, J. Math. Phys, 52:8 (2011), doi: 10/1063/1.36204/2 (16 pages). [8] A. Zabrodin, A. Zotov, ‘‘Quantum Painleve-Calodgero correspondence’’, arXiv:1107.5672v.2 [math-ph] 26 aug 2011. [9] G. Moore, ‘‘Geometry of the string equations’’, Comm. Math. Phys., 133:2 (1990), 261–304 [10] А. В. Китаев, ‘‘Точки поворота линейных систем и двойные асимптотики трансцендентов Пенлеве’’, Записки ЛОМИ 187 (1991), 53–74 (translation: A. V. Kitaev, ‘‘ Turning points of linear systems and double asymptotics of the Painleve transcendents’’, J. Math. Sci., 73:2, (1995), 446–459) [11] Б. И. Сулейманов, ‘‘Возникновение бездиссипативных ударных волн и “непертурбативная” квантовая теория гравитации’’, ЖЭТФ, 105:5 (1994), 1089–1099 (translation: B. I. Suleimanov, ‘‘ Onset of nondissipative shock waves and the nonperturbative quantum theory of gravitattion’’ J. Eksperiment. Theoret. Phys. 78:5, 583–587) [12] А. В. Гуревич, Л. П. Питаевский, ‘‘Опрокидывание простой волны в кинетеке разреженной плазмы’’, ЖЭТФ 60:6 (1971), 2155–2174 (translation:A. V. Gurevich, L. P. Pitaevskii, ‘‘ Breaking a simple wave in the kinetics of a Rarefied Plasma’’ Sov. Phys. JETP. 33:2 (1971), 1159-) [13] А. В. Гуревич, Л. П. Питаевский, ‘‘Нестационарная структура бесстолкновительной ударной волны’’, ЖЭТФ 65:2 (1973), 590–604 (translation:A. V. Gurevich, L. P. Pitaevskii, ‘‘Non stationare structure of collisionless shock wave’’Sov. Phys. JETP 38:2 1974. V. 38, \No 2. Р. 291–297) [14] V. Kudashev, B. Suleimanov, ‘‘A soft mechanism for generation the dissipationless shock waves’’, Phys. Letters A, 221:3,4 (1996), 204–208 [15] В. Р. Кудашев, Б. И. Сулейманов, ‘‘Мягкий режим формирования бездиссипативных ударных волн’’, В:Комплексный анализ, дифференциальные уравнения, численные методы и приложения. III, Институт математики с ВЦ: Уфа, 1996, с.98–108; http://matem.anrb.ru/e-lib/preprints/BS/bs22/html [16] Г. В. Потемин, ‘‘Алгебро—геометрическое построение решений уравнений Уизема’’, УМН, 43:5(263) (1988), 211–213 translation: G. V. Potemin, ‘‘Algebro-geometric construction of self-similar solutions of the Whitam equations’’, Russian Math. Surveys 43:5 (1998), 252–253. [17] R. Garifullin, B. Suleimanov, N. Tarkhanov, ‘‘Phase Shift in the Whitam Zone for the Gurevich—Pitaevsky Special Solution of the Korteveg—de Vries Equation’’, Phys. Letters A, 374:13, 14 (2010), 1420–1424 [18] T. Claeys, ‘‘Asymptotics for a special solutions to the second member of the Painleve I hierarhy’’, J. 
of Physics A., 43 (2010), 434012 (18 pp) [19] T. Claeys, M. Vanlessen, ‘‘The existence of a reale pole-free solution of the fourth order analogue of the Painleve I equation’’, Nonlinearity, 20:5 (2007), 1163–1184 [20] В. Р. Кудашев, Б. И. Cулейманов, ‘‘Влияние малой диссипации на процессы зарождения одномерных ударных волн’’, ПММ, 2001. 65:3 (2001), 456–466 (translation: V. R. Kudashev, B. I. Suleimanov ‘‘The effect of small dissipation on the origin of one-dimensional shock waves’’ J. Appl. Math. Mech., 65:3 2001, 441–451) [21] B. Dubrovin, ‘‘On universality of critical behaviour in hamiltonian PDEs’’ Geometry, topology, and mathematical physics, 59–109, Amer. Math. Soc. Transl. Ser. 2, 224, Amer. Math. Soc., Providence, RI, 2008. //arXiv:0804.3790. (2008). [22] B. Dubrovin, ‘‘On Hamiltonian perturbations of hyperbolic systems of conservation laws, II: Universality of critical behaviour’’, Commun. Math. Phys., 267 (2006), 117–139 [23] E. Bresin, E. Marinari, G. Parisi, ‘‘A nonperturbative ambigity free solution of a string model’’, Phys. Letters B 242:1 (1990), 35–38 [24] M. Douglas, N. Seiberg, S. Shenker , ‘‘Flow and unstability in quantum gravity’’, Phys. Letters A, 244:3,4 (1990), 381–385 [25] А. Р. Итc, ‘‘"Изомонодромные’’ решения уравнений нулевой кривизны’’, Изв. АН СССР, cер. матем., 45:3 (1985), 330–365 (translation: A. R. Its, ‘‘"isomonodromy"solutions of equations of zero curvature’’ Mathematics of the USSR-Izvestiya, 26:3 (1986), 497–529) [26] Б. А. Дубровин, ‘‘Тэта-функции и нелинейные уравнения’’, УМН, 36:2(218) (1981), 11–80 (translation: B. A. Dubrovin, ‘‘Theta functions and non-linear equations’’, Russian Mathematical Surveys, 36:2 (1981), 11–92) [27] Б. И. Cулейманов, ‘‘Второе уравнение Пенлеве в одной задаче о нелинейных эффектах вблизи каустики’’, Записки ЛОМИ, 187 (1991), 53–74 (translation: B. I. Suleimanov, ‘‘The second Painleve’ equation in a problem on nonlinear effects near caustics’’, J. Math. Sci., 73:4 (1995), 482–493) [28] A. V. Kitaev, ‘‘Caustics in $1+1$ integrable systems’’, J. Math. Phys., 35:6 (1994), 2934–2954 [29] R. Haberman, Ren-ji Sun, ‘‘Nonlinear cusped caustics for dispersive waves’’, Stud. Appl.Math., 72:1 (1985), 1–37 [30] В. Р. Кудашев, Б. И. Сулейманов, ‘‘Некоторые типичные особенности падения интенсивности в неустойчивых средах’’, Письма в ЖЭТФ, 62:4 (1995), 358–362 (translation: V. R. Kudashev, B. I. Suleimanov ‘‘Characteristic features of some typical spontaneous intensivity collapse processes in unstable media’’, JETP Lett., 62:4 (1995), 382–388) [31] М. В. Федорюк, Асимптотика: интегралы и ряды, Наука, M., 1987. [32] K. Okamoto, ‘‘The Hamiltonian Associated to the Painleve equations’’, In: The Painleve property: one century later. Springer-Verlag, New York, Berlin, Heidelberg, 1999 [33] H. Kimura, ‘‘The degenaration of the two-dimensional Garnier system and the polinomial Hamiltonian structure’’, Ann. Mat. Pura Appl. 155 (1989), 25–74 [34] M. Sato, T. Miwa, M. Jimbo, ‘‘Holonomic quantum fields’’, Publ. Rims Kyoto Uiv. 15 (1979), 201–278 [35] L. Schlesinger, ‘‘Über eine Klasse von Differentialsystem belibeger Ordnung mit festen kritischen Punkten’’, Reine u. Angew. Math 141 (1912), 96–145 [36] A. A. Belavin, A. M. Polyakov, A. B. Zamolodchikov, ‘‘Infinite conformal symmetry in two-dimensional quantum field theory’’, Nucl. Phys. 241 (1984), 333–380
Dynamics of dipoles and vortices in nonlinearly-coupled three-dimensional harmonic oscillators R. Driben${}^{1,2}$, V. V. Konotop${}^{3}$, B. A. Malomed${}^{4}$, and T. Meier${}^{2}$ ${}^{1}$ITMO University, 49 Kronverskii Ave., St. Petersburg 197101, Russian Federation ${}^{2}$Department of Physics and CeOPP, University of Paderborn, Warburger Str. 100, D-33098 Paderborn, Germany ${}^{3}$Centro de Física Teórica e Computacional and Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, Edifício C8, Lisboa 1749-016, Portugal ${}^{4}$Department of Physical Electronics, School of Electrical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel Abstract The dynamics of a pair of three-dimensional matter-wave harmonic oscillators (HOs) coupled by a repulsive cubic nonlinearity is investigated through direct simulations of the respective Gross-Pitaevskii equations (GPEs) and with the help of the finite-mode Galerkin approximation (GA), which represents the two interacting wave functions by a superposition of $3+3$ HO p-wave eigenfunctions with orbital and magnetic quantum numbers $l=1$ and $m=1,0,-1$. First, the GA very accurately predicts a broadly degenerate set of the system’s ground states in the p-wave manifold, in the form of complexes built of a dipole coaxial with another dipole or vortex, as well as complexes built of mutually orthogonal dipoles. Next, pairs of non-coaxial vortices and/or dipoles, including pairs of mutually perpendicular vortices, develop remarkably stable dynamical regimes, which feature periodic exchange of the angular momentum and periodic switching between dipoles and vortices. For a moderately strong nonlinearity, simulations of the coupled GPEs agree very well with results produced by the GA, demonstrating that the dynamics is accurately spanned by the set of six modes limited to $l=1$. I Introduction The creation of multidimensional (two- and three-dimensional, 2D and 3D) self-trapped modes in nonlinear media, which are often considered as solitons, and the study of the dynamics of such modes, is one general topic of nonlinear physics, as demonstrated in numerous original works and reviews Townes -RMP . In particular, this topic is highly relevant to nonlinear optics and matter-wave dynamics in Bose-Einstein condensates (BECs). In addition to the great significance to fundamental studies in these fields, 3D matter-wave solitons are expected to provide a basis for efficient interferometry interferometry . The first problem which one needs to resolve for the creation of stable multidimensional solitons is posed by the fact that such states, usually supported by the cubic self-focusing nonlinearity, are subject to instability caused by the wave collapse (critical and supercritical collapse in 2D and 3D, respectively) collapse . For this reason, the first example of solitons which was introduced theoretically in nonlinear optics, Townes’ solitons Townes , i.e., 2D self-trapped modes in a medium with a cubic self-focusing nonlinearity, have never been created in the experiment. Multidimensional vortex solitons, alias vortex rings, in cubic media are vulnerable to a still stronger splitting instability induced by modulational perturbations in the azimuthal direction general-reviews . Thus, an issue of great significance is the stabilization of multidimensional fundamental and vortex solitons in cubic media. 
A generic method for this is provided by the use of periodic potentials (lattices), as was predicted in various settings lattice . In the experiment, photonic-lattice potentials were demonstrated to stabilize 2D vortex solitons photorefr . A more recent result is the creation of 2D plasmon-polariton solitons in microcavities with a lattice structure 30 . Assuming the dominance of the cubic nonlinearity, such physical media are modeled by the nonlinear Schrödinger equation, alias Gross-Pitaevskii equation (GPE) Pit , for the corresponding photonic or atomic mean-field wave function $\Phi\left(\mathbf{r},t\right)$, where $\mathbf{r}=(x,y,z)$ and $t$ are appropriately scaled spatial coordinates and time: $$i\frac{\partial\Phi}{\partial t}=-\nabla^{2}\Phi+U\left(\mathbf{r}\right)\Phi+% \sigma|\Phi|^{2}\Phi.$$ (1) In this equation, $\nabla^{2}$ is the 3D Laplacian, $U\left(\mathbf{r}\right)$ is the trapping potential, and $\sigma=-1$ or $+1$ corresponds to the self-focusing or defocusing (repulsive) sign of the cubic interaction. In particular, in the case of axially symmetric potentials (including spherically isotropic ones), $U=U\left(\rho,z\right)$, where $\left(\rho,z,\varphi\right)$ is the set of cylindrical coordinates, Eq. (1) admits vortical modes in the form of $$\Phi=\exp\left(-i\mu t+il\varphi\right)u(\rho,z),$$ (2) with real chemical potential $\mu$, integer vorticity $l$, and real amplitude function $u\left(\rho,z\right)$ satisfying the stationary equation, $$\mu u=-\left(\frac{\partial^{2}}{\partial\rho^{2}}+\frac{1}{\rho}\frac{% \partial}{\partial\rho}+\frac{\partial^{2}}{\partial z^{2}}-\frac{l^{2}}{\rho^% {2}}\right)u+U\left(\rho,z\right)u+\sigma u^{3}.$$ (3) As concerns the stabilization of 2D states (Townes’ solitons) against the critical collapse, the instability of self-trapped modes in the 2D free space with cubic self-focusing is related to the specific scaling invariance of this setting, which makes the family of the Townes’ solitons and their vortical counterparts Minsk degenerate in terms of their norm collapse . In fact, the external potential, if added to the setting, breaks the scaling invariance and thus pushes the norm of the soliton below the collapse-onset threshold, thus securing their stability. A recent unexpected result is that the linear spin-orbit coupling (SOC), if added to two-component (spinor) systems with the cubic attractive interactions in the 2D free space, stabilizes 2D semi-vortex solitons by breaking the scaling invariance of the system without breaking the spatial uniformity Fukuoka . Moreover, the linear SOC is able to make semi-vortex spinor solitons metastable states in the free 3D space with the cubic self- and cross-attraction HP . Stable multipole, vortex, and half-vortex solitons of the gap type in BEC under the action of SOC, in a combination with attractive or repulsive inter-atomic interactions, were predicted in Ref. LKK . A novel approach for the creation of self-trapped modes was proposed in Ref. ICFO and further elaborated in various settings further : A self-defocusing (repulsive) cubic nonlinearity, whose local strength in the space of dimension $D$ grows from the center to periphery faster than $r^{D}$ ($r$ is the radial coordinate), supports extremely robust and diverse families of solitons, multipoles, fundamental and composite solitary vortices, and hopfions (toroidal modes carrying overall vorticity and independent intrinsic twist of the toroidal core) for $D=1,2,3$. 
This type of nonlinearity modulation belongs to the general class of nonlinear pseudopotentials, which can be induced by various techniques in optics for $D=1,2$, and in BEC for $D=3$ too RMP (in optics, the nonlinearity cannot be made a function of the temporal variable, which plays the role of one of the transverse coordinates in Eq. (1), while $t$ is actually the propagation distance in that case). On the other hand, it was demonstrated in detail theoretically that the usual 2D and 3D settings, which combine the spatially uniform cubic self-repulsion and an isotropic harmonic-oscillator (HO) trapping potential, readily give rise to various modes, including trapped vortices vortices as well as vortex clusters and dipoles vort-clusters . In particular, it was predicted that a specially designed toroidally-shaped trapping potential may support stable hopfions hopfion . The formation of vortices in optics and BEC was reported in many experimental works vort-exper (see also reviews vort-rev ). The stability of the vortices in these settings is secured by the fact that the repulsive nonlinearity does not give rise to modulational instability. Multi-component BECs with repulsive self- and cross-interactions (usually realized as mixtures of different hyperfine atomic states of the same species) may also give rise to stable vortices 2comp-vort and, furthermore, to vortex lattices MiMoDu , pseudo-magnetic monopoles monopole , and skyrmions (modes which, similar to the above-mentioned hopfions, are characterized by two independent topological charges, but require at least two interacting complex fields to be built) skyrmions . Matter-wave skyrmions in binary BEC were created in experiments skyrmion-exper . The studies of two-component systems suggest that two identical 3D GPEs coupled by the repulsive interaction, each with an isotropic HO trapping potential, may be used as a simple model for the analysis of nontrivial dynamical regimes, such as the interaction of non-coaxial (or, more specifically, mutually perpendicular) trapped vortices or dipoles initially created in the two wave fields. This possibility was recently proposed in Ref. arxiv , where stable vortices with orthogonal vortex lines, trapped in an HO potential, were found. If the modes initially deviate from the stationary state, the nonlinear repulsive interaction leads to smooth dynamics in the form of torque-free precession with nutations. The dynamics is remarkably well described by a simple mechanical model of two interacting angular momenta and is robust with respect to changes of the parameters (of the intra- and inter-species interactions) and with respect to dissipation. In all the cases studied there it was assumed that the inter-species interactions are weaker than the intra-species ones. The objective of the present work is twofold. First, we explore a similar dynamical setup which, however, is dominated by the inter-species interaction, and predict experimentally observable results by means of systematic simulations of the coupled GPEs. Second, we develop a semi-analytical approach in the framework of a relatively simple finite-mode Galerkin approximation (GA), which, unlike the integral description of Ref. arxiv , resolves the details of the mode dynamics. The GA approach developed here is similar to its well-known applications in hydrodynamics GA , i.e., the GA projects the coupled GPEs for the two mean-field wave functions onto a finite-mode (truncated) dynamical system. 
In the present setting, the truncation keeps six degrees of freedom, which corresponds to approximating each wave function by a combination of three p-wave eigenfunctions of the 3D HO, with orbital quantum number $l=1$ and magnetic quantum numbers $m=-1,0,+1$, assuming that the axes which define the two triplets of the eigenfunctions are mutually perpendicular. The accuracy provided by the low-dimensional GA turns out to be surprisingly high, provided that the nonlinear interaction is not too strong (for very strong nonlinearity, a deviation of the GPE dynamics from the GA is observed, due to generation of components with $l>1$). In particular, fixed points (FPs) of the GA provide a good approximation for quasi-stationary states of the coupled GPE system. Taking sets of non-coaxial dipoles or vortices in the two wave functions as initial states, their nonlinear coupling leads, in the framework of the GPEs and GA alike, to remarkably stable dynamical regimes which are characterized by a periodic exchange of the angular momentum between the wave functions and a periodic switch of the structure of the two components, with the dipoles transforming into vortices and vice versa. The model and GA are introduced in Sec. II, which is followed by the analysis of the GA’s FPs in Sec. III. In the same section, we also produce (energy-degenerate) ground states (GSs) of the coupled GPEs in the p-wave manifold, which are very accurately predicted by the FPs of the GA. In Sec. IV we present systematic results for the dynamics of pairs of non-coaxial nonlinearly coupled dipoles and/or vortices. The paper is concluded by Sec. V. II The model and the Galerkin approximation The scaled form of the underlying system of the coupled 3D GPEs with the repulsive interaction between the two wave functions, $\Phi$ and $\Psi$, and the isotropic HO potential, represented by terms $\sim r^{2}$, is $$\displaystyle i\frac{\partial\Phi}{\partial t}$$ $$\displaystyle=$$ $$\displaystyle-\nabla^{2}\Phi+r^{2}\Phi+|\Psi|^{2}\Phi,$$ (4) $$\displaystyle i\frac{\partial\Psi}{\partial t}$$ $$\displaystyle=$$ $$\displaystyle-\nabla^{2}\Psi+r^{2}\Psi+|\Phi|^{2}\Psi.$$ (5) Unlike Ref. arxiv , we here consider the simplest version of the system, with negligible self-repulsion of each component in comparison with the repulsive interaction between the components. In the experiment, this situation may be achieved using the Feshbach resonance (FR) induced by external magnetic field to strengthen the inter-component repulsion Feshbach+rf ; Feshbach2 , thus making this interaction much stronger than the intra-component interactions. The effect of the FR may be additionally enhanced if applied to atomic states “dressed” by a radio-frequency field Feshbach+rf ; Shlyap ). In particular, it was demonstrated experimentally Feshbach2 that the FR can make the scattering length accounting for the repulsion between atomic states of ${}^{87}$Rb with quantum numbers $\left|F=1,m_{F}=+1\right\rangle$ and $\left|F=2,m_{F}=-1\right\rangle$, at magnetic field $9.10$ G, by a factor $\simeq 15$ larger than the scattering length which represents the intra-component self-repulsion, cf. Ref. Australia . In fact, simulations of equations obtained from Eqs. (4), (5) by adding self-repulsion terms, i.e., $\epsilon|\Phi|^{2}\Phi$ and $\epsilon|\Psi|^{2}\Psi$, respectively, with small $\epsilon>0$, produce results (not shown in detail) which are very close to those reported in this work for $\epsilon=0$. 
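For reference, the direct simulations of Eqs. (4) and (5) can be reproduced with a standard split-step (Strang) Fourier scheme. The sketch below is a minimal illustration only: the grid size, time step, norms, and the initial vortex/dipole configuration are arbitrary choices and not the parameters used for the results reported here.

```python
import numpy as np

# Minimal Strang split-step Fourier integrator for the coupled GPEs (4)-(5).
n, L, dt, steps = 48, 12.0, 1e-3, 500        # illustrative numerical parameters
dx = L / n
x1d = (np.arange(n) - n // 2) * dx
X, Y, Z = np.meshgrid(x1d, x1d, x1d, indexing="ij")
R2 = X**2 + Y**2 + Z**2
k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
half_kinetic = np.exp(-1j * (KX**2 + KY**2 + KZ**2) * dt / 2.0)   # exp(-i k^2 dt/2)

# Initial state: a vortex around the z axis in Phi and a dipole along y in Psi
# (cf. the p-wave modes introduced below), both scaled to the same illustrative norm.
envelope = np.exp(-R2 / 2.0)
phi = (X + 1j * Y) * envelope
psi = Y * envelope
norm_target = 10.0
phi = phi * np.sqrt(norm_target / (np.sum(np.abs(phi)**2) * dx**3))
psi = psi * np.sqrt(norm_target / (np.sum(np.abs(psi)**2) * dx**3))

for _ in range(steps):
    phi = np.fft.ifftn(half_kinetic * np.fft.fftn(phi))
    psi = np.fft.ifftn(half_kinetic * np.fft.fftn(psi))
    # full step of the trap + cross repulsion (diagonal in real space)
    phi, psi = (phi * np.exp(-1j * dt * (R2 + np.abs(psi)**2)),
                psi * np.exp(-1j * dt * (R2 + np.abs(phi)**2)))
    phi = np.fft.ifftn(half_kinetic * np.fft.fftn(phi))
    psi = np.fft.ifftn(half_kinetic * np.fft.fftn(psi))

print(np.sum(np.abs(phi)**2) * dx**3)        # the norms (6) are conserved by the scheme
```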
Equations (4) and (5) conserve two norms, $$N_{\Phi}=\int\left|\Phi(\mathbf{r}))\right|^{2}d\mathbf{r},\qquad N_{\Psi}=% \int\left|\Psi(\mathbf{r})\right|^{2}d\mathbf{r},$$ (6) the total vectorial angular momentum, $\mathbf{M}=\mathbf{M}_{\Phi}+\mathbf{M}_{\Psi}$ where $$\mathbf{M}_{\Phi}=-i\int\Phi^{\ast}\left(\mathbf{r}\times\nabla\right)\Phi\,d% \mathbf{r},\qquad\mathbf{M}_{\Psi}=-i\int\Psi^{\ast}\left(\mathbf{r}\times% \nabla\right)\Psi\,d\mathbf{r},$$ (7) and the Hamiltonian, $$H=\int\left[\left|\nabla\Phi\right|^{2}+\left|\nabla\Psi\right|^{2}+r^{2}\left% (|\Phi|^{2}+|\Psi|^{2}\right)\right]\,d\mathbf{r}+E_{\mathrm{int}},$$ (8) which includes the interaction energy, $$E_{\mathrm{int}}=\int\left|\Phi(\mathbf{r})\right|^{2}\left|\Psi(\mathbf{r})% \right|^{2}\,d\mathbf{r}.$$ (9) Note that, for stationary vortical modes, with $$\left\{\Phi,\Psi\right\}=\exp\left(-i\mu t+il\varphi\right)\left\{u(\rho,z),v% \left(\rho,z\right)\right\}$$ (10) (cf. Eq. (2)), it follows from Eq. (7) that the absolute value of the total angular momentum is a multiple of the norm, $$M_{\Phi,\Psi}=lN_{\Phi,\Psi},$$ (11) irrespective of the particular structure of modal functions $u\left(\rho,z\right)$ and $v\left(\rho,z\right)$ in Eq. (10). As said above, the GA represents the two wave functions, $\Phi$ and $\Psi$, as superpositions of three p-wave HO eigenstates, $\left(1/\sqrt{C_{lm}}\right)re^{-r^{2}/2}Y_{l}^{m}\left(\mathbf{r}\right)$, where $Y_{l}^{m}$ are spherical harmonics (written in terms of the Cartesian coordinates; recall that $r=\sqrt{x^{2}+y^{2}+z^{2}}$) with quantum numbers $l=1$, $m=1,0,-1$, and normalization constants $C_{lm}=\int\mathbf{dr~{}}r^{2}e^{-r^{2}}\left|Y_{l}^{m}\left(\mathbf{r}\right)% \right|^{2}$. We define the GA so that the vorticity axes for the two triplets of the eigenfunctions are aligned with perpendicular coordinate directions, $z$ and $y$ (cf. Ref. arxiv ). Thus we use the following Ansätze, in which the normalized HO eigenfunctions are substituted in their explicit forms, and $a_{1,0,-1}(t)$ and $b_{1,0,-1}(t)$ are the expansion amplitudes: $$\displaystyle\Phi\left(\mathbf{r},t\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\pi^{3/4}}e^{-5it}re^{-r^{2}/2}\left[a_{1}(t)\frac{x+iy}% {r}+\sqrt{2}a_{0}(t)\frac{z}{r}+a_{-1}(t)\frac{x-iy}{r}\right],$$ (12) $$\displaystyle\Psi\left(\mathbf{r},t\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\pi^{3/4}}e^{-5it}re^{-r^{2}/2}\left[b_{1}(t)\frac{x-iz}% {r}+\sqrt{2}b_{0}(t)\frac{y}{r}+b_{-1}(t)\frac{x+iz}{r}\right].$$ (13) Then, the evolution equations for the amplitudes are produced by the projection of the GPEs (4) and (5) onto the set of the eigenfunctions. 
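Before writing down the resulting equations, it may help to note how such a projection can be carried out in practice. The sketch below is a hypothetical post-processing step (not taken from the paper): it extracts the amplitudes $a_{1,0,-1}(t)$ of the Ansatz (12) from a grid representation of $\Phi$, using the fact that the three p-wave functions appearing in Eq. (12) are orthonormal.

```python
import numpy as np

def ga_amplitudes(phi, x, t):
    """Project a gridded Phi onto the three orthonormal p-wave functions of
    the Ansatz (12) and return the amplitudes a_{+1}, a_0, a_{-1} at time t."""
    dx = x[1] - x[0]
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    env = np.exp(-(X**2 + Y**2 + Z**2)/2) / np.pi**0.75
    basis = {+1: (X + 1j*Y) * env,        # l=1, m=+1 (vortex about the z axis)
              0: np.sqrt(2.0) * Z * env,  # l=1, m=0  (dipole along z)
             -1: (X - 1j*Y) * env}        # l=1, m=-1
    # a_m(t) = exp(+5 i t) <f_m | Phi>, removing the overall HO phase of Eq. (12)
    return {m: np.exp(5j*t) * np.sum(np.conj(f)*phi) * dx**3 for m, f in basis.items()}

# Example: for Phi = sqrt(2)/pi^(3/4) * z * exp(-r^2/2) at t = 0 the projection
# returns a_0 = 1 and a_{+1} = a_{-1} = 0, up to discretization error.
```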
After some algebra, the following dynamical system with six degrees of freedom is thus derived [$\tau\equiv t/\left(16\sqrt{2}\pi^{3/2}\right)$]: $$\displaystyle i\frac{d{a}_{1}}{d\tau}=2|b_{0}|^{2}(2a_{1}-a_{-1})+(|b_{1}|^{2}% +|b_{-1}|^{2})(3a_{1}+a_{-1})+$$ $$\displaystyle+a_{0}\left[b_{0}(b_{1}^{\ast}-b_{-1}^{\ast})+b_{0}^{\ast}(b_{-1}% -b_{1})\right]+(a_{1}+a_{-1})(b_{1}b_{-1}^{\ast}+b_{1}^{\ast}b_{-1})-$$ $$\displaystyle-i\sqrt{2}\left[a_{0}(b_{-1}^{\ast}b_{1}-b_{-1}b_{1}^{\ast})+a_{-% 1}(b_{0}b_{-1}^{\ast}+b_{0}b_{1}^{\ast}+b_{0}^{\ast}b_{-1}+b_{0}^{\ast}b_{1})% \right],$$ $$\displaystyle i\frac{d{a}_{0}}{d\tau}=2a_{0}(|b_{0}|^{2}+2|b_{-1}|^{2}+2|b_{1}% |^{2})-2a_{0}\left(b_{1}b_{-1}^{\ast}+b_{1}^{\ast}b_{-1}\right)-$$ $$\displaystyle-(a_{1}-a_{-1})(b_{0}(b_{1}^{\ast}-b_{-1}^{\ast})-b_{0}^{\ast}(b_% {1}-b_{-1}))-\sqrt{2}i(a_{-1}+a_{1})\left(b_{1}b_{-1}^{\ast}-b_{1}^{\ast}b_{-1% }\right),$$ $$\displaystyle i\frac{d{a}_{-1}}{d\tau}=2|b_{0}|^{2}(2a_{-1}-a_{1})+(|b_{1}|^{2% }+|b_{-1}|^{2})(3a_{-1}+a_{1})-$$ $$\displaystyle-a_{0}\left[b_{0}(b_{1}^{\ast}-b_{-1}^{\ast})+b_{0}^{\ast}(b_{-1}% -b_{1})\right]+(a_{1}+a_{-1})(b_{1}b_{-1}^{\ast}+b_{1}^{\ast}b_{-1})+$$ $$\displaystyle+i\sqrt{2}\left[a_{0}(b_{-1}b_{1}^{\ast}-b_{-1}^{\ast}b_{1})+a_{1% }(b_{0}b_{-1}^{\ast}+b_{0}b_{1}^{\ast}+b_{0}^{\ast}b_{-1}+b_{0}^{\ast}b_{1})% \right];$$ (14) $$\displaystyle i\frac{d{b}_{1}}{d\tau}=2|a_{0}|^{2}(2b_{1}-b_{-1})+(|a_{1}|^{2}% +|a_{-1}|^{2})(3b_{1}+b_{-1})+$$ $$\displaystyle+b_{0}\left[a_{0}(a_{1}^{\ast}-a_{-1}^{\ast})+a_{0}^{\ast}(a_{-1}% -a_{1})\right]+(b_{1}+b_{-1})(a_{1}a_{-1}^{\ast}+a_{1}^{\ast}a_{-1})+$$ $$\displaystyle+i\sqrt{2}\left[b_{0}(a_{-1}^{\ast}a_{1}-a_{-1}a_{1}^{\ast})+b_{-% 1}(a_{0}a_{-1}^{\ast}+a_{0}a_{1}^{\ast}+a_{0}^{\ast}a_{-1}+a_{0}^{\ast}a_{1})% \right],$$ $$\displaystyle i\frac{d{b}_{0}}{d\tau}=2b_{0}(|a_{0}|^{2}+2|a_{-1}|^{2}+2|a_{1}% |^{2})-2b_{0}\left(a_{1}a_{-1}^{\ast}+a_{1}^{\ast}a_{-1}\right)-$$ $$\displaystyle-(b_{1}-b_{-1})(a_{0}(a_{1}^{\ast}-a_{-1}^{\ast})-a_{0}^{\ast}(a_% {1}-a_{-1}))+\sqrt{2}i(b_{-1}+b_{1})\left(a_{1}a_{-1}^{\ast}-a_{1}^{\ast}a_{-1% }\right),$$ $$\displaystyle i\frac{d{b}_{-1}}{d\tau}=2|a_{0}|^{2}(2b_{-1}-b_{1})+(|a_{1}|^{2% }+|a_{-1}|^{2})(3b_{-1}+b_{1})-$$ $$\displaystyle-b_{0}\left[a_{0}(a_{1}^{\ast}-a_{-1}^{\ast})+a_{0}^{\ast}(a_{-1}% -a_{1})\right]+(b_{1}+b_{-1})(a_{1}a_{-1}^{\ast}+a_{1}^{\ast}a_{-1})-$$ $$\displaystyle-i\sqrt{2}\left[b_{0}(a_{1}^{\ast}a_{-1}-a_{-1}^{\ast}a_{1})+b_{1% }(a_{0}a_{-1}^{\ast}+a_{0}a_{1}^{\ast}+a_{0}^{\ast}a_{-1}+a_{0}^{\ast}a_{1})% \right].$$ (15) It is straightforward to check that Eqs. 
(14) and (15) conserve the two norms, which exactly correspond to expressions (6), $$N_{a}=|a_{-1}|^{2}+|a_{0}|^{2}+|a_{1}|^{2},\quad N_{b}=|b_{-1}|^{2}+|b_{0}|^{2% }+|b_{1}|^{2},$$ (16) and the equations can be written in the canonical Hamiltonian form $$i\frac{da_{j}}{d\tau}=\frac{\partial H}{\partial a_{j}^{\ast}},\quad i\frac{db% _{j}}{d\tau}=\frac{\partial H}{\partial b_{j}^{\ast}},$$ (17) $j=1,0,-1$, where the conserved Hamiltonian can be cast in the following form, taking into account the definition of the norms (16): $$\displaystyle H=3N_{a}N_{b}+|b_{0}|^{2}N_{a}+|a_{0}|^{2}N_{b}+$$ $$\displaystyle+(|a_{-1}|^{2}+|a_{1}|^{2}-2|a_{0}|^{2})(b_{1}b_{-1}^{\ast}+b_{1}% ^{\ast}b_{-1})+(|b_{-1}|^{2}+|b_{1}|^{2}-2|b_{0}|^{2})(a_{1}a_{-1}^{\ast}+a_{1% }^{\ast}a_{-1})-$$ $$\displaystyle+\left[a_{0}^{\ast}(a_{-1}-a_{1})+a_{0}(a_{1}^{\ast}-a_{-1}^{\ast% })\right]\left[b_{0}(b_{1}^{\ast}-b_{-1}^{\ast})+b_{0}^{\ast}(b_{-1}-b_{1})% \right]+$$ $$\displaystyle+(a_{1}a_{-1}^{\ast}+a_{-1}a_{1}^{\ast})(b_{1}b_{-1}^{\ast}+b_{-1% }b_{1}^{\ast})-$$ $$\displaystyle-i\sqrt{2}\left[a_{0}(a_{1}^{\ast}+a_{-1}^{\ast})(b_{-1}^{\ast}b_% {1}-b_{-1}b_{1}^{\ast})+b_{0}(b_{1}^{\ast}+b_{-1}^{\ast})(a_{-1}a_{1}^{\ast}-a% _{-1}^{\ast}a_{1})\right.+$$ $$\displaystyle\left.+a_{0}^{\ast}(a_{1}+a_{-1})(b_{-1}b_{1}^{\ast}-b_{-1}^{\ast% }b_{1})+b_{0}^{\ast}(b_{1}+b_{-1})(a_{-1}a_{1}^{\ast}-a_{-1}^{\ast}a_{1})% \right].$$ (18) This Hamiltonian corresponds to the substitution of the Ansätze (12), (13) into the interaction energy (9) (the non-interaction terms in the Hamiltonian (8) are removed from its GA counterpart (18) through the definition of the GA Ansätze, which eliminates the dynamics determined by those terms). III Fixed points of the Galerkin approximation and ground states of the coupled Gross-Pitaevskii equations for $l=1$ It is easy to find FPs of Eqs. (14) and (15) as the following stationary solutions. The first solution is $$\left(a_{1},a_{0},a_{-1},b_{1},b_{0},b_{-1}\right)=\left(0,a,0,\frac{a}{\sqrt{% 2}},0,\frac{a}{\sqrt{2}}\right)e^{-2ia^{2}t}.$$ (19) Ansätze (12), (13) demonstrate, as plotted in the left column of Fig. 1, that this FP corresponds, in terms of the full wave functions, to a coaxial dipole-dipole (CDD) mode, although the respective dipoles in the $\Phi$ and $\Psi$ wave functions are of different types: the former one is represented by the HO eigenfunction, with $l=1,m=0$, defined as a p-wave with respect to vorticity axis $z$, while the latter one is a combination of two p-waves with $l=1,m=\pm 1$, defined with respect to axis $y$. The substitution of (19) into (12) and (13) yields: $$\Phi\left(\mathbf{r},t\right)=\frac{\sqrt{2}a}{\pi^{3/4}}e^{-5it}ze^{-r^{2}/2}% ,~{}\Psi\left(\mathbf{r},t\right)=\frac{\sqrt{2}a}{\pi^{3/4}}e^{-5it}xe^{-r^{2% }/2}.$$ These expression clearly represent orthogonal dipoles, with one oriented along $z$-axis and the other one oriented along the $x$-axis as shown in the left column of Fig. 1(a). The value of Hamiltonian (18) for this FP is $$H_{\mathrm{CDD}}=2a^{4}.$$ (20) The second FP is $$\left(a_{1},a_{0},a_{-1},b_{1},b_{0},b_{-1}\right)=\left(0,a,0,0,a,0\right)e^{% -2ia^{2}t}.$$ (21) According to the Ansätze (12), (13), this FP corresponds to a mode displayed in Fig. 1(b). It is composed of mutually orthogonal dipoles (ODD), each one being represented by the HO p-wave with $l=1,m=0$, defined with respect to the vorticity axes $z$ and $y$. 
The Hamiltonian of this FP is $$H_{\mathrm{ODD}}=5a^{4}.$$ (22) The third FP is $$\left(a_{1},a_{0},a_{-1},b_{1},b_{0},b_{-1}\right)=\left(a,0,0,\frac{a}{\sqrt{% 2}},0,-\frac{a}{\sqrt{2}}\right)e^{-2ia^{2}t},$$ (23) Through Ansätze (12), (13), it corresponds to a coaxial combination of the vortex in $\Phi$ (the single nonzero amplitude $a_{1}$ clearly represents a vortex with respect to axis $z$) and dipole in $\Psi$, as shown in Fig. 1(c). The Hamiltonian of this coaxial vortex-dipole (CVD) FP is $$H_{\mathrm{CVD}}=2a^{4}.$$ (24) The calculation of eigenfrequencies of small perturbations around all these FPs, in the framework of the linearized version of Eqs. (14) and (15) (i.e., the respective Bogoliubov - de Gennes equations Pit ), demonstrates that all the FPs are stable, as all eigenfrequencies are real. Furthermore, the norms (16) of all the three FPs given by Eqs. (19) (21) and (23), with fixed $a$, are equal: $N_{a}=N_{b}=a^{2}$. Therefore, it makes sense to compare the respective values of the Hamiltonian, given by Eqs. (20), (22) and (24), to identify modes with smaller energies, which have a chance to play the role of the GS in the manifold of p-wave states. The comparison demonstrates that the CDD and CVD modes, represented by FPs (19) and (23), may have a chance to realize degenerate (in terms of the energy) GSs, while the ODD mode, which corresponds to FP (21), definitely represents an excited state. The FPs found above are stationary solutions with three or two nonzero amplitudes, out of the six which constitute the GA, and a single frequency. In addition to them, a more general stationary solution of Eqs. (14), (15) can be found, with four nonvanishing amplitudes and two different frequencies: $$a_{0}=b_{0}=0,\quad a_{\pm 1}(t)=a_{\pm}e^{-4ib^{2}t},\quad b_{\pm 1}(t)=\pm be% ^{-2i\left(a_{+}^{2}+a_{-}^{2}\right)t},$$ (25) where $a_{\pm}$ and $b$ are three arbitrary real constants. There is also a mirror image of FP (25) produced by swapping $a\rightleftarrows b$. The norms (6) corresponding to FP (25) are $$N_{a}=a_{+}^{2}+a_{-}^{2},~{}N_{b}=2b^{2}.$$ (26) In the particular case of $a_{+}\equiv a,a_{-}=0,b=a/\sqrt{2}$, the FP (25) goes over into the above one (23), which represents the coaxial vortex-dipole complex. Another particular case corresponds to $$a_{+}=a_{-}=b.$$ (27) It represents to a complex built of two orthogonal dipoles oriented along the $x$ and $y$ axes, as shown in Fig. 1(d). The investigation of the stability of FP (25) against small perturbations by means of the Bogoliubov - de Gennes equations is too cumbersome to be performed analytically. On the other hand, the energy stability can be compared to that of the above FPs, for a particular case of FP (25) with norms (26) equal to those of the solutions obtained above: $N_{a}=N_{b}=a^{2}$. For this purpose, FP (25) is taken as $$\displaystyle a_{+}^{2}+a_{-}^{2}=a^{2},$$ (28) $$\displaystyle a_{\pm 1}(t)=a_{\pm}e^{-2ia^{2}t}\quad b_{\pm 1}(t)=\pm\frac{a}{% \sqrt{2}}e^{-2ia^{2}t},$$ (29) which includes FP (27), as a particular case. As follows from Eqs. (12) and (13), this FP is built of a dipole in the $\Psi$ field oriented along the $z$ axis, and a mixed vortex-antivortex state in the $\Phi$ field. The calculation of the respective value of Hamiltonian (18) for the four-amplitude (4A) FP (29) yields $$H_{\mathrm{4A}}=2a^{4}.$$ (30) Note that expression (30) does not depend on ratio $a_{+}/a_{-}$, see Eq. (28). Thus, Eq. 
(30) demonstrates very broad degeneracy of the GS, in the framework of the GA: the same minimum of $H$ is realized by the FPs given by Eqs. (19), (23), and (29), including the continuous degeneracy in the 4A manifold with respect to the arbitrary value of parameter $a_{+}/a_{-}$. Finally, simulations of the coupled GPEs (4) and (5) in imaginary time, which is a well-known numerical method for constructing stationary states of GPEs Tosi , made it easy to produce stationary solutions, which are stable in real time too, and almost exactly correspond to the FPs of the coaxial dipole-dipole and vortex-dipole types, as predicted by the FPs (19) and (23), respectively. Shapes of these solutions are very close to those displayed, in terms of Ansätze (12), (13), in the left and right columns of Fig. 1. On the other hand, stationary solutions close to the orthogonal-dipole mode, corresponding to FP (21), could not be obtained by means of the imaginary- (or real-) time simulations. This negative result is not surprising, as such methods cannot produce excited states whose energies exceeds the GS energy, as shown in the present case by Eq. (22). It is relevant to mention that evident solutions with identical coaxial p-waves (vortices) in components $\Phi$ and $\Psi$, both taken as per Eq. (2), cannot be produced by the GA adopted above (because axes of the respective sets of the p-waves were chosen to be mutually orthogonal, rather than parallel). For this reason, the coaxial vortices are not considered here, but it is obvious that they are tantamount to vortex states in the single GPE with the HO trapping potential and self-repulsive nonlinearity, which were studied in detail before vortices . Concerning the absolute GS of the model based on the coupled GPEs, with Hamiltonian (8), additional calculations (perturbative analytical and numerical) demonstrate that the obvious isotropic s-wave solution, with $l=0$, namely, $\Phi=\Psi=\exp\left(-i\mu t\right)u(r)$ (in the case of equal norms of the two components, $N_{\Phi}=N_{\Psi}$), remains the state which realizes the minimum of the energy for given norms. IV Numerical results: Persistent time-periodic nonlinear dynamics Numerical solutions of the coupled GPEs which generate the (energy-degenerate) GSs of the $l=1$ manifold are presented in the previous section. Here, our objective is to explore generic dynamical regimes produced by the coupled GPE system and, in parallel, by its GA counterpart. IV.1 The interaction of obliquely oriented dipoles As said above, pairs of mutually parallel or orthogonal dipoles, which correspond to FPs (19) or (21), respectively, form stationary states with the shapes displayed in the left and middle columns of Fig. 1 (recall that, while both these FPs are stable against small perturbations, the stationary solution of the coupled GPEs corresponding to FP (21) is difficult to find in the numerics as it corresponds to an excited state, as per Eq. (22)). The evolution of a pair of dipoles with mutual orientation different from 0 or 90 degrees is initiated by the input $$\Phi\left(\mathbf{r},t=0\right)=Ax\exp\left(-r^{2}/2\right),~{}\Psi\left(% \mathbf{r},t=0\right)=Ax^{\prime}\exp\left(-r^{2}/2\right),$$ (31) with amplitude $A$ and oblique angle $\theta$ between axes $x$ and $x^{\prime}$. Typical results are displayed in Fig. 2 for $\theta=40^{\circ}$ and a norm (6) of each component of $N_{\Phi}=N_{\Psi}=5.5$. 
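The key diagnostic in this and the following subsections is the angular momentum (7) of each field, in particular its $y$-component. A minimal sketch of its evaluation on the numerical grid, using spectral derivatives, is given below; the arrays X, Z, KX, KZ and the spacing dx are assumed to be those of the split-step sketch given earlier.

```python
import numpy as np

def angular_momentum_y(phi, X, Z, KX, KZ, dx):
    """M_y = -i * integral of phi^* (z d/dx - x d/dz) phi, cf. Eq. (7),
    with the derivatives evaluated spectrally on the periodic grid."""
    ft = np.fft.fftn(phi)
    dphi_dx = np.fft.ifftn(1j * KX * ft)
    dphi_dz = np.fft.ifftn(1j * KZ * ft)
    integrand = -1j * np.conj(phi) * (Z*dphi_dx - X*dphi_dz)
    return float(np.real(np.sum(integrand)) * dx**3)
```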
Simulations of both the coupled GPEs (4), (5), and of GA equations (14), (15) reveal periodic rocking oscillations of the two dipoles, as illustrated in Fig. 2(b) by plots showing the exchange of the $y$-component of the angular momentum (the one which is different from zero in the  present case) between the two wave-function components ($\Phi$ and $\Psi$), see Eq. (7). The oscillations exhibit periodic transformations of the two components from the obliquely oriented dipoles into coaxial vortices and back, as shown in Fig. 2(a), where the periodically emerging parallel vortices (in both components) are oriented along the $y$-axis. The period of the oscillations is $T=94$ in this case. As mentioned above, pure vortical modes with $l=1$ have the absolute value of their angular momentum equal to the norm, $M=N$. In the present case, the largest value of the angular momentum, attained in the vortex-like configurations at $t=23.6$ and $71$ in Fig. 2(a), is $\left(\left|\left(M_{y}\right)_{\Phi,\Psi}\right|\right)_{\max}~{}=2.3$, while, as said above, the norms are $N_{\Phi,\Psi}=5.5$, which implies that the periodic conversion of the wave-field configurations into the vortices is incomplete. Additional simulations demonstrate that the ratio $\left(\left|\left(M_{y}\right)_{\Phi,\Psi}\right|\right)_{\max}/N_{\Phi,\Psi}$ increases, following the growth of the norms, i.e., the conversion into the vortices is more complete for a more nonlinear system. Finally, Fig. 2(c) clearly shows that the GA provides a rather accurate fit to the simulations of the full GPE system. In this connection, additional computations demonstrate that the share of the total norm carried by components of the full numerical solution with $l\geq 2$, which are omitted in the framework of the GA based on Eqs. (12), (13), remains $\lesssim 2\%$ in the case of $N_{\Phi}=N_{\Psi}=5.5$, thus justifying the application of the GA. Stronger nonlinearity, i.e., larger values of the total norm, gradually leads to an increase of the discrepancy of the full numerical solution from the GA (as an illustration, see Fig. 3(a) below, which shows the discrepancy in the oscillation period for a much stronger nonlinearity). The dependence of the basic features of the dynamical regime on initial angle $\theta$ between the orientations of the two dipoles in the input configuration (31) is displayed in Fig. 3 for a system with stronger nonlinearity, measured by the norms, $N_{\Phi,\Psi}=22.5$, which are larger by a factor of $5$ than the case shown in Fig. 2. The results of Figs. 3(a) and (b) show a strong dependence of the oscillation period and of the efficiency of the periodic conversion of the wave-function configuration into the set of coaxial vortices, which is measured by the above-mentioned ratio $\left(\left|\left(M_{y}\right)_{\Phi,\Psi}\right|\right)_{\max}/N_{\Phi,\Psi}$, on the initial mutual-orientation angle $\theta$. It is seen that both $T$ and $\left(\left|\left(M_{y}\right)_{\Phi,\Psi}\right|\right)_{\max}/N_{\Phi,\Psi}$ attain a sharp maximum at $\theta\approx 40^{\circ}$ (the GA predicts the maximum of $T$ exactly at $\theta=45^{\circ}$, with a symmetric dependence of $T$ on $\left(\theta-45^{\circ}\right)$, in Fig. 3(a)). In fact, the difference between the full GPE dynamics and its GA-truncated version is largest precisely at $\theta=45^{\circ}$, which explains the large discrepancy in the maximum values of the GPE- and GA-predicted periods, observed in Fig. 3(a). 
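The two quantities plotted in Fig. 3, the oscillation period $T$ and the conversion ratio $\left(\left|\left(M_{y}\right)_{\Phi,\Psi}\right|\right)_{\max}/N_{\Phi,\Psi}$, can be extracted from a sampled time series $M_{y}(t)$ by a simple post-processing step such as the sketch below; uniform sampling and a single dominant oscillation frequency are assumed, and a peak-to-peak estimate of the period would serve equally well.

```python
import numpy as np

def period_and_conversion(t, my, norm):
    """Oscillation period T (from the dominant Fourier peak of M_y(t)) and the
    conversion ratio max|M_y| / N for one component; uniform sampling assumed."""
    ratio = np.max(np.abs(my)) / norm
    spec = np.abs(np.fft.rfft(my - np.mean(my)))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    T = 1.0 / freqs[1 + np.argmax(spec[1:])]      # skip the zero-frequency bin
    return T, ratio
```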
Note also that the largest value of the momentum may even exceed the norm, viz., $\left(\left|\left(M_{y}\right)_{\Phi,\Psi}\right|\right)_{\max}/N_{\Phi,\Psi}\approx 1.11$ at $\theta=40^{\circ}$, as can be seen in Fig. 3(b). In this connection, it is relevant to mention that the dynamical equations of the GA (14) and (15) feature an exact scaling invariance (unlike the GPEs (4), (5), in which the scaling invariance is broken by fixing the HO length to be $1$; in that case, the precession and nutation periods are more complex functions of $1/N$, as found for cross-vortices in Ref. arxiv ). In the case of $N_{\Phi}=N_{\Psi}\equiv N/2$, this implies that the oscillation period scales as $T\sim 1/N$. Accordingly, the increase of the norm by the factor of $5$ in the case displayed in Fig. 3 for $\theta=40^{\circ}$, in comparison with the case of Fig. 2, leads to the reduction of the GA-predicted period from $T=94$ in Fig. 2 to $T\approx 19$ in Fig. 3(a). IV.2 The interaction of mutually orthogonal dipole and vortex The analysis of the FPs presented in the previous section has produced stable stationary solutions for the coaxial vortex-dipole complex, which corresponds to FP (23) and the right column in Fig. 1. Below we consider the dynamics initiated by an input composed of a vortex in one component and a mutually perpendicular dipole in the other, oriented along the $z$ and $x$ directions, respectively: $$\Phi\left(\mathbf{r},t=0\right)=A\left(x+iy\right)\exp\left(-r^{2}/2\right),~\Psi\left(\mathbf{r},t=0\right)=Ax\exp\left(-r^{2}/2\right),$$ (32) cf. Eq. (31). Simulations of these configurations exhibit exchange of the angular momentum between the two wave functions. Initially, the vortex donates its $z$-component of the angular momentum to the dipole, which initially carries no angular momentum. As a result, the dipole starts rotating around the $z$ axis. After a quarter of the period of the resulting oscillations (at $t=12.8$), the dipole absorbs all the angular momentum from the vortex, itself becoming a fully shaped vortex, which satisfies the above-mentioned condition, $\left|M_{z}\right|=N$. Simultaneously, the original vortex component, devoid of the angular momentum, acquires a dipole-mode shape. These metamorphoses are clearly seen in the second column of snapshots displayed in Fig. 4(a). At the time corresponding to a half-period, $t=25.6$, the wave functions return to their original shapes, with the dipole rotated by 90 degrees relative to its original direction (the third column in Fig. 4(a)). A nearly perfect recovery of the initial configuration in both components is observed at the end of the full period, $t=50$. Fig. 4(b) tracks the periodic exchange of the angular momentum between the two wave functions, while Fig. 4(c) demonstrates the evolution of the amplitudes of the eigenfunctions, the projection onto which corresponds to the GA. Close agreement between the GA results and their counterparts generated by simulations of the full GPE system is obtained again. IV.3 The interaction of non-coaxial vortices The analysis of the FPs performed in the previous section did not produce any stationary state composed of two non-coaxial vortices.
Here, we explore the dynamics initiated by a pair of identical vortices with mutually perpendicular orientations, aligned with the $z$- and $y$-axes: $$\Phi\left(\mathbf{r},t=0\right)=A\left(x+iy\right)\exp\left(-r^{2}/2\right),~\Psi\left(\mathbf{r},t=0\right)=A\left(z+ix\right)\exp\left(-r^{2}/2\right),$$ (33) fixing the amplitude as $A=1$ and the norms as $N_{\Phi}=N_{\Psi}=5.56$ (this case of non-coaxial vortices was studied in arxiv , however, for the case of dominant intra-species (self-) interactions). The resulting evolution of the binary system is displayed in Fig. 5, which shows that, unlike the interacting dipoles (cf. Figs. 2 and 4), the motion does not amount to rotation about a particular axis. Instead, the vortices undergo complex deformation, related to the periodic generation of all three components of the angular momentum in each wave function, as shown in Fig. 5(c) (the net angular momentum, defined as per Eq. (7), remains conserved). The evolution of the angular momenta of the two vortices is shown in vectorial (Fig. 5(b)) and scalar (Fig. 5(c)) representations. The vectorial representation (Fig. 5(b)) demonstrates the motion of the locus of each vortex axis in 3D space. The red trajectory pertains to the vortex initially aligned with the $z$-axis, while the blue trajectory corresponds to the initially $y$-oriented vortex. The trajectories feature precession with nutations, qualitatively similar to the case of mixed inter- and intra-species interactions reported in arxiv . The scalar representation (Fig. 5(c)) separates the components of the angular momentum and shows their evolution: the green curves (solid and dashed) pertain to the $z$-components of the angular momenta of the two vortices, the blue curves (solid and dashed) represent the $x$-components, and the red curves (solid and dashed) represent the $y$-components. In addition, the violet curve shows the magnitudes of the total angular momenta of the two condensates, which overlap perfectly; this demonstrates that, in the case of the two vortices, unlike the vortex-dipole case, there is no exchange of angular momentum between the two fields. The horizontal black line in Fig. 5(c) shows the sum of the two angular momenta, which is an integral of motion. As above, Fig. 5(d) demonstrates that the GA provides an accurate description of the present dynamical regime in terms of the finite-mode truncation. Lastly, the analysis was also performed for inputs similar to the one defined by Eq. (33), but with the angle between the angular-momentum vectors of the two vortices different from $90^{\circ}$. These results (not shown) are similar to those presented here. In particular, the evolution of the two angular momenta in the vectorial form again exhibits precession-like motion combined with nutations. V Conclusion In this work, we have introduced a simple 3D nonlinear-wave system, based on two fields trapped in isotropic HO (harmonic-oscillator) potentials and coupled by a cubic repulsion. The system can be implemented in the form of a binary BEC, with the interaction dominated by the inter-component repulsion, which can be enhanced by means of the Feshbach resonance. The realization of the model in a binary Fermi gas, with repulsion between the up and down components, is possible too.
Parallel to the respective system of two coupled GPEs (Gross-Pitaevskii equations), we have introduced a finite-mode truncation, which amounts to the six-mode GA (Galerkin approximation), based on the triplets of HO eigenstates in each component, with quantum numbers $l=1$ and $m=1,0,-1$. The first result is that FPs (fixed points) of the GA almost exactly predict the GS (ground-state) manifold for $l=1$, which features unusually broad degeneracy, including various complexes in the form of a dipole coaxial with a vortex or another dipole. Dynamical regimes were initiated by inputs built as pairs of non-coaxial dipoles and/or vortices, including pairs of orthogonally oriented vortices. As a result, the system gives rise to stable dynamics, characterized by periodic conversions between dipoles and vortices. In this case too, results produced by the GA are well corroborated by direct simulations of the coupled GPEs. As a development of the present system, it is relevant to expand it to a three-component spinor model, in which inter-component interactions may include four-wave mixing, in addition to the mutual repulsion spinor . In particular, a natural possibility will be to consider the interaction between three mutually orthogonal vortices in the three-component system trapped in a common isotropic 3D HO potential. Acknowledgements. RD and TM acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) via GRK 1464, and computation time provided by the Paderborn Center for Parallel Computing (PC${}^{2}$), RD is supported, in a part, by Russian Federation Grant 074-U01 through ITMO Early Career Fellowship scheme. VVK acknowledges support from FCT (Portugal) under grant UID/FIS/00618/2013. BAM appreciated hospitality of the Department of Physics at the University of Paderborn. References (1) R. Y. Chiao, E. Garmire, and C. H. Townes, Self-trapping of optical beams, Phys. Rev. Lett. 13, 479 (1964). (2) Y. Silberberg, Collapse of optical pulses, Opt. Lett. 15, 1282 (1990); R. McLeod, K. Wagner, and S. Blair, (3+1)-dimensional optical soliton dragging logic, Phys. Rev. A 52, 3254 (1995); N. R. Cooper, Propagating magnetic vortex rings in ferromagnets, Phys. Rev. Lett. 82, 1554 (1999); E. Babaev, Dual Neutral Variables and Knot Solitons in Triplet Superconductors, Phys. Rev. Lett. 88, 177002 (2002); E. L. Falcão-Filho, C. B. de Araújo, G. Boudebs, H. Leblond, and V. Skarka, Robust two-dimensional spatial solitons in liquid carbon disulfide, Phys. Rev. Lett. 110, 013901 (2013). (3) Yu. S. Kivshar and D. E. Pelinovsky, Self-Focusing and Transverse Instabilities of Solitary Waves, Phys. Rep. 331, 117 (2000); Y. S. Kivshar and G. P. Agrawal, Optical Solitons: From Fibers to Photonic Crystals (Academic Press, San Diego, 2003); M. Bender, P. H. Heenen, and P. G. Reinhard, Self-consistent mean-field models for nuclear structure, Rev. Mod. Phys. 75, 121 (2003); B. A. Malomed, D. Mihalache, F. Wise, and L. Torner, Spatiotemporal optical solitons, J. Optics B: Quant. Semicl. Opt. 7, R53-R72 (2005); A. S. Desyatnikov, L. Torner, and Y. S. Kivshar, Optical Vortices and Vortex Solitons, Progr. Opt. 47, 1 (2005); E. Radu and M. S. Volkov, Stationary ring solitons in field theory – Knots and vortons, Phys. Rep. 468, 101 (2008); D. Mihalache, Linear and nonlinear light bullets: recent theoretical and experimental studies, Rom. J. Phys. 57, 352-371 (2012). (4) Y. V. Kartashov, B. A. Malomed, and L. Torner, Solitons in nonlinear lattices, Rev. Mod. Phys. 83, 247-306 (2011). (5) A. D. Martin and J. 
Ruostekoski, Quantum dynamics of atomic bright solitons under splitting and recollision, and implications for interferometry, New J. Phys. 14, 043040 (2012); J. Cuevas, P. G. Kevrekidis, B. A. Malomed, P. Dyke, and R. G. Hulet, Interactions of solitons with a Gaussian barrier: Splitting and recombination in quasi-1D and 3D, New J. Phys. 15, 063006 (2013); J. H. V. Nguyen, P. Dyke, D. Luo, B. A. Malomed, and R. G. Hulet, Collisions of matter-wave solitons, Nature Phys. 10, 918-922 (2014); G. D. McDonald, C. C. N. Kuhn, K. S. Hardman, S. Bennetts, P. J. Everitt, P. A. Altin, J. E. Debs, J. D. Close, and N. P. Robins, Bright solitonic matter-wave interferometer, Phys. Rev. Lett. 113, 013002 (2014). (6) L. Bergé, Wave collapse in physics: principles and applications to light and plasma waves, Phys. Rep. 303, 259-370 (1998); C. Sulem and P. L. Sulem, The nonlinear Schrödinger equation: self-focusing and wave collapse (Springer: Berlin, 1999); E. A. Kuznetsov and F. Dias, Bifurcations of solitons and their stability, Phys. Rep. 507, 43-105 (2011). (7) N. K. Efremidis, S. Sears, D. N. Christodoulides, J. W. Fleischer, and M. Segev, Discrete solitons in photorefractive optically induced photonic lattices, Phys. Rev. E 66, 046602 (2002); B. B. Baizakov, B. A. Malomed, and M. Salerno, Multidimensional solitons in periodic potentials, Europhys. Lett. 63, 642 (2003); J. Yang and Z. H. Musslimani, Fundamental and vortex solitons in a two-dimensional optical lattice, Opt. Lett. 28, 2094 (2003); D. Mihalache, D. Mazilu, F. Lederer, Y. V. Kartashov, L.-C. Crasovan, and L. Torner, Stable three-dimensional spatiotemporal solitons in a two-dimensional photonic lattice, Phys. Rev. E 70, 055603(R) (2004). (8) D. Neshev, T. J. Alexander, E. A. Ostrovskaya, Y. S. Kivshar, H. Martin, I. Makasyuk, and Z. Chen, Observation of discrete vortex solitons in optically induced photonic lattices, Phys. Rev. Lett. 92, 123903 (2004); J. W. Fleischer, G. Bartal, O. Cohen, O. Manela, M. Segev, J. Hudock, and D. N. Christodoulides, Observation of vortex-ring “discrete”solitons in 2D photonic lattices, ibid. 92, 123904 (2004). (9) E. A. Cerda-Méndez, D. Sarkar, D. N. Krizhanovskii, S. S. Gavrilov, K. Biermann, M. S. Skolnick, and P. V. Santos, Exciton-polariton gap solitons in two-dimensional lattices, Phys. Rev. Lett. 111, 146401 (2013). (10) L. P. Pitaevskii and S. Stringari, Bose-Einstein Condensation, Oxford University Press (Oxford, 2003). (11) V. I. Kruglov, Yu. A. Logvin, and V. M. Volkov, The theory of spiral laser beams in nonlinear media, J. Mod. Opt. 39, 2277-2291 (1992). (12) H. Sakaguchi, B. Li, and B. A. Malomed, Creation of two-dimensional composite solitons in spin-orbit-coupled self-attractive Bose-Einstein condensates in free space, Phys. Rev. E 89, 032920 (2014). (13) Y.-C. Zhang, Z.-W. Zhou, B. A. Malomed, and H. Pu, Stable solitons in three dimensional free space without the ground state: Self-trapped Bose-Einstein condensates with spin-orbit coupling, Phys. Rev. Lett. 115, 253902 (2015). (14) V. E. Lobanov, Y. V. Kartashov, and V. V. Konotop, . Fundamental, Multipole, and Half-Vortex Gap Solitons in Spin-Orbit Coupled Bose-Einstein Condensates, Phys. Rev. Lett. 112, 180403 (2014). (15) O. V. Borovkova, Y. V. Kartashov, B. A. Malomed, and L. Torner, Algebraic bright and vortex solitons in defocusing media, Opt. Lett. 36, 3088-3090 (2011); O. V. Borovkova, Y. V. Kartashov, L. Torner, and B. A. Malomed, Bright solitons from defocusing nonlinearities, Phys. Rev. E 84, 035602 (R) (2011). (16) Q. Tian, L. Wu, Y. 
Zhang, and J.-F. Zhang, Vortex solitons in defocusing media with spatially inhomogeneous nonlinearity, Phys. Rev. E 85, 056603 (2012); Y. Wu, Q. Xie, H. Zhong, L. Wen, and W. Hai, Algebraic bright and vortex solitons in self-defocusing media with spatially inhomogeneous nonlinearity, ibid. A 87, 055801 (2013); R. Driben, Y. V. Kartashov, B. A. Malomed, T. Meier, and L. Torner, Soliton gyroscopes in media with spatially growing repulsive nonlinearity, Phys. Rev. Lett. 112, 020404 (2014); Y. V. Kartashov, B. A. Malomed, Y. Shnir, and L. Torner, Twisted toroidal vortex-solitons in inhomogeneous media with repulsive nonlinearity, ibid. 113, 264101 (2014); R. Driben, Y. Kartashov, B. A. Malomed, T. Meier, and L. Torner, Three-dimensional hybrid vortex solitons, New J. Phys. 16, 063035 (2014); R. Driben, N. Dror, B. Malomed, and T. Meier, Multipoles and vortex multiplets in multidimensional media with inhomogeneous defocusing nonlinearity, ibid. 17, 083043 (2015); R. Driben, T. Meier, and B. A. Malomed, Creation of vortices by torque in multidimensional media with inhomogeneous defocusing nonlinearity, Sci. Rep. 5, 9420 (2015); N. Dror and B. A. Malomed, Solitons and vortices in nonlinear potential wells, J. Optics 16, 014003 (2016). (17) S. Sinha, Semiclassical analysis of collective excitations in Bose-Einstein condensate, Phys. Rev. A 55, 4325 (1997); R. J. Dodd, K. Burnett, M. Edwards, and C. W. Clark, Excitation spectroscopy of vortex states in dilute Bose-Einstein condensed gases, ibid. 56, 587 (1997); T. Isoshima, M. Nakahara, T. Ohmi, and K. Machida, Creation of a persistent current and vortex in a Bose-Einstein condensate of alkali-metal atoms, ibid. 61, 063610 (2000); A. A. Svidzinsky and A. L. Fetter, Stability of a vortex in a trapped Bose-Einstein condensate, Phys. Rev. Lett. 84, 5919 (2000); C.-H. Hsueh, S.-C. Gou, T.-L. Horng, and Y.-M. Kao, Vortex-ring solutions of the Gross-Pitaevskii equation for an axisymmtrically trapped Bose-Einstein condensate, J. Phys. B: At. Mol. Opt. Phys. 40, 4561-4571 (2007); V. M. Lashkin, Stable three-dimensional spatially modulated vortex solitons in Bose-Einstein condensates, Phys. Rev. A 78, 033603 (2008); T. P. Simula, T. Mizushima, and K. Machida, Vortex waves in trapped Bose-Einstein condensates, ibid. 78, 053604 (2008); R. N. Bisset, W. Wang, C. Ticknor, R. Carretero-González, D. J. Frantzeskakis, L. A. Collins, and P. G. Kevrekidis, Robust vortex lines, vortex rings, and hopfions in three-dimensional Bose-Einstein condensates, Phys. Rev. A 92, 063611 (2015). (18) L. C. Crasovan, G. Molina-Terriza, J. P. Torres, L. Torner, V. M. Pérez-García, and D. Mihalache, Globally linked vortex clusters in trapped wave fields, Phys. Rev. E 66, 036612 (2002); L. C. Crasovan, V. Vekslerchik, V. M. Pérez-García, J. P. Torres, D. Mihalache, and L. Torner, Stable vortex dipoles in nonrotating Bose-Einstein condensates. Phys. Rev. A 68, 063609 (2003); V. M. Lashkin, Two-dimensional multisolitons and azimuthons in Bose-Einstein condensates, ibid. 77, 025602 (2008); S. Middelkamp, P. J. Torres, P. G. Kevrekidis, D. J. Frantzeskakis, R. Carretero-González, P. Schmelcher, D. V. Freilich, and D. S. Hall, Guiding-center dynamics of vortex dipoles in Bose-Einstein condensates, ibid. 84, 011605 (R) (2011). (19) Y. M. Bidasyuk, A. V. Chumachenko, O. O. Prikhodko, S. I. Vilchinskii, M. Weyrauch, and A. I. Yakimenko, Stable Hopf solitons in rotating Bose-Einstein condensates, Phys. Rev. A 92, 053603 (2015). (20) G. A. Swartzlander, Jr. and C. T. 
Law, Optical vortex solitons observed in Kerr nonlinear media, Phys. Rev. Lett. 69, 2503 (1992); M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall, C. E. Wieman, and E. A. Cornell, Vortices in a Bose-Einstein condensate, Phys. Rev. Lett. 83, 2498 (1999); B. P. Anderson, P. C. Haljan, C. A. Regal, D. L. Feder, L. A. Collins, C. W. Clark, and E. A. Cornell, Watching dark solitons decay into vortex rings in a Bose-Einstein condensate, ibid. 86, 2926 (2001); A. E. Leanhardt, A. Görlitz, A. P. Chikkatur, D. Kielpinski, Y. I. Shin, D. E. Pritchard, and W. Ketterle, Imprinting vortices in a Bose-Einstein condensate using topological phases, ibid. 89, 190403 (2002); V. Bretin, P. Rosenbusch, F. Chevy, G. V. Shlyapnikov, and J. Dalibard, Quadrupole oscillation of a single-vortex Bose-Einstein condensate: Evidence for Kelvin modes, ibid. 90, 100403 (2003). (21) P. G. Kevrekidis, R. Carretero-González, D. J. Frantzeskakis, and I. G. Kevrekidis, Vortices in Bose-Einstein condensates: some recent developments. Mod. Phys. Lett. B 18, 1481-1505 (2004); A. L. Fetter, Rotating trapped Bose-Einstein condensates, Rev. Mod. Phys. 81, 647 (2009). (22) J. J. García-Ripoll and V. M. Pérez-García, Stable and unstable vortices in multicomponent Bose-Einstein condensates, Phys. Rev. Lett. 84, 4264 (2000); K. Kasamatsu, M. Tsubota, and M. Ueda, Vortices in multicomponent Bose-Einstein condensates, Int. J. Mod. Phys. B 19, 1835-1904 (2005); K. M. Mertes, J. W. Merrill, R. Carretero-González, D. J. Frantzeskakis, P. G. Kevrekidis, and D. S. Hall, Nonequilibrium dynamics and superfluid ring excitations in binary Bose-Einstein condensates, Phys. Rev. Lett. 99, 190402 (2007); K. J. H. Law, P. G. Kevrekidis, and L. S. Tuckerman, Stable Vortex-Bright-Soliton Structures in Two-Component Bose-Einstein Condensates, ibid. 105, 160405 (2010); R. Zamora-Zamora, M. Lozada-Hidalgo, S. F. Caballero-Benítez, and V. Romero-Rochín, Vortices on demand in multicomponent Bose-Einstein condensates, Phys. Rev. A 86, 053624 2012); M. Tylutki, L. P. Pitaevskii, A. Recati, S. Stringari, Confinement and precession of vortex pairs in coherently coupled Bose-Einstein condensates, arXiv:1601.03695. (23) M. P. Mink, C. M. Smith, and R. A. Duine, Vortex-lattice pinning in two-component Bose-Einstein condensates, Phys. Rev. A 79, 013605 (2009). (24) M. W. Ray, E. Ruokokoski, S. Kandel, M. Möttönen, and D. S. Hall, Observation of Dirac monopoles in a synthetic magnetic field, Nature 505, 657-660 (2014). (25) J. Ruostekoski and J. R. Anglin, Creating vortex rings and three-dimensional skyrmions in Bose-Einstein condensates, Phys. Rev. Lett. 86, 3934 (2001); R. A. Battye, N. R. Cooper and P. M. Sutcliffe, Stable skyrmions in two-component Bose-Einstein condensates, ibid. 88, 080401 (2002); C. M. Savage and J. Ruostekoski, Energetically stable particlelike skyrmions in a trapped Bose-Einstein condensate, ibid. 91, 010403 (20032); J. Ruostekoski, and J. R. Anglin, Monopole core instability and Alice rings in spinor Bose-Einstein condensates, ibid. 91, 190402 (2003). (26) L. S. Leslie, A. Hansen, K. C. Wright, B. M. Deutsch, and N. P. Bigelow, Creation and detection of skyrmions in a Bose-Einstein condensate, Phys. Rev. Lett. 103, 250401 (2009); J. Y. Choi, W. J. Kwon, and Y. I. Shin, Observation of topologically stable 2D skyrmions in an antiferromagnetic spinor Bose-Einstein condensate, ibid. 108, 035301 (2012). (27) R. Driben, V. V. Konotop, and T. 
Meier, Precession and nutation dynamics of nonlinearly coupled non-coaxial three-dimensional matter wave vortices, arXiv:1505.04113. (28) S. Brenner and R. L. Scott, The Mathematical Theory of Finite Element Methods, Springer-Verlag (New York, 2002); J. L. Guermond, P. Minev, and J. Shen, An overview of projection methods for incompressible flows, Comp. Methods Appl. Mech. Engineering 195, 6011-6045 (2006). (29) A. M. Kaufman, R. P. Anderson, T. M. Hanna, E. Tiesinga, P. S. Julienne, and D. S. Hall, Radio-frequency dressing of multiple Feshbach resonance, Phys. Rev. A 80, 050701(R) (2009). (30) S. Tojo, Y. Taguchi, Y. Masuyama, T. Hayashi, H. Saito, and T. Hirano, Controlling phase separation of binary Bose-Einstein condensates via mixed-spin-channel Feshbach resonance, Phys. Rev. A 82, 033609 (2010). (31) D. J. Papoular, G. V. Shlyapnikov, and J. Dalibard, Microwave-induced Fano-Feshbach resonances, Phys. Rev. A 81, 041603(R) (2010); T. V. Tscherbul, T. Calarco, I. Lesanovsky, R. V. Krems, A. Dalgarno, and J. Schmiedmayer, rf-field-induced Feshbach resonances, Phys. Rev. A 81, 050701(R) (2010). (32) M. Egorov, B. Opanchuk, P. Drummond, B. V. Hall, P. Hannaford, and A. I. Sidorov, Measurement of s-wave scattering lengths in a two-component Bose-Einstein condensate, Phys. Rev. A 87, 053614 (2013). (33) B. D. Esry, C. H. Greene, J. P. Burke, Jr., and J. L. Bohn, Hartree-Fock theory for double condensates, Phys. Rev. Lett. 78, 3594-3597 (1997); M. L. Chiofalo, S. Succi, and M. P. Tosi, Ground state of trapped interacting Bose-Einstein condensates by an explicit imaginary-time algorithm. Phys. Rev. E 62, 7438-7444 (2000); W. Bao and Q. Du, Computing the ground state solution of Bose-Einstein condensates by a normalized gradient flow, SIAM J. Sci. Comput. 25, 1674-1697 (2004). (34) T. Isoshima, M. Nakahara, T. Ohmi, and K. Machida, Creation of a persistent current and vortex in a Bose-Einstein condensate of alkali-metal atoms, Phys. Rev. A 61, 063610 (2000); Y. Kawaguchi and M. Ueda, Spinor Bose-Einstein condensates, Phys. Rep. 520, 253-381 (2012).
Torque-luminosity correlation and possible evidence for core-crust relaxation in the X-ray pulsar GX 1+4 B. Paul Tata Institute of Fundamental Research Homi Bhabha Road, Mumbai(Bombay) 400 005, India    A.R. Rao Tata Institute of Fundamental Research Homi Bhabha Road, Mumbai(Bombay) 400 005, India    and K.P. Singh Tata Institute of Fundamental Research Homi Bhabha Road, Mumbai(Bombay) 400 005, India (Received October ; accepted , 1996) Abstract We present the detection of a positive correlation between spin-down rate $\dot{P}$ and pulsed X-ray luminosity in the BATSE archival data of the bright hard X-ray pulsar GX 1+4. We have also seen a delay of 5.6 $\pm$ 1.2 days between the luminosity change and the corresponding change in the spin-down rate. The observed correlation between $\dot{P}$ and L${}_{X}$ is used to reproduce the period history of GX 1+4 based on the observed luminosity alone, and it is found that the spin period can be predicted correct to 0.026% when the luminosity is adequately sampled. The idea that at a higher luminosity more matter is accreted and the accretion disk extends closer to the neutron star thereby transferring more angular momentum to the system, seems not to be the case with GX 1+4. The observed lag between the spin-down rate and the luminosity is reported here for the first time in any such binary X-ray pulsar, and is found to be consistent with the time scale for the core-crust relaxation in a neutron star. keywords: X-rays: stars - pulsars: individual - GX 1+4 \offprints B. Paul, bpaul@tifrvax.tifr.res.in \thesaurus13.25.5; 08.16.7 GX 1+4 1 Introduction Period variations in X-ray binary pulsars are quite common and a number of pulsars show both spin-down and spin-up episodes over time scale of years or less. In binary systems with Roche-lobe overflow of the mass losing secondary, such variations are generally explained in terms of the conventional accretion disk theory where the spinning-up or spinning-down of a neutron star of a given magnetic moment, mass and period depends only on the X-ray luminosity. In binary systems containing massive early type secondaries, however, accretion onto the neutron star is mostly through strong stellar wind and conditions for forming stable accretion disks are generally not present. According to the numerical simulations of mass accretion onto such systems (Taam & Fryxell 1988; Blondin et al. 1990; Matsuda et al. 1991) the small accreted specific angular momentum can change sign in an erratic manner which may lead to alternating spin-up and spin-down episodes. Study of torque-luminosity relationships in X-ray binary systems can therefore, be very instructive in understanding the accretion process in them. The luminous hard X-ray pulsar GX 1+4, first detected in 1970 (Lewin et al. 1971), has several characteristics which makes it an ideal source to test out the concepts of accretion powered X-ray pulsars. It has shown a continuous decrease of pulse period (spin-up) from about 135s in 1970 to about 110s in 1980 and it was included as one of the test sources in understanding the behaviors of disk-fed X-ray pulsars (Ghosh & Lamb 1979a,b). The source was below the detection limit of EXOSAT in 1983 (Hall & Davelaar 1983) and after its rediscovery by GINGA in 1987 (Makishima et al. [1988]) it has been showing a monotonically increasing spin period (spin-down), except for a brief spin-up episode in between (Finger et al. [1993]; Chakrabarty et al. [1994]). 
GX 1+4 has been identified with a red giant M6III star V2116 Oph having an emission line spectrum that resembles a symbiotic star with a strong stellar wind (Davidsen et al. 1976). So far no binary period has been detected from this system, although optical pulsations with the same period as in X-rays have recently been reported (Jablowski et al. 1996). The presence of a giant companion and period change sign reversals could imply that GX 1+4 is a wind-fed system without a stable accretion disk. A correlation between spin-down and X-ray luminosity was pointed out by Chakrabarty (1996), which is in apparent contradiction with the general ideas of X-ray pulsars with accretion disks. To confirm the correlation between spin-down and X-ray luminosity found by Chakrabarty (1996) and to understand the torque-luminosity relation in greater detail, we have obtained the pulse period and luminosity history of GX 1+4 for about 1200 days from the Compton Gamma Ray Observatory Science Support Center (COSSC) BATSE archive and carried out our analysis. In the following sections, we present the analysis, results and its implications. 2 Data The pulse period and the luminosity of GX 1+4 for Truncated Julian Day (TJD) 8370 to 9615 (ie., 1991 April 24 to 1994 September 20) were obtained from the COSSC archive. The observations were done with BATSE Large Area Detectors (see Fishman et al. [1989] for a description of BATSE). The archived data consists of the pulsar frequency obtained from blind searches and epoch folding performed on the BATSE data, the confidence level of the period determination and a Y/N flag indicating whether the confidence level exceeds a predetermined value. The archive also provides the pulsed flux F${}_{X}$ at 40 keV obtained by fitting an optically thin thermal bremsstrahlung (OTTB) spectrum of temperature 50 keV for channels 1 to 5 (25 to 98 keV). In the following analysis, we have taken only those data points with flag Y (i.e., the period determination is reliable), unless otherwise mentioned. 3 Analysis and results The pulse period and the observed pulsed flux F${}_{X}$ at 40 keV are plotted in Fig. 1 for TJD 8350 – 9650 (Fig. 1a and 1d, respectively). A linear fit to the pulse period gives an average value for $\dot{P}$ of 2.12 s yr${}^{-1}$. A quadratic fit to the data gives a value of $\dot{\nu}/\ddot{\nu}$ of 8.3 yr. Higher order polynomials do not improve the fit. To see the period variation in more detail, the residuals to the quadratic fit are shown in Fig. 1b. The pulsed X-ray flux is seen to increase by a factor of $\geq$4 for duration of 2 $-$ 10 days compared to the average flux and by a factor of $\geq$2 for a duration of about 20 $-$ 100 days. Since F${}_{X}$ is obtained after fitting a OTTB spectrum, it is in fact a measure of the hard X-ray pulsed luminosity. Though there have been some indications of an anti-correlation between pulse fraction and total X-ray luminosity (Rao et al. [1994]), the observed pulse fraction in the present spin-down era lies in a narrow range of 0.3 to 0.5. In fact, from a compilation of hard X-ray luminosity of GX 1+4 (Chitnis [1994]) we find a positive correlation between X-ray luminosity and F${}_{X}$. Hence, in the following, we treat F${}_{X}$ as a measure of the total X-ray luminosity. To examine whether the luminosity is related to the pulse period variation, the instantaneous spin down rate $\dot{P}$ is calculated for each of the data points by doing a linear fit to the neighboring 25 data points and this is shown in Fig. 1c. 
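The local-slope estimate described above (a linear fit to the 25 period measurements surrounding each epoch) can be sketched as follows; the array names tjd and period are placeholders for the BATSE epochs (in days) and the measured pulse periods (in seconds), and the uneven sampling of the archive enters only through the actual epochs used in each fit.

```python
import numpy as np

def local_pdot(tjd, period, half_window=12):
    """Instantaneous spin-down rate: slope (s per day, converted to s per yr) of a
    linear fit to the 2*half_window+1 period points centred on each epoch."""
    pdot = np.full(period.size, np.nan)
    for i in range(half_window, period.size - half_window):
        s = slice(i - half_window, i + half_window + 1)
        pdot[i] = np.polyfit(tjd[s], period[s], 1)[0]
    return pdot * 365.25          # s per day -> s per year
```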
The similarity in the Figs. 1c and 1d led to further analysis of correlation and cross-correlation between $\dot{P}$ and the pulsed flux F${}_{X}$. 3.1 Spin down rate and luminosity To estimate the correlation of spin change rate and luminosity we choose only those $\dot{P}$ values where the linear fit around that data point (for about $\pm$ 12 days) is acceptable (unlike Fig. 1c. where all the data points are included). As the regions of very high F${}_{X}$ are of short durations, the determination of $\dot{P}$ is not very reliable and hence we exclude those points from our analysis. The two quantities are positively correlated and we have calculated a correlation coefficient of 0.63 (for 102 data points) and the probability of no correlation in the given data set is estimated to be 10${}^{-12}$. To investigate whether the pulse period variation is completely governed by the luminosity variation, we made an attempt to reproduce the pulse period history of GX 1+4 only from the luminosity history. The positive correlation seen between $\dot{P}$ and F${}_{X}$ is assumed to be the real torque transfer equation in the pulsar and the pulse period of the first data point is propagated with time depending on F${}_{X}$, using the linear relation obtained between $\dot{P}$ and F${}_{X}$. For this purpose we have used those F${}_{X}$ values even when the period determination is uncertain (Flag N in the archive). The resultant residuals in the period determination are shown in Fig. 2b. For comparison, we show in Fig. 2(a) the residuals to the period obtained by assuming a constant $\dot{P}$. The 100 $-$ 200 days features in the upper plot is not present in the lower plot signifying that the pulse period changes are actually correlated to the luminosity. However, the reproduction of the pulse period for days later than TJD 9000 deviates from the observed one by up to about 0.5 s because of the lack of sufficient number of F${}_{X}$ measurements. The rms deviation in the pulse period as estimated from only a constant $\dot{P}$ (Fig. 2a) is 0.1 s and it improves to 0.04 s when pulse period is predicted from the $\dot{P}$ $-$ L${}_{X}$ relation (Fig. 2b). The rms deviation reduced further to 0.03 s (which is the typical error in the period determination) for TJD 8370 to 9000 (where F${}_{X}$ is well determined and well sampled). Hence, we can conclude that when F${}_{X}$ is well sampled, all the variations in period can be explained correctly within the observational errors using a simple linear relation between $\dot{P}$ and F${}_{X}$. 3.2 Time delay The instantaneous spin-down rate and the pulsed flux were subjected to cross-correlation tests. For this purpose $\dot{P}$ is calculated using two neighboring data points and the average value of F${}_{X}$ is used. When the total data is taken we find a positive correlation between $\dot{P}$ and F${}_{X}$ at a confidence level of 99.4%. The reduced level of confidence is due to the fact that $\dot{P}$ is calculated over 2 observations (unlike $\pm$12 data points used in the previous section). The correlation, however, was found to be delayed by a few days. To improve the confidence level, the total data are divided into several sets of 128 data points and the derived cross-correlation values are co-added. The resultant profile is shown in Fig. 3. The central part of the figure is shown in an expanded form in the inset to the figure. As can be seen from the figure, there is a clear asymmetry near 0. 
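The co-adding and peak-fitting procedure described here can be sketched as follows; for simplicity the sketch assumes approximately uniform (roughly daily) sampling and equal-length segments, which is a simplification of the actual, unevenly sampled BATSE series.

```python
import numpy as np
from scipy.optimize import curve_fit

def lag_estimate(pdot, fx, dt_days=1.0, nseg=128, max_lag=30):
    """Co-added cross-correlation of the two-point pdot estimates against F_X,
    followed by a Gaussian fit to the central peak.  A positive fitted centre
    means pdot lags behind F_X (np.correlate convention: c_k = sum_n a_{n+k} b_n)."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc, nchunks = np.zeros(lags.size), 0
    for start in range(0, pdot.size - nseg + 1, nseg):
        a = pdot[start:start + nseg] - np.nanmean(pdot[start:start + nseg])
        b = fx[start:start + nseg] - np.nanmean(fx[start:start + nseg])
        a, b = np.nan_to_num(a), np.nan_to_num(b)
        full = np.correlate(a, b, mode="full")            # index nseg-1 is zero lag
        cc += full[nseg - 1 - max_lag: nseg + max_lag]
        nchunks += 1
    cc /= nchunks
    gauss = lambda x, amp, mu, sig: amp * np.exp(-0.5*((x - mu)/sig)**2)
    popt, _ = curve_fit(gauss, lags*dt_days, cc, p0=(cc.max(), 0.0, 5.0))
    return popt[1]                                        # delay in days
```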
A Gaussian fit to the profile near 0 gives a $\chi^{2}$ of 20 for 35 degrees of freedom (dof) and the derived value of delay is (4.8 $\pm$ 1.0) $\times$ 10${}^{5}$ s (5.6$\pm$1.2 days). The errors are calculated by the criterion of $\chi_{min}^{2}$+2.3 (1 $\sigma$ error for two free parameters). A constant fit to the profile gives $\chi^{2}=$ 75 for 36 dof showing the existence of correlation at a confidence level of 99.99%. This confidence level improves further if the value of the constant is kept fixed at 0 (i.e., there is no correlation instead of constant correlation). A Gaussian fit with the centroid frozen at zero gives $\chi^{2}=$ 54 signifying the existence of a delay at a very high confidence level (the value of $\Delta\chi^{2}$ being 34 for one additional parameter). Hence the co-adding method resulted in the detection of a delay at a high confidence level, and could be the reason for the lack of detection of any such delay by other workers (Chakrabarty 1996). The delay between $\dot{P}$ and F${}_{X}$ is seen for the first time in an X-ray pulsar. 4 Discussion The frequent and large variations in the X-ray luminosity of GX 1+4 observed with BATSE are quite similar to those found in the other accretion-powered X-ray pulsars. Power spectrum analysis of the luminosity fluctuations shows a power-law component (index= $-$2.1) indicative of a red noise in the system and has been seen before (Baykal & Ogelman 1993). Power spectrum analysis of the period fluctuations also shows a red-noise component. Period fluctuations have also been seen in other accreting pulsars with time scales down to a few days, but the red noise component seen here suggests that these fluctuations might represent torques that are internal to the neutron star rather than due to inhomogeneities in the accretion flow (White et al. 1995). The luminosity fluctuations are found to be correlated with the instantaneous $\dot{P}$, which always stays positive throughout the observation. The direct positive correlation of $\dot{P}$ with the X-ray luminosity is difficult to explain in terms of accretion disk models (Ghosh & Lamb 1991) as in such models an increase in luminosity is related to increased mass accretion rate that decreases the inner radius of the disk and leads to a spin-up of the neutron star i.e., negative $\dot{P}$. On the other hand, if GX 1+4 accretes matter directly through stellar wind with negligible specific angular momentum, then the reversal of spin change sign could mean a reversal in the direction of the small disk that can form. A positive correlation between $\dot{P}$ and L${}_{X}$ can then be expected, as a sudden decrease in the net angular momentum can lead to an increase in accretion (King 1995). The delay between L${}_{X}$ and $\dot{P}$ is difficult to explain in any accretion theory. The region of hard X-ray emission is very close to the neutron star surface and one cannot expect any delay between the X-ray emission and the resultant angular momentum transfer to the neutron star. Hence we look for some phenomena internal to the neutron star as a possible explanation to the delay. In this regard it is very instructive to compare these results to a similar phenomena observed in GRO J1744-28 (Stark et al. 1996). Stark et al. have found a phase lag in the bursting X-ray pulsar GRO J1744-28. 
During an X-ray burst when the X-ray luminosity increased by more than a factor of 15 in about 10 s, the phase lag increased to about 28 ms and subsequently the phase lag relaxes back with an exponential decay time of about 720 s. Interpreting this phenomenon in terms of models for pulsar glitches developed for radio pulsars, the phase lag during the burst corresponds to an initial spin-down with $\Delta\Omega/\Omega\sim$ $-$ 10${}^{-3}$. The exponential decay time scale is equated to the crust-core coupling time scale, which is (4$\times$10${}^{2}$ $-$ 10${}^{4}$) P, where P is the rotation period of the neutron star (Alpar & Sauls 1988). If the phenomena observed in GRO J1744-28 is treated as an impulse response to luminosity change and if this phenomena is common to GX 1+4 too, continuous changes in luminosity (as seen in GX 1+4) will reflect as a delay in the $\dot{P}$ variation. The magnitude of period variation in GX 1+4 (dP/P = $-$ $\Delta\Omega/\Omega$ $\sim$ 10${}^{-3}$) is comparable to that seen in GRO J1744-28. Further, the observed time scale (6 days) agrees with the relation between $\tau$ and P given by Alpar & Sauls (1988). The observed lag of $\dot{P}$ with respect to L${}_{X}$ is, therefore, consistent with the impulse response of X-ray luminosity variation seen in GRO J1744-28, with the time scales scaled up according to the relation given for core-crust relaxation. As pointed out by Stark et al., for the core-crust relaxation to occur, first the crust has to decouple and the angular momentum has to be transferred to the crust and the crust couples back to the core in a time scale given by Alpar & Sauls. In conclusion, our analysis of the period and X-ray luminosity history of GX 1+4 observed with the BATSE shows: (i) a positive correlation between pulsed hard X-ray luminosity and spin-down rate, and (ii) the spin-down rate lags by 5.6$\pm$1.2 days with respect to the pulsed luminosity. These results suggest that the internal torque of the neutron star can play a dominant role in the period-luminosity history of GX 1+4. Acknowledgements. We thank the BATSE team and COSSC for providing the valuable pulsar data. We thank the anonymous referees for their comments and suggestions. References [1988] Alpar, M. A., Sauls, J. A., 1988, ApJ 327, 723 [1993] Baykal, A., Ogelman, H., 1993, A&A 267, 119 [1993] Blondin, J.M., Kallman, T.R., Fryxell, B.A., Taam, R.E., 1990, ApJ 356, 591. [1994] Chakrabarty D., Prince T. A., Finger M. H., 1994, IAU Circular Nr. 6105 [1996] Chakrabarty D., 1996, Presented at the Symposium on X-ray Timing, 31st COSPAR Scientific Assembly, 14 $-$ 21 July, 1996, Birmingham, UK. [1994] Chitnis V. R., 1994, PhD Thesis, Bombay University. [1976] Davidsen, A., Malina, R.F., Bowyer, S., 1976, ApJ 203, 448. [1993] Finger M. H., Wilson R. Rb., Fishman G. J., et al., 1993 IAU Circular Nr. 5859 [1989] Fishman, G. J. et al., 1989 in Proc. of the GRO Science Workshop, ed. W. N. Jonshon (Greenbelt: NASA/GSFC), p2. [1979a] Ghosh P., Lamb F. K., 1979a, ApJ 223, L83 [1979b] Ghosh P., Lamb F. K., 1979b, ApJ 234, 296 [1991] Ghosh P., Lamb F. K., 1991, in Neutron Stars: Theory and Observations, eds. J. Venturs, D. Pines, (NATO ASI Ser. C, 344) (Dordrecht: Kluwer), p363 [1983] Hall, R., Davelaar J., 1983, IAU Circular Nr. 3872 [1996] Jablowski, F., Pereira, M., Braga, J., Campos, S.J., Gneiding, C., 1996, IAU Circular Nr. 6489 [1995] King, A., 1995, in X-ray Binaries, eds. W. H. G. Lewin, Jan van Paradijs, E.P.J. 
van den Heuvel, Cambridge University Press, Cambridge, p419 [1971] Lewin, W. H. G., Ricker, G., McClintock, J.E., 1971, ApJL 169, L17 [1988] Makishima K., Ohashi T., Sakao T., et al., 1988, Nat 333, 746 [1991] Matsuda, T., Sekino, N., Sawada, K., et al., 1991, A&A 248, 301. [1994] Rao A. R., Paul B., Chitnis V. R., Agrawal P. C., Manchanda R. K., 1994, A&A 289, L43 [1996] Stark, M.J., Baykal, A., Strohmayer, T., Swank, J.H. 1996, ApJ 470, L109 [1988] Taam, R.E., Fryxell, B.A., 1988, ApJ 327, L73. [1995] White, N.E., Nagase, F., Parmar, A. N., 1995, in X-ray Binaries, eds. W. H. G. Lewin, Jan van Paradijs, and E.P.J. van den Heuvel, Cambridge University Press, Cambridge, p1.
Self-organization processes in a laser system with a nonlinear absorber and an external force E.D. Belokolos, V.O. Kharchenko vasiliy@imag.kiev.ua Institute of Magnetism, National Academy of Sciences of Ukraine, 03142, Kiev, Ukraine (December 7, 2020) Abstract We discuss mechanisms of self-organization in a two-level solid-state class-B laser system. The model is considered under the influence of a nonlinear absorber and of an external force, treated separately. It is found that self-organization occurs through a Hopf bifurcation and results in stable pulsed radiation. The analysis is based on the investigation of the Floquet exponent. It is found that the nonlinear absorber extends the domain of control parameters for which stable periodic radiation is realized, whereas an external force suppresses self-organization. The combined influence of the external force and the nonlinear absorber results in a more complicated picture of self-organization with two reentrant Hopf bifurcations. pacs: 05.45.-a, 42.65.-k, 89.75.Fb, 02.30.Oz I Introduction One of the most intriguing phenomena in systems with nonlinear dynamics is the transition to a regime of dissipative structure formation. The related problem of investigating such effects in systems with a large number of degrees of freedom has attracted increasing attention over the last three decades. Due to self-organization the number of degrees of freedom is reduced, and the system dynamics can be described in terms of macroscopic variables. A typical picture is realized in laser systems, where the description is given in terms of the electric field amplitude (or the radiation intensity), the polarization and the population inversion Haken . Laser theory shows that the corresponding dissipative structures define the formation of pulsed or modulated signals in homogeneous systems, or of spirals in spatially extended ones zigzag . In practice the formation of stable periodic radiation can be induced by introducing an additional medium with nonlinear properties, acting as an absorber or a modulator. This type of lasing is known as passive. Usually such a medium leads to a nonlinear dependence of the relaxation time of the electric field amplitude Khanin1 ; Khanin2 or to a nonlinear dependence of the refractive index Hercher ; Hercher2 ; physrevlet76 ; b716 ; composite materials can be used to introduce different types of such nonlinearity sdarticle4 ; citation2 ; citation3 ; citation4 ; citation5 ; citation6 . The coherent dynamics of two-level laser systems in the presence of dispersive and absorptive effects has been studied theoretically and experimentally exp2003 ; numeric2002 . Statistical properties of self-organization effects in such systems were discussed in condmat1 . Regimes of optical parametric oscillation in a semiconductor microcavity are studied in condmat2 . It was found that the stationary behaviour of the polarization can be described by the formalism of non-equilibrium transitions, where bistability is observed (see physrevlet76 ; PhysRevA78 ). It was shown that oscillating lasing is realized inside a bounded domain of the system parameters. Another (active) way to initiate coherent lasing is to introduce an external influence on the nonlinear processes in the cavity kachmarek . An important problem in laser physics is to find possible mechanisms of stable periodic radiation and to determine the range of control parameters that govern its properties (see for instance Khanin1 ; Khanin2 and citations therein). 
Although this problem is still open even for deterministic (regular) systems, much attention has been paid to finding coherent regimes under the influence of stochastic sources Gardiner86 ; Gardiner2000 ; Risken84 ; Horstshemke . In this paper we aim to investigate the dynamics of solid-state class-B laser systems, which are simple to realize and widely used in physical applications. We consider deterministic models only. Using a theoretical approach based on Floquet analysis, we will explore in what manner a nonlinear medium can induce stable periodic radiation. It will be shown that by varying the saturation amplitude of the electric field and the absorption coefficient one can arrive at stable and unstable dissipative structures. Properties of the self-organization process induced by an external force will be considered. Finally, the combined influence of the nonlinear medium and the external force on the system dynamics will be described. The paper is organized in the following manner. In Section II we present a model of our system, where we introduce theoretical constructions to model the influence of both an absorber and an external force. Section III is devoted to the development of the analytical approach used to study the process of dissipative structure formation. In Section IV we apply the derived formalism to investigate the properties of stable periodic radiation in the presence of the absorber, the external force, and their combined effect. The main results and perspectives are collected in the Conclusion (Section V). II Model Considering a prototype model for a two-level laser system, one deals with the dimensionless variables: the electric field amplitude $E$, the polarization $P$ and the population inversion $S$. A standard technique Khanin1 allows one to derive evolution equations for these three macroscopic degrees of freedom from the Maxwell-type equation for the electromagnetic field and the evolution equation for the density matrix. This leads to a system of Maxwell-Bloch type that reduces to the Lorenz-Haken model in the form $$\left\{\begin{array}[]{l}\varkappa^{-1}\dot{E}=-E+P,\\ \gamma_{\bot}^{-1}\dot{P}=-P+ES,\\ \gamma_{\|}^{-1}\dot{S}=(S_{e}-S)-EP.\end{array}\right.$$ (1) For a single-mode laser system the relaxation of the electric field amplitude $E$ is due to losses in the bulk of the medium and is characterized by the rate $\varkappa=1/(2\tau_{c})$, where $\tau_{c}$ is the lifetime of a photon in the cavity. $\gamma_{\bot}$ is the relaxation rate of the nondiagonal elements of the density matrix, which is related to the half-width of the spectral line. The relaxation scale for the population inversion is determined by the rate $\gamma_{\|}$, defined by both the transition probability between the two energy levels and the corresponding frequency. $S_{e}$ controls the pump intensity, as usual. In the model (1) the equation for the amplitude $E$ contains only a linear combination of $E$ and the polarization $P$, whereas the evolution equations for both $P$ and the inversion $S$ are nonlinear. It is essential that the positive feedback between $E$ and $S$ leads to an instability in the polarization that induces self-organization. According to the Le Chatelier principle such positive feedback is compensated by a negative one in the third equation (the last term). To perform the analysis we pass to the dimensionless variables $\tau\equiv t\varkappa$, $\sigma\equiv\varkappa/\gamma_{\bot}$ and $\varepsilon\equiv\varkappa/\gamma_{\|}$. 
Hence, the system (1) takes the form $$\left\{\begin{array}[]{l}\dot{E}=-E+P,\\ \sigma\dot{P}=-P+ES,\\ \varepsilon\dot{S}=(S_{e}-S)-EP.\end{array}\right.$$ (2) Assuming different relations between the relaxation scales $\sigma$ and $\varepsilon$, one can distinguish several classes of laser systems. At $\varepsilon,\sigma\ll 1$ we arrive at the laser models of class-A (organic dye lasers) with a one-dimensional phase space, where the system states are represented by fixed points only. Here self-organization effects are described by the formalism of non-equilibrium phase transitions. Class-B (solid-state lasers) is characterized by the condition $\sigma\ll 1$. Here the phase space is two-dimensional, transient processes are oscillatory, and hence self-organization results in the formation of dissipative structures. For class-C (molecular gas lasers) we set $\sigma,\varepsilon\sim 1$, and in the three-dimensional phase space a strange attractor can be realized. Finally, class-D (beam masers) is characterized by the condition $\sigma,\varepsilon\gg 1$. In this paper we consider class-B only, where the polarization $P$ is assumed to be a microscopic quantity and should be treated as a fast variable which follows the evolution of the electric field amplitude $E$. Such a situation is realized in single-mode solid-state laser systems with low-doped crystals ($Al_{2}O_{3}:Cr^{3+}$) and glasses (soda-lime glass), some gas lasers ($CO_{2}$), and fiber and semiconductor lasers Khanin1 ; Khanin2 ; PRA2002 . Assuming the conditions $\gamma_{\bot}\gg\varkappa,\gamma_{\|}$, one can use the adiabatic elimination procedure, which yields the relation $P=ES$. As a result, instead of the system (2) we obtain a two-component model in the form $$\left\{\begin{array}[]{l}\dot{E}=-E(1-S),\\ \dot{S}=\varepsilon^{-1}\left[S_{e}-S(1+E^{2})\right].\end{array}\right.$$ (3) The model (3) by itself cannot exhibit a stable oscillating regime of the electric field $E$. It was shown experimentally and theoretically Hercher ; Khanin1 ; Khanin2 that stable oscillations can be realized if an additional nonlinear medium is introduced into the cavity. The first way to obtain periodic lasing is to use a passive modulating medium (a nonlinear material) that absorbs weak radiation and transmits signals with large amplitude. Such absorbers are realized in practice as phthalocyanine fluids in Fabry–Perot cavities Optics2002 , and as the gases $SF_{6}$, $BaCl_{3}$ and $CO_{2}$ PismavGETF ; GETF71 . To describe the action of the absorber it was proposed to introduce a nonlinear damping into the evolution equation for the electric field Haken80 $$f_{\kappa}=-\frac{\kappa E}{1+E^{2}/E_{s}^{2}},$$ (4) where $E_{s}$ is the saturation amplitude. The second way is to use an additional medium with a nonlinear refractive index $n=n(E)$ Hercher ; Hercher2 ; sdarticle4 ; physrevlet76 . Such a modulator can be used to increase the Q-factor of the laser. We will model the action of such an effective medium by the external force $f_{e}(E)$ assumed in the form $$f_{e}=-A-CE^{2},$$ (5) which corresponds to the action of a bare potential $V=AE+CE^{3}/3$, where the coefficients $A$ and $C$ control photon processes in the modulator. We use the general construction (5) in order to investigate the influence of the parameters $A$ and $C$ on lasing. In physical applications one can associate $-A$ with an incident field amplitude, while $C$ controls the nonlinear properties of the refractive index $n(E)$. One of the simplest situations is considered in PhysRevA78 , where only the case $A<0$ was investigated. 
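As a quick numerical aside (not part of the original analysis), the reduced model (3) is easy to integrate directly; the minimal sketch below uses illustrative, hypothetical values of $\varepsilon$ and $S_{e}$ chosen so that the lasing fixed point $(E_{0},S_{0})=(\sqrt{S_{e}-1},1)$ is a focus, and the trajectory simply spirals into it, consistent with the statement above that (3) alone shows no sustained oscillation.

```python
# Minimal sketch: integrate the reduced class-B model (3) with RK4.
# eps and S_e are illustrative assumptions, not values from the paper.
import numpy as np

eps, S_e = 10.0, 2.0                 # assumed relaxation ratio and pump intensity

def rhs(y):
    E, S = y
    return np.array([-E * (1.0 - S),
                     (S_e - S * (1.0 + E**2)) / eps])

def rk4_step(y, dt):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

y, dt = np.array([0.1, 0.1]), 0.01   # weak initial field, low inversion
for _ in range(10_000):              # integrate up to tau = 100
    y = rk4_step(y, dt)

print("final (E, S):", y)
print("lasing fixed point:", (np.sqrt(S_e - 1.0), 1.0))  # trajectory spirals into it
```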
Combining all the above assumptions into one model for a single-mode laser system, we obtain the generalized system of nonlinear equations $$\left\{\begin{array}[]{l}\dot{E}=-E(1-S)+f_{e}(E)+f_{\kappa}(E),\\ \dot{S}=\varepsilon^{-1}\left[S_{e}-S(1+E^{2})\right].\end{array}\right.$$ (6) Using two types of additional medium in the cavity, one can expect that certain combinations of the modulator and absorber parameters exist that provide stable periodic radiation of the laser. III Main equations To find the mechanisms responsible for the formation of stable dissipative structures we use the standard procedure to analyze the conditions under which a bifurcation into a limit cycle occurs hassard . To this end we rewrite the system (6) in the more general form $$\left\{\begin{array}[]{l}\dot{E}=f^{(1)}(E,S),\\ \varepsilon\dot{S}=f^{(2)}(E,S),\end{array}\right.$$ (7) where the effective forces are $$\begin{split}&\displaystyle f^{(1)}(E,S)\equiv-\left[A+CE^{2}\right]-E-\frac{\kappa E}{1+E^{2}/E_{s}^{2}}+ES,\\ &\displaystyle f^{(2)}(E,S)\equiv\varepsilon^{-1}\left[S_{e}-S(1+E^{2})\right],\end{split}$$ (8) and the constructions (4), (5) have been used. We deal with a problem of nonlinear dynamics and present the behaviour of the system in the phase plane $(E,S)$. First, we consider the steady states $E_{0}$ and $S_{0}$, defined as the coordinates of fixed points in the phase plane. Setting $\dot{E}=0$ and $\dot{S}=0$, one finds the steady states as solutions of the stationary equations $$\begin{split}&\displaystyle E_{0}\left(\frac{S_{e}}{1+E_{0}^{2}}-\frac{\kappa E_{s}^{2}}{E_{0}^{2}+E_{s}^{2}}-2CE_{0}-1\right)=A,\\ &\displaystyle S_{0}=S_{e}(1+E_{0}^{2})^{-1}.\end{split}$$ (9) The behaviour of phase trajectories in the vicinity of these fixed points can be analyzed with the help of the Lyapunov exponent approach. Here time-dependent solutions of the above system are assumed to be of the form $E\propto e^{\Lambda t},\quad\Lambda=\lambda+i\omega$, where $\lambda$ controls the stability of the phase trajectories and $\omega$ determines the pulse frequency of the signal. The magnitudes of the real and imaginary parts of $\Lambda$ are calculated from the Jacobi matrix elements $$M_{ij}\equiv\left(\frac{\partial{f^{(i)}}}{\partial{x_{j}}}\right)_{x_{j}=x_{j0}};\quad x_{j}\equiv\{E,S\},\quad i,j=1,2,$$ (10) where the subscript 0 refers to the steady states. Inserting (8) into the definition (10), we get the matrix elements $$\displaystyle M_{11}=-M_{0}+S_{0},$$ (11) $$\displaystyle M_{0}=1+2CE_{0}+\kappa\frac{1-E_{0}^{2}/E_{s}^{2}}{(1+E_{0}^{2}/E_{s}^{2})^{2}},$$ $$\displaystyle M_{12}=E_{0};\ M_{21}=-2\varepsilon^{-1}S_{0}E_{0};$$ (12) $$\displaystyle M_{22}=-\varepsilon^{-1}(1+E_{0}^{2}).$$ Then the equation for eigenvalues and eigenvectors $$\sum_{j}M_{ij}V_{j}=\Lambda V_{i}$$ (13) gives expressions for $\lambda$ and $\omega_{0}$ as follows: $$\begin{split}&\displaystyle\lambda=\frac{1}{2}\left[(S_{0}-M_{0})-\varepsilon^{-1}\left(1+E_{0}^{2}\right)\right],\\ &\displaystyle\omega_{0}=\frac{1}{2}\sqrt{8\varepsilon^{-1}S_{0}E_{0}^{2}-\left[(S_{0}-M_{0})+\varepsilon^{-1}(1+E_{0}^{2})\right]^{2}}.\end{split}$$ (14) If the real part $\lambda$ of the Lyapunov exponent vanishes, the fixed point $(E_{0},S_{0})$ corresponds to the center of a limit cycle. 
This leads to the relation $$\varepsilon(S_{0}-M_{0})\geq 1+E_{0}^{2};$$ (15) and yields a condition for the frequency of oscillations $$8\varepsilon S_{0}E_{0}^{2}\geq\left[\varepsilon(S_{0}-M_{0})+(1+E_{0}^{2})\right]^{2}.$$ (16) To investigate the stability of such a limit cycle we analyze the behaviour of trajectories in the vicinity of the fixed point $(E_{0},S_{0})$. To this end we rewrite the equations of motion (7) with the variables $E$ and $S$ counted from the stationary magnitudes $E_{0},S_{0}$. This can be done with the help of the transformation $$\vec{X}=\vec{X}_{0}+\hat{P}\cdot\vec{\delta},$$ (17) where the following notations for pseudovectors are used: $$\vec{X}\equiv\left(\begin{array}[]{l}E\\ S\end{array}\right),\quad\vec{\delta}\equiv\left(\begin{array}[]{l}E-E_{0}\\ S-S_{0}\end{array}\right).$$ (18) The corresponding transformation matrix $\hat{P}$ is constructed from the components of the eigenvector $\vec{V}$, i.e.: $$P\equiv\left(\begin{array}[]{l}\Re V_{1}\quad-\Im V_{1}\\ \Re V_{2}\quad-\Im V_{2}\end{array}\right),\quad\vec{V}\equiv\left(\begin{array}[]{l}V_{1}\\ V_{2}\end{array}\right).$$ (19) Assuming $V_{1}\equiv 1$, for the second component $V_{2}$ one gets from Eq.(13) $$\begin{split}\displaystyle V_{2}&\displaystyle=\frac{(M_{0}-S_{0})+{\rm i}\omega_{c}}{E_{0}},\\ \displaystyle\omega_{c}&\displaystyle\equiv\left.\omega_{0}\right|_{\lambda=0}=\varepsilon^{-1}(1+E_{0}^{2})\left[\frac{2S_{e}E_{0}^{2}}{(1+E_{0}^{2})^{3}}\varepsilon-1\right]^{1/2}.\end{split}$$ (20) Hence, the transformation matrix (19) takes the form $$P=\left(\begin{array}[]{l}1\qquad\qquad\quad\quad\quad 0\\ (M_{0}-S_{0})/E_{0}\quad-\omega_{c}/E_{0}\end{array}\right).$$ (21) This leads to evolution equations for the deviations written in vector form, $$\dot{\vec{\delta}}=\vec{F},\qquad\vec{F}\equiv P^{-1}\vec{f}.$$ (22) Here the pseudovector of the canonical force $$\vec{F}=\left(\begin{array}[]{l}F^{(1)}\\ F^{(2)}\end{array}\right)\equiv\left(\begin{array}[]{l}f^{(1)}-f^{(1)}_{0}\\ f^{(2)}-f^{(2)}_{0}\end{array}\right),$$ (23) satisfies the conditions hassard ; Poincare ; Andronov ; Leontovich $$\frac{\partial\vec{F}}{\partial\vec{\delta}}=\left(\begin{array}[]{l}0\quad-\omega_{c}\\ \omega_{c}\qquad 0\end{array}\right),$$ (24) and has the components $$F^{(1)}=f^{(1)},\quad F^{(2)}=\alpha f^{(1)}-\beta\varepsilon f^{(2)};$$ (25) $$\alpha\equiv\frac{M_{0}-S_{0}}{\omega_{c}},\qquad\beta\equiv\frac{E_{0}}{\varepsilon\omega_{c}}.$$ (26) The above procedure allows one to find the stability of the manifold formed around the fixed point $(E_{0},S_{0})$. Using the standard technique hassard , one can state that the limit cycle is stable only if the real part of the Floquet exponent $$\Phi=\frac{\rm i}{2{\omega_{0}}}\left(g_{11}g_{20}-2|g_{11}|^{2}-\frac{1}{3}|g_{02}|^{2}\right)+\frac{1}{2}g_{21},$$ (27) is negative at the bifurcation point. 
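Before turning to the structure constants entering (27), the linear-stability part of this procedure, Eqs. (10)-(14), can be checked with a few lines of code. The sketch below is only an illustration and not part of the original analysis: all parameter values are assumptions, and the stationary amplitude $E_{0}$ is chosen freely, with $A$ back-computed from Eq. (9) so that the fixed point is self-consistent.

```python
# Minimal sketch of the linear-stability step (10)-(14): build the Jacobi
# matrix at a chosen fixed point and compare its eigenvalues with Eq. (14).
# kappa, Es, C, S_e, eps and E0 are illustrative assumptions.
import numpy as np

kappa, Es, C, S_e, eps = 1.0, 1.0, -0.1, 3.0, 10.0
E0 = 1.2                                   # chosen stationary amplitude
S0 = S_e / (1.0 + E0**2)                   # second line of Eq. (9)
A = E0 * (S0 - kappa * Es**2 / (E0**2 + Es**2) - 2.0 * C * E0 - 1.0)  # first line of Eq. (9)

M0 = 1.0 + 2.0 * C * E0 + kappa * (1.0 - E0**2 / Es**2) / (1.0 + E0**2 / Es**2)**2
M = np.array([[S0 - M0,                E0],
              [-2.0 * S0 * E0 / eps,  -(1.0 + E0**2) / eps]])   # elements (11), (12)

lam14 = 0.5 * ((S0 - M0) - (1.0 + E0**2) / eps)
om14 = 0.5 * np.sqrt(max(8.0 * S0 * E0**2 / eps
                         - ((S0 - M0) + (1.0 + E0**2) / eps)**2, 0.0))
print("eigenvalues of M:       ", np.linalg.eigvals(M))
print("lambda, omega from (14):", lam14, om14)
```

The stability of the cycle itself requires, in addition, the sign of the real part of (27), built from the derivatives listed next.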
The structure constants in the definition (27) are expressed through derivatives with respect to $E$ and $S$, denoted by subscripts: $$g_{11}=\frac{1}{4}\left[\left(F^{(1)}_{EE}+F^{(1)}_{SS}\right)+{\rm i}\left(F^{(2)}_{EE}+F^{(2)}_{SS}\right)\right],$$ (28) $$\begin{split}\displaystyle\left(\begin{array}[]{l}g_{02}\\ g_{20}\end{array}\right)=\frac{1}{4}&\displaystyle\left[\left(F^{(1)}_{EE}-F^{(1)}_{SS}\mp 2F^{(2)}_{ES}\right)+\right.\\ &\displaystyle\left.{\rm i}\left(F^{(2)}_{EE}-F^{(2)}_{SS}\pm 2F^{(1)}_{ES}\right)\right],\end{split}$$ (29) $$\begin{split}\displaystyle g_{21}=&\displaystyle\frac{1}{8}\left\{\left[\left(F^{(1)}_{EEE}+F^{(1)}_{ESS}\right)+\left(F^{(2)}_{EES}+F^{(2)}_{SSS}\right)\right]+\right.\\ &\displaystyle\left.{\rm i}\left[\left(F^{(2)}_{EEE}+F^{(2)}_{ESS}\right)-\left(F^{(1)}_{EES}+F^{(1)}_{SSS}\right)\right]\right\}.\end{split}$$ (30) After some algebra, the stability condition for the limit cycle can be written as $$\begin{split}&\displaystyle 2\alpha(\psi_{\kappa}-C)^{2}+\alpha\beta\varepsilon S_{0}(1+2\beta\varepsilon E_{0})+\omega_{c}(\phi_{\kappa}+\beta\varepsilon)\leq\\ &\displaystyle(C-\psi_{\kappa})(\alpha^{2}-1+2\beta\varepsilon S_{0}+2\alpha\beta\varepsilon E_{0}),\end{split}$$ (31) where the notations $$\psi_{\kappa}=-2\,\frac{\kappa E_{s}^{2}E\left(E^{2}-3E_{s}^{2}\right)}{\left(E_{s}^{2}+E^{2}\right)^{3}},$$ $$\phi_{\kappa}=6\,\frac{\kappa E_{s}^{2}\left(E^{4}-6E^{2}E_{s}^{2}+E_{s}^{4}\right)}{\left(E_{s}^{2}+E^{2}\right)^{4}}$$ are used. IV Analysis of Hopf bifurcations IV.1 Influence of nonlinear absorber To proceed, let us consider the behaviour of the steady states under the supposition that the action of the absorber is given by expression (4) and $f_{e}=0$. Setting $\dot{E}=\dot{S}=0$, one gets the stationary values of the electric field amplitude $E_{0}$ shown in Fig.1. A steady-state analysis shows that a bistable regime is realized only if $\kappa<\kappa_{min}$, where $\kappa_{min}=E_{s}^{2}/(1-E_{s}^{2})$. In such a case one gets a hysteresis loop in the $E_{0}(S_{e})$ dependence in the domain $[S_{c0},S_{c}]$ (curve 1), which disappears when the threshold $\kappa_{min}$ is crossed, where $$S_{c}=1+\kappa,\quad S_{c0}=1+E_{s}\sqrt{\frac{\kappa}{1-E_{s}^{2}}}.$$ (32) The behaviour of the amplitude $E_{0}$ is the same as in first-order phase transitions, where the zero value of $E_{0}$ below $S_{c0}$ corresponds to a disordered state, values $E_{0}\neq 0$ (solid line) relate to an ordered state, whereas intermediate magnitudes of $E_{0}$ (dotted line) correspond to an unstable state. The critical value of the absorption coefficient exists only if the saturation amplitude $E_{s}<1$. In the opposite case one gets the stationary picture of a second-order phase transition, where $E_{0}$ increases monotonically from 0 once the critical value $S_{c}$ is crossed (curve 2). The analysis of the Floquet exponent allows one to find the phase diagram (Fig.2), which shows where stable periodic radiation (formation of a limit cycle in the phase plane $(E,S)$) occurs. In Fig.2 the domain I corresponds to a configuration of the phase space with both a stable focus (ordered state) and a saddle point (disordered state); in the domain II only the disordered state is realized (node); the domain III is characterized by the hysteresis loop, where the ordered state corresponds to an unstable focus, the unstable state is represented by a saddle, and the disordered state is a node. 
Inside the domain IV a stable limit cycle is formed (Fig.3a), which transforms into a stable focus together with unstable and stable cycles when the dotted line is crossed (Fig.3b). The influence of the absorber parameters on the topology of the phase plane is shown in Fig.4. Here an increase in the absorption coefficient $\kappa$ at small $E_{s}$ leads to the transformation of the unstable focus into a stable one, with an additional node and saddle point appearing. At values of $\kappa$ and $E_{s}$ corresponding to the dashed line one gets an unstable limit cycle, and in the domain bounded by the dashed and solid lines one gets an unstable focus, a node and a saddle. When the solid line is crossed the phase portrait is characterized by a single node. An increase in $\kappa$ at saturation amplitude $E_{s}\simeq 1$ transforms an unstable focus into a stable limit cycle, which becomes unstable at values corresponding to the dashed line. In the domain bounded by the dashed and straight horizontal lines there is a single unstable focus only. A further increase in $\kappa$ transforms this focus into a node. The frequency of the pulsed radiation regime appears at a nonzero value corresponding to the first bifurcation point in $S_{e}$; a further increase in the pump intensity leads to the growth of $\omega_{c}$ until the second critical point in $S_{e}$ is reached. We have analyzed the behavior of the pulsed radiation frequency at different values of the absorption coefficient $\kappa$. According to Fig.5, an increase in $\kappa$ at fixed saturation amplitude shifts the minimal and maximal values of $\omega_{c}$, although the topology of the dependence $\omega_{c}(S_{e})$ is not changed. The obtained results are in good correspondence with experimental observations of this dependence casperson . Therefore, the dispersion in the relaxation time of the electric field amplitude $E$ promoted by the absorber leads to the formation of stable periodic radiation at saturation amplitudes $E_{s}\simeq 1$. IV.2 Influence of external modulator Let us consider the influence of the external source $f_{e}$ at $f_{\kappa}=0$. It is important that periodic radiation is possible only if the parameter that controls nonlinear effects satisfies $C<0$. The stationary behavior of the field $E_{0}$ versus the pump intensity $S_{e}$ is shown in Fig.6. Analysis of the Floquet exponent shows that limit cycles can be formed only where a stable focus is transformed into an unstable one and vice versa (see Fig.6). Here at $S_{e}<S_{c}$ and $S_{e}>S^{c}$ the phase portrait is characterized by a single saddle point $S_{1}$ or $S_{2}$, respectively. In the domain $S_{c}<S_{e}<S^{0}$ one gets two saddles $S_{1}$ and $S_{2}$, separated by an unstable focus $F_{u}$. If $S^{0}<S_{e}<S^{c}$, then these saddles are separated by a stable focus $F_{s}$. Only at $S_{e}=S^{0}$ do we get the trivial situation $\Re\Phi=0$, which means the formation of nested loops of neutral stability (Fig.7). Therefore, the external force suppresses the formation of dissipative structures. IV.3 Combined effect of external modulator and nonlinear absorber Now we consider the influence of both the external modulator and the nonlinear absorber on the formation of dissipative structures. Setting $\dot{E}=\dot{S}=0$ in the system (7), one gets the stationary values of the electric field amplitude $E_{0}$ shown in Fig.8. As can be seen, if the modulator is turned off ($A=C=0$) then we have a single stable state with no radiation at small values of the pump parameter $S_{e}$. 
If the threshold given by the expression $S_{e}^{c}=1+\kappa$ is crossed, then a new solution of the steady-state equation appears and we have stationary radiation with an amplitude $E_{0}\neq 0$ which grows with increasing pump intensity. If we set $A<0$ at $C=0$ then we get a single stable solution, defining the radiation amplitude $E_{0}$, on the whole axis of pump parameter values. In the opposite case $A>0$ one gets two stationary solutions, but only if the energy barrier $S_{e}^{c}$, given by the solution of the equation $S_{e}^{c}=f(A,\kappa_{0},\kappa)$, is overcome. Next, we investigate the conditions under which stable periodic radiation can be realized. To this end we need to determine the domain, defined by the conditions $\lambda=0$ and $\Re\Phi<0$, where periodic solutions of the system (7) exist. The corresponding solutions of Eq.(31) are shown in Fig.9. It illustrates the domains of the absorption coefficient $\kappa$ and the pump intensity $S_{e}$, at different intensities $A$ and $C$, where the stable radiation process is realized. As Fig.9 shows, if only an absorber is placed inside the cavity, then a semi-bounded domain of $\kappa$ and $S_{e}$ values is formed, inside which stable periodic radiation is possible. Introducing a modulator with $A>0$, $C=0$ (see Fig.9a), such a domain becomes fully bounded. Moreover, an increase in the parameter $A$ restricts the values of the parameter $\kappa$ and the pump intensity $S_{e}$ at which one has stable periodic radiation. At large values of $A$ this domain degenerates into a line. From Fig.9b one can see that an increase in $C$ at $A=0$ extends the domain of stable periodic radiation, which occurs at large magnitudes of the pump intensity. The influence of nonlinear processes in the modulator on the picture of stable periodic radiation formation is presented in Fig.10. It is seen that if $A<0$ then there is only a stable stationary state (see Fig.8), which is a focus ($\Re\Lambda<0$, $\Im\Lambda\neq 0$) in the phase plane $(E,S)$. Such a fixed point is transformed into a manifold if the control parameters lie in the domain, including its border, shown in Fig.10a. Such a manifold is a limit cycle ($\Re\Phi<0$, $\Re\Lambda=0$) in the phase plane $(E,S)$, which attracts all phase trajectories in its vicinity. From a physical viewpoint this means the formation of stable pulsed periodic radiation. The domain shown in Fig.10a is limited by the value $C_{c}^{s}$ of the intensity of nonlinear processes. One should note that if photon scattering occurs with intensities $C<C_{c}^{s}$ then an increase in the pump intensity $S_{e}$ induces the formation of stable periodic radiation at the magnitude $S_{e}^{c1}$ and destroys it at $S_{e}^{c2}$. In other words, one gets a situation where one and the same cause serves as a stimulus for both self-organization and disorganization. The picture becomes more complicated at $A>0$. First let us discuss the phase diagram shown in Fig.10b. At pump intensities bounded by the dashed curve in Fig.10b there are no stationary solutions and, hence, no stable regimes of radiation. Next, processes of spontaneous photon annihilation reduce the domain of stable periodic radiation at pump intensities above the dashed curve; here a domain of unstable behaviour of the phase trajectories appears. At small $C<C_{0}$ such a stationary regime is defined by the corresponding stationary solution, which is an unstable focus ($\Re\Lambda>0$, $\Im\Lambda\neq 0$). 
The related fixed point is defined by the upper branch of the dashed curve in Fig.8. At large values $C_{c}^{u}<C<C_{c}^{s}$ one has a picture similar to the one discussed above. At intermediate values $C_{0}<C<C_{c}^{u}$ one gets a very complicated picture of self-organization. Here, with an increase in $S_{e}$, we have the following sequence of transformations: (i) the system passes from the non-stationary regime to a stationary one whose fixed point is a stable focus (Fig.11a); (ii) at the value $S_{e}^{c1(1)}$ one obtains stable periodic radiation that exists as long as $S_{e}<S_{e}^{c2(1)}$ (Fig.11b); (iii) a further increase in the pump intensity destroys the limit cycle and the system passes to an unstable regime characterized by an unstable focus (Fig.11c); (iv) when the critical value $S_{e}^{c1(2)}$ is reached, a new Hopf bifurcation occurs and the system evolves along periodic trajectories (Fig.11b); (v) finally, this coherent regime is destroyed at $S_{e}>S_{e}^{c2(2)}$ (Fig.11a). Let us consider more closely the properties of the phase diagram (Fig.4), which shows the magnitudes of the absorption coefficient $\kappa$ and the control parameter $S_{e}$. Here the thin solid curve (bifurcation line) defines the critical magnitudes of $\kappa$ and $S_{e}$ at which stationary states appear. In the domain with $\Re\Lambda>0$ one has an unstable focus (see Fig.13a). The stable limit cycle is realized inside the bounded domain with $\Re\Phi<0$ (Fig.13b). At small $\kappa$ and large $S_{e}$ one has a stable focus (Fig.13c). Therefore, one gets a transformation of the topology of the attractors in the phase plane when the parameters $S_{e}$ or $\kappa$ are changed. An increase in the absorption coefficient $\kappa$ at fixed pump intensity $S_{e}$ will produce the oscillating regime (a transition from a stable focus to a limit cycle). Such a stable periodic regime can be destroyed at large magnitudes of $\kappa$ (a transition from the limit cycle into a repeller, i.e. an unstable focus), and a further increase in $\kappa$ leads to the absence of any stationary regime at all. However, the stable periodic solution is not observed on the whole border of the indicated domain. Figure 12 shows that a stable dissipative structure is formed inside the domain and on the thick solid lines only ($\Re\Phi<0$). The part of the domain border plotted as a dashed line corresponds to the conditions $\lambda=0$ and $\Re\Phi>0$, which means the existence of an unstable periodic solution (see Fig.13d). Hence, there is a point where $\Re\Phi=0$ and the periodic solution changes its stability. At this point the phase portrait of the system is characterized by a set of nested loops. V Conclusions In this paper we have analyzed the properties of self-organization processes in two-level class-B laser systems in the presence of absorption effects and of an external force. We have shown that due to the nonlinear damping a domain of cavity control parameters with stable pulsed radiation is realized. It was shown that by varying the saturation amplitude and the absorption coefficient one can pass to different types of radiation, characterized by different attractors in the phase space: stable and unstable foci, and stable and unstable limit cycles. Introducing the external force leads to additional nonlinear effects that reduce the domains of control parameters with stable periodic radiation. It is important that due to the influence of the external force one can obtain reentrant Hopf bifurcations. 
In this case there is a wide range of the external force parameters for which both stable and unstable dissipative structures are present in the phase space. Our results are in good correspondence with theoretical results Khanin1 ; condmat1 and experimental observations Khanin2 ; casperson ; GETF71 ; PismavGETF ; condmat3 . In our investigation we have considered the simplest case, where the relaxation rates of the electric field and of the population inversion are of the same order. In real solid-state class-B laser systems $\varepsilon\equiv\varkappa/\gamma_{\|}\sim 10^{-1}\div 10^{-3}$, while in gas lasers of this class $\varepsilon\simeq 1$. As was shown theoretically and experimentally GETF71 , a difference between the above relaxation rates does not change the picture of the stable pulsed regime qualitatively; experimental investigation shows quantitative changes only. In our consideration the construction for the external force can be applied to describe the influence of nonlinear processes: in a nonlinear medium with a nonlinear dependence of the refractive index (a variation of the parameter $C$); for an external incident field with amplitude $A<0$; and in the more complicated picture with arbitrary $A$ and $C$, under the supposition of dynamical stability of the system only. References (1) H.Haken, Synergetics (Springer, New York, 1983). (2) M.L.Berre, E.Ressayre, A.Tallet, Phys. Rev. E 71, 036224(11) (2005). (3) Ya.I.Khanin, Osnovi dinamiki lazerov (Nauka, Phizmatlit, Moskva, 1999). (4) Ya.I.Khanin, Principles of Laser Dynamics (North-Holland, Amsterdam, 1995). (5) H.M.Gibbs, S.L.MacCall, T.N.C.Venkatesan, Phys. Rev. Lett. 36, 1135 (1976). (6) G.P.Agarwal, H.J.Carmichael, Phys. Rev. A 19, 2074 (1979). (7) M.Hercher, Applied Optics 6, 947 (1967). (8) M.Hercher, W.Chu, D.L.Stockman, IEEE J. Quantum Electronics QE-4(11), 954 (1968). (9) L.Gao, Phys. Lett. A 318, 119-125 (2003). (10) J.W.Haus, N.Kalyaniwalla, R.Inguva, M.Bloemer, C.M.Bowden, J. Opt. Soc. Am. B 6, 797 (1989). (11) J.W.Haus, N.Kalyniwalla, R.Inguva, C.M.Bowden, J. Appl. Phys. 65, 1420 (1989). (12) N.Kalyaniwalla, J.W.Haus, R.Inguva, M.H.Birnboim, Phys. Rev. A 42, 5613 (1990). (13) D.J.Bergman, O.Levy, D.Stroud, Phys. Rev. B 49, 129 (1994). (14) R.Levy-Nathansohn, D.J.Bergman, J. Appl. Phys. 77, 4263 (1995). (15) P.Domokos, H.Ritsch, Phys. Rev. Lett. 89, 253003(4) (2002). (16) A.T.Black, H.W.Chan, Phys. Rev. Lett. 91, 203001(4) (2003). (17) J.K.Asboth, P.Domokos, H.Ritsch, A.Vukics, Phys. Rev. A 72, 053417(12) (2005). (18) M.Wouters, I.Carusotto, cond-mat/0607719. (19) R.Bonifacio, L.A.Lugiato, Phys. Rev. A 18, 3 (1978). (20) F.Kaczmarek, Wstep do fizyki laserow (Panstwowe Wydawnictwo Naukowe, Warszawa, 1979). (21) W.Horsthemke, R.Lefever, Noise-Induced Transitions (Springer-Verlag, Berlin, 1984). (22) C.W.Gardiner, Handbook of Stochastic Methods (Springer-Verlag, Berlin, Heidelberg, New York, 1986). (23) H.Risken, The Fokker-Planck Equation (Springer-Verlag, Berlin, 1984). (24) C.W.Gardiner, P.Zoller, Quantum Noise (Springer-Verlag, Berlin, Heidelberg, New York, 2000). (25) O.G.Calderon, S.Melle, I.Gonzalo, Phys. Rev. A 65, 023811(6) (2002). (26) M.M.El-Nicklawy, A.F.Hassan, S.M.M.Salman, A.Abdel-Aty, Optics & Laser Technology 34, 363-368 (2002) (see citations 8-14). (27) U.B.Brgazovskiy, L.S.Vasilenko, C.G.Rautian, G.S.Popova, V.P.Chebotaev, JETP 61, 2(8), p.500 (1971). (28) N.V.Karlov, G.P.Kuz'min, U.N.Petrov, A.M.Prokhorov, JETP Pisma 7, p.174 (1968). (29) H.Haken, Synergetics. An Introduction, 2nd ed. 
(Springer-Verlag, Berlin, Heidelberg, New York, 1978). (30) B.D.Hassard, N.D.Kazarinoff, Y.H.Wan, Theory and Applications of Hopf Bifurcation (Cambridge Univ. Press, Cambridge, 1981). (31) H.Poincare, Les Methodes Nouvelles de la Mecanique Celeste (Gauthier-Villars, Paris, 1892). (32) A.A.Andronov, A.A.Vitt, S.E.Khaikin, Theory of Oscillators (Pergamon Press, Oxford, 1966). (33) N.N.Bautin, Ye.A.Leontovich, Metody i priyemy kachestvennogo issledovaniya dinamicheskikh sistem na ploskosti (Nauka, Moskva, 1990). (34) L.W.Casperson, J. Opt. Soc. Am. B 2, p.62 (1985). (35) F.V.Garcia-Ferrer, I.Perez-Arjona, G.J. de Valcarcel, E.Roldan, quant-ph/0702113.
Partially Thermostated Kac Model Hagop Tossounian${}^{1}$, Ranjini Vaidyanathan${}^{1}$ ${}^{1}$School of Mathematics, Georgia Institute of Technology $686$ Cherry Street Atlanta, GA $30332-0160$ USA Abstract We study a system of $N$ particles interacting through the Kac collision, with $m$ of them interacting, in addition, with a Maxwellian thermostat at temperature $\frac{1}{\beta}$. We use two indicators to understand the approach to the equilibrium Gaussian state. We prove that i) the spectral gap of the evolution operator behaves as $\frac{m}{N}$ for large $N$, and ii) the relative entropy approaches its equilibrium value (at least) at an eventually exponential rate $\sim\frac{m}{N^{2}}$ for large $N$. The question of having non-zero entropy production at time $0$ remains open. A relationship between the Maxwellian thermostat and the thermostat used in [2] is established through a van Hove limit. Work partially supported by U.S. National Science Foundation grant DMS 1301555. ©  2015 by the authors. This paper may be reproduced, in its entirety, for non-commercial purposes. 1 Introduction Mark Kac introduced a stochastic model of $N$ identical particles interacting through binary collisions [12]. The particles are constrained to $1$ dimension and are uniformly distributed in space. Hence, the phase space consists of the 1-dimensional velocities $\mathbf{v}=(v_{1},...,v_{N})$, which evolve as the particles undergo random collisions as follows: two particles $i,j$ are chosen uniformly among the $\binom{N}{2}$ pairs, and $\theta\in[0,2\pi)$ is chosen uniformly. The outgoing velocities $v_{i}^{\ast}$ and $v_{j}^{\ast}$ are given by $v_{i}\cos{\theta}+v_{j}\sin{\theta}$ and $-v_{i}\sin{\theta}+v_{j}\cos{\theta}$ respectively, where $v_{i}$ and $v_{j}$ are the incoming velocities of particles $i$ and $j$. This collision preserves the kinetic energy and hence $\mathbf{v}$ lies on the constant energy sphere $S^{N-1}(\sqrt{NE})$, where $E$, the energy per particle, is determined by the initial condition. The system is modeled as a Markov jump process with collision times that are exponentially distributed with mean $\frac{1}{N\lambda}$. A probability density $f(\mathbf{v},t)$ on the phase space evolves through the corresponding Kolmogorov forward equation, called the Kac master equation: $$\frac{\partial f}{\partial t}=N\lambda(Q-I)f$$ (1) where $Q=\frac{1}{\binom{N}{2}}\displaystyle\sum_{i<j}Q_{ij}$ is the Kac operator with $$Q_{ij}f=\frac{1}{2\pi}\int_{0}^{2\pi}f(...,v_{i}\cos{\theta}+v_{j}\sin{\theta},...,-v_{i}\sin{\theta}+v_{j}\cos{\theta},...)\,d\theta.$$ The unique equilibrium state is the uniform distribution on the sphere. In his paper, Kac precisely formulates Boltzmann’s Stosszahlansatz (molecular chaos hypothesis), which says that for a dilute gas in the large particle limit, the incoming velocities of colliding particles are uncorrelated. Kac proved for his model that this property propagates in time. This notion, now known as “propagation of chaos”, enabled him to rigorously derive a space-homogeneous Boltzmann equation for his model [12] (see also [15]). In fact, one of Kac’s motivations was to study the approach to equilibrium for the Boltzmann equation using the linear $N$ particle master equation (1). 
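As an aside (not part of the original paper), the Kac walk described above is straightforward to simulate; the minimal Monte Carlo sketch below uses an arbitrary particle number, collision count and initial state for illustration. Each collision conserves the total energy exactly, while an empirical moment relaxes towards its value under the uniform measure on the sphere.

```python
# Minimal sketch of the Kac collision process: pick a random pair (i, j) and a
# uniform angle, rotate the pair, repeat.  N, E, the number of collisions and
# the initial state are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, E = 200, 1.0
v = np.sqrt(E) * rng.choice([-1.0, 1.0], size=N)   # far-from-equilibrium start, energy per particle E

print("initial energy/particle:", np.mean(v**2), " initial 4th moment:", np.mean(v**4))
for _ in range(20 * N):                            # O(1) collisions per particle, repeated
    i, j = rng.choice(N, size=2, replace=False)
    th = rng.uniform(0.0, 2.0 * np.pi)
    vi, vj = v[i], v[j]
    v[i] = vi * np.cos(th) + vj * np.sin(th)
    v[j] = -vi * np.sin(th) + vj * np.cos(th)
print("final   energy/particle:", np.mean(v**2), " final   4th moment:", np.mean(v**4))
# at equilibrium on the sphere: energy/particle = E and 4th moment = 3*N*E**2/(N+2)
```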
In particular, Kac conjectured that the spectral gap of $N\lambda(I-Q)$ is bounded below, away from 0, uniformly in $N$. This was proved by Janvresse [11], and the exact gap was found by Carlen, Carvalho, and Loss in [3] and independently by Maslen [14]. It follows from their work that $||f(\vec{v},t)-1||_{2}\leq e^{-\frac{\lambda}{2}t}||f(\vec{v},0)-1||_{2}$, where $f(\vec{v},t)$ satisfies (1), and the norm is in $L^{2}(S^{N-1}(\sqrt{NE}),d\sigma)$, where $d\sigma$ is the normalized uniform measure on the sphere. It turns out that the relative entropy $S(f|1)=\int f\log fd\sigma$, being an extensive quantity, is a more favorable measure of the distance to equilibrium in the large particle limit. For the Kac model, entropic approach to equilibrium at an exponential rate of order $\frac{1}{N}$ was shown by Villani in [16]. This rate was shown to be essentially optimal near $t=0$ by Einav in [9], by constructing states in which a macroscopic fraction of the kinetic energy is contained in a fraction $\sim N^{-\alpha}$ of the particles, for $\alpha>0$ suitably chosen. The Kac model coupled to a heat bath was studied in [2], where the authors explored the possibility of obtaining better entropic convergence by remaining close to physically realizable initial states. To this end, they considered a system of $N$ particles where, in addition to the Kac collisions among them, each particle collides with a reservoir modeled as an infinite gas in thermal equilibrium. This results in a system in which all except a relatively small number of particles are in equilibrium. Exponential convergence to the canonical equilibrium at a rate of $\frac{\mu}{2}$ was proved, where $\mu$ is the strength of the thermostat. Note that since the energy of the $N$ particles is not conserved in the presence of a heat bath, the phase space becomes $\mathbb{R}^{N}$. In this paper, we take the model in [2] but let only $m<N$ of the particles be thermostated, and use a simpler model for the thermostat: the Maxwellian thermostat given by (3). (We will refer to the Maxwellian thermostat as the strong thermostat, and to the thermostat used in [2] (see (9)) as the weak thermostat.) The motivation for our study is two-fold. First, studying partially thermostated systems is a step towards introducing spatial inhomogeneity in Kac-type models, by viewing the $m$ thermostated particles as situated “closer” to the heat bath. These $m$ particles act as the medium of heat exchange between the other particles and the reservoir. Second, the convergence to equilibrium in [2] persisted even without the interparticle interaction, which did not play a role in the slowest decay modes. By thermostating only a subset of the particles, the interparticle interactions become necessary for the system to approach the canonical equilibrium, and hence their role can be better understood. Using the spectral gap, we show that (strongly) thermostating a macroscopic fraction of the particles, i.e. $m=\alpha N$, guarantees approach to equilibrium in the $L^{2}$ distance uniformly in $N$. We also obtain a weaker convergence result in terms of the relative entropy of the system. Description of Model and Results We have $N$ particles interacting via the Kac collision, with $m$ among them interacting, in addition, with a Maxwellian thermostat at inverse temperature $\beta$. We fix $N\geq 2$, $1\leq m<N$. (The case $m=N$ has been studied in [2], using the weak thermostat.) 
When particle $k\in\{1,...,m\}$ is thermostated, it forgets its precollisional velocity $v_{k}$ and is given a new velocity drawn from the Gaussian distribution at the temperature of the heat bath. Physically, this could model the behavior of a particle colliding a large number of times with particles from the heat bath. This can also be thought of as a particle in the system being replaced with one from the heat bath. The collision times with the heat bath for particles $\{1,\dots,m\}$ are independent and exponentially distributed with parameter $\mu$. The master equation for the evolution of a phase space probability density $f(\mathbf{v},t)$ is given by $$\frac{\partial f}{\partial t}=N\lambda(Q-I)f+\mu\sum_{k=1}^{m}(R_{k}-I)f\,\,\,\,,$$ (2) where the operator $$R_{k}f:=\sqrt{\frac{\beta}{2\pi}}e^{-\beta\frac{v_{k}^{2}}{2}}\int{dwf(v_{1},v_{2},...,v_{k{-}1},w,v_{k+1},\dots,v_{N})}$$ (3) corresponds to the thermostat acting on the $k$-th particle. Recall that the phase space is $\mathbb{R}^{N}$ since our system is not isolated and energy is not conserved. We assume that the particles $1,2,\dots,m$ and the particles $m+1,...,N$ are indistinguishable, i.e. $f(\mathbf{v},t)$ is symmetric under exchange of the variables $v_{1},\dots,v_{m}$ and under exchange of the variables $v_{m+1},...,v_{N}$. The evolution preserves this symmetry. The interplay between the thermostat interaction and the Kac collisions, which distribute the energy to the non-thermostated particles, leads the system to equilibrium. As we will see, the unique equilibrium of eq. (2) is the Gaussian $$\gamma(\mathbf{v}):=\prod_{k=1}^{N}g(v_{k}):=\prod_{k=1}^{N}\sqrt{\frac{\beta}{2\pi}}e^{-\beta\frac{v_{k}^{2}}{2}}\,\,.$$ The evolution operator in eq. (2) is not self-adjoint on $L^{2}(\mathbb{R}^{N})$; to remedy this we make the ground-state transformation $$f(\mathbf{v})=\gamma(\mathbf{v})\bigl{(}1+h(\mathbf{v})\bigr{)}\,\,\,,$$ (4) where $\int h\gamma=0$. The evolution equation for $h(\mathbf{v},t)$ becomes $$\frac{\partial h}{\partial t}=N\lambda(Q-I)h+\mu\sum_{k=1}^{m}(P_{k}-I)h\,\,\,\,,$$ (5) where $$P_{k}h:=\int{dwg(w)h(v_{1},...,v_{k-1},w,v_{k+1},\dots,v_{N})}$$ is a function independent of $v_{k}$. In the Hilbert space $L^{2}(\mathbb{R}^{N},\gamma)$, the operators $P_{k}$, $Q$, and hence $\mathcal{L}_{N,m}:=N\lambda(I-Q)+\mu\sum_{k=1}^{m}(I-P_{k})$ associated with the evolution, are self-adjoint. In fact, each $P_{k}$ is a projection. The rate at which $h$ tends to its equilibrium value $0$ in the space $L^{2}(\mathbb{R}^{N},\gamma)$ is given by the spectral gap $\Delta_{N,m}$ (see (12)). Theorem 2.2 states that $\Delta_{N,m}\sim\frac{m}{N}$ for large $N$. It turns out that the kinetic energy $K(t):=\frac{1}{2}\int(\sum_{j=1}^{N}v_{j}^{2})f(\mathbf{v},t)d\mathbf{v}$ also behaves similarly for large $N$. More precisely, $K(t)$, which is not conserved since the $N$-particle system is not isolated, tends to its equilibrium value $\frac{N}{2\beta}$ at a rate $\sim\frac{m}{N}$ when $N$ is large. Remark. The behavior of the kinetic energy is indicative of the action of the operator $\mathcal{L}_{N,m}$ on polynomials of the form $v_{j}^{2}$. Moreover, for $N=2,m=1$, we show in Appendix A that the gap eigenfunction - the slowest decaying mode in the space $L^{2}(\mathbb{R}^{2},\gamma)$ - is a second degree polynomial. One may thus wonder if the gap eigenfunction is a second degree polynomial for other values of $N$ too. However, currently we only have asymptotic bounds on $\Delta_{N,m}$. 
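A minimal jump-process sketch of the dynamics behind eq. (2) is given below; it is only an illustration (the parameters, the initial data and the observable are our own choices, not the paper's), showing the mean energy per particle relaxing towards the bath value $1/\beta$, in line with the discussion of $K(t)$ above.

```python
# Minimal sketch of the partially thermostated Kac process of eq. (2):
# Kac pair collisions occur at total rate N*lam, and each of the first m
# particles is resampled from the Maxwellian g at rate mu.
# N, m, lam, mu, beta, the initial state and t_end are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, m, lam, mu, beta = 100, 10, 1.0, 1.0, 1.0
v = rng.normal(0.0, np.sqrt(4.0), size=N)      # start at 4x the bath temperature

t, t_end = 0.0, 100.0
total_rate = N * lam + m * mu
while t < t_end:
    t += rng.exponential(1.0 / total_rate)     # waiting time to the next event
    if rng.random() < N * lam / total_rate:    # Kac collision of a random pair
        i, j = rng.choice(N, size=2, replace=False)
        th = rng.uniform(0.0, 2.0 * np.pi)
        v[i], v[j] = (v[i] * np.cos(th) + v[j] * np.sin(th),
                      -v[i] * np.sin(th) + v[j] * np.cos(th))
    else:                                      # strong thermostat on one of particles 1..m
        k = rng.integers(0, m)
        v[k] = rng.normal(0.0, 1.0 / np.sqrt(beta))

print("mean energy per particle:", np.mean(v**2), " bath value 1/beta =", 1.0 / beta)
```

Rerunning with smaller $m/N$ visibly slows the relaxation, which is the qualitative content of Theorem 2.2.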
Next, we study the behavior of the relative entropy $S(f|\gamma)$ (defined in (23)). To obtain a quantitative rate for the decay in the relative entropy (we use the opposite sign for the relative entropy), one could try to prove Cercignani’s conjecture [6] applied to our system: $$-\frac{dS(f(.,t)|\gamma)}{dt}\geq kS(f(.,0)|\gamma)$$ (6) for some $k>0$, which would yield an exponential bound $$S(f(.,t)|\gamma)\leq e^{-kt}S(f(.,0)|\gamma)$$ for the entropy. The quantity $-\frac{dS}{dt}$ is called the entropy production. Parenthetically, note that the spectral gap imposes a condition on how big $k$ can be: linearizing (6) and comparing lowest order terms gives $$k\leq 2\Delta_{N,m}.$$ (7) For our problem, finding a bound for the entropy production appears to be hard because the familiar methods to obtain such estimates fail (we demonstrate why at the end of Section 4). Instead, we show in Theorem 4.1 that the entropy at time $t$ satisfies $$S(f(.,t)|\gamma)\leq D_{N,m}(t)S(f(.,0)|\gamma)$$ (8) where $f(.,t)$ is the solution of (2) with initial condition $f(.,0)$ and $$D_{N,m}(t)=\left(-\frac{\delta_{-}e^{-\delta_{+}t}}{\delta_{+}-\delta_{-}}+\frac{\delta_{+}e^{-\delta_{-}t}}{\delta_{+}-\delta_{-}}\right).$$ For large $N$ and $t$, $D_{N,m}(t)\sim\exp(-\frac{m\lambda\mu}{(N-1)(N\lambda+\mu)}t)$. Note that (8) is weaker than (6). For instance, (8) does not yield an entropy production bound at time $0$, since $D_{N,m}^{\prime}(0)=0$. We prove the Theorem by employing the convexity of $S(f|\gamma)$ directly. The idea is similar to a method used in [2] to study the entropy of a particle acted on by the weak thermostat. The generator $U$ of the weak thermostat is defined as follows: $$U[f(v)]:=\int dw\,\frac{1}{2\pi}\int_{0}^{2\pi}d\theta\,f(v\cos{\theta}+w\sin{\theta})g(-v\sin\theta+w\cos\theta)$$ (9) where $g(v)=\sqrt{\frac{\beta}{2\pi}}e^{-\beta v^{2}/2}$. The following entropy decay bound for the process is shown in [2]: $$S(e^{\eta(U-I)t}f|g)\leq e^{-\eta t/2}S(f|g)\,,\mbox{ or }$$ (10) $$\frac{dS}{dt}\leq-\frac{\eta}{2}S.$$ (11) As an aside, we show in Appendix B that the bound in (10) is optimal by using an optimizing sequence similar to that used in [1, 4, 9]. One can interpret the weak thermostat as a particle interacting with an infinite heat bath at temperature $\frac{1}{\beta}$ via the Kac collision. The velocity distribution $g(v)$ of the particles in the heat bath is not affected by the collisions by virtue of the infinite size of the bath. This picture shows why it is weaker than the strong thermostat: in order for a particle from the system to forget its incoming velocity and pick a new one from the distribution $g(v)$, it has to undergo a large number of weak-thermostat interactions. The strong thermostat achieves this in one step (3). Although the weak thermostat mimics heat bath interactions more naturally, the strong thermostat is advantageous to use as a first step since the corresponding operator is idempotent and thus mathematically simpler. Moreover, we demonstrate in Theorem 3.1 that the weak thermostat can be obtained from the strong thermostat via a van Hove limit. 
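One elementary way to see the $\eta/2$ time scale of (10)-(11) (our own side computation, not a statement from the paper) is through the second moment: under the definition (9), a single collision replaces $\langle v^{2}\rangle$ by $\frac{1}{2}\langle v^{2}\rangle+\frac{1}{2\beta}$, so with collision rate $\eta$ one gets $\frac{d}{dt}\langle v^{2}\rangle=-\frac{\eta}{2}\bigl(\langle v^{2}\rangle-\frac{1}{\beta}\bigr)$. The short Monte Carlo sketch below reproduces this relaxation; the values of $\eta$, $T$, $v_{0}$ and the sample size are assumptions made for illustration.

```python
# Minimal sketch of the weak thermostat (9) on one particle: at rate eta the
# particle is rotated (Kac-style) with an independent Gaussian bath partner.
# The second moment relaxes to 1/beta at rate eta/2.
import numpy as np

rng = np.random.default_rng(2)
eta, beta, T, v0, n_samples = 1.0, 1.0, 2.0, 3.0, 200_000

v = np.full(n_samples, v0)
n_events = rng.poisson(eta * T, size=n_samples)       # number of collisions up to time T
for k in range(1, int(n_events.max()) + 1):
    active = n_events >= k                            # samples that undergo a k-th collision
    th = rng.uniform(0.0, 2.0 * np.pi, size=active.sum())
    w = rng.normal(0.0, 1.0 / np.sqrt(beta), size=active.sum())
    v[active] = v[active] * np.cos(th) + w * np.sin(th)

predicted = 1.0 / beta + (v0**2 - 1.0 / beta) * np.exp(-eta * T / 2.0)
print("Monte Carlo <v^2>:", np.mean(v**2), "  predicted:", predicted)
```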
The paper is organized as follows: We show approach to equilibrium in $L^{2}$ in Section 2, compute the van Hove limit in Section 3, and show approach to equilibrium in relative entropy in Section 4. 2 Approach to equilibrium in $L^{2}$ From this point on, we set $\beta=1$ without loss of generality. In concurrence with the ground-state transformation (4), let $\mathcal{X}_{N}:=\{u\in L^{2}(\mathbb{R}^{N},\gamma):\langle u,1\rangle=0\}$, where $\langle.,.\rangle$ denotes the inner product in the $L^{2}$ space with weight $\gamma$. The condition $\langle u,1\rangle=\int{u(\mathbf{v})\gamma(\mathbf{v})d\mathbf{v}}=0$ corresponds to the normalization of the probability density $f$. Lemma 2.1. • $\mathcal{L}_{N,m}\geq 0$ on $\mathcal{X}_{N}$. • $\mathcal{L}_{N,m}h=0\Leftrightarrow h=0$. Proof. We know from [12, 2] that $(I-Q)\geq 0$ and $(I-Q)h=0\Leftrightarrow h$ is radial. Each $(I-P_{k})$ is a projection with kernel precisely the subspace of functions in $\mathcal{X}_{N}$ that are independent of $v_{k}$. The only function in $\mathcal{X}_{N}$ that belongs to the kernel of $\sum_{k=1}^{m}(I-P_{k})$ and is also radial is $0$. Hence, the Lemma is proved. ∎ The spectral gap of the operator $\mathcal{L}_{N,m}$ is defined as: $$\Delta_{N,m}:=\inf\{\langle h,\mathcal{L}_{N,m}[h]\rangle:||h||=1,h\in\mathcal% {X}_{N}\}\,\,.$$ (12) Lemma 2.1 implies that initial states in $\mathcal{X}_{N}$ decay to equilibrium at an exponential rate $\Delta_{N,m}$. Remark. Gaussian states of temperature greater than twice the temperature of the heat bath cannot be represented by a function $h\in\mathcal{X}_{N}$. The observation that $\mathcal{L}_{2,1}$ is simply a linear combination of two projections ($Q\equiv Q_{12}$ is an orthogonal projection onto radial functions in $\mathbb{R}^{2}$) lets us compute the whole spectrum in this case. This is done in Appendix A. We see that the spectral gap is the lower root of the quadratic $x^{2}-(2\lambda+\mu)x+\lambda\mu$: $$\Delta_{2,1}:=\frac{(2\lambda+\mu)-\sqrt{4\lambda^{2}+\mu^{2}}}{2}$$ (13) with gap eigenfunction $$\frac{2\lambda}{2\lambda+\mu-\Delta_{2,1}}H_{2}(v_{1})+\frac{2\lambda}{2% \lambda-\Delta_{2,1}}H_{2}(v_{2}),$$ where $H_{2}$ is the monic Hermite polynomial (with weight $\gamma$) of degree $2$. For general $N,m$, we have the following theorem: Theorem 2.2. Assume $\lambda,\mu>0$. Then $$\frac{m}{N-1}\Delta_{2,1}\leq\Delta_{N,m}\leq\frac{m}{N-1}\frac{2\lambda\mu}{% \mu+\lambda}\,\,.$$ (14) Proof. The proof is based on an inductive argument that follows in essence the one in [3] in which the spectral gap of the Kac model is computed exactly. We first prove the following claim for $1\leq m<N$: $$\Delta_{N,m}\geq\frac{N-m-1}{N-1}\Delta_{N-1,m}+\frac{m}{N-1}\Delta_{N-1,m-1}% \,\,.$$ (15) We let $\mathcal{L}_{N,m}^{(k)}$ be the evolution operator $\mathcal{L}_{N,m}$ with the $k^{th}$ particle removed: $$\mathcal{L}_{N,m}^{(k)}=\frac{(N-1)\lambda}{\binom{N-1}{2}}\sum_{\begin{array}% []{c}i{<}j\\ i,j\neq k\end{array}}^{N}(I-Q_{ij})+\mu\sum_{\begin{array}[]{c}l{=}1\\ l{\neq}k\end{array}}^{m}(I-P_{l}).$$ Remark 2.3. $\mathcal{L}_{N,m}^{(k)}$ is also self-adjoint in $L^{2}(\mathbb{R}^{N},\gamma)$, and will have $m$ or $m-1$ thermostats in it, depending on whether $k>m$ or $k\leq m$, respectively. Also, the coefficient of the Kac term corresponds to collisions among $N-1$ particles. 
Next we show that $$\mathcal{L}_{N,m}=\frac{1}{N-1}\sum_{k=1}^{N}\mathcal{L}_{N,m}^{(k)}.$$ (16) This follows, since $$\displaystyle\sum_{k=1}^{N}\mathcal{L}_{N,m}^{(k)}$$ $$\displaystyle=$$ $$\displaystyle\sum_{k=1}^{N}\left(\frac{2\lambda}{N-2}\sum_{\begin{array}[]{c}i<j,\\ i,j\neq k\end{array}}^{N}(I-Q_{ij})+\mu\sum_{\begin{array}[]{c}l=1\\ l\neq k\end{array}}^{m}(I-P_{l})\right)$$ $$\displaystyle=$$ $$\displaystyle 2\lambda\sum_{i<j}^{N}(I-Q_{ij})+(N-1)\mu\sum_{l=1}^{m}(I-P_{l})$$ $$\displaystyle=$$ $$\displaystyle(N-1)\mathcal{L}_{N,m}.$$ Then $$\langle h,\mathcal{L}_{N,m}[h]\rangle=\frac{1}{N-1}\sum_{k=1}^{N}{\langle h,\mathcal{L}_{N,m}^{(k)}[h]\rangle}.$$ (17) At this point, we want to introduce the gaps $\Delta_{N-1,m}$ and $\Delta_{N-1,m-1}$ for $N-1$ particles into the right hand side; for this, we will need the functions to be orthogonal to $1$ in the space $L^{2}(\mathbb{R}^{N-1},\gamma(\hat{v}_{k}))$, where $\gamma(\hat{v}_{k})$ is the Gaussian $\gamma$ with the variable $v_{k}$ missing. To this end, we define the projections $$\pi_{k}[h]:=\int h\gamma(\hat{v_{k}})\,dv_{1}\dots dv_{k-1}dv_{k+1}\dots d{v_{N}}$$ and write, for each $k$, $\langle h,\mathcal{L}_{N,m}^{(k)}[h]\rangle=\langle(h-\pi_{k}h),\mathcal{L}_{N,m}^{(k)}(h-\pi_{k}h)\rangle$. This holds because the range of the projection $\pi_{k}$ is exactly the kernel of $\mathcal{L}_{N,m}^{(k)}$, and the operator $\mathcal{L}_{N,m}^{(k)}$ is self-adjoint. Thus, from (17), $$\Delta_{N,m}=\frac{1}{N-1}\inf\sum_{k=1}^{N}{\langle(h-\pi_{k}h),\mathcal{L}_{N,m}^{(k)}(h-\pi_{k}h)\rangle}\,$$ where the infimum is over $h\in\mathcal{X}_{N}$, $||h||=1$ as per the definition of the spectral gap. Since $(h-\pi_{k}h)$ is orthogonal to the constant function $1$ in $L^{2}(\mathbb{R}^{N-1},\gamma(\hat{v}_{k}))$ by construction, we use the definition of the spectral gap to write $$\displaystyle\Delta_{N,m}$$ $$\displaystyle\geq$$ $$\displaystyle\frac{1}{N-1}\inf\left(\sum_{k=m+1}^{N}\Delta_{N-1,m}(||h-\pi_{k}h||^{2})+\sum_{k=1}^{m}\Delta_{N-1,m-1}(||h-\pi_{k}h||^{2})\right)\text{ (by Remark 2.3)}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{N{-}1}\inf\left(\Delta_{N-1,m}\sum_{k=m{+}1}^{N}(||h||^{2}-||\pi_{k}h||^{2})+\Delta_{N-1,m-1}\sum_{k=1}^{m}(||h||^{2}-||\pi_{k}h||^{2})\right)$$ $$\displaystyle\geq$$ $$\displaystyle\frac{N-m}{N-1}\Delta_{N-1,m}+\frac{m}{N-1}\Delta_{N-1,m-1}-\frac{1}{N{-}1}\max\{\Delta_{N-1,m},\Delta_{N-1,m-1}\}\sup\sum_{k=1}^{N}||\pi_{k}h||^{2}\ ,$$ where we have used symmetry among $1,...,m$ and $m+1,...,N$ and the fact that the infimum is over functions with norm $1$. First, we note that $\Delta_{N-1,m}\geq\Delta_{N-1,m-1}$ since $(I-P_{m})\geq 0$. Next, $\sup\{\sum_{k=1}^{N}||\pi_{k}h||^{2},h\in\mathcal{X}_{N}\}$ equals $\sup_{\mathcal{X}_{N}}\langle h,\sum_{k=1}^{N}\pi_{k}h\rangle$. Since $\{\pi_{k}\}_{1}^{N}$ is a collection of commuting projection operators, $\sum_{k=1}^{N}\pi_{k}$ is a projection on $\mathcal{X}_{N}$ and the supremum is $1$. We then get $$\Delta_{N,m}\geq\frac{N{-}m}{N{-}1}\Delta_{N{-}1,m}+\frac{m}{N{-}1}\Delta_{N{-}1,m{-}1}-\frac{1}{N-1}\Delta_{N-1,m},$$ which implies claim (15). We now prove the first inequality in Theorem 2.2. The region of interest is $\{(N,m):1\leq m\leq N-1\}$. We will use induction on $N\geq 2$. • The base case $N=2$, $m=1$ is the trivial statement $\Delta_{2,1}\geq\Delta_{2,1}$. • Now suppose $$\Delta_{N,m}\geq\Delta_{2,1}\frac{m}{N-1}$$ (18) for all $m$ such that $1\leq m\leq N{-}1$. 
To show that $\Delta_{N+1,m}\geq\Delta_{2,1}\frac{m}{N}$ for all $m$ such that $1\leq m\leq N$, consider the following two cases: $\star$ $m=1$: We need to show that $\Delta_{N+1,1}\geq\frac{\Delta_{2,1}}{N}$. From (15), we deduce that $$\Delta_{N+1,1}\geq\frac{N-1}{N}\Delta_{N,1}+\frac{1}{N}\Delta_{N,0}=\frac{N-1}{N}\Delta_{N,1}\;.$$ In the above, we have $\Delta_{N,0}=0$ because when none of the particles are thermostated, the ground state is degenerate (any radial function in $\mathbb{R}^{N}$ is an equilibrium for the Kac part). Applying (18) with $m=1$ then completes the proof of this case. $\star$ $1<m\leq N$: $$\displaystyle\Delta_{N+1,m}$$ $$\displaystyle\geq\frac{N{-}m}{N}\left(\frac{m\Delta_{2,1}}{N-1}\right)+\frac{m}{N}\left(\frac{(m{-}1)\Delta_{2,1}}{N-1}\right)\text{ (using (15) and (18))}$$ $$\displaystyle=\Delta_{2,1}\frac{m}{N(N-1)}(N-m+m-1)=\Delta_{2,1}\frac{m}{N}$$ This proves the first inequality in (14). We prove the second inequality in (14) by finding an upper bound for $\Delta_{N,m}$ proportional to $\frac{m}{N-1}$. This can be done by finding a (possibly crude) upper bound on the eigenvalues of $\mathcal{L}_{N,m}$ restricted to the space of second degree Hermite polynomials with weight $\gamma$. This space is invariant under $\mathcal{L}_{N,m}$, and its action on it in the basis $\{\sum_{k=m+1}^{N}H_{2}(v_{k}),\sum_{k=1}^{m}H_{2}(v_{k})\}$ can be described by the following matrix (as mentioned before, this is related to the evolution of the kinetic energy of the system). We use the identities $Q_{ij}H_{2}(v_{i})=(H_{2}(v_{i})+H_{2}(v_{j}))/2$ and $Q_{ij}H_{2}(v_{k})=H_{2}(v_{k})$ for $i,j\neq k$ in obtaining the entries. $$\begin{pmatrix}\frac{\lambda m}{N-1}&\frac{-\lambda m}{N-1}\\ -\frac{\lambda(N-m)}{N-1}&\frac{\lambda(N-m)}{N-1}+\mu\end{pmatrix}.$$ Its smallest eigenvalue is $\frac{1}{2}(\mu+\frac{N\lambda}{N-1})\bigl{(}1-\sqrt{1-\frac{4m\lambda\mu}{N-1}\frac{1}{(\mu+\frac{N\lambda}{N-1})^{2}}}\bigr{)}$. Hence, by definition of the gap, $$\Delta_{N,m}\leq\frac{1}{2}(\mu+\frac{N\lambda}{N-1})\bigl{(}1-\sqrt{1-\frac{m}{N-1}\frac{4\lambda\mu}{(\mu+\frac{N\lambda}{N-1})^{2}}}\bigr{)}$$ For $N$ large enough, we can write $$\Delta_{N,m}\leq\frac{1}{2}\left(\mu+\frac{N\lambda}{N-1}\right)\frac{m}{N-1}\frac{4\lambda\mu}{(\mu+\frac{N\lambda}{N-1})^{2}}$$ or $$\Delta_{N,m}\leq\frac{m}{N-1}\frac{2\lambda\mu}{\mu+\lambda}$$ ∎ Thus, as we approach equilibrium, $h\to 0$ in $L^{2}(\mathbb{R}^{N},\gamma)$ at an exponential rate $\Delta_{N,m}$ which, for large $N$, is proportional to the fraction of thermostated particles. 3 van Hove Limit In this section, we relate the strong and weak thermostats by studying the two-particle system ($N=2$, $m=1$) described by eq. (2): $$\frac{\partial f^{\lambda}}{\partial t}=-2\lambda(I-Q_{12})f^{\lambda}-\mu(I-R_{1})f^{\lambda}=:-\mathcal{G}^{\lambda}f^{\lambda}\,\,,$$ (19) Here the superscript makes it explicit that the solution depends on $\lambda$. Particle $2$ interacts through the Kac collision with Particle $1$, which is given the Gaussian distribution $g(v)=\sqrt{\frac{1}{2\pi}}e^{-\frac{v^{2}}{2}}$ at random times due to the action of the strong thermostat $R_{1}$. We increase the rate $\mu$ at which the strong thermostat acts relative to the rate $2\lambda$ of the Kac collision. This can be achieved by increasing the time scale of the Kac operator, $\frac{1}{2\lambda}\rightarrow\infty$, and sampling at longer time intervals $\displaystyle\tau:=t\lambda$. 
Thus, the thermostat, operating on a much smaller time-scale, becomes powerful in the limit. The result is that by passing through a van Hove (weak-coupling, large time) limit [8] of this system, Particle $2$ gets thermostated “weakly”, via its interaction with Particle $1$ whose distribution is essentially always $g(v)$. We are interested in the evolution of $\tilde{f}^{\lambda}(v_{1},v_{2},\tau):=f^{\lambda}(v_{1},v_{2},\frac{\tau}{% \lambda})$ in the limit $\lambda\to 0$. Here $f^{\lambda}(v_{1},v_{2},t)$ satisfies (19) above. The equation satisfied by $\tilde{f}^{\lambda}(v_{1},v_{2},\tau)$ is then: $$\frac{\partial\tilde{f}^{\lambda}}{\partial\tau}=-2(I-Q_{12})\tilde{f}^{% \lambda}-\frac{\mu}{\lambda}(I-R_{1})\tilde{f}^{\lambda}=:-\frac{\mathcal{G}^{% \lambda}}{\lambda}\tilde{f}^{\lambda}$$ (20) We have the following theorem, which states that the diagram in Figure $1$ commutes. Theorem 3.1. Let $\tilde{f}^{\lambda}$ satisfy eq. (20) with initial condition $\tilde{f}^{\lambda}(v_{1},v_{2},0)=\phi(v_{1},v_{2})\in L^{1}(\mathbb{R}^{2})$. Then for $\tau>0,\,\displaystyle\lim_{\lambda\to 0}\tilde{f}^{\lambda}=:g(v_{1})\tilde{f% }(v_{2},\tau)$ exists in $L^{1}(\mathbb{R}^{2})$, where $\tilde{f}$ satisfies the equation $$\frac{\partial\tilde{f}}{\partial\tau}=-2(I-U_{2})\tilde{f}$$ (21) together with the initial condition $\tilde{f}(v_{2},0)=\frac{R_{1}\phi(v_{1},v_{2})}{g(v_{1})}$. $U_{2}$ is the weak thermostat (9) acting on $v_{2}$. Proof. We can write $e^{-\frac{\mu}{\lambda}\tau(I-R_{1})}=I+(I-R_{1})(e^{-\mu\tau/\lambda}-1)$ because $(I-R_{1})$ is idempotent. This implies that $$||e^{-\frac{\tau\mu}{\lambda}(I-R_{1})}-R_{1}||_{1}=e^{-\mu\frac{\tau}{\lambda% }}||I-R_{1}||_{1}\leq 2e^{-\mu\frac{\tau}{\lambda}}.$$ (22) For each $\lambda$, the operators in $\frac{1}{\lambda}\mathcal{G}^{\lambda}$ are bounded. Thus, the Dyson expansion (the infinite series version of the Duhamel formula) corresponding to the evolution in (20) gives $e^{-\frac{\tau}{\lambda}\mathcal{G}^{\lambda}}\phi=\sum_{k=0}^{\infty}b_{k}(\phi)$ where $$\displaystyle b_{0}(\phi)$$ $$\displaystyle=$$ $$\displaystyle e^{-\frac{\mu}{\lambda}(I-R_{1})\tau}\phi,$$ $$\displaystyle b_{1}(\phi)$$ $$\displaystyle=$$ $$\displaystyle\int_{t_{1}=0}^{\tau}e^{-\frac{\mu}{\lambda}(I-R_{1})(\tau-t_{1})% }[-2(I-Q_{12})]e^{-\frac{\mu}{\lambda}(I-R_{1})t_{1}}\phi\,\,dt_{1}\mbox{, and}$$ $$\displaystyle b_{k}(\phi)$$ $$\displaystyle=$$ $$\displaystyle\int_{\{0\leq t_{k}\leq\dots t_{1}\leq\tau\}}e^{-\frac{\mu}{% \lambda}(I-R_{1})(\tau-t_{1})}[-2(I-Q_{12})]e^{-\frac{\mu}{\lambda}(I-R_{1})(t% _{1}-t_{2})}\dots[-2(I-Q_{12})]e^{-\frac{\mu}{\lambda}(I-R_{1})(t_{k})}\phi\,d% \vec{t}$$ Using (22) and the identity $R_{1}Q_{12}R_{1}=R_{1}U_{2}=U_{2}R_{1}$, we show that $\forall k$, $b_{k}(\phi)$ converges to $$\int_{\{0\leq t_{k}\leq\dots t_{1}\leq\tau\}}R_{1}[-2(I-Q_{12})]R_{1}\dots[-2(% I-Q_{12})]R_{1}\phi\,\,d\vec{t}=\frac{1}{k!}\left(-2(I-U_{2})\right)^{k}(R_{1}\phi)$$ in $L^{1}$ as $\lambda\rightarrow 0$. Finally, we use the fact that for each $u\geq 0$, $||e^{-\frac{\mu}{\lambda}(I-R_{1})u}\phi||_{1}=||\phi||_{1}$ and $||(I-Q_{12})\phi||_{1}\leq 2||\phi||_{1}$ so that $||b_{k}(\phi)||\leq 4^{k}\int_{\{0\leq t_{k}\leq\dots t_{1}\leq\tau\}}dt_{1}% \dots dt_{k}||\phi||_{1}=\frac{(4\tau)^{k}}{k!}||\phi||_{1}$, independently of $\lambda$. 
Therefore the dominated convergence theorem can be applied to give $$\displaystyle\lim_{\lambda\rightarrow 0}e^{-\frac{\tau}{\lambda}\mathcal{G}^{% \lambda}}\phi$$ $$\displaystyle=$$ $$\displaystyle\lim_{\lambda\rightarrow 0}\sum_{k=0}^{\infty}b_{k}(\phi)=\sum_{k% =0}^{\infty}\lim_{\lambda\rightarrow 0}b_{k}(\phi)$$ $$\displaystyle=$$ $$\displaystyle\sum_{k=0}^{\infty}(-2(I-U_{2}))^{k}\frac{\tau^{k}}{k!}(R_{1}\phi% )=e^{-2(I-U_{2})\tau}(R_{1}\phi).$$ ∎ The above proof can be generalized to give the following van Hove results for the $N$-particle case. We will use the statement “the van Hove limit of $\{A(\lambda):\lambda>0\}$ as $\lambda\rightarrow 0$ is $A^{\ast}$ with idempotent operator $B$” to mean $\lim_{\lambda\rightarrow 0}e^{-\frac{\tau}{\lambda}A(\lambda)}\phi=e^{-\tau A^% {\ast}}(B\phi)=Be^{-\tau A^{\ast}}\phi$ for all $\tau>0$ and all $\phi\in L^{1}$. • The van Hove limit of $\{\lambda\sum_{j=2}^{N}(I-Q_{1j})+\mu(I-R_{1})\}$, acting on $L^{1}(\mathbb{R}^{N})$, is $\sum_{j=2}^{N}(I-U_{j})$ with idempotent operator $R_{1}$. • The van Hove limit of $\{N\lambda(I-Q)+\mu(I-R_{1})\}$, acting on $L^{1}(\mathbb{R}^{N})$ is $\frac{2}{N-1}\sum_{j=2}^{N}(I-U_{j})+\frac{N-2}{N-1}(N-1)(I-Q^{(1)})$ with idempotent operator $R_{1}$. Here $Q^{(1)}=\frac{2}{N-2}\sum_{2\leq i<j}(I-Q_{ij})$ is the Kac operator acting on particles $2,\dots,N$. • Let $\alpha=\frac{2N-1}{N-1}$. The van Hove limit of $\{\lambda\alpha 2N(I-Q_{(2N)})+\mu\sum_{i=N+1}^{2N}(I-R_{i})\}$, acting on $L^{1}(\mathbb{R}^{2N})$, is $N(I-Q_{(N)})+\frac{2N}{N-1}\sum_{i=1}^{N}(I-U_{i})$, with idempotent operator $R_{N+1}R_{N+2}\dots R_{2N}$. Here $Q_{(N)}$ and $Q_{(2N)}$ are the Kac operators acting on particles $v_{1},\dots,v_{N}$ and $v_{1},\dots,v_{2N}$ respectively. The first two results show that having one strongly thermostated particle is sufficient to“weakly” thermostat each particle colliding with it in the van Hove limit. The strength of this thermostat will be $O(\frac{1}{N})$ under the usual Kac collision unless $\sim N$ strongly thermostated particles are used, as in the third result. Remark 3.2. The third result shows that up to a constant in the thermostat terms, it is possible to obtain the model in [2] as a van Hove limit of models in which half the particles are strongly thermostated. 4 Approach to Equilibrium in Entropy In this section, we study the behavior of the relative entropy functional $$S(f|\gamma):=\int f\log\frac{f}{\gamma}d\mathbf{v}$$ (23) under the evolution (2). This is a standard way to track the approach to equilibrium since it satisfies $S(f|\gamma)\geq 0$ and $S(f|\gamma)=0\Leftrightarrow f=\gamma$. For our model, we show below that $S(f(.,t)|\gamma)\to 0$ as $t\to\infty$, provided the initial distribution $f(.,0)$ has finite relative entropy. Set $f=\gamma h$ (this is slightly different from the ground-state transformation (4)), and restrict to $h\geq 0$, $\int h\gamma d\mathbf{v}=1$. The evolution equation obeyed by $h(\mathbf{v},t)$ is eq. (5), which we restate below: $$\frac{\partial h}{\partial t}=N\lambda(Q-I)h+\mu\sum_{k=1}^{m}(P_{k}-I)h=-% \mathcal{L}_{N,m}h\,\,.$$ The relative entropy then becomes $\int h\log h\,\gamma\,d\mathbf{v}$, which we denote by $S(h)$ (overloading the notation) for the remainder of this section. 
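As a concrete one-dimensional illustration of the functional (23) (our own sketch, not part of the original argument): for a centered Gaussian density of variance $\sigma^{2}$ relative to the unit Gaussian, the relative entropy has the closed form $\tfrac{1}{2}(\sigma^{2}-1-\log\sigma^{2})$, which is nonnegative and vanishes only at $\sigma=1$. A minimal quadrature check:

```python
import numpy as np

def S_rel(f, gamma_, v):
    """Relative entropy S(f|gamma) = \\int f log(f/gamma) dv, by a simple Riemann sum."""
    dv = v[1] - v[0]
    integrand = f * (np.log(np.maximum(f, 1e-300)) - np.log(gamma_))
    return np.sum(integrand) * dv

v = np.linspace(-20.0, 20.0, 400001)
gamma_ = np.exp(-v ** 2 / 2.0) / np.sqrt(2.0 * np.pi)          # unit Gaussian equilibrium
for sig in [0.5, 1.0, 2.0]:
    f = np.exp(-v ** 2 / (2.0 * sig ** 2)) / np.sqrt(2.0 * np.pi * sig ** 2)
    closed = 0.5 * (sig ** 2 - 1.0 - np.log(sig ** 2))
    print(sig, S_rel(f, gamma_, v), closed)
# quadrature and closed form agree; S >= 0, with S = 0 only at sig = 1
```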
Now, $$\frac{dS}{dt}\,=\int\frac{\partial h}{\partial t}\log h\,\gamma d\mathbf{v}+% \int\frac{h}{h}\frac{\partial h}{\partial t}\,\gamma d\mathbf{v}\,=\int\frac{% \partial h}{\partial t}\log h\,\gamma d\mathbf{v}\,,$$ where the second term vanishes because the normalization $\int h\gamma\,d\mathbf{v}=1$ is preserved by the evolution. Hence, $$\frac{dS}{dt}=\int\left(N\lambda(Q-I)h+\mu\sum_{k=1}^{m}(P_{k}-I)h\right)\log h% \,\gamma d\mathbf{v}\,.$$ We know (from [12]) that $\int N(Q-I)h\log h\,\gamma d\mathbf{v}\leq 0$. Also, $$\displaystyle\int P_{k}h\log h\,\gamma d\mathbf{v}$$ $$\displaystyle=\int P_{k}h\,\,P_{k}(\log h)\gamma d\mathbf{v}\,\,\,\,\text{(by % self-adjointness of $P_{k}$ as observed in Section~{}\ref{S:L2})}$$ $$\displaystyle\leq\int(P_{k}h)\log(P_{k}h)\gamma d\mathbf{v}\,\,\,\,\text{(by % concavity of $\log$ and averaging property of $P_{k}$)}$$ $$\displaystyle\leq\int h\log h\,\gamma d\mathbf{v}\,\,\,\,\text{(by convexity % of $x\log x$)}$$ Thus $\frac{dS}{dt}\leq 0$. The following theorem indicates how fast the relative entropy decays under the evolution. Theorem 4.1. Assume $1\leq m<N$ and let $h(\mathbf{v},t)$ be the solution of (5). Then we have that $$S(h(\mathbf{v},t))\leq\left(-\frac{\delta_{-}e^{-\delta_{+}t}}{\delta_{+}-% \delta_{-}}+\frac{\delta_{+}e^{-\delta_{-}t}}{\delta_{+}-\delta_{-}}\right)S(h% (\mathbf{v},0))$$ (24) where $\delta_{\pm}\equiv\delta_{\pm}(N,m)=\left(\frac{N\lambda+\mu}{2}\pm\frac{1}{2}% \sqrt{(N\lambda+\mu)^{2}-4m\lambda\mu/(N-1)}\right)$. We first state a few observations on the above bound. Let us define $$D(t):=-\frac{\delta_{-}e^{-\delta_{+}t}}{\delta_{+}-\delta_{-}}+\frac{\delta_{% +}e^{-\delta_{-}t}}{\delta_{+}-\delta_{-}}\,.$$ As expected, $D(t)$ is identically equal to $1$ when $\lambda$ or $\mu$ is $0$. For $\lambda,\mu>0$, $\displaystyle\lim_{t\to\infty}D(t)=0$, $D(t)$ is equal to $1$ at $t=0$ and it is a decreasing function of $t>0$. The last claim can be seen by computing $$\frac{dD}{dt}=\frac{\delta_{-}\delta_{+}}{\delta_{+}-\delta_{-}}\bigl{(}e^{-% \delta_{+}t}-e^{-\delta_{-}t}\bigr{)}\leq 0$$ (25) since $\delta_{-}<\delta_{+}$. For large $t$, the dominant term in the bound (24) is $e^{-\delta_{-}t}$, and for large $N$, $\delta_{-}\sim\frac{m\lambda\mu}{(N-1)(N\lambda+\mu)}$. Hence, we obtain an eventually exponential decay of relative entropy through this bound, albeit with decay constant $\sim\frac{m}{N^{2}}$. In this paragraph, we make a few remarks about the bound for the special case $N=2,m=1$. Observe that $\delta_{-}(2,1)=\Delta_{2,1}$ is the spectral gap of $2\lambda(I-Q)+\mu(I-P_{1})$ (see (13)). As an aside, note that this is in accordance with (7). Upon making the transformation $(\mu,\lambda)\rightarrow(\frac{\mu}{\lambda},1)$ corresponding to the van Hove limit (see eq. (20)), we obtain $D(t)\rightarrow e^{-t}$ as $\lambda\rightarrow 0$. This is exactly the optimal entropy production bound (10) for the weak thermostat (Note: the weak thermostat here appears with a factor of $2$, owing to the $2\lambda$ term). The Theorem is proved as follows: we write $h(\mathbf{v},t)$ explicitly in terms of the exponential of the generator of the evolution, expand the latter using the Dyson series and use the convexity of the entropy. We exploit the entropic contraction of terms of the form $P_{j}Q$ in the expansion. These steps will yield a non-trivial bound for the entropy at time $t$ in terms of the initial entropy. The following lemmas build up to the evolution operator $e^{-\mathcal{L}_{N,m}t}$ in steps. 
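Before turning to those lemmas, a minimal numerical sanity check of the prefactor in (24) may be helpful (a sketch of our own; the values of $N$, $m$, $\lambda$, $\mu$ are arbitrary). It confirms that $D(0)=1$, that $D$ is non-increasing, that $D(t)\to 0$ at the rate $\delta_{-}$, and that the slope at $t=0$ vanishes:

```python
import numpy as np

def deltas(N, m, lam, mu):
    """delta_- and delta_+ of Theorem 4.1."""
    s = N * lam + mu
    disc = np.sqrt(s ** 2 - 4.0 * m * lam * mu / (N - 1))
    return 0.5 * (s - disc), 0.5 * (s + disc)

def D(t, N, m, lam, mu):
    """Prefactor in the entropy bound (24)."""
    dm, dp = deltas(N, m, lam, mu)
    return (-dm * np.exp(-dp * t) + dp * np.exp(-dm * t)) / (dp - dm)

N, m, lam, mu = 10, 4, 1.0, 1.0                     # arbitrary sample values
t = np.linspace(0.0, 400.0, 4001)
vals = D(t, N, m, lam, mu)
print(vals[0])                                      # D(0) = 1
print(np.all(np.diff(vals) <= 0.0))                 # D is non-increasing: True
print(np.log(vals[-1]) / t[-1], -deltas(N, m, lam, mu)[0])   # late-time rate ~ -delta_-
print((D(1e-6, N, m, lam, mu) - 1.0) / 1e-6)        # slope at t = 0 is essentially zero
```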
For instance, Lemma 4.2 bounds some of the terms obtained by decomposing the Kac operator in the expression $S(P_{1}Qh)$. Throughout, we assume that $h\in L^{1}(\mathbb{R}^{N},\gamma)$ and $h\geq 0$. Lemma 4.2. We have $$\sum_{j=2}^{N}S(P_{1}Q_{1j}h)\leq\bigl{(}(N-1)-\frac{1}{2}\bigr{)}S(h)$$ Proof. In the following proof, we will apply the continuous version of Han’s inequality [10] (this also follows from the Loomis-Whitney inequality [13]) for the entropy rewritten to suit our situation: $$\sum_{j=1}^{N}S(P_{j}h)\leq(N-1)S(h)$$ (26) Note that if $h$ is symmetric in its arguments, this amounts to saying that for each $j=1,..,N$, $$S(P_{j}h)\leq\frac{N-1}{N}S(h)$$ (27) For $j>1$, $$\displaystyle S(P_{1}Q_{1j}h)$$ $$\displaystyle=\int P_{1}Q_{1j}h\log\bigl{(}P_{1}Q_{1j}h\bigr{)}\;\gamma d% \mathbf{v}$$ $$\displaystyle=\int P_{1}(\frac{Q_{1j}h}{P_{1}P_{j}h})\log\bigl{(}P_{1}(\frac{Q% _{1j}h}{P_{1}P_{j}h})\bigr{)}P_{1}P_{j}h\;\gamma d\mathbf{v}+\int P_{1}Q_{1j}h% \log(P_{1}P_{j}h)\;\gamma d\mathbf{v}$$ where we use that $P_{1}P_{j}h$ does not depend on $v_{1}$. Since the argument of the logarithm in the last term is also independent of $v_{j}$, we can integrate $P_{1}Q_{1j}h$ with respect to those variables and use that $\int P_{1}Q_{1j}h\;g(v_{1})g(v_{j})dv_{1}dv_{j}=\int h\;g(v_{1})g(v_{j})dv_{1}% dv_{j}=P_{1}P_{j}h$ to write: $$S(P_{1}Q_{1j}h)=\int P_{1}(\frac{Q_{1j}h}{P_{1}P_{j}h})\log\bigl{(}P_{1}(\frac% {Q_{1j}h}{P_{1}P_{j}h})\bigr{)}P_{1}P_{j}h\;\gamma d\mathbf{v}+\int P_{1}P_{j}% h\log(P_{1}P_{j}h)\;\gamma d\mathbf{v}$$ Now, we apply the symmetric version of Han’s inequality (27) to $\frac{Q_{1j}h}{P_{1}P_{j}h}$ as a function of $v_{1}$ and $v_{j}$ to get: $$\displaystyle S(P_{1}Q_{1j}h)$$ $$\displaystyle\leq\frac{1}{2}\int\frac{Q_{1j}h}{P_{1}P_{j}h}\log\bigl{(}\frac{Q% _{1j}h}{P_{1}P_{j}h}\bigr{)}P_{1}P_{j}h\;\gamma d\mathbf{v}+\int P_{1}P_{j}h% \log(P_{1}P_{j}h)\;\gamma d\mathbf{v}$$ $$\displaystyle=\frac{1}{2}S(Q_{1j}h)-\frac{1}{2}\int Q_{1j}h\log(P_{1}P_{j}h)\;% \gamma d\mathbf{v}+\int P_{1}P_{j}h\log(P_{1}P_{j}h)\;\gamma d\mathbf{v}$$ $$\displaystyle=\frac{1}{2}S(Q_{1j}h)+\frac{1}{2}S(P_{1}P_{j}h)$$ where, to get to the last step, we have used that $Q_{1j}$ is self-adjoint and $P_{1}P_{j}$ is independent of $v_{1}$ and $v_{j}$. Now, summing these terms, and noting that $S(Q_{1j}h)\leq S(h)$ by the averaging property of $Q_{1j}$, we get $$\sum_{j=2}^{N}S(P_{1}Q_{1j}h)\leq\frac{N-1}{2}S(h)+\frac{1}{2}\sum_{j=2}^{N}S(% P_{j}P_{1}h)\,.$$ We invoke Han’s inequality (26) on $P_{1}h\equiv(P_{1}h)(v_{2},...v_{N})$, ie. $\sum_{j=2}^{N}S(P_{j}P_{1}h)\leq(N-2)S(P_{1}h)\leq(N-2)S(h)$ to complete the proof. ∎ Lemma 4.3. $$S(e^{\mu(P_{1}-I)t}Qh)\leq\left(1-\frac{1-e^{-\mu t}}{N(N-1)}\right)S(h)\,.$$ Proof. $$\displaystyle S(e^{\mu(P_{1}-I)t}Qh)$$ $$\displaystyle=S(e^{-\mu t}Qh+(1-e^{-\mu t})P_{1}Qh)\;\;\;\;\;\text{ (since $P_% {1}$ is a projection)}$$ $$\displaystyle\leq e^{-\mu t}S(Qh)+(1-e^{-\mu t})S(P_{1}Qh)$$ $$\displaystyle\leq e^{-\mu t}S(h)+(1-e^{-\mu t})\frac{1}{\binom{N}{2}}\sum_{i<j% }S(P_{1}Q_{ij}h)$$ $$\displaystyle=e^{-\mu t}S(h)+(1-e^{-\mu t})\frac{1}{\binom{N}{2}}\bigl{(}\sum_% {i<j,i,j\neq 1}S(P_{1}Q_{ij}h)+\sum_{j=2}^{N}S(P_{1}Q_{1j}h)\bigr{)}$$ $$\displaystyle\leq e^{-\mu t}S(h)+(1-e^{-\mu t})\frac{1}{\binom{N}{2}}\bigl{(}% \sum_{i<j,i,j\neq 1}S(h)+(N-1-\frac{1}{2})S(h)\bigr{)}$$ $$\displaystyle=\left(1-\frac{1-e^{-\mu t}}{N(N-1)}\right)S(h),$$ where we use Lemma 4.2 in the last inequality. 
We use the convexity of the entropy and the averaging property of $P_{1}$ and $Q$ in the previous steps. ∎ Lemma 4.4. Let $1\leq m<N$. Then $$S\left(\exp{\left(\mu\displaystyle\sum_{k=1}^{m}(P_{k}-I)t)\right)}Qh\right)% \leq\left(1-\frac{m(1-e^{-\mu t})}{N(N-1)}\right)S(h).$$ (28) Proof. We prove the above by induction on $m$. The base case $m=1$ (and any $N>1$) was shown in the previous Lemma. We restrict to $\{(N,m):2\leq m<N\}$ for the rest of the proof. Assume that the Lemma is true for $m-1$ (and any $N>m-1)$. To infer from this its validity for the case $m$ (and any $N>m$), we analyze below the entropy of $P_{m}\exp{\left(\mu\sum_{k=1}^{m-1}(P_{k}-I)t\right)}$, where we expand the Kac operator $Q$, split it into terms that contain $m$ and those that do not, and utilize the convexity of the entropy. $$\displaystyle S\left(P_{m}\exp{\bigl{(}\mu\sum_{k=1}^{m-1}(P_{k}-I)t\bigr{)}}% Qh\right)$$ $$\displaystyle\leq(1-\frac{2}{N})S\left(\frac{\exp{\bigl{(}\mu\sum_{k=1}^{m-1}(% P_{k}-I)t\bigr{)}}}{\binom{N-1}{2}}\displaystyle\sum_{\begin{subarray}{c}i<j\\ i,j\neq m\end{subarray}}Q_{ij}P_{m}h\right)$$ $$\displaystyle+\frac{2}{N}S\left(\frac{\exp{\bigl{(}\mu\sum_{k=1}^{m-1}(P_{k}-I% )t}\bigr{)}}{N-1}P_{m}\displaystyle\sum_{l\neq m}Q_{lm}h\right)\,.$$ In the first term222This term is non-zero only when $N>2$, which is the case here., we also use the commutativity of $P_{m}$ with $Q_{ij}$ when neither $i$ nor $j$ equal $m$. Next, we treat the terms as follows: • Term 1: We apply the induction hypothesis for $m-1$, $N-1$ since $P_{m}h$ is a function of $N-1$ variables and $\binom{N-1}{2}^{-1}\displaystyle\sum_{\begin{subarray}{c}i<j\\ i,j\neq m\end{subarray}}Q_{ij}$ is the Kac operator acting on $N-1$ variables. • Term 2: We use the averaging property of $\exp{\bigl{(}\mu\sum_{k=1}^{m-1}(P_{k}-I)t}\bigr{)}$, convexity, and Lemma 4.2. We obtain $$\displaystyle S(P_{m}\exp{\bigl{(}\mu\sum_{k=1}^{m-1}(P_{k}-I)t\bigr{)}}Qh)$$ $$\displaystyle\leq(1-\frac{2}{N})\left(1-(m-1)\frac{1-e^{-\mu t}}{(N-1)(N-2)}% \right)S(h)+\frac{2}{N}\frac{1}{N-1}(N-\frac{3}{2})S(h)\,.$$ (29) Now starting with the left-hand side of (28) and using convexity plus the fact that $P_{m}$ is a projection, write $$\displaystyle S\left(\exp{(\mu\displaystyle\sum_{k=1}^{m}(P_{k}-I)t)}Qh\right)$$ $$\displaystyle=S\left((e^{-\mu t}I+(1-e^{-\mu t})P_{m})\exp{\bigl{(}\mu\sum_{k=% 1}^{m-1}(P_{k}-I)t\bigr{)}}Qh\right)$$ $$\displaystyle\leq e^{-\mu t}S\left(\exp{\bigl{(}\mu\sum_{k=1}^{m-1}(P_{k}-I)t% \bigr{)}}Qh\right)$$ $$\displaystyle+(1-e^{-\mu t})S\left(P_{m}\exp{\bigl{(}\mu\sum_{k=1}^{m-1}(P_{k}% -I)t\bigr{)}}Qh\right)$$ Using the induction hypothesis for the case $m-1$, $N$ for the first term, and the bound (29) for the second term, the Lemma follows through some algebraic simplification. ∎ In the following, denote $A(t):=1-\frac{m(1-e^{-\mu t})}{N(N-1)}$. Proof of Theorem 4.1. 
Expanding $e^{-\mathcal{L}_{N,m}t}$ using the Dyson series with $Q$ as the perturbation: $$\displaystyle e^{N\lambda(Q-I)t+\mu\sum_{k}(P_{k}-I)t}$$ $$\displaystyle=e^{-N\lambda t}e^{N\lambda Qt+\mu\sum(P_{k}-I)t}$$ $$\displaystyle=e^{-N\lambda t}\{e^{\mu\sum(P_{k}-I)t}+\int_{0}^{t}dt_{1}e^{\mu% \sum(P_{k}-I)(t-t_{1})}\;N\lambda Q\;e^{\mu\sum(P_{k}-I)t_{1}}$$ $$\displaystyle+\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\;e^{\mu\sum(P_{k}-I)(t-% t_{1})}\;N\lambda Q\;e^{\mu\sum(P_{k}-I)(t_{1}-t_{2})}\;N\lambda Q\;e^{\mu\sum% (P_{k}-I)t_{2}}+...\}$$ Therefore, using the convexity of entropy, and Lemma 4.4, $$S(h(.,t))\leq e^{-N\lambda t}\left(1+N\lambda\int_{0}^{t}dt_{1}A(t-t_{1})+(N% \lambda)^{2}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}A(t-t_{1})A(t_{1}-t_{2})+.% ..\right)S(h(.,0))$$ $$=e^{-N\lambda t}\left(1+N\lambda(A*1)+(N\lambda)^{2}(A*A*1)+...\right)S(h(.,0))$$ where $*$ is the Laplace-convolution operation. Thus we have that $$S(h(.,t))\leq e^{-N\lambda t}\;\varphi(t)S(h(.,0))$$ (30) where $\varphi$ is defined through the series above. We compute $\varphi(t)$ using its Laplace transform $\tilde{\varphi}(s)$. Then: $$\tilde{\varphi}(s)=\frac{1}{s}\;\sum_{k=0}^{\infty}{(N\lambda\;\tilde{A}(s))^{% k}}$$ where $\tilde{A}(s)=\frac{1}{s}-\frac{m}{N(N-1)}(\frac{1}{s}-\frac{1}{s+\mu})$ is the Laplace transform of $A(t)$. Summing the geometric series (the sum converges if we assume, for instance, that $\tilde{\varphi}(s)$ is defined on the domain $s>N\lambda$), $$\tilde{\varphi}(s)=\frac{s+\mu}{s^{2}+(\mu-N\lambda)s-N\mu\lambda(1-\frac{m}{N% (N-1)})}$$ The inverse Laplace transform of the above is $$-\frac{\delta_{-}e^{(N\lambda-\delta_{+})t}}{\sqrt{(N\lambda+\mu)^{2}-4m% \lambda\mu/(N-1)}}+\frac{\delta_{+}e^{(N\lambda-\delta_{-})t}}{\sqrt{(N\lambda% +\mu)^{2}-4m\lambda\mu/(N-1)}}$$ Now we invoke the uniqueness of the Inverse Laplace Transform: No two piecewise continuous, locally bounded functions of exponential order can have the same Laplace transform (see e.g. [7]). Since $\varphi(t)$ (see eq. (30)) belongs to this space, we get $$\varphi(t)=-\frac{\delta_{-}e^{(N\lambda-\delta_{+})t}}{\sqrt{(N\lambda+\mu)^{% 2}-4m\lambda\mu/(N-1)}}+\frac{\delta_{+}e^{(N\lambda-\delta_{-})t}}{\sqrt{(N% \lambda+\mu)^{2}-4m\lambda\mu/(N-1)}}$$ Plugging this into (30), we obtain the desired result (24). ∎ Remarks. • From (25), one notices that $\frac{dD}{dt}|_{t=0}=0$. This implies, in particular, that Theorem 4.1 does not give us a bound like (6) on the entropy production. This results from the fact that the significant bounds used in the proof, from Lemma 4.2, required the presence of the second-order term $\sum_{k}(P_{k}-I)Q$. Note that $\frac{d^{2}D}{dt^{2}}|_{t=0}<0$. • The main bound (Lemma 4.2) was obtained by estimating terms of the form $S(P_{1}Q_{1j}h)$, and we ignored any possible contribution from many other terms e.g. $S(Q_{ij}Q_{kl}h)$. Thus, there may be scope for a better bound. • In particular, we hope to obtain an entropy decay rate that scales as $\frac{m}{N}$ (as we had for the spectral gap). We were able to obtain a decay rate scaling as $\frac{1}{N}$ for a modified model: a system of $N$ particles where one of them is thermostated (through a Maxwellian thermostat) and the Kac collision interaction is replaced by the (much stronger) projection onto radial functions. Thus, the role of the Kac interaction in the equilibration process needs to be better understood. Finally, we demonstrate why it is not easy to find an entropy production bound in our problem. 
Consider the case $N=2$, $m=1$ with $\lambda=\frac{1}{2}$, $\mu=1$. Here, one could write $$\frac{dS(h)}{dt}=\int{P_{1}h\log h\gamma d\mathbf{v}}+\int{Qh\log h\gamma d% \mathbf{v}}-2S(h)$$ $$\leq\int{P_{1}h\log P_{1}h\gamma d\mathbf{v}}+\int{Qh\log Qh\gamma d\mathbf{v}% }-2S(h)\;.$$ We use in the last step that $P_{1}$, $Q$ are projections and $\log x$ is concave. Bounding this from above by $-kS(h)$ (for some $k>0$) would be sufficient to obtain an entropy production bound. This idea has worked, e.g., for a sum of mutually orthogonal projections like strong thermostats acting on different particles. However, in our case, we can find, for every $\epsilon>0$, a density $h_{\epsilon}$ such that $$\frac{\int{P_{1}h_{\epsilon}\log P_{1}h_{\epsilon}\gamma d\mathbf{v}}+\int{Qh_% {\epsilon}\log Qh_{\epsilon}\gamma d\mathbf{v}}}{S(h_{\epsilon})}\geq 2-\epsilon$$ The idea is to take $h$ proportional to the characteristic function of the set $[-a,a]\times[R-a,R+a]$. As $R\to\infty$, the ratio above asymptotically approaches the value $2$. The intuition behind this construction is that as $R\to\infty$, $h$ is supported approximately in the intersection of the supports of $P_{1}h$ (a “band” of width $2a$ parallel to the $v_{1}$ axis) and $Qh$ (an annulus around the origin). It is the tangential nature of this intersection that precludes the application of Han’s inequality [10] to improve the bound $S(P_{1}h)+S(Qh)\leq 2S(h)$. We are not, however, ruling out the possibility of using a different method to obtain an entropy production bound. 5 Conclusion Our results imply that if a macroscopic fraction of particles is thermostated, the kinetic energy and the $L^{2}$ norm decay exponentially to their respective equilibrium values at a rate independent of $N$. However, our entropy bound (24) yields a decay rate that vanishes as $\frac{1}{N}$ in the thermodynamic limit. Hence, at least under a suitable class of initial conditions, we think it should be possible to improve (24) to reflect the physical situation. The question of entropy production at $t=0$ (and any $N$) remains unsettled. The bound (24) does not preclude the possibility of zero entropy production at time $0$. However, we do not know if it actually occurs in the model for some initial conditions. One could wonder how the notion of propagation of chaos (which was the main motivation behind the formulation of the Kac model) adapts to our situation. When $m$ is finite, the coupling to the heat bath becomes insignificant in the thermodynamic limit. On the other hand, when $m=\alpha N$ for some $\alpha<1$, preliminary calculations indicate that in the limit, a coupled Boltzmann equation system should result. The Stosszahlansatz needs to be reformulated in a precise manner to account for different distributions of the thermostated and the non-thermostated particles. Moreover, generalizations of our model could bring about connections to previously studied thermostated Boltzmann equations [5]. Lastly, the results in Section 3 suggest that it should be possible to extend our analysis to the case of systems partially coupled to the weak thermostat. Acknowledgement We are grateful to our advisors Federico Bonetto and Michael Loss for suggesting this topic, and for very fruitful discussions. We also thank them for their help in Theorem 2.2 and Lemma 4.2. 
Appendix A Appendix: Spectrum of Evolution Operator for $N=2,m=1$ We analyze the spectrum of the self-adjoint evolution operator $\mathcal{L}_{2,1}=2\lambda(I-Q)+\mu(I-P_{1})$, in the space $L^{2}(\mathbb{R}^{2},\gamma(\mathbf{v})d\mathbf{v})$, and deduce its spectral gap stated in (13). For simplicity, we denote the operators $\mathcal{L}_{2,1}$ and $P_{1}$ by $\mathcal{L}$ and $P$. Notice that $\mathcal{L}$ is a linear combination of two projections ($Q\equiv Q_{12}$ is an orthogonal projection onto radial functions in $\mathbb{R}^{2}$). The condition $\langle h,1\rangle=0$ corresponding to the normalization of $f=\gamma(1+h)$, the leads us to work in the space of Hermite polynomials $\{H_{\alpha}(v)\}_{\alpha=0}^{\infty}$ with weight $g(v)$. The space of interest $\mathcal{X}_{2}$ is spanned by $\{K_{i,j}:i,j\in\mathbb{N},(i,j)\neq(0,0)\}$, where $K_{i,j}:=H_{i}(v_{1})H_{j}(v_{2})$. Without loss of generality, we work with monic Hermite polynomials. The action of $P$ is as follows: $$PK_{i,j}=\left\{\begin{array}[]{lr}0&:i\neq 0\\ K_{0,j}&:i=0\end{array}\right.$$ Since each term in $K_{i,j}$ is odd in either $v_{1}$ or $v_{2}$ when either $i$ or $j$ is odd, we have that $QK_{i,j}=0$ when either $i$ or $j$ is odd. We deduce the action of $Q$ on $K_{2\alpha_{1},2\alpha_{2}}$ from its action on $v_{1}^{2\alpha_{1}}v_{2}^{2\alpha_{2}}$ using the following Lemma from [2], which applies to $Q$ as it is a projection onto radial functions. Lemma A.1. [2] Let $A$ be a self-adjoint operator on $L^{2}(\mathbb{R}^{N},\gamma(\mathbf{v})d\mathbf{v})$ that preserves the space $P_{2l}$ of homogeneous even polynomials in $v_{1},...,v_{N}$ of degree $2l$. If $$A(v_{1}^{2\alpha_{1}}...v_{N}^{2\alpha_{N}})=\sum_{\sum\alpha_{i}=\sum\beta_{i% }}c_{\beta_{1}...\beta_{N}}v_{1}^{2\beta_{1}}...v_{N}^{2\beta_{N}}\ ,$$ we get $$A(H_{2\alpha_{1}}(v_{1})...H_{2\alpha_{N}}(v_{N}))=\sum_{\sum\alpha_{i}=\sum% \beta_{i}}{c_{\beta_{1}...\beta_{N}}H_{2\beta_{1}}(v_{1})...H_{2\beta_{N}}(v_{% N})}\ .$$ Let $n:=\alpha_{1}+\alpha_{2}$ and $\Gamma_{\alpha_{1},\alpha_{2}}:=\mathchoice{{\vbox{\hbox{$\textstyle-$ }}\kern% -13.499794pt}}{{\vbox{\hbox{$\scriptstyle-$ }}\kern-12.149815pt}}{{\vbox{\hbox% {$\scriptscriptstyle-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$\scriptscriptstyle-$% }}\kern-8.999863pt}}\!\int_{0}^{2\pi}{\cos^{2\alpha_{1}}{\theta}\sin^{2\alpha% _{2}}{\theta}}d\theta=\frac{(2\alpha_{1}-1)!!(2\alpha_{2}-1)!!}{2^{\alpha_{1}+% \alpha_{2}}(\alpha_{1}+\alpha_{2})!}$, with the standard definition $(-1)!!=1$. Then we have $$QK_{i,j}=\left\{\begin{array}[]{ll}0&:i\text{ or }j\text{ odd}\\ \Gamma_{\alpha_{1},\alpha_{2}}\sum_{m=0}^{n}\binom{n}{m}K_{2m,2n-2m}&:i=2% \alpha_{1},j=2\alpha_{2}\end{array}\right.$$ Now a case-by-case analysis, using the fact that $L_{2n}:=\text{Span}\{H_{2\alpha_{1}}(v_{1})H_{2\alpha_{2}}(v_{2}):\alpha_{1}+% \alpha_{2}=n\}$ are invariant subspaces for $\mathcal{L}$, yields the following for the spectrum of $\mathcal{L}$: Eigenvalue Eigenfunction $$2\lambda+\mu$$ $$K_{i,j}$$, $$i$$ or $$j$$ odd, $$i\neq 0$$ $$\sum_{i=1}^{n}c_{i}K_{2i,2n-2i}$$ where $$\sum_{i=1}^{n}c_{i}\Gamma_{i,n-i}=0$$ $$2\lambda$$ $$K_{0,j}$$, $$j$$ odd $$x^{\pm,n}$$ $$\sum_{i=0}^{n}c^{\pm,n}_{i}K_{2i,2n-2i}$$ and eq. (31) Remark A.2. The first row corresponds to functions that belong to the kernels of both $Q$ and $P$, and the second row to functions that belong to the kernels of $Q$ and $I-P$. 
Here, $$x^{\pm,n}=\frac{(2\lambda+\mu)\pm\sqrt{(2\lambda+\mu)^{2}-8\lambda\mu(1-\Gamma% _{0,n})}}{2}$$ and $$c_{0}^{\pm,n}=\frac{2\lambda}{2\lambda-x^{\pm,n}}\text{ and }c_{i}^{\pm,n}=% \frac{2\lambda\binom{n}{i}}{x^{\mp,n}}\text{ for }i\neq 0$$ (31) Using the fact that $\Gamma_{0,n}=\frac{1}{2\pi}\int_{0}^{2\pi}{\cos^{2n}{\theta}d\theta}$ is decreasing in $n$, it is easy to see that the smallest eigenvalue is $x^{-,1}$. The corresponding eigenfunction is $\frac{2\lambda}{2\lambda-x^{-,1}}K_{0,2}+\frac{2\lambda}{x^{+,1}}K_{2,0}$. Appendix B Appendix: Entropy Bound Optimizer for the Weak Thermostat In [2], the convexity of entropy is employed to show that if $f(v,t)$ satisfies $$\frac{\partial f}{\partial t}=\eta(Uf-f)$$ where $U$ is the weak thermostat, then (11) holds true. We remark here that if $\phi_{\delta}(v):=(1-\delta)M_{x}(v)+\delta M_{y}(v)$, where $x=\frac{1}{\beta(1-\delta)}$, $y=\frac{1}{\beta\delta}$ and $M_{a}(v)=\frac{1}{\sqrt{2\pi a}}e^{-v^{2}/2a}$, then $$\lim_{\delta\to 0}\frac{1}{S(\phi_{\delta})}\frac{dS}{dt}(\phi_{\delta})\geq-% \frac{\eta}{2}$$ thereby showing that (11) is an optimal bound. $\phi_{\delta}$ is a convex combination of Maxwellians, one of which approaches the distribution of the heat bath $M_{\frac{1}{\beta}}$ and the other corresponds to a very high energy distribution (albeit with a vanishing weight) as $\delta\to 0$. These types of functions have been used in [1, 4, 9] as examples of distributions that are away from equilibrium (in the sense of the entropy) and yet have vanishingly low entropy production (in magnitude). References [1] A. Bobylev and C. Cercignani. On the rate of entropy production for the Boltzmann equation. J. Stat. Phys., 94(3-4):603–618, 1999. [2] F. Bonetto, M. Loss, and R. Vaidyanathan. The Kac model coupled to a thermostat. J. Stat. Phys., 156(4):647–667, 2014. [3] E. A. Carlen, M. C. Carvalho, and M. Loss. Many-body aspects of approach to equilibrium. In Journées “Équations aux Dérivées Partielles” (La Chapelle sur Erdre, 2000), pages Exp. No. XI, 12. Univ. Nantes, Nantes, 2000. [4] E. A. Carlen, M. C. Carvalho, M. Loss, J. L. Roux, and C. Villani. Entropy and chaos in the Kac model. Kinet. Relat. Models, 3:85–122, 2010. [5] E. A. Carlen, J. L. Lebowitz, and C. Mouhot. Exponential approach to, and properties of, a non-equilibrium steady state in a dilute gas. arXiv:1406.4097. [6] C. Cercignani. H-theorem and trend to equilibrium in the kinetic theory of gases. Archiv of Mechanics, Archiwum Mechaniki Stosowanej, 34:231–241, 1982. [7] R. V. Churchill. New York: McGraw Hill, 1963. [8] G. Dell’Antonio. The van Hove limit in classical and quantum mechanics, pages 75–110. Springer Berlin / Heidelberg, 1982. [9] A. Einav. On Villani’s conjecture concerning entropy production for the Kac master equation. Kinet. Relat. Models, 4(2):479–497, 2011. [10] T. S. Han. Nonnegative entropy measures of multivariate symmetric correlations. Information and Control, 36(2):133–156, 1978. [11] E. Janvresse. Spectral gap for Kac’s model of Boltzmann equation. Ann. Probab., 29(1):288–304, 2001. [12] M. Kac. Foundations of kinetic theory. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, vol. III, pages 171–197, Berkeley and Los Angeles, 1956. University of California Press. [13] L. H. Loomis and H. Whitney. An inequality related to the isoperimetric inequality. Bull. Amer. Math. Soc, 55:961–962, 1949. [14] D. K. Maslen. The eigenvalues of Kac’s master equation. Math. Z., 243(2):291–331, 2003. [15] H. 
P. McKean, Jr. Speed of approach to equilibrium for Kac’s caricature of a Maxwellian gas. Arch. Rational Mech. Anal., 21:343–367, 1966. [16] C. Villani. Cercignani’s conjecture is sometimes true and always almost true. Comm. Math. Phys., 234(3):455–490, 2003.
Physical conditions in the Homunculus Gary J. Ferland & Nick Abel    Kris Davidson    Nathan Smith Abstract Conditions within the Homunculus nebula around Eta Car are determined by many of the same physical processes that occur in molecular clouds in the interstellar medium. But there is one major exception – we know when the ejection occurred and something about its composition and initial state. The gas was warm, ionized, and dust-free when it was located within the star’s atmosphere and it is currently cold, molecular, and dusty. It undertook this transformation in a bit over 150 years. It offers a laboratory for the study of physical processes in a well-constrained environment. We derive a photoionization model of the Homunculus nebula that reproduces many of its observed properties. We conclude by outlining how observations of the Homunculus could address basic problems in the physics of the interstellar medium. Physics, University of Kentucky Astronomy, University of Minnesota CASA, University of Colorado 1. Introduction Eta Carinae, one of the most luminous stars in the Galaxy, offers a laboratory in which a variety of physical phenomena can be studied. The star has undergone at least one episode of substantial mass loss, and is likely to end its life as a supernova. Understanding the physics that occurs within the ejecta will offer insight into the initial stages of chemical enrichment of the interstellar medium. Both molecules and grains are known to have formed in the Homunculus nebula, gas that was ejected in the nineteenth century. The physical conditions in this nebula are the subject of this paper, along with a sketch of how Eta Car might be used as a test bed to understand critical interstellar processes. 2. An illustrative model of the Homunculus The Homunculus nebula is seen in the optical by reflected starlight and is one of the brightest objects in the sky in the mid-IR, showing that substantial amounts of dust are present (Westphal & Neugebauer 1969). In the infrared, emission lines of [Fe ii] and H${}_{2}$ are seen and have been traced across the nebula in long-slit observations (Smith 2002). These show a double-shell structure, with Fe${}^{+}$ present in the inner parts of the Homunculus, and H${}_{2}$ in the outer zone. Additionally, the Homunculus shows a double-shell structure in the dust color temperature distribution, with a cool outer shell at $\sim$140 K and a warmer inner shell at $\sim$200-250 K (Smith et al. 2003). These observations help constrain any model of the envelope. We model the shell as a constant-density layer with an inner radius of 1.7$\times$10${}^{17}$ cm and a thickness of 10${}^{17}$ cm, appropriate for material in the wall of the southeast polar lobe along our line of sight to the star (Smith 2002; Smith et al. 2003). The electron density in the inner Fe${}^{+}$ region, deduced from infrared [Fe ii] line ratios, is $\sim$10${}^{4}$ cm${}^{-3}$ (Smith 2002). The hydrogen density must be substantially higher here since the gas is mainly neutral. From the geometry of the nebula and the presumed total mass of $\sim$12 M${}_{\odot}$ for the entire Homunculus (Smith et al. 2003), we infer a hydrogen density of $\sim$10${}^{6}$ cm${}^{-3}$. Of course, this depends on the gas-to-dust mass ratio, taken to be 100 (but see below). For these parameters the column density through the shell is $N_{H}$=10${}^{23}$ cm${}^{-2}$. The dust that is clearly present in the nebula will be the catalyst in forming the observed H${}_{2}$. 
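The quoted numbers can be tied together with a little arithmetic. The sketch below (our own illustration in cgs units, counting hydrogen only so that helium would add further to the gas mass; it is not a substitute for the full model) reproduces the adopted column density and the implied gas and dust mass columns for the assumed gas-to-dust ratio of 100:

```python
m_H = 1.67e-24    # g, mass of a hydrogen atom
n_H = 1.0e6       # cm^-3, inferred hydrogen density in the lobe wall
dr  = 1.0e17      # cm, adopted shell thickness

N_H = n_H * dr                     # hydrogen column density through the shell
sigma_gas  = N_H * m_H             # gas mass column (hydrogen only), g cm^-2
sigma_dust = sigma_gas / 100.0     # with the assumed gas-to-dust mass ratio of 100

print(f"N_H         = {N_H:.1e} cm^-2")        # 1e23 cm^-2, as quoted
print(f"gas column  = {sigma_gas:.2e} g cm^-2")
print(f"dust column = {sigma_dust:.2e} g cm^-2")
```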
The measured visual extinction is $A_{V}\approx$ 4 mag, (e.g., Davidson & Humphreys 1997), corresponding to an extinction per unit column density of $A_{V}$/$N_{H}\approx 4\times 10^{-23}$ mag cm${}^{2}$. The extinction is observed to be grey and the dust color temperature in the shell is near the equilibrium blackbody temperature (Whitelock et al. 1983; Hillier et al. 2000; Smith et al. 1998, 2003), suggesting that the grains are large. We assume that the grains are similar to those seen in Orion’s Veil, $R$=5, and assume that only the silicate component is present. This large $R$ is produced by a grain size distribution that is lacking in small grains, which will affect the H${}_{2}$ formation rate and grain photoelectric heating. The ratio of C/O is less than unity in the ejecta, (Davidson et al. 1986), suggesting that the chemistry will be dominated by oxygen-bearing species once formation of CO is complete, and that the chemistry will eventually lead to oxygen-rich solids, motivating our use of the silicate grain type. The observations of silicate features in the infrared (e.g., Gehrz et al. 1973), and the absence of a graphite feature in the ultraviolet (Viotti et al. 1989) supports this idea. For simplicity we leave the silicate dust to gas ratio at its ISM value, which corresponds to an extended source $A_{V}$/$N_{H}\approx 9\times 10^{-23}$ mag cm${}^{2}$. The grain size distribution and dust to gas ratio will affect the details of our calculations, as well as clumping of the material, but not the overall results. The assumed gas-phase abundances are listed in Table 1, along with other parameters. Relative to H, He is overabundant, while O is highly underabundant, presumably due to partial CNO cycling. The N/H ratio, roughly ten times solar, corresponds to the conversion of nearly all C and O into N (Davidson et al. 1986; Smith & Morse 2004). Most C is expected to be in the form of the ${}^{13}$C isotope. We assume that the stellar continuum is represented by an interpolated 20,000 K CoStar atmosphere with a total luminosity of 5$\times$10${}^{6}L_{\odot}$. We add a high-energy component corresponding to a 3$\times$10${}^{6}$ K blackbody with a luminosity of 30 $L_{\odot}$ (e.g., Corcoran et al. 2001). The lack of a prominent H ii region shows that few hydrogen-ionizing photons strike the inner edge of the nebula, most likely due to photoelectric absorption by the stellar wind. We extinguish the net continuum by photoelectric absorption due to a neutral layer of 10${}^{21}$ cm${}^{-2}$ to account for this. Some high-energy photons are transmitted and they help drive the chemistry. The incident stellar continuum is shown in Figure 1. We also include the galactic background cosmic ray ionization rate. The actual ionization rate may be higher if radiative nuclei are present. Cosmic rays have effects that are similar to X-rays – they provide ionization that helps drive the chemistry. 3. Calculations We simulate the conditions within the nebula using the development version of Cloudy, last described by Ferland et al. (1998). Recent updates to Cloudy include an improved molecular network that allow for calculations deep in molecular clouds. Some of these improvements are discussed in Abel et al. (2004). The developmental version of Cloudy currently predicts molecular abundances for $\sim$70 molecules involving H, He, C, O, N, Si, and S. Approximately 1000 reactions are in the network, with most reaction rates taken from the latest version of the UMIST database. 
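As a rough, back-of-the-envelope illustration (our own, hydrogen only, using the approximate $E^{-3}$ scaling of the photoionization cross section above threshold) of why the assumed 10${}^{21}$ cm${}^{-2}$ neutral layer removes the hydrogen-ionizing continuum while passing the harder photons that drive the chemistry:

```python
import numpy as np

def sigma_H(E_eV):
    """Approximate hydrogen photoionization cross section above 13.6 eV, in cm^2."""
    return 6.3e-18 * (E_eV / 13.6) ** -3

N_shield = 1.0e21   # cm^-2, neutral column assumed to extinguish the continuum
for E in [13.6, 54.4, 300.0, 1000.0]:   # photon energies in eV
    tau = N_shield * sigma_H(E)
    print(f"E = {E:7.1f} eV   tau = {tau:9.3g}   exp(-tau) = {np.exp(-tau):.3g}")
# opaque (tau ~ 6e3) at the Lyman limit, but nearly transparent near 1 keV,
# so only the high-energy component penetrates to the shell
```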
Our predictions are in good agreement with other codes that are designed to predict conditions in PDRs. The computed ionization, thermal, and molecular structure are shown in Figures 2 and 3. The emitted continuum is the solid line in Figure 1. Hydrogen is predominantly atomic at the illuminated face of the cloud. We assume that no H-ionizing radiation escapes from the stellar wind, but the H${}^{+}$ density at the illuminated face is quite sensitive to the transfer of the incident stellar continuum in the Lyman lines. If it is bright in these lines then hydrogen can become ionized by a two-step process. An excited state is populated by absorption of a Lyman line, which then decays into the H${}^{0}$ 2$s$ level. The Balmer continuum can photoionize atoms in this state, creating a thin region with H${}^{+}$ and a high electron density. The Lyman lines quickly become self-shielding and the process is no longer important, although a small amount of H${}^{+}$ is produced across the nebula by cosmic ray and X-ray ionization. H${}_{2}$ forms at depth of $\sim$3.5$\times$10${}^{16}$ cm, where the Lyman-Werner bands become optically thick, the continuum between L$\alpha$ and the Lyman limit is heavily extinguished, and the destruction rate of H${}_{2}$ goes down dramatically. As Figure 1 shows, little light escapes at short wavelengths. Grains are the dominant opacity across the cloud, helping shield H${}_{2}$ and allowing for efficient formation by catalysis on grain surfaces. The Fe${}^{+}$ profile is also shown, indicating an anti-correlation between Fe${}^{+}$ and H${}_{2}$. Observations (Smith 2002) show that Fe${}^{+}$ and H${}_{2}$ are segregated, occupying the inner and outer zones of the Homunculus walls, respectively, which generally agrees with the structure in Figure 2. The formation of large amounts of H${}_{2}$ initiates the formation of heavy-element molecules (see Figure 2 and Table 2). H${}_{2}$ is a step in the formation of H${}_{2}^{+}$ and H${}_{3}^{+}$, the highly reactive ion-molecules that undergo ion-neutral reactions to form molecules containing heavier elements. Large amounts of CO form when its electronic bands become self-shielding. For this calculation, CO fully forms at depth of $\sim$9$\times$10${}^{16}$ cm. We assume a C/O abundance ratio of 0.5. At the shielded face the $n$(CO)/$n$(C${}_{tot})$ ratio is nearly unity and a significant amount of O is in the form of OH. Nitrogen is strongly enhanced in the ejecta, and several nitrogen-bearing molecules are shown in Table 2 and Figure 2. As expected, N${}_{2}$ is the dominant molecule, although only a small amount of N is in this form – N remains predominantly atomic. The gas temperature is shown in Figure 3. It lies in the range $\sim$50 K to $\sim$300 K and is typical of a PDR. The temperature is mainly maintained by a balance between grain photoionization heating and cooling by fine structure lines of C, O, Si, and Fe. The temperature falls at the point where H${}_{2}$ forms due to the strong absorption by its electronic transitions and also by atomic C. In the coldest regions heating by cosmic rays becomes important, together with cooling by CO rotation transitions. The Homunculus appears “lumpy”, showing that the envelope does not have constant density or pressure. These calculations show two possible sources of local instabilities which might help form blobs. 
The radiative acceleration, mainly caused by the absorption of the incident continuum by grains and lines, exceeds 3$\times$10${}^{-3}$ cm s${}^{-2}$ at the illuminated face but falls to 3$\times$10${}^{-6}$ cm s${}^{-2}$ at the shielded face. This suggests that significant radiative acceleration can occur over the $\sim$10${}^{2}$ yr lifetime of the ejecta. It may be Rayleigh-Taylor unstable because of the decreasing acceleration. The thermal balance is a second source of instability – the temperature derivative of the net cooling, defined as cooling minus heating, is negative across much of the ejecta; this material is thermally unstable. Both instabilities will be the focus of future work. The emitted spectrum is shown as the solid line in Figure 1. Thermal infrared emission from grains is prominent, along with CO lines in the mm. The 10 $\mu$m silicate feature is present since the calculation only included a silicate component. A detailed comparison with observations would help constrain the model further, and so help deduce properties such as the dust abundance and composition. 4. The Future Our main purpose is to point to directions for future work on the Homunculus, with the goal of using it as a laboratory to understand basic physical processes. The Homunculus is an especially well-posed problem. The ejecta were once part of a hot stellar atmosphere and so must have initially been warm, ionized, and dust-free. Today it is cold, dusty, and molecular. How did it go between these states during the time since its ejection? This dust could not have formed in the atmosphere of Eta Car itself, since the energy density temperature is above the condensation temperature of most solids. Thus the dust now seen in reflection is most likely to have formed in the material after being expelled from the star. The cycle of dust destruction and formation is still poorly understood (Draine 1990). These newly formed dust grains have a large size, and measurements of the extinction curve across the spectrum would help quantify their radii. A comparison between intensities of the thermal IR continuum and H${}_{2}$ or CO lines would measure the dust-to-gas ratio. Infrared spectral features can also reveal the dust composition. This is especially important in light of recent work suggesting that supernovae are important sources of new grains in the galaxy (Morgan et al. 2003). The Homunculus is predicted to be predominantly molecular. A molecular inventory could be obtained from UV observations of electronic absorption lines or from the many prominent rotational emission lines that are expected in the IR – mm. We have a good idea of the initial gas-phase chemical composition so this inventory would test current chemical reaction networks, and especially the theory of H${}_{2}$ formation on grain surfaces. The chemistry will be affected by additional cosmic ray ionization if radioactive nuclei are present in the ejecta, and also by the isotopic variations – most C should be ${}^{13}$C rather than ${}^{12}$C. Can the chemistry test these assumptions? Finally, the prominent structures seen in the Homunculus can test dynamical theories. Could thermal or radiation driving instabilities play a role? Acknowledgments: Research into the physical processes of the ISM is supported by NSF (AST 0307720) and NASA (NAG5-12020). N.S. was supported by NASA through grant HF-01166.01A from STScI, which is operated by AURA, Inc., under NASA contract NAS 5-26555. 
References
Abel, N.P., Brogan, C.L., Ferland, G.J., O’Dell, C.R., Shaw, G., & Troland, T.H. 2004, ApJ, 609, 247
Corcoran, M.F., et al. 2001, ApJ, 547, 1034
Davidson, K., & Humphreys, R.M. 1997, ARAA, 35, 1
Davidson, K., et al. 1986, ApJ, 305, 867
Draine, B.T. 1990, in: The Evolution of the Interstellar Medium, ed. L. Blitz, ASP, San Francisco, p. 193
Ferland, G.J., Korista, K.T., Verner, D.A., Ferguson, J.W., Kingdon, J.B., & Verner, E.M. 1998, PASP, 110, 761
Gehrz, R.D., et al. 1973, Astrophys. Lett., 13, 89
Hillier, D.J., et al. 2001, ApJ, 553, 837
Morgan, H.L., Dunne, L., Eales, S.A., Ivison, R.J., & Edmunds, M.G. 2003, ApJ, 597L
Smith, N. 2002, MNRAS, 337, 1252
Smith, N., Gehrz, R.D., & Krautter, J. 1998, AJ, 116, 1332
Smith, N., et al. 2003, AJ, 125, 1458
Smith, N., & Morse, J.A. 2004, ApJ, 605, 854
Viotti, R., et al. 1989, ApJS, 71, 983
Westphal, J.A., & Neugebauer, G. 1969, ApJ, 156, L45
Whitelock, P.A., et al. 1983, MNRAS, 203, 385
TABLE 1 – Model parameters
TABLE 2 – Predicted column densities (cm${}^{-2}$)
Kinetic entropy inequality and hydrostatic reconstruction scheme for the Saint-Venant system E. Audusse Université Paris 13, Laboratoire d’Analyse, Géométrie et Applications, 99 av. J.-B. Clément, F-93430 Villetaneuse, France - Inria, ANGE project-team, Rocquencourt - B.P. 105, F78153 Le Chesnay cedex, France - CEREMA, ANGE project-team, 134 rue de Beauvais, F-60280 Margny-Lès-Compiègne, France - UPMC University Paris VI, ANGE project-team, UMR 7958 LJLL, F-75005 Paris, France eaudusse@yahoo.fr ,  F. Bouchut Université Paris-Est, Laboratoire d’Analyse et de Mathématiques Appliquées (UMR 8050), CNRS, UPEM, UPEC, F-77454, Marne-la-Vallée, France Francois.Bouchut@u-pem.fr ,  M.-O. Bristeau Inria, ANGE project-team, Rocquencourt - B.P. 105, F78153 Le Chesnay cedex, France - CEREMA, ANGE project-team, 134 rue de Beauvais, F-60280 Margny-Lès-Compiègne, France - UPMC University Paris VI, ANGE project-team, UMR 7958 LJLL, F-75005 Paris, France Marie-Odile.Bristeau@inria.fr  and  J. Sainte-Marie Inria, ANGE project-team, Rocquencourt - B.P. 105, F78153 Le Chesnay cedex, France - CEREMA, ANGE project-team, 134 rue de Beauvais, F-60280 Margny-Lès-Compiègne, France - UPMC University Paris VI, ANGE project-team, UMR 7958 LJLL, F-75005 Paris, France Jacques.Sainte-Marie@inria.fr Abstract. A lot of well-balanced schemes have been proposed for discretizing the classical Saint-Venant system for shallow water flows with non-flat bottom. Among them, the hydrostatic reconstruction scheme is a simple and efficient one. It involves the knowledge of an arbitrary solver for the homogeneous problem (for example Godunov, Roe, kinetic…). If this solver is entropy satisfying, then the hydrostatic reconstruction scheme satisfies a semi-discrete entropy inequality. In this paper we prove that, when used with the classical kinetic solver, the hydrostatic reconstruction scheme also satisfies a fully discrete entropy inequality, but with an error term. This error term tends to zero strongly when the space step tends to zero, including solutions with shocks. We prove also that the hydrostatic reconstruction scheme does not satisfy the entropy inequality without error term. Key words and phrases:Shallow water equations, well-balanced schemes, hydrostatic reconstruction, kinetic solver, fully discrete entropy inequality 2000 Mathematics Subject Classification: 65M12, 74S10, 76M12, 35L65 1. Introduction The classical Saint-Venant system for shallow water describes the height of water $h(t,x)\geq 0$, and the water velocity $u(t,x)\in{\mathbb{R}}$ ($x$ denotes a coordinate in the horizontal direction) in the direction parallel to the bottom. It assumes a slowly varying topography $z(x)$, and reads (1.1) $$\begin{array}[]{l}\displaystyle\partial_{t}h+\partial_{x}(hu)=0,\\ \displaystyle\partial_{t}(hu)+\partial_{x}(hu^{2}+g\frac{h^{2}}{2})+gh\partial% _{x}z=0,\end{array}$$ where $g>0$ is the gravity constant. This system is completed with an entropy (energy) inequality (1.2) $$\partial_{t}\biggl{(}h\frac{u^{2}}{2}+g\frac{h^{2}}{2}+ghz\biggr{)}+\partial_{% x}\biggl{(}\bigl{(}h\frac{u^{2}}{2}+gh^{2}+ghz\bigr{)}u\biggr{)}\leq 0.$$ We shall denote $U=(h,hu)^{T}$ and (1.3) $$\eta(U)=h\frac{u^{2}}{2}+g\frac{h^{2}}{2},\qquad G(U)=\bigl{(}h\frac{u^{2}}{2}% +gh^{2}\bigr{)}u$$ the entropy and entropy fluxes without topography. The derivation of an efficient, robust and stable numerical scheme for the Saint-Venant system has received an extensive coverage. 
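A basic difficulty specific to (1.1) is the balance between the flux gradient and the topography source term at steady state. As a minimal illustration (a sympy sketch of our own, not from the paper), for the “lake at rest” state $u=0$, $h+z=\mathrm{const}$ the two contributions cancel exactly at the continuous level:

```python
import sympy as sp

x, g, H = sp.symbols('x g H', positive=True)   # H = constant free-surface level
z = sp.Function('z')(x)                        # arbitrary smooth bottom topography
h = H - z                                      # lake at rest: h + z = H, u = 0

# momentum balance of (1.1) with u = 0: pressure gradient plus topography source
residual = sp.diff(g * h**2 / 2, x) + g * h * sp.diff(z, x)
print(sp.simplify(residual))                   # 0: the steady state is exact
```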
The issue involves the notion of well-balanced schemes, and we refer the reader to [10, 17, 15, 19] and references therein. The hydrostatic reconstruction (HR), introduced in [1], is a general and efficient method that uses an arbitrary solver for the homogeneous problem, like Roe, relaxation, or kinetic solvers. It leads to a consistent, well-balanced, positive scheme satisfying a semi-discrete entropy inequality, in the sense that the inequality holds only in the limit when the timestep tends to zero. The method has been generalized to balance all subsonic steady-states in [11], and to multi-layer shallow water in [12], with the source-centered variant of the hydrostatic reconstruction. The HR technique has also been used to derive efficient and robust numerical schemes approximating the incompressible Euler and Navier-Stokes equations with free surface [5, 3], i.e. non necessarily shallow water flows. The aim of this paper is to prove that the hydrostatic reconstruction, when used with the classical kinetic solver [8, 4, 18, 9, 2, 16, 13], satisfies a fully discrete entropy inequality. However, as established in Proposition 3.9, this inequality necessarily involves an error term. The main result of this paper is that this error term is in the square of the topography increment, ensuring that it tends to zero strongly as the space step tends to zero, for solutions that can include shocks. The topography needs however to be Lipschitz continuous. In general, to satisfy an entropy inequality is a criterion for the stability of a scheme. In the fully discrete case, it enables in particular to get an a priori bound on the total energy. In the time-only discrete case and without topography, the single energy inequality that holds for the kinetic scheme ensures the convergence [7]. The fully discrete case (still without topography) has been treated in [6]. Another approach to get a scheme satisfying a fully discrete entropy inequality is proposed in [14]. The outline of the paper is as follows. We recall in Section 2 the kinetic scheme without topography and its entropy analysis, in both the discrete and semi-discrete cases. We show in particular how one can see that the fully discrete inequality is always less dissipative than the semi-discrete one. In Section 3 we analyze the entropy inequality of the kinetic solver with topography. We propose a kinetic interpretation of the hydrostatic reconstruction and we give its properties. The semi-discrete scheme is analyzed first. Our main result Theorem 3.7 concerning the fully discrete scheme is finally proved. Before going into discretized models, we end this section by recalling the classical kinetic Maxwellian equilibrium, used in [18] for example, at the continuous level. The kinetic Maxwellian is given by (1.4) $$M(U,\xi)=\frac{1}{g\pi}\Bigl{(}2gh-(\xi-u)^{2}\Bigr{)}_{+}^{1/2},$$ where $\xi\in{\mathbb{R}}$ and $x_{+}\equiv\max(0,x)$ for any $x\in{\mathbb{R}}$. It satisfies the following moment relations, (1.5) $$\begin{array}[]{c}\displaystyle\int_{\mathbb{R}}\begin{pmatrix}1\\ \xi\end{pmatrix}M(U,\xi)\,d\xi=U,\\ \displaystyle\int_{\mathbb{R}}\xi^{2}M(U,\xi)\,d\xi=hu^{2}+g\frac{h^{2}}{2}.% \end{array}$$ These definitions allow us to obtain a kinetic representation of the Saint-Venant system. Proposition 1.1. 
The pair of functions $(h,hu)$ is a strong solution of the Saint-Venant system (1.1) if and only if $M(U,\xi)$ satisfies the kinetic equation $$\partial_{t}M+\xi\partial_{x}M-g(\partial_{x}z)\partial_{\xi}M=Q,$$ for some “collision term” $Q(t,x,\xi)$ which satisfies, for a.e. $(t,x)$, ô°‹ô°‹ $$\int_{\mathbb{R}}Qd\xi=\int_{\mathbb{R}}\xi Qd\xi=0.$$ Proof. Using (1.5), the proof relies on a very obvious computation. ∎ The interest of the particular form (1.4) lies in its link with a kinetic entropy. Consider the kinetic entropy, (1.6) $$H(f,\xi,z)=\frac{\xi^{2}}{2}f+\frac{g^{2}\pi^{2}}{6}f^{3}+gzf,$$ where $f\geq 0$, $\xi\in{\mathbb{R}}$ and $z\in{\mathbb{R}}$, and its version without topography (1.7) $$H_{0}(f,\xi)=\frac{\xi^{2}}{2}f+\frac{g^{2}\pi^{2}}{6}f^{3}.$$ Then one can check the relations (1.8) $$\int_{\mathbb{R}}H\bigl{(}M(U,\xi),\xi,z\bigr{)}\,d\xi=\eta(U)+ghz,$$ (1.9) $$\int_{\mathbb{R}}\xi H\bigl{(}M(U,\xi),\xi,z\bigr{)}\,d\xi=G(U)+ghzu.$$ One has the following subdifferential inequality and entropy minimization principle. Lemma 1.2. (i) For any $h\geq 0$, $u\in{\mathbb{R}}$, $f\geq 0$ and $\xi\in{\mathbb{R}}$ (1.10) $$H_{0}(f,\xi)\geq H_{0}\bigl{(}M(U,\xi),\xi\bigr{)}+\eta^{\prime}(U)\begin{% pmatrix}1\\ \xi\end{pmatrix}\bigl{(}f-M(U,\xi)\bigr{)}.$$ (ii) For any $f(\xi)\geq 0$, setting $h=\int f(\xi)d\xi$, $hu=\int\xi f(\xi)d\xi$ (assumed finite), one has (1.11) $$\eta(U)=\int_{\mathbb{R}}H_{0}\bigl{(}M(U,\xi),\xi\bigr{)}\,d\xi\leq\int_{% \mathbb{R}}H_{0}\bigl{(}f(\xi),\xi\bigr{)}\,d\xi.$$ Proof. This approach by the subdifferential inequality is explained in [8]. The property (ii) obviously follows from (i) by taking $f=f(\xi)$ and integrating (1.10) with respect to $\xi$. For proving (i), notice first that (1.12) $$\eta^{\prime}(U)=\bigl{(}gh-u^{2}/2,u\bigr{)},$$ where prime denotes differentiation with respect to $U=(h,hu)^{T}$. Thus (1.13) $$\eta^{\prime}(U)\begin{pmatrix}1\\ \xi\end{pmatrix}=gh-u^{2}/2+\xi u=\frac{\xi^{2}}{2}+gh-\frac{(\xi-u)^{2}}{2}.$$ Observe also that (1.14) $$H_{0}^{\prime}(f,\xi)=\frac{\xi^{2}}{2}+\frac{g^{2}\pi^{2}}{2}f^{2},$$ where here prime denotes differentiation with respect to $f$. The formula defining $M$ in (1.4) yields that (1.15) $$gh-\frac{(\xi-u)^{2}}{2}=\left\{\begin{array}[]{l}\displaystyle\frac{g^{2}\pi^% {2}}{2}M(U,\xi)^{2}\quad\mbox{if }M(U,\xi)>0,\\ \displaystyle\mbox{is nonpositive\quad if }M(U,\xi)=0,\end{array}\right.$$ thus (1.16) $$H_{0}^{\prime}\bigl{(}M(U,\xi),\xi\bigr{)}=\left\{\begin{array}[]{l}% \displaystyle\eta^{\prime}(U)\begin{pmatrix}1\\ \xi\end{pmatrix}\quad\mbox{if }M(U,\xi)>0,\\ \displaystyle\geq\eta^{\prime}(U)\begin{pmatrix}1\\ \xi\end{pmatrix}\quad\mbox{if }M(U,\xi)=0.\end{array}\right.$$ We conclude using the convexity of $H_{0}$ with respect to $f$ that (1.17) $$\begin{array}[]{l}\displaystyle H_{0}(f,\xi)\geq H_{0}\bigl{(}M(U,\xi),\xi% \bigr{)}+H_{0}^{\prime}\bigl{(}M(U,\xi),\xi\bigr{)}\bigl{(}f-M(U,\xi)\bigr{)}% \\ \displaystyle\hphantom{H_{0}(f,\xi)}\geq H_{0}\bigl{(}M(U,\xi),\xi\bigr{)}+% \eta^{\prime}(U)\begin{pmatrix}1\\ \xi\end{pmatrix}\bigl{(}f-M(U,\xi)\bigr{)},\end{array}$$ which proves the claim. ∎ We would like to approximate the solution $U(t,x)$, $x\in{\mathbb{R}}$, $t\geq 0$ of the system (1.1) by discrete values $U_{i}^{n}$, $i\in\mathbb{Z}$, $n\in\mathbb{N}$. 
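Before moving to the discretization, a small numerical check of the moment relations (1.5) and the entropy identity (1.8) may be useful (a quadrature sketch of our own; the values of $h$, $u$, $z$ and $g$ are arbitrary):

```python
import numpy as np

g, h, u, z = 9.81, 2.0, 0.5, 0.3                    # arbitrary test state
xi = np.linspace(u - 10.0, u + 10.0, 200001)        # covers the support |xi - u| <= sqrt(2gh)
dxi = xi[1] - xi[0]

M = np.sqrt(np.maximum(2.0 * g * h - (xi - u) ** 2, 0.0)) / (g * np.pi)      # eq. (1.4)
H = 0.5 * xi ** 2 * M + (g ** 2 * np.pi ** 2 / 6.0) * M ** 3 + g * z * M     # eq. (1.6)

print(np.sum(M) * dxi,           h)                              # (1.5): h
print(np.sum(xi * M) * dxi,      h * u)                          # (1.5): hu
print(np.sum(xi ** 2 * M) * dxi, h * u ** 2 + g * h ** 2 / 2)    # (1.5): hu^2 + gh^2/2
print(np.sum(H) * dxi, 0.5 * h * u ** 2 + g * h ** 2 / 2 + g * h * z)  # (1.8): eta(U) + ghz
```

The same quadrature viewpoint underlies the kinetic numerical flux used below.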
In order to do so, we consider a grid of points $x_{i+1/2}$, $i\in\mathbb{Z}$, $$\ldots<x_{i-1/2}<x_{i+1/2}<x_{i+3/2}<\ldots,$$ and we define the cells (or finite volumes) and their lengths $$C_{i}=]x_{i-1/2},x_{i+1/2}[,\qquad\Delta x_{i}=x_{i+1/2}-x_{i-1/2}.$$ We consider discrete times $t^{n}$ with $t^{n+1}=t^{n}+\Delta t^{n}$, and we define the piecewise constant functions $U^{n}(x)$ corresponding to time $t^{n}$ and $z(x)$ as (1.18) $$U^{n}(x)=U^{n}_{i},\quad z(x)=z_{i},\quad\mbox{ for }x_{i-1/2}<x<x_{i+1/2}.$$ A finite volume scheme for solving (1.1) is a formula of the form (1.19) $$U^{n+1}_{i}=U^{n}_{i}-\sigma_{i}(F_{i+1/2-}-F_{i-1/2+}),$$ where $\sigma_{i}=\Delta t^{n}/\Delta x_{i}$, telling how to compute the values $U^{n+1}_{i}$ knowing $U_{i}^{n}$. Here we consider a first-order explicit three points scheme where (1.20) $$F_{i+1/2-}=\mathcal{F}_{l}(U_{i}^{n},U_{i+1}^{n}),\qquad F_{i+1/2+}=\mathcal{F% }_{r}(U_{i}^{n},U_{i+1}^{n}).$$ The functions $\mathcal{F}_{l/r}(U_{l},U_{r})\in{\mathbb{R}}^{2}$ are the numerical fluxes. In the present work, the expressions for $\mathcal{F}_{l/r}(U_{l},U_{r})$ are based on the kinetic description given in Proposition 1.1. Indeed the method used in [18] in order to solve (1.1) can be viewed as solving (1.21) $$\partial_{t}f+\xi\partial_{x}f-g(\partial_{x}z)\partial_{\xi}f=0$$ for the unknown $f(t,x,\xi)$, over the time interval $(t^{n},t^{n+1})$, with initial data (1.22) $$f(t^{n},x,\xi)=M(U^{n}(x),\xi).$$ Defining the update as (1.23) $$U^{n+1}_{i}=\frac{1}{\Delta x_{i}}\int_{x_{i-1/2}}^{x_{i+1/2}}\int_{\mathbb{R}% }\begin{pmatrix}1\\ \xi\end{pmatrix}f(t^{n+1-},x,\xi)\,dxd\xi,$$ and (1.24) $$f^{n+1-}_{i}(\xi)=\frac{1}{\Delta x_{i}}\int_{x_{i-1/2}}^{x_{i+1/2}}f(t^{n+1-}% ,x,\xi)\,dx,$$ the formula (1.23) can then be written (1.25) $$U^{n+1}_{i}=\int_{\mathbb{R}}\begin{pmatrix}1\\ \xi\end{pmatrix}f^{n+1-}_{i}(\xi)\,d\xi.$$ This formula can in fact be written under the form (1.19), (1.20) for some numerical fluxes $\mathcal{F}_{l/r}$ computed in [18], involving nonexplicit integrals. Here we would like to use simplified formulas, and it will be done by choosing an approximation of $f^{n+1-}_{i}(\xi)$. We shall often denote $U_{i}$ instead of $U_{i}^{n}$, whenever there is no ambiguity. 2. Kinetic entropy inequality without topography In this section we consider the problem (1.1) without topography, and the unmodified kinetic scheme (1.21), (1.22), (1.24), (1.25). This problem is classical, and we recall here how the entropy inequality is analyzed in this case, in the fully discrete and semi-discrete cases. 2.1. Fully discrete scheme Without topography, the kinetic scheme is an entropy satisfying flux vector splitting scheme [9]. The update (1.24) of the solution of (1.21),(1.22) simplifies to the discrete kinetic scheme (2.1) $$f_{i}^{n+1-}=M_{i}-\sigma_{i}\xi\Bigl{(}{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{% i}+{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i+1}-{1\hskip-3.414331pt{\rm I}}_{\xi% <0}M_{i}-{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i-1}\Bigr{)},$$ with $\sigma_{i}=\Delta t^{n}/\Delta x_{i}$ and with short notation (we omit the variable $\xi$). 
One can write it (2.2) $$f_{i}^{n+1-}=\left\{\begin{array}[]{ll}\displaystyle(1+\sigma_{i}\xi)M_{i}-% \sigma_{i}\xi M_{i+1}&\mbox{ if }\xi<0,\\ \displaystyle(1-\sigma_{i}\xi)M_{i}+\sigma_{i}\xi M_{i-1}&\mbox{ if }\xi>0.% \end{array}\right.$$ Then under the CFL condition that (2.3) $$\sigma_{i}|\xi|\leq 1\mbox{ in the supports of }M_{i},M_{i-1},M_{i+1},$$ $f_{i}^{n+1-}$ is a convex combination of $M_{i}$ and $M_{i+1}$ if $\xi<0$, of $M_{i}$ and $M_{i-1}$ if $\xi>0$. Thus $f_{i}^{n+1-}\geq 0$, and recalling the kinetic entropy $H_{0}(f,\xi)$ from (1.7), we have (2.4) $$H_{0}(f_{i}^{n+1-},\xi)\leq\left\{\begin{array}[]{ll}\displaystyle(1+\sigma_{i% }\xi)H_{0}(M_{i},\xi)-\sigma_{i}\xi H_{0}(M_{i+1},\xi)&\mbox{ if }\xi<0,\\ \displaystyle(1-\sigma_{i}\xi)H_{0}(M_{i},\xi)+\sigma_{i}\xi H_{0}(M_{i-1},\xi% )&\mbox{ if }\xi>0.\end{array}\right.$$ This can be also written as (2.5) $$\begin{array}[]{l}\displaystyle H_{0}(f_{i}^{n+1-},\xi)\leq H_{0}(M_{i},\xi)-% \sigma_{i}\xi\Bigl{(}{1\hskip-3.414331pt{\rm I}}_{\xi>0}H_{0}(M_{i},\xi)+{1% \hskip-3.414331pt{\rm I}}_{\xi<0}H_{0}(M_{i+1},\xi)\\ \displaystyle\hphantom{H_{0}(M_{i}^{n+1-},\xi)\leq}-{1\hskip-3.414331pt{\rm I}% }_{\xi<0}H_{0}(M_{i},\xi)-{1\hskip-3.414331pt{\rm I}}_{\xi>0}H_{0}(M_{i-1},\xi% )\Bigr{)},\end{array}$$ which can be interpreted as a conservative kinetic entropy inequality. Note that with (1.25) and (1.11), (2.6) $$\eta(U^{n+1}_{i})\leq\int_{\mathbb{R}}\,H_{0}(f_{i}^{n+1-}(\xi),\xi)d\xi,$$ which by integration of (2.5) yields the macroscopic entropy inequality. The scheme (2.1) and the definition (1.25) allow to complete the definition of the macroscopic scheme (1.19), (1.20) with the numerical flux $\mathcal{F}_{l}=\mathcal{F}_{r}\equiv\mathcal{F}$ given by the flux vector splitting formula [9] (2.7) $$\mathcal{F}(U_{l},U_{r})=\int_{\xi>0}\xi\begin{pmatrix}1\\ \xi\end{pmatrix}M(U_{l},\xi)\,d\xi+\int_{\xi<0}\xi\begin{pmatrix}1\\ \xi\end{pmatrix}M(U_{r},\xi)\,d\xi,$$ where $M$ is defined in (1.4). 2.2. Semi-discrete scheme Assuming that the timestep is very small (i.e. $\sigma_{i}$ very small), we have the linearized approximation of the entropy variation from (2.1) (2.8) $$\begin{array}[]{l}\displaystyle H_{0}(f_{i}^{n+1-},\xi)\simeq H_{0}(M_{i},\xi)% -\sigma_{i}\xi H_{0}^{\prime}(M_{i},\xi)\Bigl{(}{1\hskip-3.414331pt{\rm I}}_{% \xi>0}M_{i}+{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i+1}\\ \displaystyle\hphantom{H_{0}(M_{i}^{n+1-},\xi)\leq}-{1\hskip-3.414331pt{\rm I}% }_{\xi<0}M_{i}-{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i-1}\Bigr{)},\end{array}$$ where $H_{0}^{\prime}(f,\xi)=\partial_{f}H_{0}(f,\xi)$. This linearization with respect to $\Delta t^{n}$ (or equivalently with respect to $\sigma_{i}=\Delta t^{n}/\Delta x_{i}$) represents indeed the entropy in the semi-discrete limit $\Delta t^{n}\to 0$ (divide (2.8) by $\Delta t^{n}$ and let formally $\Delta t^{n}\to 0$). The entropy inequality attached to this linearization can be estimated as follows. Lemma 2.1. 
The linearized term from (2.8) is dominated by the conservative difference from (2.5), (2.9) $$\begin{array}[]{l}\displaystyle-\sigma_{i}\xi H_{0}^{\prime}(M_{i},\xi)\Bigl{(% }{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i}+{1\hskip-3.414331pt{\rm I}}_{\xi<0}M% _{i+1}-{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i}-{1\hskip-3.414331pt{\rm I}}_{% \xi>0}M_{i-1}\Bigr{)}\\ \displaystyle\leq-\sigma_{i}\xi\Bigl{(}{1\hskip-3.414331pt{\rm I}}_{\xi>0}H_{0% }(M_{i},\xi)+{1\hskip-3.414331pt{\rm I}}_{\xi<0}H_{0}(M_{i+1},\xi)\\ \displaystyle\hphantom{H_{0}(M_{i}^{n+1-},\xi)\leq}-{1\hskip-3.414331pt{\rm I}% }_{\xi<0}H_{0}(M_{i},\xi)-{1\hskip-3.414331pt{\rm I}}_{\xi>0}H_{0}(M_{i-1},\xi% )\Bigr{)}.\end{array}$$ In particular, the semi-discrete scheme is more dissipative than the fully discrete scheme. Proof. It is enough to prove two inequalities, (2.10) $$\xi H_{0}^{\prime}(M_{i})({1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i}+{1\hskip-3.% 414331pt{\rm I}}_{\xi<0}M_{i+1}-M_{i})\geq\xi({1\hskip-3.414331pt{\rm I}}_{\xi% >0}H_{0}(M_{i})+{1\hskip-3.414331pt{\rm I}}_{\xi<0}H_{0}(M_{i+1})-H_{0}(M_{i}))$$ and (2.11) $$\xi H_{0}^{\prime}(M_{i})({1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i}+{1\hskip-3.% 414331pt{\rm I}}_{\xi>0}M_{i-1}-M_{i})\leq\xi({1\hskip-3.414331pt{\rm I}}_{\xi% <0}H_{0}(M_{i})+{1\hskip-3.414331pt{\rm I}}_{\xi>0}H_{0}(M_{i-1})-H_{0}(M_{i})).$$ We observe that (2.10) is trivial for $\xi>0$, and (2.11) is trivial for $\xi<0$. The two conditions can therefore be written (2.12) $$\begin{array}[]{l}\displaystyle H_{0}^{\prime}(M_{i})(M_{i+1}-M_{i})\leq H_{0}% (M_{i+1})-H_{0}(M_{i})\quad\mbox{for }\xi<0,\\ \displaystyle H_{0}^{\prime}(M_{i})(M_{i-1}-M_{i})\leq H_{0}(M_{i-1})-H_{0}(M_% {i})\quad\mbox{for }\xi>0.\end{array}$$ These last inequalities follow from the convexity of $H_{0}$. ∎ 3. Kinetic interpretation of the hydrostatic reconstruction scheme The hydrostatic reconstruction scheme (HR scheme for short) for the Saint-Venant system (1.1), has been introduced in [1], and can be written as follows, (3.1) $$U^{n+1}_{i}=U_{i}-\sigma_{i}(F_{i+1/2-}-F_{i-1/2+}),$$ where $\sigma_{i}=\Delta t^{n}/\Delta x_{i}$, (3.2) $$\begin{array}[]{l}\displaystyle F_{i+1/2-}=\mathcal{F}(U_{i+1/2-},U_{i+1/2+})+% \begin{pmatrix}0\\ g\frac{h_{i}^{2}}{2}-\frac{gh_{i+1/2-}^{2}}{2}\end{pmatrix},\\ \displaystyle F_{i+1/2+}=\mathcal{F}(U_{i+1/2-},U_{i+1/2+})+\begin{pmatrix}0\\ g\frac{h_{i+1}^{2}}{2}-\frac{gh_{i+1/2+}^{2}}{2}\end{pmatrix},\end{array}$$ $\mathcal{F}$ is a numerical flux for the system without topography, and the reconstructed states (3.3) $$U_{i+1/2-}=(h_{i+1/2-},h_{i+1/2-}u_{i}),\qquad U_{i+1/2+}=(h_{i+1/2+},h_{i+1/2% +}u_{i+1}),$$ are defined by (3.4) $$h_{i+1/2-}=(h_{i}+z_{i}-z_{i+1/2})_{+},\qquad h_{i+1/2+}=(h_{i+1}+z_{i+1}-z_{i% +1/2})_{+},$$ and (3.5) $$z_{i+1/2}=\max(z_{i},z_{i+1}).$$ Then, looking for a kinetic interpretation of the HR scheme, we would like to approximate the solution to (1.21) in order to write down a kinetic scheme such that the associated macroscopic scheme is exactly (3.1)-(3.2) with numerical flux $\mathcal{F}$ given by (2.7). 
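Before constructing the kinetic interpretation, the macroscopic side of the HR scheme can be sketched directly. In the illustrative code below (ours), the homogeneous flux $\mathcal{F}$ is the kinetic flux (2.7) evaluated by quadrature, the reconstruction follows (3.3)-(3.5), the interface fluxes follow (3.2), and the last lines check the lake-at-rest behaviour on a single interface.

```python
import numpy as np

g = 9.81

def maxwellian(h, u, xi):
    return np.sqrt(np.maximum(2.0 * g * h - (xi - u) ** 2, 0.0)) / (g * np.pi)

def kinetic_flux(Ul, Ur, n_xi=20001):
    """Homogeneous flux vector splitting flux (2.7), evaluated by quadrature in xi."""
    (hl, hul), (hr, hur) = Ul, Ur
    ul = hul / hl if hl > 0.0 else 0.0
    ur = hur / hr if hr > 0.0 else 0.0
    vmax = max(abs(ul) + np.sqrt(2.0 * g * hl), abs(ur) + np.sqrt(2.0 * g * hr), 1e-12)
    xi = np.linspace(-1.2 * vmax, 1.2 * vmax, n_xi)
    dxi = xi[1] - xi[0]
    upwinded = np.where(xi > 0.0, maxwellian(hl, ul, xi), maxwellian(hr, ur, xi))
    return np.array([np.sum(xi * upwinded), np.sum(xi ** 2 * upwinded)]) * dxi

def hr_interface_fluxes(Ui, Uip1, zi, zip1):
    """Left and right HR fluxes F_{i+1/2-}, F_{i+1/2+} of (3.2), with (3.3)-(3.5)."""
    hi, hui = Ui
    hip1, huip1 = Uip1
    ui = hui / hi if hi > 0.0 else 0.0
    uip1 = huip1 / hip1 if hip1 > 0.0 else 0.0
    z_half = max(zi, zip1)                                # (3.5)
    h_minus = max(hi + zi - z_half, 0.0)                  # (3.4)
    h_plus = max(hip1 + zip1 - z_half, 0.0)
    F = kinetic_flux((h_minus, h_minus * ui), (h_plus, h_plus * uip1))   # (3.3) + homogeneous flux
    F_left = F + np.array([0.0, 0.5 * g * hi ** 2 - 0.5 * g * h_minus ** 2])    # (3.2)
    F_right = F + np.array([0.0, 0.5 * g * hip1 ** 2 - 0.5 * g * h_plus ** 2])
    return F_left, F_right

# lake at rest over a step: u = 0 and h + z constant across the interface
Fl, Fr = hr_interface_fluxes((1.5, 0.0), (1.0, 0.0), zi=0.0, zip1=0.5)
print(Fl, Fr)   # approximately (0, g*1.5**2/2) and (0, g*1.0**2/2): the differences in (3.1) cancel
```

We now turn to the kinetic interpretation itself.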
We denote $M_{i}=M(U_{i},\xi)$, $M_{i+1/2\pm}=M(U_{i+1/2\pm},\xi)$, $f_{i}^{n+1-}=f_{i}^{n+1-}(\xi)$, and we consider the scheme (3.6) $$\begin{array}[]{l}\displaystyle f_{i}^{n+1-}=M_{i}-\sigma_{i}\biggl{(}\xi{1% \hskip-3.414331pt{\rm I}}_{\xi<0}M_{i+1/2+}+\xi{1\hskip-3.414331pt{\rm I}}_{% \xi>0}M_{i+1/2-}+\delta M_{i+1/2-}\\ \displaystyle\mskip 160.0mu -\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i-1/2-}-% \xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i-1/2+}-\delta M_{i-1/2+}\biggr{)}.% \end{array}$$ In this formula, $\delta M_{i+1/2\pm}$ depend on $\xi$, $U_{i}$, $U_{i+1}$, $\Delta z_{i+1/2}=z_{i+1}-z_{i}$, and are assumed to satisfy the moment relations (3.7) $$\int_{\mathbb{R}}\delta M_{i+1/2-}\,d\xi=0,\quad\int_{\mathbb{R}}\xi\,\delta M% _{i+1/2-}\,d\xi=g\frac{h_{i}^{2}}{2}-g\frac{h_{i+1/2-}^{2}}{2},$$ (3.8) $$\int_{\mathbb{R}}\delta M_{i-1/2+}\,d\xi=0,\quad\int_{\mathbb{R}}\xi\,\delta M% _{i-1/2+}\,d\xi=g\frac{h_{i}^{2}}{2}-g\frac{h_{i-1/2+}^{2}}{2}.$$ Using again (1.25), the integration of (3.6) multiplied by $\begin{pmatrix}1\\ \xi\end{pmatrix}$ with respect to $\xi$ then gives obviously the HR scheme (3.1)-(3.2) with (3.3)-(3.5), (2.7). 3.1. Analysis of the semi-discrete scheme Assuming that the timestep is very small (i.e. $\sigma_{i}$ very small), we have the linearized approximation of the entropy variation from (3.6), (3.9) $$\begin{array}[]{l}\displaystyle H(f_{i}^{n+1-},z_{i})\simeq H(M_{i},z_{i})-% \sigma_{i}H^{\prime}(M_{i},z_{i})\biggl{(}\xi{1\hskip-3.414331pt{\rm I}}_{\xi<% 0}M_{i+1/2+}+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i+1/2-}\\ \displaystyle\mskip 60.0mu +\delta M_{i+1/2-}-\xi{1\hskip-3.414331pt{\rm I}}_{% \xi>0}M_{i-1/2-}-\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i-1/2+}-\delta M_{i-% 1/2+}\biggr{)},\end{array}$$ where the kinetic entropy $H(f,\xi,z)$ is defined in (1.6). As in Subsection 2.2, this linearization with respect to $\sigma_{i}=\Delta t^{n}/\Delta x_{i}$ represents indeed the entropy in the semi-discrete limit $\Delta t^{n}\to 0$. Its dissipation can be estimated as follows. Proposition 3.1. 
We assume that the extra variations $\delta M_{i+1/2\pm}$ satisfy (3.7), (3.8), and also (3.10) $$M(U_{i},\xi)=0\ \Rightarrow\delta M_{i+1/2-}(\xi)=0\mbox{ and }\delta M_{i-1/2% +}(\xi)=0.$$ Then the linearized term from (3.9) is dominated by a quasi-conservative difference, (3.11) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})\biggl{(}\xi{1\hskip-3.% 414331pt{\rm I}}_{\xi<0}M_{i+1/2+}+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i+% 1/2-}\\ \displaystyle\mskip 60.0mu +\delta M_{i+1/2-}-\xi{1\hskip-3.414331pt{\rm I}}_{% \xi>0}M_{i-1/2-}-\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i-1/2+}-\delta M_{i-% 1/2+}\biggr{)}\\ \displaystyle\geq\widetilde{H}_{i+1/2-}-\widetilde{H}_{i-1/2+},\end{array}$$ where (3.12) $$\begin{array}[]{l}\displaystyle\widetilde{H}_{i+1/2-}=\xi{1\hskip-3.414331pt{% \rm I}}_{\xi<0}H(M_{i+1/2+},z_{i+1/2})+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}H% (M_{i+1/2-},z_{i+1/2})\\ \displaystyle\hphantom{\widetilde{H}_{i+1/2-}=}+\xi H(M_{i},z_{i})-\xi H(M_{i+% 1/2-},z_{i+1/2})\\ \displaystyle\hphantom{\widetilde{H}_{i+1/2-}=}+\Bigl{(}\eta^{\prime}(U_{i})% \begin{pmatrix}1\\ \xi\end{pmatrix}+gz_{i}\Bigr{)}\bigl{(}\xi M_{i+1/2-}-\xi M_{i}+\delta M_{i+1/% 2-}\bigr{)},\end{array}$$ (3.13) $$\begin{array}[]{l}\displaystyle\widetilde{H}_{i-1/2+}=\xi{1\hskip-3.414331pt{% \rm I}}_{\xi<0}H(M_{i-1/2+},z_{i-1/2})+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}H% (M_{i-1/2-},z_{i-1/2})\\ \displaystyle\hphantom{\widetilde{H}_{i+1/2-}=}+\xi H(M_{i},z_{i})-\xi H(M_{i-% 1/2+},z_{i-1/2})\\ \displaystyle\hphantom{\widetilde{H}_{i+1/2-}=}+\Bigl{(}\eta^{\prime}(U_{i})% \begin{pmatrix}1\\ \xi\end{pmatrix}+gz_{i}\Bigr{)}\bigl{(}\xi M_{i-1/2+}-\xi M_{i}+\delta M_{i-1/% 2+}\bigr{)}.\end{array}$$ Moreover, the integral with respect to $\xi$ of the last two lines of (3.12) (respectively of (3.13)) vanishes. In particular, (3.14) $$\int_{\mathbb{R}}\bigl{(}\widetilde{H}_{i+1/2-}-\widetilde{H}_{i-1/2+}\bigr{)}% \,d\xi=\widetilde{G}_{i+1/2}-\widetilde{G}_{i-1/2},$$ with (3.15) $$\widetilde{G}_{i+1/2}=\int_{\xi<0}\xi H(M_{i+1/2+},z_{i+1/2})\,d\xi+\int_{\xi>% 0}\xi H(M_{i+1/2-},z_{i+1/2})\,d\xi.$$ Proof. The value of the integral with respect to $\xi$ of the two last lines of (3.12) is (3.16) $$\begin{array}[]{l}\displaystyle\bigl{(}h_{i}\frac{u_{i}^{2}}{2}+gh_{i}^{2}+gh_% {i}z_{i}\bigr{)}u_{i}-\bigl{(}h_{i+1/2-}\frac{u_{i}^{2}}{2}+gh_{i+1/2-}^{2}+gh% _{i+1/2-}z_{i+1/2}\bigr{)}u_{i}\\ \displaystyle+(gh_{i}+gz_{i}-u_{i}^{2}/2)u_{i}(h_{i+1/2-}-h_{i})+u_{i}^{3}(h_{% i+1/2-}-h_{i})\\ \displaystyle=u_{i}gh_{i+1/2-}(-h_{i+1/2-}-z_{i+1/2}+z_{i}+h_{i})\\ \displaystyle=0,\end{array}$$ because of the definition of $h_{i+1/2-}$ in (3.4). The computation for (3.13)) is similar. In order to prove (3.11), it is enough to prove the two inequalities (3.17) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})\biggl{(}\xi{1\hskip-3.% 414331pt{\rm I}}_{\xi<0}M_{i+1/2+}+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i+% 1/2-}+\delta M_{i+1/2-}-\xi M_{i}\biggr{)}\\ \displaystyle\geq\widetilde{H}_{i+1/2-}-\xi H(M_{i},z_{i}),\end{array}$$ and (3.18) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})\biggl{(}\xi{1\hskip-3.% 414331pt{\rm I}}_{\xi>0}M_{i-1/2-}+\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i-% 1/2+}+\delta M_{i-1/2+}-\xi M_{i}\biggr{)}\\ \displaystyle\leq\widetilde{H}_{i-1/2+}-\xi H(M_{i},z_{i}).\end{array}$$ We note that the definitions of $h_{i+1/2\pm}$ in (3.4)-(3.5) ensure that $h_{i+1/2-}\leq h_{i}$, and $h_{i+1/2+}\leq h_{i+1}$. 
Therefore, because of (1.4) one has (3.19) $$0\leq M_{i+1/2-}\leq M_{i},\quad 0\leq M_{i+1/2+}\leq M_{i+1},$$ thus (3.20) $$M(U_{i},\xi)=0\ \Rightarrow M(U_{i+1/2-},\xi)=0\mbox{ and }M(U_{i-1/2+},\xi)=0.$$ Taking into account (3.10), with (1.16) we get (3.21) $$\begin{array}[]{l}\displaystyle\hphantom{=}\Bigl{(}\eta^{\prime}(U_{i})\begin{% pmatrix}1\\ \xi\end{pmatrix}+gz_{i}\Bigr{)}\bigl{(}\xi M_{i+1/2-}-\xi M_{i}+\delta M_{i+1/% 2-}\bigr{)}\\ \displaystyle=H^{\prime}(M_{i},z_{i})\bigl{(}\xi M_{i+1/2-}-\xi M_{i}+\delta M% _{i+1/2-}\bigr{)},\end{array}$$ and (3.22) $$\begin{array}[]{l}\displaystyle\hphantom{=}\Bigl{(}\eta^{\prime}(U_{i})\begin{% pmatrix}1\\ \xi\end{pmatrix}+gz_{i}\Bigr{)}\bigl{(}\xi M_{i-1/2+}-\xi M_{i}+\delta M_{i-1/% 2+}\bigr{)}\\ \displaystyle=H^{\prime}(M_{i},z_{i})\bigl{(}\xi M_{i-1/2+}-\xi M_{i}+\delta M% _{i-1/2+}\bigr{)}.\end{array}$$ Therefore, the inequalities (3.17)-(3.18) simplify to (3.23) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})\biggl{(}\xi{1\hskip-3.% 414331pt{\rm I}}_{\xi<0}M_{i+1/2+}+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i+% 1/2-}-\xi M_{i+1/2-}\biggr{)}\\ \displaystyle\geq\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}H(M_{i+1/2+},z_{i+1/2})% +\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}H(M_{i+1/2-},z_{i+1/2})-\xi H(M_{i+1/2-% },z_{i+1/2})\vphantom{\Bigl{|}},\end{array}$$ (3.24) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})\biggl{(}\xi{1\hskip-3.% 414331pt{\rm I}}_{\xi>0}M_{i-1/2-}+\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i-% 1/2+}-\xi M_{i-1/2+}\biggr{)}\\ \displaystyle\leq\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}H(M_{i-1/2+},z_{i-1/2})% +\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}H(M_{i-1/2-},z_{i-1/2})-\xi H(M_{i-1/2+% },z_{i-1/2})\vphantom{\Bigl{|}}.\end{array}$$ The first inequality (3.23) is trivial for $\xi>0$, and the second inequality (3.24) is trivial for $\xi<0$. Therefore it is enough to satisfy the two inequalities (3.25) $$H^{\prime}(M_{i},z_{i})\Bigl{(}M_{i+1/2+}-M_{i+1/2-}\Bigr{)}\leq H(M_{i+1/2+},% z_{i+1/2})-H(M_{i+1/2-},z_{i+1/2}),$$ (3.26) $$H^{\prime}(M_{i},z_{i})\Bigl{(}M_{i-1/2-}-M_{i-1/2+}\Bigr{)}\leq H(M_{i-1/2-},% z_{i-1/2})-H(M_{i-1/2+},z_{i-1/2}).$$ But as in Subsection 2.2, we have according to the convexity of $H$ with respect to $f$, (3.27) $$\begin{array}[]{l}\displaystyle H(M_{i+1/2+},z_{i+1/2})\geq H(M_{i+1/2-},z_{i+% 1/2})\\ \displaystyle\mskip 200.0mu +H^{\prime}(M_{i+1/2-},z_{i+1/2})(M_{i+1/2+}-M_{i+% 1/2-}),\end{array}$$ (3.28) $$\begin{array}[]{l}\displaystyle H(M_{i-1/2-},z_{i-1/2})\geq H(M_{i-1/2+},z_{i-% 1/2})\\ \displaystyle\mskip 200.0mu +H^{\prime}(M_{i-1/2+},z_{i-1/2})(M_{i-1/2-}-M_{i-% 1/2+}).\end{array}$$ In order to prove (3.25), we observe that if $M_{i}(\xi)=0$ then $M_{i+1/2-}(\xi)=0$ also, thus $H^{\prime}(M_{i+1/2-},z_{i+1/2})-H^{\prime}(M_{i},z_{i})=g(z_{i+1/2}-z_{i})\geq 0$ because of (3.5), and the inequality (3.25) follows from (3.27). 
Next, if $M_{i}(\xi)>0$, one has (3.29) $$\begin{array}[]{l}\displaystyle\hphantom{=}H^{\prime}(M_{i},z_{i})(M_{i+1/2+}-% M_{i+1/2-})\\ \displaystyle=\bigl{(}\eta^{\prime}(U_{i})\begin{pmatrix}1\\ \xi\end{pmatrix}+gz_{i}\bigr{)}(M_{i+1/2+}-M_{i+1/2-}),\end{array}$$ and as in (1.17) (3.30) $$\begin{array}[]{l}\displaystyle\hphantom{\geq}H^{\prime}(M_{i+1/2-},z_{i+1/2})% (M_{i+1/2+}-M_{i+1/2-})\\ \displaystyle\geq\bigl{(}\eta^{\prime}(U_{i+1/2-})\begin{pmatrix}1\\ \xi\end{pmatrix}+gz_{i+1/2}\bigr{)}(M_{i+1/2+}-M_{i+1/2-}).\end{array}$$ Taking the difference between (3.30) and (3.29), we obtain (3.31) $$\begin{array}[]{l}\displaystyle\hphantom{\geq}H^{\prime}(M_{i+1/2-},z_{i+1/2})% (M_{i+1/2+}-M_{i+1/2-})-H^{\prime}(M_{i},z_{i})(M_{i+1/2+}-M_{i+1/2-})\\ \displaystyle\geq\bigl{(}gh_{i+1/2-}-gh_{i}+gz_{i+1/2}-gz_{i}\bigr{)}(M_{i+1/2% +}-M_{i+1/2-})\geq 0,\end{array}$$ because of the definition (3.4) of $h_{i+1/2-}$. Therefore we conclude that in any case ($M_{i}(\xi)$ being zero or not), one has (3.32) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})(M_{i+1/2+}-M_{i+1/2-})% -H(M_{i+1/2+},z_{i+1/2})+H(M_{i+1/2-},z_{i+1/2})\\ \leq H(M_{i+1/2-},z_{i+1/2})-H(M_{i+1/2+},z_{i+1/2})\\ \displaystyle\mskip 100.0mu +H^{\prime}(M_{i+1/2-},z_{i+1/2})(M_{i+1/2+}-M_{i+% 1/2-})\\ \leq 0\end{array}$$ because of (3.27), and this proves (3.25). Similarly one gets (3.33) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})(M_{i-1/2-}-M_{i-1/2+})% -H(M_{i-1/2-},z_{i-1/2})+H(M_{i-1/2+},z_{i-1/2})\\ \leq H(M_{i-1/2+},z_{i-1/2})-H(M_{i-1/2-},z_{i-1/2})\\ \displaystyle\mskip 100.0mu +H^{\prime}(M_{i-1/2+},z_{i-1/2})(M_{i-1/2-}-M_{i-% 1/2+})\\ \leq 0,\end{array}$$ proving (3.26). This concludes the proof, and we observe that we have indeed a dissipation estimate slightly stronger than (3.11), (3.34) $$\begin{array}[]{l}\displaystyle H^{\prime}(M_{i},z_{i})\biggl{(}\xi{1\hskip-3.% 414331pt{\rm I}}_{\xi<0}M_{i+1/2+}+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}M_{i+% 1/2-}\\ \displaystyle\mskip 60.0mu +\delta M_{i+1/2-}-\xi{1\hskip-3.414331pt{\rm I}}_{% \xi>0}M_{i-1/2-}-\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}M_{i-1/2+}-\delta M_{i-% 1/2+}\biggr{)}\\ \displaystyle\geq\widetilde{H}_{i+1/2-}-\widetilde{H}_{i-1/2+}\\ \displaystyle-\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}\Bigl{(}H(M_{i+1/2+},z_{i+% 1/2})-H(M_{i+1/2-},z_{i+1/2})\\ \displaystyle\mskip 100.0mu -H^{\prime}(M_{i+1/2-},z_{i+1/2})(M_{i+1/2+}-M_{i+% 1/2-})\Bigr{)}\\ \displaystyle+\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}\Bigl{(}H(M_{i-1/2-},z_{i-% 1/2})-H(M_{i-1/2+},z_{i-1/2})\\ \displaystyle\mskip 100.0mu -H^{\prime}(M_{i-1/2+},z_{i-1/2})(M_{i-1/2-}-M_{i-% 1/2+})\Bigr{)}.\end{array}$$ ∎ Remark 3.2. The numerical entropy flux (3.15) can be written (3.35) $$\widetilde{G}_{i+1/2}={\mathcal{G}}(U_{i+1/2-},U_{i+1/2+})+gz_{i+1/2}\mathcal{% F}^{0}(U_{i+1/2-},U_{i+1/2+}),$$ where ${\mathcal{G}}$ is the numerical entropy flux of the scheme without topography, and $\mathcal{F}^{0}$ is the first component of $\mathcal{F}$. This formula is in accordance of the analysis of the semi-discrete entropy inequality in [1]. Remark 3.3. At the kinetic level, the entropy inequality (3.11) is not in conservative form. The entropy inequality becomes conservative only when taking the integral with respect to $\xi$, as is seen on (3.14). This is also the case in [18]. 
Indeed we have written the macroscopic conservative entropy inequality as an integral with respect to $\xi$ of the sum of a nonpositive term (the one in (3.11)), a kinetic conservative term (the difference of the first lines of (3.12) and (3.13)), and a term with vanishing integral (difference of the two last lines of (3.12) and (3.13)). However, such a decomposition is not unique. 3.2. Analysis of the fully discrete scheme We still consider the scheme (3.6), and we make the choice (3.36) $$\begin{array}[]{l}\displaystyle\delta M_{i+1/2-}=(\xi-u_{i})(M_{i}-M_{i+1/2-})% ,\\ \displaystyle\delta M_{i-1/2+}=(\xi-u_{i})(M_{i}-M_{i-1/2+}),\end{array}$$ that satisfies the assumptions (3.7), (3.8) and (3.10). The scheme (3.6) is therefore a kinetic interpretation of the HR scheme (3.1)-(3.5). Lemma 3.4. The scheme (3.6) with the choice (3.36) is “kinetic well-balanced”, and consistent with (1.21). Proof. The expression kinetic well-balanced means that we do not only prove that (3.37) $$\int_{\mathbb{R}}\begin{pmatrix}1\\ \xi\end{pmatrix}f^{n+1-}_{i}\,d\xi=\int_{\mathbb{R}}\begin{pmatrix}1\\ \xi\end{pmatrix}M_{i}\,d\xi,$$ at rest, but the stronger property (3.38) $$f_{i}^{n+1-}(\xi)=M_{i}(\xi),\quad\forall\xi\in{\mathbb{R}},$$ when $u_{i}=0$ and $h_{i}+z_{i}=h_{i+1}+z_{i+1}$ for all $i$. Indeed in this situation one has $U_{i+1/2-}=U_{i+1/2+}$ for all $i$, thus the first three terms between parentheses in (3.6) give $\xi M_{i}$, and the last three terms give $-\xi M_{i}$, leading to (3.38). The consistency of the HR scheme has been proved in [1], but here the statement is the consistency of the kinetic update (3.6) with the kinetic equation (1.21). We proceed as follows. Using (1.22) and (1.4), the topography source term in (1.21) reads (3.39) $$-g(\partial_{x}z)\partial_{\xi}M=g(\partial_{x}z)\frac{\xi-u}{2gh-(\xi-u)^{2}}M.$$ This formula is valid for $2gh-(\xi-u)^{2}\not=0$, i.e. when $\xi\neq u\pm\sqrt{2gh}$ or in $L^{1}(\xi\in{\mathbb{R}})$. Assuming that $h_{i}>0$ (otherwise the consistency is obvious), one has that $h_{i+1/2-}=h_{i}+z_{i}-z_{i+1/2}$ for $z_{i+1}-z_{i}$ small enough, and an asymptotic expansion of $M_{i+1/2-}$ gives (3.40) $$M_{i+1/2-}=M_{i}+(z_{i}-z_{i+1/2})(\partial_{h_{i}}M_{i})_{|u_{i}}+o(z_{i+1}-z% _{i}),$$ with (3.41) $$(\partial_{h_{i}}M_{i})_{|u_{i}}=g\frac{M_{i}}{2gh_{i}-(\xi-u_{i})^{2}}.$$ Thus (3.42) $$\frac{\delta M_{i+1/2-}}{\Delta x_{i}}=g\frac{z_{i+1/2}-z_{i}}{\Delta x_{i}}% \frac{\xi-u_{i}}{2gh_{i}-(\xi-u_{i})^{2}}M_{i}+o(1).$$ Similarly, one has (3.43) $$\frac{\delta M_{i-1/2+}}{\Delta x_{i}}=g\frac{z_{i-1/2}-z_{i}}{\Delta x_{i}}% \frac{\xi-u_{i}}{2gh_{i}-(\xi-u_{i})^{2}}M_{i}+o(1).$$ With the usual shift of index $i$ due to the distribution of the source to interfaces, the difference (3.42) minus (3.43) appears as a discrete version of (3.39). The other four terms in parentheses in (3.6) are conservative, and are classically consistent with $\xi\partial_{x}f$ in (1.21). ∎ Remark 3.5. The scheme (3.6) can be viewed as a consistent well-balanced scheme for (1.21), except that the notion of consistency is true here only for Maxwellian initial data. On the contrary, the exact solution used in [18] is consistent for initial data of arbitrary shape. The role of the special form of the Maxwellian (1.4) is seen here by the fact that for initial data $U_{i}$ at rest, one has that $M(U_{i},\xi)$ is a steady state of (1.21) (this results from (3.39) and (3.41)). Remark 3.6. 
Going one step further in the asymptotic expansion (3.40) gives (3.44) $$\begin{array}[]{l}\displaystyle M_{i+1/2-}=M_{i}+(z_{i}-z_{i+1/2})(\partial_{h% _{i}}M_{i})_{|u_{i}}+\frac{(z_{i}-z_{i+1/2})^{2}}{2}(\partial^{2}_{h_{i}}M_{i}% )_{|u_{i}}+o(z_{i+1}-z_{i})^{2}\\ \displaystyle\hphantom{M_{i+1/2-}}=M_{i}+\frac{z_{i}-z_{i+1/2}}{g\pi^{2}}\frac% {{1\hskip-3.414331pt{\rm I}}_{M_{i}>0}}{M_{i}}-\frac{(z_{i}-z_{i+1/2})^{2}}{2g% ^{2}\pi^{4}}\frac{{1\hskip-3.414331pt{\rm I}}_{M_{i}>0}}{M_{i}^{3}}+o(z_{i+1}-% z_{i})^{2},\end{array}$$ that can also be written $$M_{i+1/2-}=M_{i}+\frac{z_{i}-z_{i+1/2}}{g\pi^{2}}\frac{{1\hskip-3.414331pt{\rm I% }}_{M_{i}>0}}{M_{i}}+\frac{(z_{i}-z_{i+1/2})}{2g\pi^{2}}\frac{M_{i}-M_{i+1/2-}% }{M_{i}^{2}}+o(z_{i+1}-z_{i})^{2}.$$ Therefore, the relation (3.45) $$\displaystyle\left(1-\frac{z_{i+1/2}-z_{i}}{2g\pi^{2}M_{i}^{2}}\right)\frac{% \delta M_{i+1/2-}}{\Delta x_{i}}=\frac{z_{i+1/2}-z_{i}}{\Delta x_{i}}\frac{\xi% -u_{i}}{g\pi^{2}}\frac{{1\hskip-3.414331pt{\rm I}}_{M_{i}>0}}{M_{i}}+o\left(% \frac{(z_{i+1}-z_{i})^{2}}{\Delta x_{i}}\right)$$ holds, and improves (3.42). However, (3.45) is only valid when $2gh_{i}-(\xi-u_{i})^{2}$ is significantly far from $0$, otherwise (3.44) is meaningless since $1/\sqrt{x}$ does not have an integrable derivative for $x$ around $0$. When writing the entropy inequality for the fully discrete scheme, the difficulty is to estimate the positive part of the entropy dissipation by something that tends to zero when $\Delta x_{i}$ tends to zero, at constant courant number $\sigma_{i}$, and assuming only that $\Delta z/\Delta x$ is bounded (Lipschitz topography), but not that $\Delta U/\Delta x$ is bounded (the solution can have discontinuities). Here $\Delta z$ stands for a quantity like $z_{i+1}-z_{i}$, and $\Delta U$ stands for a quantity like $U_{i+1}-U_{i}$. The principle of proof of such entropy inequality is that we use the dissipation of the semi-discrete scheme proved in Proposition 3.1, under the strong form (3.34). This inequality involves the terms linear in $\sigma_{i}$. Under a CFL condition, the higher order terms (quadratic in $\sigma_{i}$ or higher) are either treated as errors if they are of the order of $\Delta z^{2}$ or $\Delta z\Delta U$, or must be dominated by the dissipation if they are of the order of $\Delta U^{2}$. Note that the dissipation in (3.34), i.e. the two last expressions in factor of ${1\hskip-3.414331pt{\rm I}}_{\xi<0}$ and ${1\hskip-3.414331pt{\rm I}}_{\xi>0}$ respectively, are of the order of $(M_{i+1/2+}-M_{i+1/2-})^{2}$ and $(M_{i-1/2+}-M_{i-1/2-})^{2}$ respectively, and thus neglecting the terms in $\Delta z$, they control $(M_{i+1}-M_{i})^{2}$ and $(M_{i}-M_{i-1})^{2}$ respectively. However, the Maxwellian (1.4) is not Lipschitz continuous with respect to $U$, thus a sharp analysis has to be performed in order to use the dissipation. We consider a velocity $v_{m}\geq 0$ such that for all $i$, (3.46) $$M(U_{i},\xi)>0\Rightarrow|\xi|\leq v_{m}.$$ This means equivalently that $|u_{i}|+\sqrt{2gh_{i}}\leq v_{m}$. We consider a CFL condition strictly less than one, (3.47) $$\sigma_{i}v_{m}\leq\beta<1\quad\mbox{ for all }i,$$ where $\sigma_{i}=\Delta t^{n}/\Delta x_{i}$, and $\beta$ is a given constant. Theorem 3.7. Under the CFL condition (3.47), the scheme (3.6) with the choice (3.36) verifies the following properties. (i) The kinetic function remains nonnegative $f^{n+1-}_{i}\geq 0$. 
(ii) One has the kinetic entropy inequality (3.48) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\ H(f_{i}^{n+1-},z_{i})\\ \displaystyle\leq H(M_{i},z_{i})-\sigma_{i}\Bigl{(}\widetilde{H}_{i+1/2-}-% \widetilde{H}_{i-1/2+}\Bigr{)}\\ \displaystyle\hphantom{\leq}-\nu_{\beta}\,\sigma_{i}|\xi|\frac{g^{2}\pi^{2}}{6% }\biggl{(}{1\hskip-3.414331pt{\rm I}}_{\xi<0}\,(M_{i+1/2+}+M_{i+1/2-})(M_{i+1/% 2+}-M_{i+1/2-})^{2}\\ \displaystyle\hphantom{\leq}+{1\hskip-3.414331pt{\rm I}}_{\xi>0}\,(M_{i-1/2-}+% M_{i-1/2+})(M_{i-1/2+}-M_{i-1/2-})^{2}\biggr{)}\\ \displaystyle\hphantom{\leq}+C_{\beta}(\sigma_{i}v_{m})^{2}\frac{g^{2}\pi^{2}}% {6}M_{i}\Bigl{(}(M_{i}-M_{i+1/2-})^{2}+(M_{i}-M_{i-1/2+})^{2}\Bigr{)},\end{array}$$ where $\widetilde{H}_{i+1/2-}$, $\widetilde{H}_{i-1/2+}$ are defined by (3.12),(3.13), $\nu_{\beta}>0$ is a dissipation constant depending only on $\beta$, and $C_{\beta}\geq 0$ is a constant depending only on $\beta$. Theorem 3.7 has the following corollary. Corollary 3.8. Integrating the estimate (3.48) with respect to $\xi$, using (1.11), (1.25) and (3.14) (neglecting the dissipation) and Lemma 3.13 yields that (3.49) $$\begin{array}[]{l}\displaystyle\eta(U_{i}^{n+1})+gz_{i}h_{i}^{n+1}\leq\eta(U_{% i})+gz_{i}h_{i}-\sigma_{i}\Bigl{(}\widetilde{G}_{i+1/2}-\widetilde{G}_{i-1/2}% \Bigr{)}\\ \displaystyle\hphantom{\eta(U_{i}^{n+1})+gz_{i}h_{i}^{n+1}\leq}+C_{\beta}(% \sigma_{i}v_{m})^{2}\biggl{(}g(h_{i}-h_{i+1/2-})^{2}+g(h_{i}-h_{i-1/2+})^{2}% \biggr{)},\end{array}$$ which is the discrete entropy inequality associated to the HR scheme (3.1)-(3.5), (2.7). Note that with (3.3)-(3.5) one has (3.50) $$0\leq h_{i}-h_{i+1/2-}\leq|z_{i+1}-z_{i}|,\quad 0\leq h_{i}-h_{i-1/2+}\leq|z_{% i}-z_{i-1}|.$$ We conclude that the quadratic error term (divide (3.49) by $\Delta t^{n}$ to be consistent with (1.2)) has the following key properties: it vanishes identically when $z=cst$ (no topography) or when $\sigma_{i}\rightarrow 0$ (semi-discrete limit), and as soon as the topography is Lipschitz continuous, it tends to zero strongly when the grid size tends to $0$ (consistency with the continuous entropy inequality (1.2)), even if the solution contains shocks. We state now a counter result saying that it is not possible to remove the error term in (3.48), even at the level of its integral with respect to $\xi$. It is indeed true for the HR scheme even if the homogeneous flux used is not the kinetic one. Proposition 3.9. The HR scheme (3.1)-(3.5) does not satisfy the fully-discrete entropy inequality (3.49) without quadratic error term, whatever restrictive is the CFL condition. Remark 3.10. An open problem is to establish the fully discrete entropy inequality with error (3.49) for an HR scheme with general (non kinetic) homogeneous numerical flux $\mathcal{F}$ satisfying a fully discrete entropy inequality. In the proofs given below, the Lemma (3.13) of $L^{2}-$Lipschitz dependency of the Maxwellian is used. Note that the Maxwellian (1.4) is only $1/2$-Hölder continuous at fixed $\xi$. Proof of Theorem 3.7.. 
Using (3.6) and (3.36), one has for $\xi\leq 0$ (3.51) $$\begin{array}[]{l}\displaystyle f^{n+1-}_{i}=M_{i}-\sigma_{i}\Bigl{(}\xi M_{i+% 1/2+}-\xi M_{i-1/2+}+(\xi-u_{i})(M_{i-1/2+}-M_{i+1/2-})\Bigr{)}\\ \displaystyle\hphantom{f^{n+1-}_{i}}=M_{i}-\sigma_{i}\Bigl{(}\xi(M_{i+1/2+}-M_% {i+1/2-})+u_{i}(M_{i+1/2-}-M_{i-1/2+})\Bigr{)},\end{array}$$ while for $\xi\geq 0$, (3.52) $$\begin{array}[]{l}\displaystyle f^{n+1-}_{i}=M_{i}-\sigma_{i}\Bigl{(}\xi M_{i+% 1/2-}-\xi M_{i-1/2-}+(\xi-u_{i})(M_{i-1/2+}-M_{i+1/2-})\Bigr{)}\\ \displaystyle\hphantom{f^{n+1-}_{i}}=M_{i}-\sigma_{i}\Bigl{(}\xi(M_{i-1/2+}-M_% {i-1/2-})+u_{i}(M_{i+1/2-}-M_{i-1/2+})\Bigr{)}.\end{array}$$ But because of (3.19), one has $0\leq M_{i+1/2-},M_{i-1/2+}\leq M_{i}$. Thus for all $\xi$ we get from (3.51)-(3.52) that $f^{n+1-}_{i}\geq(1-\sigma_{i}(|u_{i}|+|\xi-u_{i}|))M_{i}\geq 0$ under the CFL condition (3.47), proving (i). Then, we write the linearization of $H$ around the Maxwellian $M_{i}$ (3.53) $$H(f^{n+1-}_{i},z_{i})=H(M_{i},z_{i})+H^{\prime}(M_{i},z_{i})\bigl{(}f^{n+1-}_{% i}-M_{i}\bigr{)}+L_{i},$$ where $L_{i}$ is a remainder. The linearized term $H^{\prime}(M_{i},z_{i})\bigl{(}f^{n+1-}_{i}-M_{i}\bigr{)}$ in (3.53) is nothing but the dissipation of the semi-discrete scheme, that has been estimated in Proposition 3.1. Thus, multiplying (3.34) by $-\sigma_{i}$, using the form (1.6) of $H$ and the identity (3.54) $$b^{3}-a^{3}-3a^{2}(b-a)=(b+2a)(b-a)^{2},$$ we get (3.55) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}H^{\prime}(M_{i},z_{i})\bigl{(}f% ^{n+1-}_{i}-M_{i}\bigr{)}\\ \displaystyle\leq-\sigma_{i}\bigl{(}\widetilde{H}_{i+1/2-}-\widetilde{H}_{i-1/% 2+}\bigr{)}\\ \displaystyle+\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}\frac{g^{2}\pi^{% 2}}{6}\bigl{(}M_{i+1/2+}+2M_{i+1/2-}\bigr{)}\bigl{(}M_{i+1/2+}-M_{i+1/2-}\bigr% {)}^{2}\\ \displaystyle-\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}\frac{g^{2}\pi^{% 2}}{6}\bigl{(}M_{i-1/2-}+2M_{i-1/2+}\bigr{)}\bigl{(}M_{i-1/2-}-M_{i-1/2+}\bigr% {)}^{2}.\end{array}$$ Then, using again the form of $H$ and (3.54), the quadratic term $L_{i}$ in (3.53) can be expressed as (3.56) $$L_{i}=\frac{g^{2}\pi^{2}}{6}(2M_{i}+f^{n+1-}_{i})(f^{n+1-}_{i}-M_{i})^{2}.$$ Using (3.51), we have for any $\alpha>0$ (3.57) $$\begin{array}[]{l}\displaystyle L_{i}\leq\frac{g^{2}\pi^{2}}{6}\sigma_{i}^{2}(% 2M_{i}+f^{n+1-}_{i})\Bigl{(}(1+\alpha)\xi^{2}\bigl{(}M_{i+1/2+}-M_{i+1/2-}% \bigr{)}^{2}\\ \displaystyle\mskip 60.0mu +(1+1/\alpha)u_{i}^{2}\bigl{(}M_{i+1/2-}-M_{i-1/2+}% \bigr{)}^{2}\Bigr{)},\quad\mbox{for all }\xi\leq 0,\end{array}$$ and similarly with (3.52) (3.58) $$\begin{array}[]{l}\displaystyle L_{i}\leq\frac{g^{2}\pi^{2}}{6}\sigma_{i}^{2}(% 2M_{i}+f^{n+1-}_{i})\Bigl{(}(1+\alpha)\xi^{2}\bigl{(}M_{i-1/2+}-M_{i-1/2-}% \bigr{)}^{2}\\ \displaystyle\mskip 60.0mu +(1+1/\alpha)u_{i}^{2}\bigl{(}M_{i+1/2-}-M_{i-1/2+}% \bigr{)}^{2}\Bigr{)},\quad\mbox{for all }\xi\geq 0.\end{array}$$ Therefore, adding the estimates (3.53), (3.55), (3.57), (3.58) yields (3.59) $$H(f_{i}^{n+1-},z_{i})\leq H(M_{i},z_{i})-\sigma_{i}\Bigl{(}\widetilde{H}_{i+1/% 2-}-\widetilde{H}_{i-1/2+}\Bigr{)}+d_{i},$$ where (3.60) $$\begin{array}[]{l}\displaystyle d_{i}=\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}% _{\xi<0}\frac{g^{2}\pi^{2}}{6}\left(M_{i+1/2+}+2M_{i+1/2-}+(1+\alpha)\sigma_{i% }\xi(2M_{i}+f^{n+1-}_{i})\right)\\ \displaystyle\mskip 350.0mu \times(M_{i+1/2+}-M_{i+1/2-})^{2}\\ \displaystyle\hphantom{d_{i}\leq}-\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{% \xi>0}\frac{g^{2}\pi^{2}}{6}\left(M_{i-1/2-}+2M_{i-1/2+}-(1+\alpha)\sigma_{i}% 
\xi(2M_{i}+f^{n+1-}_{i})\right)\\ \displaystyle\mskip 350.0mu \times(M_{i-1/2+}-M_{i-1/2-})^{2}\\ \displaystyle\hphantom{d_{i}\leq}+\sigma_{i}^{2}u_{i}^{2}\frac{g^{2}\pi^{2}}{6% }(1+1/\alpha)(2M_{i}+f^{n+1-}_{i})(M_{i+1/2-}-M_{i-1/2+})^{2},\end{array}$$ and $\alpha>0$ is an arbitrary parameter. Notice that the first two lines in (3.60) are basically nonpositive, whereas the third line is nonnegative. ∎ Before going further in the proof of Theorem 3.7, i.e. upper bounding $d_{i}$ by a sum of a dissipation term and an error, let us state a lemma, that gives another expression for $d_{i}$, in which the nonpositive contributions appear clearly. Lemma 3.11. The term $d_{i}$ from (3.60) can also be written (3.61) $$\displaystyle d_{i}$$ $$\displaystyle=$$ $$\displaystyle\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}\,\gamma_{i+1/2}^% {-}(M_{i+1/2+}-M_{i+1/2-})^{2}$$ $$\displaystyle-\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}\,\gamma_{i-1/2}% ^{+}(M_{i-1/2+}-M_{i-1/2-})^{2}$$ $$\displaystyle+\sigma_{i}^{2}\frac{g^{2}\pi^{2}}{6}\biggl{(}(1+1/\alpha)u_{i}^{% 2}(2M_{i}+f^{n+1-}_{i})(M_{i+1/2-}-M_{i-1/2+})^{2}$$ $$\displaystyle                 +(1+\alpha)\xi^{2}\bigl{(}{1\hskip-3.414331pt{% \rm I}}_{\xi<0}\,\mu_{i+1/2}^{-}+{1\hskip-3.414331pt{\rm I}}_{\xi>0}\,\mu_{i-1% /2}^{+}\bigr{)}\biggr{)},$$ with (3.62) $$\begin{array}[]{l}\displaystyle\gamma_{i+1/2}^{-}=\frac{g^{2}\pi^{2}}{6}\biggl% {(}\bigl{(}1-(1+\alpha)(\sigma_{i}\xi)^{2}\bigr{)}M_{i+1/2+}\\ \displaystyle\hphantom{\gamma_{i+1/2}^{-}=\qquad\quad}+\Bigl{(}2+(1+\alpha)(% \sigma_{i}\xi)^{2}+3(1+\alpha)\sigma_{i}\xi\Bigr{)}M_{i+1/2-}\biggr{)},\\ \displaystyle\gamma_{i-1/2}^{+}=\frac{g^{2}\pi^{2}}{6}\biggl{(}\bigl{(}1-(1+% \alpha)(\sigma_{i}\xi)^{2}\bigr{)}M_{i-1/2-}\\ \displaystyle\hphantom{\gamma_{i-1/2}^{+}=\qquad\quad}+\Bigl{(}2+(1+\alpha)(% \sigma_{i}\xi)^{2}-3(1+\alpha)\sigma_{i}\xi\Bigr{)}M_{i-1/2+}\biggr{)},\end{array}$$ (3.63) $$\begin{array}[]{l}\displaystyle\mu_{i+1/2}^{-}=(M_{i+1/2+}-M_{i+1/2-})^{2}% \Bigl{(}3(M_{i}-M_{i+1/2-})\\ \displaystyle\mskip 280.0mu -\sigma_{i}u_{i}(M_{i+1/2-}-M_{i-1/2+})\Bigr{)},\\ \displaystyle\mu_{i-1/2}^{+}=(M_{i-1/2+}-M_{i-1/2-})^{2}\Bigl{(}3(M_{i}-M_{i-1% /2+})\\ \displaystyle\mskip 280.0mu -\sigma_{i}u_{i}(M_{i+1/2-}-M_{i-1/2+})\Bigr{)}.% \end{array}$$ Proof of Lemma 3.63. 
The expression (3.51) of $f^{n+1-}_{i}$ for $\xi\leq 0$ allows to precise the value of $d_{i}$ in (3.60), and gives for $\xi\leq 0$ $$\displaystyle M_{i+1/2+}+2M_{i+1/2-}+(1+\alpha)\sigma_{i}\xi(2M_{i}+f^{n+1-}_{% i})$$ $$\displaystyle=$$ $$\displaystyle(1-(1+\alpha)(\sigma_{i}\xi)^{2})M_{i+1/2+}+(2+(1+\alpha)(\sigma_% {i}\xi)^{2})M_{i+1/2-}$$ $$\displaystyle+(1+\alpha)\sigma_{i}\xi\left(3M_{i}-\sigma_{i}u_{i}(M_{i+1/2-}-M% _{i-1/2+})\right)$$ $$\displaystyle=$$ $$\displaystyle(1-(1+\alpha)(\sigma_{i}\xi)^{2})M_{i+1/2+}+(2+(1+\alpha)(\sigma_% {i}\xi)^{2}+3(1+\alpha)\sigma_{i}\xi)M_{i+1/2-}$$ $$\displaystyle+(1+\alpha)\sigma_{i}\xi\left(3(M_{i}-M_{i+1/2-})-\sigma_{i}u_{i}% (M_{i+1/2-}-M_{i-1/2+})\right).$$ Using (3.52) we obtain analogously for $\xi\geq 0$ $$\displaystyle M_{i-1/2-}+2M_{i-1/2+}-(1+\alpha)\sigma_{i}\xi(2M_{i}+f^{n+1-}_{% i})$$ $$\displaystyle=$$ $$\displaystyle(1-(1+\alpha)(\sigma_{i}\xi)^{2})M_{i-1/2-}+(2+(1+\alpha)(\sigma_% {i}\xi)^{2})M_{i-1/2+}$$ $$\displaystyle-(1+\alpha)\sigma_{i}\xi\left(3M_{i}-\sigma_{i}u_{i}(M_{i+1/2-}-M% _{i-1/2+})\right)$$ $$\displaystyle=$$ $$\displaystyle(1-(1+\alpha)(\sigma_{i}\xi)^{2})M_{i-1/2-}+(2+(1+\alpha)(\sigma_% {i}\xi)^{2}-3(1+\alpha)\sigma_{i}\xi)M_{i-1/2+}$$ $$\displaystyle-(1+\alpha)\sigma_{i}\xi\left(3(M_{i}-M_{i-1/2+})-\sigma_{i}u_{i}% (M_{i+1/2-}-M_{i-1/2+})\right).$$ These expressions yield the formulas (3.61)-(3.63). ∎ Continuation of the proof of Theorem 3.7. One would like the first two lines of (3.61) to be nonpositive. In order to get nonnegative coefficients $\gamma_{i+1/2}^{-}$, $\gamma_{i-1/2}^{+}$ in (3.61), it is enough that (3.64) $$1-(1+\alpha)(\sigma_{i}|\xi|)^{2}\geq 0,\quad 2+(1+\alpha)(\sigma_{i}|\xi|)^{2% }-3(1+\alpha)\sigma_{i}|\xi|\geq 0,$$ for all $\xi$ in the supports of $M_{i-1}$, $M_{i}$, $M_{i+1}$. But since both expressions in (3.64) are decreasing with respect to $|\xi|$ for $\sigma_{i}|\xi|\leq 1$ and because of the CFL condition (3.47), they are lower bounded respectively by (3.65) $$1-(1+\alpha)\beta^{2},\quad 2+(1+\alpha)\beta^{2}-3(1+\alpha)\beta.$$ But since $\beta<1$, one can choose $\alpha>0$ such that (3.66) $$1+\alpha<\frac{2}{\beta(3-\beta)},$$ and then the coefficients (3.65) are positive, and $\gamma_{i+1/2}^{-},\gamma_{i-1/2}^{+}\geq 0$. 
We denote (3.67) $$c_{\alpha,\beta}=\min\Bigl{(}1-(1+\alpha)\beta^{2},2+(1+\alpha)\beta^{2}-3(1+% \alpha)\beta\Bigr{)}>0.$$ Then we have (3.68) $${1\hskip-3.414331pt{\rm I}}_{\xi<0}\gamma_{i+1/2}^{-}\geq{1\hskip-3.414331pt{% \rm I}}_{\xi<0}\frac{g^{2}\pi^{2}}{6}c_{\alpha,\beta}(M_{i+1/2+}+M_{i+1/2-}),$$ and (3.69) $${1\hskip-3.414331pt{\rm I}}_{\xi>0}\gamma_{i-1/2}^{+}\geq{1\hskip-3.414331pt{% \rm I}}_{\xi>0}\frac{g^{2}\pi^{2}}{6}c_{\alpha,\beta}(M_{i-1/2-}+M_{i-1/2+}).$$ Next we write using (3.51), (3.52) and (3.19) (3.70) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\,2M_{i}+f_{i}^{n+1-}\\ \displaystyle\leq 3M_{i}-\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}(M_{i% +1/2+}-M_{i+1/2-})_{+}\\ \displaystyle\hphantom{\leq}\,+\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0% }(M_{i-1/2-}-M_{i-1/2+})_{+}+\sigma_{i}|u_{i}||M_{i+1/2-}-M_{i-1/2+}|\\ \displaystyle\leq 4M_{i}-\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0}(M_{i% +1/2+}-M_{i+1/2-})_{+}+\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0}(M_{i-1% /2-}-M_{i-1/2+})_{+}.\end{array}$$ We can estimate the first quadratic error term from (3.61) as (3.71) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\,(2M_{i}+f^{n+1-}_{i})(M_{i+1/2% -}-M_{i-1/2+})^{2}\\ \displaystyle\leq 4M_{i}(M_{i+1/2-}-M_{i-1/2+})^{2}\\ \displaystyle\hphantom{\leq}\,-\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi<0% }M_{i}|M_{i+1/2+}-M_{i+1/2-}||M_{i+1/2-}-M_{i-1/2+}|\\ \displaystyle\hphantom{\leq}\,+\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{\xi>0% }M_{i}|M_{i-1/2-}-M_{i-1/2+}||M_{i+1/2-}-M_{i-1/2+}|.\end{array}$$ Finally we estimate (3.72) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\ |\mu_{i+1/2}^{-}|\\ \displaystyle\leq 4(M_{i+1/2+}-M_{i+1/2-})^{2}\bigl{(}|M_{i}-M_{i+1/2-}|+|M_{i% }-M_{i-1/2+}|\bigr{)}\\ \displaystyle\leq 2|M_{i+1/2+}-M_{i+1/2-}|\Bigl{(}\epsilon(M_{i+1/2+}-M_{i+1/2% -})^{2}\\ \displaystyle\mskip 100.0mu +\epsilon^{-1}\bigl{(}|M_{i}-M_{i+1/2-}|+|M_{i}-M_% {i-1/2+}|\bigr{)}^{2}\Bigr{)}\\ \displaystyle\leq 2\epsilon(M_{i+1/2+}+M_{i+1/2-})(M_{i+1/2+}-M_{i+1/2-})^{2}% \\ \displaystyle\hphantom{\leq}+4\epsilon^{-1}M_{i}|M_{i+1/2+}-M_{i+1/2-}|\bigl{(% }|M_{i}-M_{i+1/2-}|+|M_{i}-M_{i-1/2+}|\bigr{)},\end{array}$$ and similarly (3.73) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\ |\mu_{i-1/2}^{+}|\\ \displaystyle\leq 2\epsilon(M_{i-1/2-}+M_{i-1/2+})(M_{i-1/2+}-M_{i-1/2-})^{2}% \\ \displaystyle\hphantom{\leq}+4\epsilon^{-1}M_{i}|M_{i-1/2+}-M_{i-1/2-}|\bigl{(% }|M_{i}-M_{i+1/2-}|+|M_{i}-M_{i-1/2+}|\bigr{)},\end{array}$$ where $\epsilon>0$ is arbitrary. 
Putting together in (3.61) the estimates (3.68), (3.69), (3.72), (3.73), we get (3.74) $$\begin{array}[]{l}\displaystyle d_{i}\leq\sigma_{i}\xi{1\hskip-3.414331pt{\rm I% }}_{\xi<0}\frac{g^{2}\pi^{2}}{6}\bigl{(}c_{\alpha,\beta}-2\epsilon(1+\alpha)% \sigma_{i}|\xi|\bigr{)}\\ \displaystyle\mskip 160.0mu \times(M_{i+1/2+}+M_{i+1/2-})(M_{i+1/2+}-M_{i+1/2-% })^{2}\\ \displaystyle\hphantom{d_{i}\leq}-\sigma_{i}\xi{1\hskip-3.414331pt{\rm I}}_{% \xi>0}\frac{g^{2}\pi^{2}}{6}\bigl{(}c_{\alpha,\beta}-2\epsilon(1+\alpha)\sigma% _{i}|\xi|\bigr{)}\\ \displaystyle\mskip 160.0mu \times(M_{i-1/2-}+M_{i-1/2+})(M_{i-1/2+}-M_{i-1/2-% })^{2}\\ \displaystyle\hphantom{d_{i}\leq}+\sigma_{i}^{2}\frac{g^{2}\pi^{2}}{6}\biggl{(% }(1+1/\alpha)u_{i}^{2}(2M_{i}+f^{n+1-}_{i})(M_{i+1/2-}-M_{i-1/2+})^{2}\\ \displaystyle\mskip 60.0mu +4\epsilon^{-1}(1+\alpha)\xi^{2}M_{i}\bigl{(}|M_{i}% -M_{i+1/2-}|+|M_{i}-M_{i-1/2+}|\bigr{)}\\ \displaystyle\mskip 80.0mu \times\bigl{(}{1\hskip-3.414331pt{\rm I}}_{\xi<0}|M% _{i+1/2+}-M_{i+1/2-}|+{1\hskip-3.414331pt{\rm I}}_{\xi>0}|M_{i-1/2+}-M_{i-1/2-% }|\bigr{)}\biggr{)}.\end{array}$$ We set (3.75) $$\nu_{\beta}^{0}=c_{\alpha,\beta}-2\epsilon(1+\alpha)\beta,$$ which is positive if $\epsilon$ is taken small enough (recall that $\alpha>0$ has been chosen so as to satisfy (3.66), and hence depends only on $\beta$). Then using (3.59) and (3.74), the two first lines in the right-hand side of (3.74) give a dissipation as stated in (3.48), while the last lines give an error. From (3.74) and (3.71), for $\xi<0$ the typical error terms take the form (3.76) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\ M_{i}|M_{i+1/2+}-M_{i+1/2-}||M% _{i}-M_{i-1/2+}|\\ \displaystyle=\bigl{(}{1\hskip-3.414331pt{\rm I}}_{M_{i}\leq M_{i+1/2+}}+{1% \hskip-3.414331pt{\rm I}}_{M_{i}>M_{i+1/2+}}\bigr{)}M_{i}|M_{i+1/2+}-M_{i+1/2-% }||M_{i}-M_{i-1/2+}|\\ \displaystyle\leq{1\hskip-3.414331pt{\rm I}}_{M_{i}\leq M_{i+1/2+}}M_{i}\Bigl{% (}\epsilon_{2}|M_{i+1/2+}-M_{i+1/2-}|^{2}+\epsilon_{2}^{-1}|M_{i}-M_{i-1/2+}|^% {2}\Bigr{)}\\ \displaystyle\hphantom{\leq}+{1\hskip-3.414331pt{\rm I}}_{M_{i}>M_{i+1/2+}}% \Bigl{(}M_{i+1/2-}|M_{i+1/2+}-M_{i+1/2-}||M_{i}-M_{i-1/2+}|\\ \displaystyle\mskip 60.0mu +|M_{i}-M_{i+1/2-}||M_{i+1/2+}-M_{i+1/2-}||M_{i}-M_% {i-1/2+}|\Bigr{)}\\ \displaystyle\leq\epsilon_{2}M_{i+1/2+}|M_{i+1/2+}-M_{i+1/2-}|^{2}+\epsilon_{2% }^{-1}M_{i}|M_{i}-M_{i-1/2+}|^{2}\\ \displaystyle\hphantom{\leq}+M_{i+1/2-}\Bigl{(}\epsilon_{2}|M_{i+1/2+}-M_{i+1/% 2-}|^{2}+\epsilon_{2}^{-1}|M_{i}-M_{i-1/2+}|^{2}\Bigr{)}\\ \displaystyle\hphantom{\leq}+M_{i}|M_{i}-M_{i+1/2-}||M_{i}-M_{i-1/2+}|\\ \displaystyle\leq\epsilon_{2}\bigl{(}M_{i+1/2+}+M_{i+1/2-}\bigr{)}|M_{i+1/2+}-% M_{i+1/2-}|^{2}\\ \displaystyle\hphantom{\leq}+3\epsilon_{2}^{-1}M_{i}|M_{i}-M_{i-1/2+}|^{2}+% \epsilon_{2}M_{i}|M_{i}-M_{i+1/2-}|^{2}.\end{array}$$ The term proportional to $\epsilon_{2}$ can therefore be absorbed by $\nu_{\beta}^{0}$. Since a similar estimate holds for $\xi>0$, diminishing slightly $\nu_{\beta}^{0}$ by something proportional to $\epsilon_{2}$ (taken small enough), we get a coefficient $\nu_{\beta}>0$. The only remaining error terms finally take the form stated in the last line of (3.48). This completes the proof of (ii) in Theorem 3.7. ∎ Remark 3.12. Consider the situation when for some $i_{0}$ one has $$u_{i_{0}-1}=u_{i_{0}}=u_{i_{0}+1}\neq 0\text{ and }h_{i_{0}-1}+z_{i_{0}-1}=h_{i_{0}}+z_{i_{0}}=h_{i_{0}+1}+z_{i_{0}+1},$$ with $z_{i_{0}-1}\neq z_{i_{0}}$ or $z_{i_{0}}\neq z_{i_{0}+1}$. 
Then by (3.3), (3.4), the reconstructed states satisfy $U_{i+1/2-}=U_{i+1/2+}$ for $i=i_{0}-1,i_{0}$. We observe that then, in the formula (3.60) for $d_{i}$, the dissipative terms vanish for $i=i_{0}$, for all $\xi$. Thus $d_{i_{0}}\geq 0$ and $\int d_{i_{0}}(\xi)d\xi>0$, which means that the extra term $d_{i}$ in (3.59) gives a dissipation with the wrong sign, in agreement with Proposition 3.9. Proof of Proposition 3.9. It has been proved in [1] that the semi-discrete HR scheme (limit $\sigma_{i}\rightarrow 0$) satisfies the entropy inequality without error term. Here we prove that the fully-discrete scheme does not, whatever restrictive is the CFL condition. This result holds for an arbitrary numerical flux $\mathcal{F}$ taken for the homogeneous Saint-Venant system. The argument is as follows. Consider the local dissipation (3.77) $${\mathcal{D}}_{i}^{n}=\eta(U_{i}^{n+1})+gz_{i}h_{i}^{n+1}-\eta(U_{i})-gz_{i}h_% {i}+\sigma_{i}\Bigl{(}\widetilde{G}_{i+1/2}-\widetilde{G}_{i-1/2}\Bigr{)},$$ where $U^{n+1}_{i}$ is given by (3.1), $F_{i+1/2\pm}$ are defined by (3.2)-(3.5), and (3.78) $$\widetilde{G}_{i+1/2}={\mathcal{G}}(U_{i+1/2-},U_{i+1/2+})+gz_{i+1/2}\mathcal{% F}^{0}(U_{i+1/2-},U_{i+1/2+}),$$ where $\mathcal{G}$ is the numerical entropy flux associated to $\mathcal{F}$. Then, taking into account that $h_{i}^{n+1}=h_{i}-\sigma_{i}(\mathcal{F}^{0}(U_{i+1/2-},U_{i+1/2+})-\mathcal{F% }^{0}(U_{i-1/2-},U_{i-1/2+}))$, one has (3.79) $$\begin{array}[]{l}\displaystyle\frac{{\mathcal{D}}_{i}^{n}}{\sigma_{i}}=\frac{% \eta\left(U_{i}-\sigma_{i}(F_{i+1/2-}-F_{i-1/2+})\right)-\eta(U_{i})}{\sigma_{% i}}\\ \displaystyle\hphantom{\frac{{\mathcal{D}}_{i}^{n}}{\sigma_{i}}=}-gz_{i}\bigl{% (}\mathcal{F}^{0}(U_{i+1/2-},U_{i+1/2+})-\mathcal{F}^{0}(U_{i-1/2-},U_{i-1/2+}% )\bigr{)}\\ \displaystyle\hphantom{\frac{{\mathcal{D}}_{i}^{n}}{\sigma_{i}}=}+\widetilde{G% }_{i+1/2}-\widetilde{G}_{i-1/2}\vphantom{\Bigl{|}}.\end{array}$$ The entropy $\eta$ being strictly convex, the function (3.80) $$\sigma_{i}\mapsto\eta\left(U_{i}-\sigma_{i}(F_{i+1/2-}-F_{i-1/2+})\right)$$ is convex, and strictly convex if (3.81) $$F_{i+1/2-}-F_{i-1/2+}\not=0.$$ Assuming that this condition holds, we get that the right-hand side of (3.79) is strictly increasing with respect to $\sigma_{i}$. In particular, it will be strictly positive if the limit as $\sigma_{i}\rightarrow 0$ of this quantity vanishes. This limit is nothing else than the dissipation of the semi-discrete scheme (3.82) $$\begin{array}[]{l}\displaystyle-\eta^{\prime}(U_{i})(F_{i+1/2-}-F_{i-1/2+})+% \widetilde{G}_{i+1/2}-\widetilde{G}_{i-1/2}\\ \displaystyle-gz_{i}\bigl{(}\mathcal{F}^{0}(U_{i+1/2-},U_{i+1/2+})-\mathcal{F}% ^{0}(U_{i-1/2-},U_{i-1/2+})\bigr{)}\vphantom{\Bigl{|}}.\end{array}$$ Consider data such that (3.83) $$U_{i}=U_{l},\ z_{i}=z_{l}\mbox{ for }i\leq i_{0},\quad U_{i}=U_{r},\ z_{i}=z_{% r}\mbox{ for }i>i_{0},$$ for left and right states $U_{l}=(h_{l},h_{l}u_{l})$, $U_{r}=(h_{r},h_{r}u_{r})$ such that (3.84) $$u_{l}=u_{r}\not=0,\qquad h_{l}+z_{l}=h_{r}+z_{r},\qquad z_{r}-z_{l}>0.$$ Then one checks easily that (3.81) holds for $i=i_{0}$, and that (3.82) vanishes for all $i$. Therefore, ${\mathcal{D}}_{i_{0}}^{n}>0$, which proves the claim. ∎ Lemma 3.13. Let $U_{k}=(h_{k},h_{k}u_{k})$ for $k=1,2,3$ with $h_{k}\geq 0$. 
Then (3.85) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\int_{\mathbb{R}}M(U_{1},\xi)% \Bigl{(}M(U_{1},\xi)-M(U_{2},\xi)\Bigr{)}^{2}d\xi\\ \displaystyle\leq\frac{3}{g^{2}\pi^{2}}\Bigl{(}g(h_{2}-h_{1})^{2}+\min(h_{1},h% _{2})(u_{2}-u_{1})^{2}\Bigr{)},\end{array}$$ and (3.86) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\int_{\mathbb{R}}M(U_{3},\xi)% \Bigl{(}M(U_{1},\xi)-M(U_{2},\xi)\Bigr{)}^{2}d\xi\\ \displaystyle\leq\frac{6}{g^{2}\pi^{2}}\Bigl{(}g(h_{3}-h_{1})^{2}+g(h_{3}-h_{2% })^{2}\\ \displaystyle\mskip 20.0mu +\min(h_{1},h_{3})(u_{3}-u_{1})^{2}+\min(h_{2},h_{3% })(u_{3}-u_{2})^{2}\Bigr{)}.\end{array}$$ Proof. One has (3.87) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\int_{\mathbb{R}}M(U_{1},\xi)% \Bigl{(}M(U_{1},\xi)-M(U_{2},\xi)\Bigr{)}^{2}d\xi\\ \displaystyle\leq\frac{1}{2}\int_{\mathbb{R}}\Bigl{(}2M(U_{1},\xi)+M(U_{2},\xi% )\Bigr{)}\Bigl{(}M(U_{1},\xi)-M(U_{2},\xi)\Bigr{)}^{2}d\xi\\ \displaystyle=\frac{3}{g^{2}\pi^{2}}\int_{\mathbb{R}}\Bigl{(}H_{0}(M(U_{2},\xi% ),\xi)-H_{0}(M(U_{1},\xi),\xi)\\ \displaystyle\mskip 100.0mu -H^{\prime}_{0}(M(U_{1},\xi),\xi)(M(U_{2},\xi)-M(U% _{1},\xi))\Bigr{)}d\xi\\ \displaystyle\leq\frac{3}{g^{2}\pi^{2}}\int_{\mathbb{R}}\Bigl{(}H_{0}(M(U_{2},% \xi),\xi)-H_{0}(M(U_{1},\xi),\xi)\\ \displaystyle\mskip 100.0mu -\eta^{\prime}(U_{1})\begin{pmatrix}1\\ \xi\end{pmatrix}(M(U_{2},\xi)-M(U_{1},\xi))\Bigr{)}d\xi\\ \displaystyle=\frac{3}{g^{2}\pi^{2}}\Bigl{(}\eta(U_{2})-\eta(U_{1})-\eta^{% \prime}(U_{1})(U_{2}-U_{1})\Bigr{)}\\ \displaystyle=\frac{3}{g^{2}\pi^{2}}\Bigl{(}g\frac{(h_{2}-h_{1})^{2}}{2}+h_{2}% \frac{(u_{2}-u_{1})^{2}}{2}\Bigr{)}.\end{array}$$ We can also estimate $M(U_{1},\xi)$ by $M(U_{1},\xi)+2M(U_{2},\xi)$, giving the same estimate as (3.87) with $U_{1}$ and $U_{2}$ exchanged and with an extra factor $2$. This proves (3.85). Then, denoting $M_{k}\equiv M(U_{k},\xi)$, according to the Minkowsky inequality, (3.88) $$\begin{array}[]{l}\displaystyle\hphantom{\leq}\left(\int_{\mathbb{R}}M_{3}% \bigl{(}M_{1}-M_{2}\bigr{)}^{2}d\xi\right)^{1/2}\\ \displaystyle\leq\left(\int_{\mathbb{R}}M_{3}\bigl{(}M_{1}-M_{3}\bigr{)}^{2}d% \xi\right)^{1/2}+\left(\int_{\mathbb{R}}M_{3}\bigl{(}M_{3}-M_{2}\bigr{)}^{2}d% \xi\right)^{1/2},\end{array}$$ Using (3.85), we obtain (3.86). ∎ Acknowledgments The authors wish to express their warmful thanks to Carlos Parés Madroñal for many fruitful discussions. This work has been partially funded by the ANR contract ANR-11-BS01-0016 LANDQUAKES. References [1] E. Audusse, F. Bouchut, M.-O. Bristeau, R. Klein, B. Perthame, A fast and stable well-balanced scheme with hydrostatic reconstruction for shallow water flows, SIAM J. Sci. Comp. 25 (2004), 2050-2065. [2] E. Audusse, M.-O. Bristeau, A well-balanced positivity preserving second-order scheme for Shallow Water flows on unstructured meshes, J. Comput. Phys. 206 (2005), 311-333. [3] E. Audusse, M.-O. Bristeau, M. Pelanti, J. Sainte-Marie, Approximation of the hydrostatic Navier-Stokes system for density stratified flows by a multilayer model. Kinetic interpretation and numerical validation, J. Comp. Phys. 230 (2011), 3453-3478. [4] E. Audusse, M.-O. Bristeau, B. Perthame, Kinetic schemes for Saint Venant equations with source terms on unstructured grids, Technical Report 3989, INRIA, Unité de recherche de Rocquencourt, France, 2000. http://www.inria.fr/rrrt/rr-3989.html. [5] E. Audusse, M.-O. Bristeau, B. Perthame, J. Sainte-Marie, A multilayer Saint-Venant system with mass exchanges for Shallow Water flows. 
Derivation and numerical validation, ESAIM: M2AN 45 (2011), 169-200. [6] F. Berthelin, Convergence of flux vector splitting schemes with single entropy inequality for hyperbolic systems of conservation laws, Numer. Math. 99 (2005), 585-604. [7] F. Berthelin, F. Bouchut, Relaxation to isentropic gas dynamics for a BGK system with single kinetic entropy, Meth. and Appl. of Analysis 9(2002), 313-327. [8] F. Bouchut, Construction of BGK models with a family of kinetic entropies for a given system of conservation laws, J. Stat. Phys. 95 (1999), 113-170. [9] F. Bouchut, Entropy satisfying flux vector splittings and kinetic BGK models, Numer. Math. 94 (2003), 623-672. [10] F. Bouchut, Nonlinear stability of finite volume methods for hyperbolic conservation laws, and well-balanced schemes for sources, Birkhäuser, 2004. [11] F. Bouchut, T. Morales, A subsonic-well-balanced reconstruction scheme for shallow water flows, Siam J. Numer. Anal. 48 (2010), 1733-1758. [12] F. Bouchut, V. Zeitlin, A robust well-balanced scheme for multi-layer shallow water equations, Discrete and Continuous Dynamical Systems - Series B, 13 (2010), 739-758. [13] M.-O. Bristeau, N. Goutal, J. Sainte-Marie, Numerical simulations of a non-hydrostatic Shallow Water model, Computers & Fluids 47 (2011), 51-64. [14] F. Coquel, K. Saleh, N. Seguin, A Robust and Entropy-Satisfying Numerical Scheme for Fluid Flows in Discontinuous Nozzles, to appear in Math. Models Meth. Appl. Sci. (M3AS), 2014. [15] L. Gosse, Computing qualitatively correct approximations of balance laws. Exponential-fit, well-balanced and asymptotic-preserving, SIMAI Springer Series, 2. Springer, Milan, 2013. [16] N. Goutal, J. Sainte-Marie, A kinetic interpretation of the section-averaged Saint-Venant system for natural river hydraulics, Int. J. Numer. Meth. Fluids 67 (2011), 914-938. [17] S. Jin, Asymptotic preserving (AP) schemes for multiscale kinetic and hyperbolic equations: a review, Lecture Notes for Summer School on “Methods and Models of Kinetic Theory” (M&MKT), Porto Ercole (Grosseto, Italy), June 2010. Rivista di Matematica della Universite di Parma 3 (2012), 177-216. [18] B. Perthame, C. Simeoni, A kinetic scheme for the Saint Venant system with a source term, Calcolo 38 (2001), 201-231. [19] Y. Xing, C.-W. Shu, A survey of high order schemes for the shallow water equations, to appear in Journal of Mathematical Study, 2014.
COLO-HEP-269

A Numerical Test of KPZ Scaling: Potts Models Coupled to Two-Dimensional Quantum Gravity

C.F. Baillie, Physics Dept., University of Colorado, Boulder, CO 80309, USA
and
D.A. Johnston, Dept. of Mathematics, Heriot-Watt University, Riccarton, Edinburgh, EH14 4AS, Scotland

Abstract

We perform Monte Carlo simulations, using the Wolff cluster algorithm, of the q=2 (Ising), 3, 4 and q=10 Potts models on dynamical phi-cubed graphs of spherical topology with up to 5000 nodes. We find that the measured critical exponents are in reasonable agreement with those from the exact solution of the Ising model and with those calculated from KPZ scaling for q=3,4, where no exact solution is available. Using Binder’s cumulant we find that the q=10 Potts model displays a first order phase transition on a dynamical graph, as it does on a fixed lattice. We also examine the internal geometry of the graphs generated in the simulation, finding a linear relationship between ring length probabilities and the central charge of the Potts model.

To appear in Modern Physics Letters A.

1 Introduction

There has been considerable activity recently in the field of two-dimensional matter coupled to two-dimensional gravity, motivated initially by string theory. Both the continuum Liouville theory and matrix models have been used in these investigations. The work in [1] by Knizhnik, Polyakov and Zamolodchikov (KPZ) and in [2] by Distler, David and Kawai (DDK), with the light-cone and conformal gauge-fixed Liouville theories respectively, allowed the calculation of critical exponents for conformal field theories with central charge $c<1$ coupled to two-dimensional quantum gravity. Both [1],[2] showed that the effect of coupling such theories to gravity was to “dress” an operator of conformal weight $\Delta_{0}$ in the original theory without gravity, yielding a new weight $\Delta$ given by $$\Delta-\Delta_{0}=-{\alpha^{2}\over 2}\Delta(\Delta-1),$$ (1) where $$\alpha=-{1\over 2\sqrt{3}}(\sqrt{25-c}-\sqrt{1-c}).$$ (2) From eq. 1, which is called the KPZ scaling relation, we can see that the weights are modified by the gravitational dressing in a manner that depends only on the central charge. We can thus calculate the weights of operators when coupled to gravity by referring to the usual Kac table [3] to get $\Delta_{0}$ and then using eq. 1 to find $\Delta$. The $q=2,3,4$ Potts models, which have $c={1\over 2},{4\over 5},1$ respectively, fall within the framework discussed above, with the $q=4$ model lying on the boundary of the strong-coupling region $1<c<25$ where KPZ scaling breaks down. The $q=10$ model has a first order transition on a fixed lattice and conformal field theory methods are therefore not applicable. If we denote the critical temperature for a continuous spin-ordering phase transition by $T_{c}$ and the reduced temperature $|T-T_{c}|/T_{c}$ by $t$ then the critical exponents $\alpha,\beta,\gamma,\nu,\delta,\eta$ can be defined in the standard manner as $t\rightarrow 0$:
$$C\simeq t^{-\alpha}\;;\qquad M\simeq t^{\beta},\;T<T_{c}$$
$$\chi\simeq t^{-\gamma}\;;\qquad\xi\simeq t^{-\nu}$$
$$M(H,t=0)\simeq H^{1/\delta},\;H\rightarrow 0$$
$$<M(x)M(y)>\simeq{1\over|x-y|^{d-2+\eta}},\;t=0$$ (3)
where $C$ is the specific heat, $M$ is the magnetization, $\chi$ is the susceptibility, $\xi$ is the correlation length and $H$ is an external field.
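As a small illustration of how eqs. 1 and 2 are used in practice, the snippet below (ours, not part of the original work) solves the quadratic KPZ relation for the dressed weight $\Delta$, given an undressed weight $\Delta_{0}$ and the central charge $c$. For the standard Ising Kac-table weights $\Delta_{0}=1/16$ (spin) and $\Delta_{0}=1/2$ (energy) at $c=1/2$ it returns $1/6$ and $2/3$.

```python
import math

def kpz_dressed_weight(delta0, c):
    """Solve eq. 1 for Delta, with alpha given by eq. 2 (root that tends to delta0 as alpha -> 0)."""
    alpha = -(math.sqrt(25.0 - c) - math.sqrt(1.0 - c)) / (2.0 * math.sqrt(3.0))
    a = 0.5 * alpha ** 2
    # eq. 1 rearranged: a*Delta^2 + (1 - a)*Delta - delta0 = 0
    return (-(1.0 - a) + math.sqrt((1.0 - a) ** 2 + 4.0 * a * delta0)) / (2.0 * a)

for delta0 in (1.0 / 16.0, 1.0 / 2.0):   # Ising spin and energy weights from the Kac table
    print(delta0, kpz_dressed_weight(delta0, c=0.5))
```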
In the theories without gravity it is possible to calculate $\alpha$ and $\beta$ using the conformal weights of the energy density operator and spin operator (for a review see [4]). Given these we can now use the various scaling relations [3] $$\alpha=2-\nu d\;,\qquad\beta={\nu\over 2}(d-2+\eta)\;,\qquad\gamma=\nu(2-\eta)\;,\qquad\delta={d+2-\eta\over d-2+\eta}$$ (4) to obtain the other exponents. When we couple the conformal field theories to gravity we can still calculate $\alpha$ and $\beta$ using the new conformal weights given by KPZ scaling. Then, provided the scaling relations in eq. 4 are still valid, we can obtain the full set of exponents. For reference we have listed the critical exponents for the $q=2,3,4$ Potts models in Table 1 and the $q=2,3,4$ Potts models coupled to gravity in Table 2.

$q$   $c$   $\alpha$   $\beta$   $\gamma$   $\delta$   $\nu$   $\eta$
$2$   $\frac{1}{2}$   $0$   $\frac{1}{8}$   $\frac{7}{4}$   $15$   $1$   $\frac{1}{4}$
$3$   $\frac{4}{5}$   $\frac{1}{3}$   $\frac{1}{9}$   $\frac{13}{9}$   $14$   $\frac{5}{6}$   $\frac{4}{15}$
$4$   $1$   $\frac{2}{3}$   $\frac{1}{12}$   $\frac{7}{6}$   $15$   $\frac{2}{3}$   $\frac{1}{2}$

Table 1: Analytical exponents for 2d Potts models.

$q$   $c$   $\alpha$   $\beta$   $\gamma$   $\delta$   $\nu$   $\eta$
$2$   $\frac{1}{2}$   $-1$   $\frac{1}{2}$   $2$   $5$   $\frac{3}{d}$   $2-\frac{2d}{3}$
$3$   $\frac{4}{5}$   $-\frac{1}{2}$   $\frac{1}{2}$   $\frac{3}{2}$   $4$   $\frac{5}{2d}$   $2-\frac{3d}{5}$
$4$   $1$   $0$   $\frac{1}{2}$   $1$   $3$   $\frac{2}{d}$   $2-\frac{d}{2}$

Table 2: Analytical exponents for Potts models coupled to 2d quantum gravity. Note that for the latter $d$, the internal fractal dimension of the graph, is not known a priori, so $\nu$ and $\eta$ are obtained as functions of $d$.

Reassuringly, the standard scaling relations are satisfied for the one model that has been exactly solved when it is coupled to gravity: the Ising model. The critical exponents $\alpha$ and $\beta$ calculated from the exact solution agree with those calculated by KPZ and the full set of exponents satisfy the relations in eq. 4. The exact solution of the Ising model coupled to gravity [5],[6] made use of the matrix-model techniques pioneered in [7] by showing that the partition function on a random graph was equal to the free energy of a two-Hermitian $N\times N$ matrix model. The matrix model with a cubic interaction generates phi-cubed graphs with two types of vertices representing the spins, so the sum over graphs is equivalent to integrating over the metric when coupling the spins to two-dimensional gravity. The model was solved exactly in the planar limit $N\rightarrow\infty$ with both a cubic interaction and a quartic interaction. Both interactions gave a third order magnetization transition with the critical exponents shown in Table 2. It was found that the inverse critical temperature $\beta_{c}={1\over 2}\ln{108\over 23}=0.7733185$ for cubic interactions with no tadpoles or self-energies [8].
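The route from the dressed weights to Table 2 can also be written out in a few lines. In the sketch below (ours), the identifications $\nu d=1/(1-\Delta_{\epsilon})$ and $\beta=\Delta_{\sigma}/(1-\Delta_{\epsilon})$ are our assumption (the usual KPZ hyperscaling forms, not spelled out in the text); $\gamma$ and $\delta$ then follow from eq. 4, and the output matches the $\alpha,\beta,\gamma,\delta$ and $\nu d$ entries of Table 2 up to rounding.

```python
import math

def dressed(delta0, c):
    # KPZ dressing, eqs. 1 and 2
    a = 0.5 * ((math.sqrt(25.0 - c) - math.sqrt(1.0 - c)) / (2.0 * math.sqrt(3.0))) ** 2
    return (-(1.0 - a) + math.sqrt((1.0 - a) ** 2 + 4.0 * a * delta0)) / (2.0 * a)

# undressed Kac-table weights (Delta_0 = x/2) of the spin and energy operators
weights0 = {2: (1.0 / 16.0, 1.0 / 2.0), 3: (1.0 / 15.0, 2.0 / 5.0), 4: (1.0 / 16.0, 1.0 / 4.0)}
central = {2: 0.5, 3: 0.8, 4: 1.0}

for q in (2, 3, 4):
    c = central[q]
    d_spin = dressed(weights0[q][0], c)
    d_energy = dressed(weights0[q][1], c)
    # assumed identifications (equivalent to nu*d = 1/(1 - Delta_energy)):
    alpha = (1.0 - 2.0 * d_energy) / (1.0 - d_energy)    # = 2 - nu*d
    beta = d_spin / (1.0 - d_energy)
    gamma = 2.0 - alpha - 2.0 * beta                     # Rushbrooke combination of eq. 4
    delta = (2.0 - alpha - beta) / beta                  # also follows from eq. 4
    nud = 2.0 - alpha
    print(q, round(alpha, 6), round(beta, 6), round(gamma, 6), round(delta, 6), round(nud, 6))
```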
For the Potts models on random graphs with a fixed number of nodes $N$ the partition function $Z_{N}$ is $$Z_{N}=\sum_{G^{(N)}}\sum_{\sigma}\exp\left(-{\beta\over 2}\sum_{i,j=1}^{N}G^{(% N)}_{ij}\delta(\sigma_{i}\sigma_{j})\right)$$ (5) where $G^{(N)}$ is the adjacency matrix, $\delta$ is a Kronecker delta and $\beta$ is the inverse temperature $1/T$ (not to be confused with the critical exponent $\beta$!) We have, in general, $q$ species of spin taking the values $0,1,...,q-1$. The solution of these models along the lines of [5],[6] has so far proved elusive for $q>2$ [9] so we do not know the order of the phase transition or the critical temperature for $q=3,4$. Simulations of the Ising model have been carried out on both dynamical triangulations [10],[11] and phi-cubed graphs [12] (these are effectively a numerical evaluation of eq. 5) and satisfactory agreement between the measured and theoretical values of the critical exponents found. However, no previous numerical work has been carried out on the $q=3,4$ Potts models where only the KPZ results are available rather than the exact solution. The object of the simulations in this paper is to measure the critical exponents in the $q=3,4$ Potts models on dynamical phi-cubed graphs (i.e. coupled to two-dimensional quantum gravity) in order to see if they are in agreement with the values calculated using KPZ scaling. Our work can thus be considered as a numerical test of the validity of KPZ scaling in these models. We also investigate the $q=10$ Potts model to see if it has a first order transition on a dynamical phi-cubed graph. The Ising model, where we have the exact solution and previous simulations to compare with, is used to verify that our simulation is working properly. We have chosen to simulate phi-cubed graphs rather than the dual triangulation because the exact solution of the Ising model is couched in this form, although universality would lead us to believe that the two should give identical results. The spin model critical exponents are unaffected by the topology of the phi-cubed graphs so we use graphs of spherical topology for simplicity. 2 Simulations We perform a microcanonical (fixed number of nodes) Monte Carlo simulation on graphs with $N=50,100,200,300,500,1000,2000$ and $5000$ nodes, at various values of $\beta$ between $0.1$ and $1.5$. The Monte Carlo update consists of two parts: one for the spin model and one for the graph. For the Potts model we use a cluster update algorithm since it suffers from much less critical slowing down than the standard Metropolis algorithm (for a review see [13]). There are two popular cluster algorithm implementations - Wolff [14] and Swendsen-Wang [15]. As they are equivalent for the usual two-dimensional $q=2,3$ Potts models [16] we use Wolff’s variant because it is computationally faster. In order to keep the autocorrelation time, in terms of update sweeps, constant the number of times the Wolff cluster algorithm is applied per sweep, $W$, must be increased as the temperature increases (since the average cluster size decreases). At the critical temperature, where the correlation length diverges, the integrated autocorrelation time takes on its maximum value which scales as $$\tau_{int}\simeq N^{z\over d},$$ (6) where $z$ is the dynamical critical exponent. The measured values of $\tau_{int}$ and fitted values of $z$ are listed in Table 3. 
$$N$$ $$q=2$$ $$q=3$$ $$q=4$$ 500 1.38(19) 3.83(17) 8.31(5) 1000 1.45(13) 4.35(17) 9.17(5) 2000 1.51(10) 4.40(20) 10.18(16) 5000 1.58(8) 4.73(26) 9.2(2)$${}^{*}$$ $$z/d$$ 0.06(5) 0.09(3) 0.15(1) Table 3: $\tau_{int}$ at $\beta_{c}$ for $N=500,1000,2000,5000$ and fitted values of $z/d$ (last line); ${}^{*}$ indicates that value is not reliable due to $W$ being too large and run time being too short. If we assume that $d$ is $2$ (or $3$) then we obtain $z=0.12(10),0.18(6),0.30(2)$ (or $0.18(15),0.27(9),0.45(3)$) for $q=2,3,4$ Potts models, respectively. (This can be compared with the usual two-dimensional $q=2,3$ models, for which Baillie and Coddington measure $z=0.25(1),0.57(1)$ respectively [16].) Thus we see that the Wolff cluster algorithm almost eliminates critical slowing down even on dynamical graphs. For the graph update we use the Metropolis algorithm with the standard “flip” move [17]. As we are working with phi-cubed graphs the detailed balance condition involves checking that the rings at either ends of the link being flipped have no links in common. This check also eliminates all graphs containing tadpoles or self-energies. After each Potts model update sweep we randomly pick $NFLIP$ links one after another and try to flip them. After testing various values of $NFLIP$ to ensure that there were enough flips to make the graph dynamical on the time scale of the Potts model updates we set $NFLIP=N$ for all the simulations. 3 Results We measure all the standard thermodynamic quantities for the spin model: energy $E$, specific heat $C$, magnetization $M$, susceptibility $\chi$ and correlation length $\xi$; and several properties of the graph: acceptance rates for flips, distribution of ring lengths and internal fractal dimension $d$. To determine $\nu$ (actually $\nu d$) and $\beta_{c}$ separately, instead of from the usual three-parameter finite-size scaling fit, for example $\xi=\xi_{0}(|T-T_{c}|/T_{c})^{-\nu}$, we used Binder’s cumulant [18]. This is done as follows. Binder’s cumulant $U_{N}$ on graph with $N$ nodes is defined as $$U_{N}=1-{<M^{4}>\over 3<M^{2}>^{2}},$$ (7) where $<M^{4}>$ is the average of the fourth power of the magnetization and $<M^{2}>$ is the average of its square. For a normal temperature-driven continuous phase transition $U_{N}\rightarrow 0$ for $T>T_{c}$ because $M$ is gaussian distributed about 0 at high temperature, and $U_{N}\rightarrow{2\over 3}$ for $T<T_{c}$ because a spontaneous magnetization $M_{sp}$ develops in the low temperature phase. At $T=T_{c}$, $U_{N}$ has a non-trivial value which scales with $N$ according to $$U_{N}\simeq tN^{1\over\nu d}.$$ (8) Therefore the slope of $U_{N}$ with respect to $T$ (or $\beta$) at $T_{c}$ gives ${1\over\nu d}$. This is not much use as it stands since it involves knowledge of $T_{c}$, but the $maximum$ value of the slope scales in the same way, so we can extract $\nu d$ from $$\max({dU_{N}\over d\beta})\simeq N^{1\over\nu d}.$$ (9) We have used this successfully to obtain the values listed in column 2 of Table 4 from the fits shown in Fig. 1. They agree with values from KPZ scaling (Table 2). $$q$$ $$\nu d$$ $$\beta_{c}^{\infty}(U_{N})$$ $$\beta_{c}^{\infty}(C)$$ $$2$$ 3.20(21) 0.77(1) 0.7735(12) $$3$$ 2.46(12) 0.87(1) 0.868(1) $$4$$ 2.03(12) 0.92(1) 0.921(1) $$10$$ 1.5(4) 1.15(1) 1.141(1) Table 4: Fitted values of $\nu d$ and inverse critical temperature $\beta_{c}$ from Binder’s cumulant $U_{N}$ and specific heat $C$. 
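The $\nu d$ values in column 2 of Table 4 follow from a log-log fit of eq. 9 to the maximum slopes of $U_{N}$. Schematically (Python; the numbers in the example are made up for illustration and are not the measured slopes):

```python
import numpy as np

def binder_cumulant(m_samples):
    """U_N = 1 - <M^4> / (3 <M^2>^2) from a time series of magnetizations."""
    m = np.asarray(m_samples)
    return 1.0 - np.mean(m ** 4) / (3.0 * np.mean(m ** 2) ** 2)

def nu_d_from_max_slopes(sizes, max_slopes):
    """Fit max(dU_N/dbeta) ~ N^{1/(nu d)}; the log-log slope is 1/(nu d)."""
    slope, _ = np.polyfit(np.log(sizes), np.log(max_slopes), 1)
    return 1.0 / slope

# made-up illustrative slopes; for these numbers nu*d comes out at roughly 3.2
print(nu_d_from_max_slopes([500, 1000, 2000, 5000], [2.1, 2.6, 3.2, 4.3]))
```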
Next, knowing $\nu d$, we use the standard finite-size scaling relation (with $L$ replaced by $N^{1\over d}$ since we do not know $d$ a priori) $$|\beta_{c}^{N}-\beta_{c}^{\infty}|\simeq N^{-{1\over\nu d}}$$ (10) to extract $\beta_{c}^{\infty}$, using $\beta_{c}^{N}$s obtained from the position of the maximum in the slope of $U_{N}$ or from the peak in the specific heat. The latter is an order of magnitude more accurate, as can be seen from the results in columns 3 and 4 of Table 4. The fits for $q=2$ are shown in Fig. 2. Lastly, we measure the other critical exponents either from the singular behavior of the thermodynamic functions $$C=B+C_{0}t^{-\alpha},\;M=M_{0}t^{\beta},\;\chi=\chi_{0}t^{-\gamma},\;\xi=\xi_{% 0}t^{-\nu}$$ (11) knowing $\beta_{c}$, or from the finite-size scaling relations $$C=B^{\prime}+C_{0}^{\prime}N^{\alpha\over\nu d},\;M=M_{0}^{\prime}N^{-\beta% \over\nu d},\;\chi=\chi_{0}^{\prime}N^{\gamma\over\nu d}$$ (12) using the previously obtained value of $\nu d$. Despite the fact that the quality of the former set of fits depends very strongly on the precise value of $\beta_{c}$, it turns out that they are better than the latter; we have listed the values of the exponents obtained in Table 5, columns 2-5 for the former and 6-8 for the latter. $$q$$ $$\alpha$$ $$\beta$$ $$\gamma$$ $$\nu$$ $$\alpha/\nu d$$ $$\beta/\nu d$$ $$\gamma/\nu d$$ $$d$$ $$2$$ -0.98(7) 0.275(4) 1.91(13) 0.87(2) 0.32(1) 0.155(10) 0.79(1) 2.375(19) $$3$$ -0.48(5) 0.217(3) 1.54(10) 0.82(1) 0.19(1) 0.128(5) 0.81(1) 2.376(19) $$4$$ log 0.304(3) 1.03(4) 0.65(1) log 0.207(6) 0.70(1) 2.356(19) Table 5: Measured values of critical exponents $\alpha,\beta,\gamma,\nu$ and internal fractal dimension $d$ from $N=5000$ graphs; ‘log’ signifies that a logarithmic fit was better than a power law fit implying that the corresponding exponent is $0$. In order to constrain the first set of fits (by reducing the number of free parameters from three to two) we fix $\beta_{c}$ to be the exact value for the Ising model and the values given in column 4 of Table 4 for the $q=3,4$ Potts models. These fits all use data from largest graphs simulated ($N=5000$). All of the fits for the specific heat are very good because there is an extra adjustable constant in eqs. 11,12 ($B,B^{\prime}$) – we easily obtain the values predicted by KPZ for both $\alpha$ and $\alpha/\nu d$. The next best fits are those for $\gamma$ which again yield the expected values. Unfortunately the same is not true for $\beta$, the exponent governing the vanishing of the magnetization as $T\rightarrow T_{c}$ from below, where we obtain values around $1/4$ or $1/3$ rather than $1/2$. Presumably either our graphs are not big enough to distinguish the singularity from the “background” finite-size rounding or there are large corrections to scaling (or both). We are currently running on larger graphs to check this. One reassuring sign is that the fitted value of $\beta$ does increase with $N$. For the Ising model, Catterall et al [12] also had difficulty with this exponent, estimating that $\beta=0.25(10)$. However, Ben-Av et al [11], who use dynamical triangulations rather than phi-cubed graphs, manage to obtain $\beta=0.45(10)$. Despite the fact that plots of the scaled magnetization and susceptibility, shown in Figs. 3 and 4, look fairly good (exhibiting the expected asymptotic slopes of $\beta$ and $\gamma$ respectively), we have a little difficulty with the finite-size scaling fits to both $\beta/\nu d$ and $\gamma/\nu d$. 
The former comes out somewhat low ($0.155(10),0.128(5),0.207(6)$ rather than $1/6,1/5,1/4$ for $q=2,3,4$ respectively) and the latter somewhat high ($0.79(1),0.81(1),0.70(1)$ rather than $2/3,3/5,1/2$ for $q=2,3,4$ respectively). Again, comparing with the previous Ising model simulations, we find that both Jurkiewicz et al [10] and Catterall et al [12] obtained less accurate but consistent values $0.16(3)$ and $0.16(1)$ respectively for $\beta/\nu d$, and $0.71(4)$ and $0.6(1)$ respectively for $\gamma/\nu d$. From this it appears that our data may be becoming accurate enough to allow examination of correction-to-scaling effects, and we shall do this when we have more data on larger systems. We also fitted the exponent $\nu$ from the power law divergence of the correlation length $\xi$ at $\beta_{c}$. However, $\xi$ itself must be obtained from a fit: the 2-point correlation function $\Gamma$ should behave as $$\Gamma(r)\ \equiv\ \sum\sigma_{i}\sigma_{i+r}\ =\ c\ e^{-mr},$$ (13) where $m\equiv 1/\xi$, and the sum is over some number of measurements made on each graph with the position of the spin $\sigma_{i}$ being chosen randomly. $r$ is the internal distance between two spins on the graph, i.e. the fewest links between them. As two fits are involved, the results are not particularly reliable: we obtain $\nu=0.87(2),0.82(1),0.65(1)$ for $q=2,3,4$ respectively. As discussed above, the KPZ predictions for $\nu$ involve the internal fractal dimension $d$ so we shall postpone further discussion of these $\nu$ values until we estimate $d$ below. We now turn to the properties of the graphs. The first thing we can investigate is the acceptance rate for the Metropolis flip move to confirm that our graphs are really dynamical. The flip can be forbidden either from the graph constraints coming from the detailed balance condition or from the energy change of the spin model, so we can decompose the flip acceptance rate into two parts: AL – the fraction of randomly selected links which can be flipped satisfying the graph constraints; and AF – the fraction of links satisfying the graph constraints which are actually flipped, i.e. pass the Metropolis test using the Potts model energy change. These quantities are shown for $q=2$ on a 2000 node graph in Fig. 5. We immediately see that both AF and AL dip at some $\beta<\beta_{c}$ but at different places. We can also examine the distribution of ring lengths in the graph, which is the discrete equivalent of measuring the distribution of local Gaussian curvatures in the continuum. For pure quantum gravity (no spin model living on the graph) it is possible to analytically calculate this [17]. The probability $P$ of finding a ring of length $l$ is given by $$P_{N\rightarrow\infty}(l)=16({3\over 16})^{l}{(l-2)(2l-2)!\over l!(l-1)!}$$ (14) which decays exponentially as $l$ increases. The minimum possible ring length is 3. If we plot the fraction of rings of length three (PR3) in Fig. 5 along with AF and AL we see that it has a peak very close to the dip in AL. This is reasonable since both PR3 and AL depend only on the graph, whereas AF depends on the Potts model. We plot PR3 as a function of the reduced temperature $t$ for all the $q$s in Fig. 6, where we see that the height of the peak increases and its position moves closer to $t=0$ as $q$ is increased. The $q=10$ model, for which there is no conformal field theory at all, appears to have a peak very close to $t=0$. 
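For reference, the pure-gravity baseline of eq. 14 is straightforward to evaluate; a minimal sketch (Python; illustrative only):

```python
from math import factorial

def ring_prob(l):
    """P(l) for pure 2d quantum gravity phi-cubed graphs, eq. 14 (N -> infinity)."""
    if l < 3:
        return 0.0
    return 16.0 * (3.0 / 16.0) ** l * (l - 2) * factorial(2 * l - 2) \
           / (factorial(l) * factorial(l - 1))

print(ring_prob(3))                               # pure-gravity fraction of 3-rings, about 0.211
print(sum(ring_prob(l) for l in range(3, 120)))   # normalisation check, very close to 1
```

The $l=3$ value is the baseline from which the measured fractions of rings of length three are compared below.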
Recalling that the ring length on the phi-cubed graph is equivalent to the coordination number $q_{i}$ of the point $i$ at the center of the ring on the dual triangulation, and that the local curvature $R_{i}$ at this point $i$ is given as $R_{i}=\pi(6-q_{i})/q_{i}$, we see that as $q$ (and hence $c$) increases the number of points with maximal positive curvature (i.e. $q_{i}=3$ so $R_{i}=\pi$) increases. These results lend some credence to the suggestion that the failure of KPZ scaling for $1<c<25$ may be due to the liberation of curvature singularities at $c=1$ [19] 111We investigate the interesting question of whether there is a sudden increase in singularities as the central charge is increased through one with multiple Potts models in [24].. We have no explanation as to why the flip acceptance rates dip and the fraction of rings of length three peaks away from the phase transition point of the Potts model for $q=2,3,4$. We can only make one interesting observation: in simulations of the crumpling transition of dynamically triangulated random surfaces (DTRS) a dip in the flip acceptance rate is also found away from the transition at $\lambda<\lambda_{c}$ in the crumpled phase (which corresponds to $\beta<\beta_{c}$ here) [20]. We now show that the distribution of ring lengths in the graphs is determined by the central charge of the Potts model living on them. If we plot the difference in the fraction of rings of length three at the critical point $\beta_{c}$ of the Potts model from the pure gravity fraction (eq. 14 with $l=3$) against the central charge of the Potts model (which we know for $q=2,3,4$), we find a straight line with slope $0.010(1)$ which passes through the origin. This is shown in Fig. 7, along with some results from simulations of multiple Ising models [24]. These also lie on the line, although their central charge places them in the strong-coupling region of Liouville theory where the KPZ results break down. We can also plot a difference using the peak height in the fraction of rings of length three against the central charge to obtain another straight line with slope $0.015(1)$ (this is also shown in Fig. 7). However, the peak does not occur at $\beta_{c}$ so the correlation length is not infinite and we cannot expect the results of conformal field theory to apply there. Hence it appears that if we have a model whose central charge we do not know we can look up the value of PR3 at the phase transition (or its peak value) on the y-axis of Fig. 7 and read off its “effective central charge” from the x-axis. If this relation holds in general it would provide a viable method of obtaining $c$ for any model coupled to quantum gravity either on a random graph or on a DTRS. Interestingly the $q=10$ Potts model still lies on the line despite the absence of a corresponding conformal field theory, giving an “effective central charge” consistent with $1$. To complete our discussion of the graph properties, we look at their internal fractal dimension $d$. We use the most naive definition of distance (the fewest links between two nodes) so we are considering the “mathematical geometry” rather than the “physical geometry” in the terminology of [21]. The values obtained for graphs with $5000$ nodes are listed in the last column of Table 5. These were measured at the critical point of the Potts model but the same results (within statistical error bars) were obtained for all $\beta$. Moreover the same value of $d$ (within errors) was obtained for each $q$. 
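Measuring $d$ in this mathematical geometry only requires breadth-first-search link distances. A sketch of the shell-volume fit (Python; illustrative only, with an arbitrary fit window, not the code used to produce Table 5):

```python
import numpy as np
from collections import deque

def volume_profile(adj, source):
    """V(r) = number of nodes within link-distance r of `source` (BFS distances)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in dist:
                dist[j] = dist[i] + 1
                queue.append(j)
    return np.cumsum(np.bincount(list(dist.values())))     # V(0), V(1), ...

def fractal_dimension(adj, sources, r_lo=2, r_hi=8):
    """Fit log V = a + d*log r over [r_lo, r_hi) after averaging over sources;
    adding a quadratic term in log r, as discussed below, is a one-line change."""
    profiles = [volume_profile(adj, s) for s in sources]
    rmax = min(len(v) for v in profiles)
    v_mean = np.mean([v[:rmax] for v in profiles], axis=0)
    r = np.arange(r_lo, min(r_hi, rmax))
    d, _ = np.polyfit(np.log(r), np.log(v_mean[r]), 1)
    return d
```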
The values do, however, depend on $N$ and if we extrapolate to $N=\infty$ for the Ising model we obtain $d=2.78(4)$. On very large graphs of around 100,000 nodes with no matter (pure two-dimensional quantum gravity) Agishtein and Migdal [22] found that the relation between the area $V(r)$ and radius $r$ of a circle using the mathematical geometry was of the form $$\log V=a+b\log(r)+c(\log(r))^{2}$$ (15) so there was no fractal dimension at all. It is possible that a similar effect may be found when simulations such as ours incorporating matter are carried out on graphs some orders of magnitude bigger 222It is a much more demanding problem to generate huge graphs with matter as it is no longer possible to use graph enumeration formulae to generate them recursively as was done for pure quantum gravity.. If this is the case it is difficult to understand how critical behavior, which assumes some kind of scaling and hence fractal dimension, can appear at all. Nonetheless, the exact solution of the Ising model, the KPZ results and the simulations in this and other papers appear to show that phase transitions are taking place and critical exponents can be defined. A possible resolution of this problem is that the spin degrees of freedom are actually sensitive to the physical geometry which is less singular than the mathematical geometry. A sign that this is indeed the case may be found in the fact that Agishtein and Migdal obtain a value of $2.7$ for the internal fractal dimension of their pure quantum gravity graphs using the physical geometry [21], which is surprisingly close to our $2.78(4)$. Ignoring these qualms about the existence of the fractal dimension, we resume our discussion of the $\nu$ values obtained above from the correlation length. Taking our value of $d=2.78$ we estimate $\nu d=2.4,2.3,1.8$, whereas KPZ scaling predicts $3,2.5,2$ (for $q=2,3,4$ respectively). There is obviously some discrepancy but the numbers are fairly close, implying that our analysis is consistent. Finally, we briefly discuss our results for the $q=10$ Potts model. On a fixed graph the $q>4$ Potts models display first order transitions, so there is no corresponding conformal field theory. The fact that the $q=4$ Potts model lies at the boundary of the region where the KPZ formula applies ($c=1$) suggests that something similar might happen when $q>4$ Potts models are coupled to quantum gravity. By examining the behavior of Binder’s cumulant [18] it is clear that the $q=10$ Potts model retains its first order phase transition on a dynamical graph. As shown in Fig. 8, $U_{N}$ has a minimum (the position of which $\rightarrow\beta_{c}$ as $N\rightarrow\infty$) as expected for a first order transition, and tends to $1/2$ (rather than to $0$ which is the case for a second order transition) as $\beta\rightarrow 0$. From finite-size scaling we expect that the peaks in the specific heat and susceptibility grow as $L^{d}$, i.e. $N$, for first order phase transitions. If we fit $\max(\chi)$ versus $N$ then we obtain an exponent $0.93(1)$. (We can also fit to $\max(C)$ but as before the adjustable constant $B$ renders the fit insignificant.) 4 Conclusions To summarize, we have verified numerically that the critical exponents for the $q=2,3,4$ Potts models on dynamical phi-cubed graphs (i.e. coupled to two-dimensional quantum gravity) are in reasonable agreement with those predicted by KPZ, and that the $q=10$ Potts model appears to have a first order transition. 
We have also found some interesting behavior in the graphs themselves, namely that there is a peak in the ring (curvature) distribution which approaches $\beta_{c}$ from below as $c\rightarrow 1$, and that there is a linear relation between the probability of rings of length three and $c$. From the algorithmic point of view, our measurement of the dynamical critical exponent $z$ reveals that the Wolff algorithm is effective in alleviating critical slowing down for Potts models on dynamical graphs as well as on fixed lattices. In companion papers we explore the fractal properties of the spin-clusters that we use in our Wolff algorithm [23], comparing them with those on fixed graphs, and use multiple copies of the Potts models to explore the internal geometry of the graph in the strong coupling region [24]. Acknowledgements This work was supported in part by NATO collaborative research grant CRG910091. CFB is supported by DOE under contract DE-AC02-86ER40253 and by AFOSR Grant AFOSR-89-0422. We would like to thank M.E. Agishtein and A.A. Migdal for providing us with initial graphs generated by their two-dimensional quantum gravity code, and S. Catterall and A. Krzywicki for useful discussions. References [1] V.G. Knizhnik, A.M. Polyakov and A.B. Zamolodchikov, Mod. Phys. Lett. A3 (1988) 819. [2] F. David, Mod. Phys. Lett. A3 (1988) 1651 J. Distler and H. Kawai, Nucl. Phys. B321 (1989) 509. [3] C. Itzykson and J-M. Drouffe, “Statistical Field Theory”, Cambridge University Press 1989. [4] P. Ginsparg, “Applied Conformal Field Theory”, lectures at Les Houches June-August 1988. [5] V.A. Kazakov, Phys. Lett. A119 (1986) 140. [6] D.V. Boulatov and V.A. Kazakov, Phys. Lett. B186 (1987) 379. [7] E. Brezin, C. Itzykson, G. Parisi and J.B. Zuber, Commun. Math. Phys. 59 (1978) 35 M.L. Mehta, Commun. Math. Phys. 79 (1981) 327. [8] Z. Burda and J. Jurkiewicz, Acta Physica Polonica B20 (1989) 949. [9] V.A. Kazakov, Nucl. Phys. B ( Proc. Suppl.) 4 (1988) 93. [10] J. Jurkiewicz, A. Krzywicki, B. Petersson and B. Soderberg, Phys. Lett. B213 (1988) 511. [11] R. Ben-Av, J. Kinar and S. Solomon, Nucl. Phys. B ( Proc. Suppl.) 20 (1991) 711. [12] S.M. Catterall, J.B. Kogut and R.L. Renken, “Scaling Behaviour of the Ising Model Coupled to Two-Dimensional Gravity” Illinois preprint ILL-(TH)-91-19 (1991). [13] C.F. Baillie, Int. J. of Mod. Phys. C 1 (1990) 91. [14] U. Wolff, Phys. Rev. Lett. 62 (1989) 361. [15] R.H. Swendsen and J.-S. Wang, Phys. Rev. Lett. 58 (1987) 86. [16] C.F. Baillie and P.D. Coddington, Phys. Rev. B43 (1991) 10617. [17] D.V. Boulatov, V.A. Kazakov, I.K. Kostov and A.A. Migdal, Nucl. Phys. B275 (1986) 641. [18] K. Binder, Z. Phys. B43 (1981) 119 M.S.S. Challa, D.P. Landau and K. Binder, Phys. Rev. B34 (1986) 1841. [19] M.E. Cates, Europhys. Lett. 8 (1988) 719 A. Krzywicki, Phys. Rev. D41 (1990) 3086 F. David, “What is the Intrinsic Geometry of Two Dimensional Quantum Gravity?”, Rutgers preprint RU-91-25 (May 1991). [20] C.F. Baillie, D.A. Johnston and R.D. Williams, Nucl. Phys. B335 (1990) 469. [21] M.E. Agishtein and A.A. Migdal, “Simulations of Four Dimensional Simplicial Quantum Gravity”, Princeton preprint PUPT-1287 (October 1991). [22] M.E. Agishtein and A.A. Migdal, Nucl. Phys. B350 (1991) 690. [23] C.F. Baillie and D.A. Johnston, in preparation. [24] C.F. Baillie and D.A. Johnston, in preparation. Figure Captions Fig. 1. Fit to maximum slope of derivative of Binder’s cumulant versus $N$ to extract $\nu d$. Fig. 2. 
Extrapolation of $\beta_{c}^{N}$ from Binder’s cumulant and specific heat to estimate $\beta_{c}^{\infty}$ for the Ising model. Fig. 3. Finite-size scaling plot of $M$ for (inverse temperature) $\beta<\beta_{c}$, with the expected asymptotic slope of (exponent) $\beta=0.5$ for all models shown as a line. Fig. 4. Finite-size scaling plot of $\chi$ for $\beta<\beta_{c}$, with the expected asymptotic slopes of $\gamma=2,1.5,1$ for the $q=2,3,4$ Potts models respectively shown as lines. Fig. 5. AF, AL and PR3 for the Ising model on a graph with $N=2000$; the y-scale applies to AF only, AL and PR3 have been scaled appropriately to fit on the plot; $\beta_{c}$ is indicated by a vertical line. Fig. 6. Probabilities of rings of length three PR3 as a function of the reduced temperature $t$ for all $q$. Fig. 7. Difference in PR3 at $\beta_{c}$ of the Potts model, and at its peak, from the pure quantum gravity value versus the central charge $c$, for multiple Ising models as well as for single Potts models. Fig. 8. Binder’s cumulant for the $q=10$ Potts model (error bars omitted for clarity); $\beta_{c}$ is indicated by a vertical line.
Nonuniqueness of infinity ground states Ryan Hynd111Department of math, Courant institute, partially supported by NSF grant DMS-1004733., Charles K. Smart222Department of math, MIT, partially supported by NSF grant DMS-1004594., Yifeng Yu333Department of math, UC Irvine, partially supported by NSF grant DMS-0901460 and NSF CAREER award DMS-1151919. Abstract In this paper, we construct a dumbbell domain for which the associated principle $\infty$-eigenvalue is not simple. This gives a negative answer to the outstanding problem posed in [2]. It remains a challenge to determine whether simplicity holds for convex domains. 1 Introduction Let $\Omega$ be a bounded open set in ${\mathbb{R}}^{n}$. According to Juutinen-Lindqvist-Manfredi [2], a continuous function $u\in C(\bar{\Omega})$ is said to be an infinity ground state in $\Omega$ if it is a positive viscosity solution of the following equation: $${}\begin{cases}\max\left\{\lambda_{\infty}-{|Du|\over u},\ \Delta_{\infty}u% \right\}=0&\text{in $\Omega$}\\ u=0&\text{on $\partial\Omega$.}\end{cases}$$ (1.1) Here $$\lambda_{\infty}=\lambda_{\infty}(\Omega)={1\over\max_{\Omega}d(x,\partial% \Omega)}$$ is the principle $\infty$-eigenvalue, and $\Delta_{\infty}$ is the infinity Laplacian operator, i.e, $$\Delta_{\infty}u=u_{x_{i}}u_{x_{j}}u_{x_{i}x_{j}}.$$ The above equation is the limit as $p\to+\infty$ of the equation $${}\begin{cases}-{\mathrm{div}}(|Du|^{p-2}Du)=\lambda_{p}^{p}|u|^{p-2}u&\text{% in $\Omega$}\\ u=0&\text{on $\partial\Omega$},\end{cases}$$ (1.2) which is the Euler-Lagrange equation of the nonlinear Rayleigh quotient $${\int_{\Omega}|Du|^{p}\,dx\over\int_{\Omega}|u|^{p}\,dx}.$$ Precisely speaking, let $u_{p}$ be a positive solution of equation (1.2) satisfying $$\int_{\Omega}u^{p}_{p}\,dx=1.$$ If $u_{\infty}$ is a limiting point of $\{u_{p}\}$, i.e, there exists a subsequence $p_{j}\to+\infty$ such that $$u_{p_{j}}\to u_{\infty}\quad\text{uniformly in $\bar{\Omega}$},$$ it was proved in [2] that $u_{\infty}$ is a viscosity solution of the equation (1.1) and $$\lim_{p\to+\infty}\lambda_{p}=\lambda_{\infty}.$$ We say that $u$ is a variational infinity ground state if it is a limiting point of $\{u_{p}\}$. A natural problem regarding equation (1.1) is to deduce whether or not infinity ground states in a given domain are unique up to a multiplicative factor; in this case, $\lambda_{\infty}$ is said to be simple. The simplicity of $\lambda_{\infty}$ has only been established for those domains where the distance function $d(x,\partial\Omega)$ is an infinity ground state ([5]). Such domains includes the ball, stadium, and torus. It has been a significant outstanding open problem to verify if simplicity holds in general domains or to exhibit an example for which simplicity fails. In this paper, we resolve this problem by constructing a planar domain where simplicity fails to hold. For $\delta\in(0,1)$, denote the dumbbell $$D_{0}=B_{1}(\pm 5e_{1})\cup R$$ for $R=(-5,5)\times(-\delta,\delta)$ and $e_{1}=(1,0)$. Throughout this paper, $B_{r}(x)$ represents the open ball centered at $x$ with radius $r$. The following is our main result. Theorem 1.1 There exists $\delta_{0}>0$ such that when $\delta\leq\delta_{0}$, the dumbbell $D_{0}$ possesses an infinity ground state $u_{\infty}$ which satisfies $u_{\infty}(5,0)=1$ and $u_{\infty}(-5,0)\leq{1\over 2}$. In particular, $u$ is not a variational ground state and $\lambda_{\infty}(D_{0})$ is not simple. 
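Note that $\lambda_{\infty}(D_{0})=1$ for every $\delta\in(0,1)$, since $\max_{D_{0}}d(x,\partial D_{0})=1$ is attained at the ball centres $(\pm 5,0)$. This is easy to confirm on a grid; a minimal sketch (Python, using SciPy's Euclidean distance transform as a stand-in for the exact distance function; illustrative only):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def lambda_inf_dumbbell(delta, h=0.005):
    """Grid estimate of lambda_inf(D_0) = 1 / max_x d(x, boundary) for the
    dumbbell D_0 = B_1(-5 e1) U B_1(5 e1) U (-5,5) x (-delta, delta)."""
    x = np.arange(-6.5, 6.5, h)
    y = np.arange(-1.5, 1.5, h)
    X, Y = np.meshgrid(x, y, indexing="ij")
    inside = ((X - 5.0) ** 2 + Y ** 2 < 1.0) | ((X + 5.0) ** 2 + Y ** 2 < 1.0) \
             | ((np.abs(X) < 5.0) & (np.abs(Y) < delta))
    dist = distance_transform_edt(inside, sampling=h)  # distance to nearest outside cell
    return 1.0 / dist.max()

print(lambda_inf_dumbbell(0.05))   # approximately 1, up to the grid resolution
```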
We remark that the infinity ground state described in the theorem is nonvariational simply because it is not symmetric with respect to the $x_{2}$-axis, which variational ground states can be shown to be. This immediately follows from the fact that $\lambda_{p}$ is simple, which implies that any solution $u_{p}$ of (1.2) on $\Omega=D_{0}$ must be symmetric with respect to the $x_{2}$-axis. We also remark that the number “${1\over 2}$” in the above theorem is not special. By choosing a suitable $\delta_{0}$, we can in fact make $u_{\infty}(-5,0)$ less than any positive number. For the reader’s convenience, we sketch the idea of the proof. Consider the union of two disjoint balls with distinct radii, $U_{\epsilon}=B_{1}(5e_{1})\cup B_{1-\epsilon}(-5e_{1})$ for $\epsilon\in(0,1)$. If $u$ is an infinity ground state of $U_{\epsilon}$, the uniqueness of $\lambda_{\infty}$ ([2]) immediately implies that $u\equiv 0$ in $B_{1-\epsilon}(-5e_{1})$. A similar conclusion also holds for the principle eigenfunction of $\Delta_{p}$. It is therefore natural to expect that such a degeneracy of $u$ on the smaller ball may change very little if we add a narrow tube connecting these two balls. The key is to get uniform control of the width of the tube as $\epsilon\to 0$ for variational infinity ground states in an asymmetric perturbation $D_{\epsilon}$ of $D_{0}$; this is proved in Lemma 2.3. Lemma 2.3 also reveals the sensitivity of the principle eigenfunctions of $\Delta_{p}$ to the domain when $p$ gets large. An important step is to show that, within the narrow tube, the $L^{p}$ norm of the principle eigenfunction of $\Delta_{p}$ is uniformly controlled by its maximum norm (Lemma 2.2). We would like to point out that such a procedure as described above does not work for finite $p$. 2 Proof We first prove several lemmas. Throughout this section, we write $e_{1}=(1,0)$ and $e_{2}=(0,1)$. The following estimate follows easily from comparison with the fundamental solution of the $p$-Laplacian, i.e. $|x|^{p-2\over p-1}$. Lemma 2.1 Let $R=(-1,1)\times(-\delta,\delta)$ for $\delta\in(0,{1\over 2})$. Assume that $\lambda\in(0,2)$ and $u\leq 1$ is a positive solution of $$\begin{cases}-\Delta_{p}u=-\mathrm{div}(|Du|^{p-2}Du)={\lambda}^{p}u^{p-1}&\text{in $R$}\\ u(t,\pm\delta)=0&\text{for $t\in[-1,1]$}.\end{cases}$$ (2.3) Then for $p\geq 7$ $$u(x)\leq 6|x\pm\delta e_{2}|^{p-2\over p-1}.$$ (2.4) Proof: Denote $w(x)=6|x-\delta e_{2}|^{\alpha}-{1\over 2}|x-\delta e_{2}|^{2\alpha}$ for $\alpha={p-2\over p-1}$. Note that if $w=f(u)$, then $$\Delta_{p}w=|f^{\prime}|^{p-2}f^{\prime}\Delta_{p}u+(p-1)|f^{\prime}|^{p-2}f^{\prime\prime}|Du|^{p}.$$ Since $\Delta_{p}|x|^{\alpha}=0$ and $|x-\delta e_{2}|<2$, a direct computation using the above formula shows that for $p\geq 7$, $$-\Delta_{p}w=(p-1)|x-\delta e_{2}|^{-p\over p-1}\alpha^{p}(6-|x-\delta e_{2}|^{\alpha})^{p-2}>{4^{p-3}\over 2}\geq 2^{p}\quad\text{in $R$}.$$ It is straightforward to check that $w>0$ in $R$ and $$w(\pm 1,x_{2})\geq 4\quad\text{for $|x_{2}|\leq{\delta}$}.$$ Hence $$u(x)\leq w(x)\quad\text{on $\partial R$}.$$ Combining this with $-\Delta_{p}u\leq 2^{p}$, (2.4) follows from the comparison principle. $\square$ The following estimate may not be optimal, but is sufficient for our purposes. Lemma 2.2 Let $R_{4}=(-4,4)\times(-\delta,\delta)$ for $\delta\in(0,{1\over 2})$.
Assume that $\lambda\leq 2$ and $u\leq 1$ is a positive solution of $$\begin{cases}-\Delta_{p}u=-\mathrm{div}(|Du|^{p-2}Du)=\lambda^{p}u^{p-1}&\text% {in $R_{4}$}\\ u(t,\pm\delta)=0&\text{for $t\in[-4,4]$}.\end{cases}$$ (2.5) Then, for $p\geq 7$ and $R_{1}=(-1,1)\times(-\delta,\delta)$, $$\int_{R_{1}}u|Du|^{p-1}\,dx+\int_{R_{1}}|Du|^{p}\,dx\leq C_{0}^{p}.$$ (2.6) Here $C_{0}>1$ is a universal constant (independent of $p$ and $\delta$). Proof: For $i=1,2,3,4$, we write $R_{i}=(-i,i)\times(-\delta,\delta)$. Throughout the proof, $C>1$ represents various numbers which are independent of $p$ and $\delta$. We first prove an estimate which is a slight modification of a well know result ([3],[4]). Suppose that $\xi\in C_{0}^{\infty}(R_{4})$ and $0\leq\xi\leq 1$. Multiplying $u^{1-p}\xi^{p}$ on both sides of (2.5) and using Hölder’s inequality, we get $$S\leq{p\over p-1}S^{1-{1\over p}}||D\xi||_{L^{p}(R_{2})}+{2^{p+1}},$$ where $S=\int_{R_{4}}|{Du\over u}|^{p}\xi^{p}\,dx$. If ${S\over 2}\geq{2^{p+1}}$, then $${S\over 2}\leq{p\over p-1}S^{1-{1\over p}}||D\xi||_{L^{p}(R_{2})}.$$ Since $({p\over p-1})^{p}\leq 4$, we have that $$S=\int_{R_{4}}\left|{Du\over u}\right|^{p}\xi^{p}\,dx\leq\max\left\{2^{p+2},\ % 4\cdot 2^{p}\int_{R_{4}}|D\xi|^{p}\,dx\right\}.$$ (2.7) Let $g_{1}(t)\in C_{0}^{\infty}(-4,4)$ satisfy $0\leq g_{1}\leq 1$, $|g_{1}^{{}^{\prime}}|\leq 2$ and $$g_{1}(t)=1\quad\text{for $t\in[-3,3]$}.$$ Also, for $m\in{\mathbb{N}}$, denote $\delta_{m}=\delta(1-{1\over m})$. Choose $h_{m}(t)\in C_{0}^{\infty}(-\delta,\delta)$ such that $0\leq h_{m}\leq 1$, $|h_{m}^{{}^{\prime}}|\leq{2m\over\delta}$ and $$h_{m}(t)=1\quad\text{for $t\in[-\delta_{m},\delta_{m}$]}.$$ For $x=(x_{1},x_{2})$, let $\xi_{m}(x_{1},x_{2})=g_{1}(x_{1})h_{m}(x_{2})$. Then $$|D\xi_{m+1}|^{p}\leq 2^{p}(2^{p}+|h_{m+1}^{{}^{\prime}}|^{p})$$ and $$4\cdot 2^{p}\cdot\int_{R_{4}}|D\xi_{m+1}|^{p}\,dx\leq 32\cdot 8^{p}+32\cdot 8^% {p}\cdot{\left({m+1\over\delta}\right)}^{p-1}\leq C^{p}\left({m\over\delta}% \right)^{p-1}.$$ Hence by (2.7) $$\int_{[-3,3]\times[-\delta_{m+1},\delta_{m+1}]}\left|{Du\over u}\right|^{p}\,% dx\leq C^{p}\left({m\over\delta}\right)^{p-1}.$$ Owing to Lemma 2.1 and translation, we have that for $x=(x_{1},x_{2})\in[-3,3]\times(-\delta,\delta)$ $$u(x_{1},x_{2})\leq 6\min\{(\delta-x_{2})^{p-2\over p-1},\ (\delta+x_{2})^{p-2% \over p-1}\}.$$ In particular, we have $$u(x_{1},x_{2})\leq 6\left({\delta\over m}\right)^{p-2\over p-1}\quad\text{in $% A_{m}$},$$ where $A_{m}=[-3,3]\times[\delta_{m},\delta_{m+1}]$. Hence $$\int_{A_{m}}|Du|^{p}\,dx\leq C^{p}\cdot\left({m\over\delta}\right)^{p-1}\left(% {\delta\over m}\right)^{p(p-2)\over p-1}\leq C^{p}\cdot\left({m\over\delta}% \right)^{1\over p-1};$$ again we emphasize $C$ is independent of $p$ and $\delta$. 
Accordingly, $$\int_{[-3,3]\times[0,{\delta}]}u^{2}|Du|^{p}\,dx=\sum_{m=1}^{\infty}\int_{A_{m% }}u^{2}|Du|^{p}\,dx\leq 36\cdot C^{p}\sum_{m=1}^{\infty}{1\over m^{3\over 2}}% \leq C^{p}.$$ Similarly, we can prove that $$\int_{[-3,3]\times[-\delta,0]}u^{2}|Du|^{p}\,dx\leq C^{p},$$ and therefore $$\int_{R_{3}}u^{2}|Du|^{p}\,dx\leq C^{p}.$$ Using Hölder’s inequality and the assumption that $u\leq 1$, we also have that $$\begin{array}[]{ll}\int_{R_{3}}u^{2}|Du|^{p-1}\,dx&\leq 6^{1\over p}\cdot{(% \int_{R_{3}}u^{2p\over p-1}|Du|^{p}\,dx)}^{p-1\over p}\\ &\leq 2\cdot{(\int_{R_{3}}u^{2}|Du|^{p}\,dx)}^{p-1\over p}\\ &\leq C^{p}.\end{array}$$ Choose $g_{2}(t)\in C_{0}^{\infty}(-3,3)$ such that $0\leq g_{2}\leq 1$, $|g_{2}^{{}^{\prime}}|\leq 2$ and $$g_{2}(t)=1\quad\text{for $t\in[-2,2]$}.$$ Multiplying $w(x)=u^{2}\cdot g_{2}(x_{1})$ on both sides of (2.5) leads to $$\int_{R_{2}}u|Du|^{p}\,dx\leq pC^{p}.$$ Again, by Hölder’s inequality, we have that $$\int_{R_{2}}u|Du|^{p-1}\,dx\leq pC^{p}.$$ Finally, select $g_{3}(t)\in C_{0}^{\infty}(-2,2)$ satisfying $0\leq g_{3}\leq 1$, $|g_{3}^{{}^{\prime}}|\leq 2$ and $$g_{3}(t)=1\quad\text{for $t\in[-1,1]$}.$$ Multplying $w(x)=u\cdot g_{3}(x_{1})$ on both sides of (2.5) leads to $$\int_{R_{1}}|Du|^{p}\,dx\leq p^{2}C^{p}.$$ Since $3^{p}>p^{2}$, we have that $$\int_{R_{2}}u|Du|^{p-1}\,dx+\int_{R_{1}}|Du|^{p}\,dx\leq 2p^{2}C^{p}\leq(6C)^{% p}=C_{0}^{p}.$$ Consequently, (2.6) holds, as desired. $\square$ Now let $$\delta_{0}={1\over 16C_{0}}<{1\over 16}.$$ Here $C_{0}>1$ is the same number in Lemma 2.2. For $\epsilon\in(0,{1\over 2})$, write $$D_{\epsilon}=B_{1-\epsilon}(-5e_{1})\cup R\cup B_{1}(5e_{1})$$ and $R=(-5,5)\times(-\delta,\delta)$. Note that $D_{\epsilon}$ is not symmetric with respect to the $x_{2}$ axis and $\max_{D_{\epsilon}}d(x,\partial D_{\epsilon})=1$. The following lemma says that the principle eigenfunction of p-Laplacian, although unique up to multiplicative factor, is actually very sensitive to the domain when $p$ gets large. Lemma 2.3 Assume $0<\epsilon<{1\over 2}$. If $\delta\leq\delta_{0}$ and $u_{\infty}$ is a variational infinity ground state of $D_{\epsilon}$ satisfying $u_{\infty}(5,0)=1$, then $$u_{\infty}(-5,0)<{1\over 2}.$$ Note that $\delta_{0}$ is independent of $\epsilon$. Proof: We argue by contradiction and assume that $u_{\infty}(-5,0)\geq{1\over 2}$. Now fix $\delta$ and $\epsilon$. Since $\max_{D_{\epsilon}}u_{\infty}=u_{\infty}(5,0)=1$, $u_{\infty}(x)\leq d(x,\partial D_{\epsilon})$([2]). Hence $$u_{\infty}\leq\delta\leq\delta_{0}\quad\text{in $[-4,4]\times[-\delta,\delta]$}.$$ For $p>2$, let $u_{p}$ be the principle eigenfunction of $\Delta_{p}$ in $D_{\epsilon}$ satisfying $\max_{D_{\epsilon}}u_{p}=1$ and $$-\Delta_{p}u_{p}=-\mathrm{div}(|Du_{p}|^{p-2}Du_{p})=\lambda_{\epsilon,p}^{p}u% _{p}^{p-1}\quad\text{in $D_{\epsilon}$}.$$ (2.8) Here $\lambda_{\epsilon,p}$ is the principle eigenvalue of $\Delta_{p}$ associated with $D_{\epsilon}$. Passing to a subsequence if necesary, we may assume that $$\lim_{p\to+\infty}u_{p}=u_{\infty}\quad\text{uniformly in $D_{\epsilon}$}.$$ Hence, when $p$ is large enough, $$u_{p}\leq 2\delta_{0}\quad\text{in $[-4,4]\times[-\delta,\delta]$}.$$ (2.9) Since $\lim_{p\to+\infty}\lambda_{\epsilon,p}=\lambda_{\epsilon,\infty}=1$, we may assume that $\lambda_{\epsilon,p}\leq 2$. 
Now, define $g(t)$ by $$\begin{cases}g(t)=1&\text{for $t\leq-1$}\\ g(t)={1\over 2}(1-t)&\text{for $-1\leq t\leq 1$}\\ g(t)=0&\text{for $t\geq 1$}.\end{cases}$$ Let $$w(x)=u_{p}\cdot g(x_{1}).$$ and, for $\tilde{R}=(-5,4)\times(-\delta,\delta)$, let $$\Omega_{\epsilon}=B_{1-\epsilon}(-5e_{1})\cup\tilde{R}.$$ $\Omega_{\epsilon}$$(-5,0)$$1-\epsilon$ Note that $\{w\neq 0\}\subset\Omega_{\epsilon}$ and therefore $$\Lambda_{\epsilon,p}^{p}\leq{\int_{\Omega_{\epsilon}}|Dw|^{p}\,dx\over\int_{% \Omega_{\epsilon}}|w|^{p}\,dx}={\int_{D_{\epsilon}}|Dw|^{p}\,dx\over\int_{D_{% \epsilon}}|w|^{p}\,dx},$$ (2.10) where $\Lambda_{\epsilon,p}$ is the principle eigenvalue of $\Delta_{p}$ associated with $\Omega_{\epsilon}$. Since $u_{p}$ is uniformly Hölder continuous and $\lim_{p\to+\infty}u_{p}(-5e_{1})=u_{\infty}(-5e_{1})$, there exists $\tau\in(0,1)$ such that $$u_{p}(x)\geq{1\over 3}\quad\text{in $B_{\tau}(-5e_{1})$},$$ (2.11) for sufficiently large $p$. To simplify notation, we now drop the $p$ dependence and write $u_{p}=u$. Multiplying $ug^{p}(x_{1})$ on both sides of (2.8), we have that $${\int_{D_{\epsilon}}|Du|^{p}g^{p}\,dx\over\int_{D_{\epsilon}}|w|^{p}\,dx}\leq% \lambda_{\epsilon,p}^{p}+{p\int_{[-1,1]\times[-\delta,\delta]}u|Du|^{p-1}\,dx% \over\int_{D_{\epsilon}}|w|^{p}}.$$ Due to Lemma 2.2 and (2.9) $$\int_{[-1,1]\times[-\delta,\delta]}u|Du|^{p-1}\,dx\leq(2\delta_{0}C_{0})^{p}<{% 1\over 4^{p}}.$$ Therefore owing to (2.11), $${p\int_{[-1,1]\times[-\delta,\delta]}u|Du|^{p-1}\,dx\over\int_{D_{\epsilon}}|w% |^{p}}\leq\left({3\over 4}\right)^{p}{p\over\pi\tau^{2}}.$$ Since $Dw=gDu+uDg$ and $(a+b)^{p}\leq 2^{p}(a^{p}+b^{p})$, we have that $$\begin{array}[]{ll}\int_{D_{\epsilon}}|Dw|^{p}\,dx&\leq\int_{D_{\epsilon}}|Du|% ^{p}g^{p}\,dx+2^{p}\int_{[-1,1]\times[-\delta,\delta]}(|Du|^{p}g^{p}+{u^{p}% \over 2^{p}})\,dx\\ &\leq\int_{D_{\epsilon}}|Du|^{p}g^{p}\,dx+(\delta_{0}4C_{0})^{p}+(2\delta_{0})% ^{p}\\ &\leq\int_{D_{\epsilon}}|Du|^{p}g^{p}\,dx+2\cdot{1\over 4^{p}}.\end{array}$$ The first inequality is also due to the fact that $$Dw=gDu\quad\text{in $D_{\epsilon}\backslash[-1,1]\times[-\delta,\delta]$ }.$$ Therefore by (2.11) when $p$ is large enough $${\int_{D_{\epsilon}}|Dw|^{p}\,dx\over\int_{D_{\epsilon}}|w|^{p}\,dx}\leq% \lambda_{\epsilon,p}^{p}+3\cdot\left({3\over 4}\right)^{p}{p\over\pi\tau^{2}}% \leq\lambda_{\epsilon,p}^{p}+1.$$ (2.12) Since $\max_{D_{\epsilon}}d(x,\partial D_{\epsilon})=1$ and $\max_{\Omega_{\epsilon}}d(x,\partial\Omega_{\epsilon})=1-\epsilon$, we have $\Lambda_{\epsilon,p}\to(1-\epsilon)^{-1}$ and $\lambda_{\epsilon,p}\to 1$ as $p\to\infty$. Thus, for sufficiently large $p$, we have $$\Lambda_{\epsilon,p}\geq{1\over 1-{1\over 2}\epsilon}\quad\mathrm{and}\quad% \lambda_{\epsilon,p}\leq{1\over 1-{1\over 4}\epsilon}.$$ Owing to (2.10) and (2.12), we have $$\left({2\over 2-\epsilon}\right)^{p}\leq\left({4\over 4-\epsilon}\right)^{p}+1,$$ for all large enough $p$. Since this is a contradiction, the lemma follows. $\square$ Proof of Theorem 1.1: For $\epsilon\in(0,{1\over 2})$, let $u_{\epsilon,\infty}$ be a variational infinity ground state of $D_{\epsilon}$ satisfying $u_{\epsilon,\infty}(5,0)=1$. Since $\Delta_{\infty}u_{\epsilon,\infty}\leq 0$, according to [1], the sequence $\{u_{\epsilon,\infty}\}_{\epsilon>0}$ is uniformly Lipschitz continuous within any compact subset of $D_{0}$ when $\epsilon$ is small. It is also controlled by $0\leq u_{\epsilon,\infty}\leq d(x,\partial D_{\epsilon})$ near the boundary. 
Upon a subsequence if necessary, we may assume that $$\lim_{\epsilon\to 0}u_{\epsilon,\infty}=u_{\infty}.$$ Then according to Lemma 2.3, $u_{\infty}$ is an infinity ground state of $D_{0}$ satisfying $$u_{\infty}(-5,0)\leq{1\over 2}\quad\mathrm{and}\quad u_{\infty}(5,0)=1.$$ As $u_{\infty}$ is not symmetric about the $x_{2}$-axis, it cannot be a variational infinity ground state associated to $D_{0}$. As there exists at least one variational ground state [2], it follows that $\lambda_{\infty}(D_{0})$ is not simple. References [1] M. G. Crandall, L. C. Evans, R. F. Gariepy, Optimal Lipschitz extensions and the infinity Laplacian, Calc. Var. Partial Differential Equations 13 (2001), no. 2, 123-139. [2] P. Juutinen, P. Lindqvist, J. Manfredi, The $\infty$-eigenvalue problem, Arch. Ration. Mech. Anal. 148 (1999), no. 2, 89–105. [3] P. Lindqvist, On the definition and properties of p-superharmonic functions, J. Reine angew. Math. 365 (1986), 67-79. [4] P. Lindqvist, J. Manfredi, The Harnack inequality for $\infty$-harmonic functions, Electron. J. Differential Equations 1996, No. 04, approx. 5 pp. [5] Y. Yu, Some properties of the infinity ground state, Indiana University Mathematics Journal. 56 No. 2 (2007), 947-964.
On the use of the first-order moment approach for measurements of $H_{\rm eff}$ from LSD profiles J.C. Ramírez Vélez${}^{1}$ ${}^{1}$Instituto de Astronomía - Universidad Nacional Autónoma de México, APO. Postal 877, 22860, Ensenada B.C., Mexico E-mail: jramirez@astro.unam.mx (Accepted XXX. Received TY; in original form ZZZ) Abstract The vast majority of the reported measurements of stellar magnetic fields that have analysed spectropolarimetric data have employed the least-squares deconvolution method (LSD) and the first-order moment approach. We present a series of numerical tests in which we review some important aspects of this technique. First, we show that the accuracy of the magnetic measurements is independent of the selection of the profile width, i.e. the integration range in the first-order moment equation, meaning that for any arbitrary profile width it is always possible to properly determine the longitudinal magnetic field. We also study the interplay between the line depth limit adopted in the line mask and the normalisation values of the LSD profiles. We finally show that the rotation of the stars has to be considered to correctly infer the intensity of the magnetic field, something that has been neglected up to now. We show that the latter consideration is crucial: our test shows that the magnetic intensities differ by a factor close to 3 for a moderately fast rotating star with $vsini$ of 50 ${\rm km\,s^{-1}}$. Therefore, it is expected that in general the stellar magnetic fields reported for fast rotators are stronger than previously believed. All the previous results show that the first-order moment can be a very robust tool for measurements of magnetic fields, provided that the weak magnetic field approximation holds. We also show that when the weak magnetic field regime breaks down, the use of the first-order moment method becomes uncertain. keywords: Stars : magnetic field – Technique: spectroscopic and polarimetric – Method : numerical. 1 Introduction In the context of data analysis of circular polarisation in spectral lines, the development of the Centre-of-Gravity technique (CoG) was initially motivated by the need for a method to measure the magnetic field of spatially resolved structures present in the solar photosphere (for example sunspots), without recourse to detailed theoretical modeling of circular polarisation in line profiles (Semel, 1967). This approach establishes a linear relation between the component of the magnetic field vector projected along the line-of-sight ($B_{\rm LOS}$) and the relative shift between the centres of gravity of the left and right components of the observed circular polarisation: $$\lambda_{+}-\lambda_{-}\,=\,2{\bar{g}}\Delta\lambda_{B_{LOS}},$$ (1) where ${\bar{g}}$ is the Landé factor of the transition line, $\Delta\lambda_{B_{LOS}}$ is the wavelength shift due to the Zeeman splitting, and the centres of gravity for the left and right polarisation are respectively defined as (Rees & Semel, 1979): $$\lambda_{\pm}=\int_{-\infty}^{\infty}\left(I_{c}-(I\pm V)\right)\lambda d\lambda\,\,/\,\,\int_{-\infty}^{\infty}\left(I_{c}-(I\pm V)\right)d\lambda,$$ (2) where $I$ and $V$ are the intensity and circular Stokes parameters, and $I_{c}$ is the (assumed unpolarised) continuum.
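For sampled profiles the centres of gravity in Eq. (2) are simple ratios of numerical integrals, and in the weak-field limit, where $V$ is proportional to $dI/d\lambda$, the recovered separation reproduces Eq. (1). A self-contained sketch (Python; the Gaussian line and the values of $\bar{g}$ and $\Delta\lambda_{B}$ below are arbitrary illustrative choices):

```python
import numpy as np

def cog_shift(wl, I, V, Ic=1.0):
    """lambda_+ minus lambda_- from Eq. (2), using trapezoidal integration."""
    lam_plus = np.trapz((Ic - (I + V)) * wl, wl) / np.trapz(Ic - (I + V), wl)
    lam_minus = np.trapz((Ic - (I - V)) * wl, wl) / np.trapz(Ic - (I - V), wl)
    return lam_plus - lam_minus

wl = np.linspace(-1.0, 1.0, 2001)                  # wavelength offset, arbitrary units
I = 1.0 - 0.5 * np.exp(-(wl / 0.2) ** 2)           # synthetic Gaussian absorption line
gbar, dlam_B = 1.2, 1.0e-3                         # assumed Lande factor and Zeeman shift
V = -gbar * dlam_B * np.gradient(I, wl)            # weak-field circular polarisation
print(cog_shift(wl, I, V), 2.0 * gbar * dlam_B)    # the two numbers nearly coincide
```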
While the integration limits in the previous definition go from $-\infty$ to $\infty$, in practice the integration spans only around the (full) width of the line profiles; therefore, the selection of the integration range (which can be subjective) has an important impact on the accuracy of the magnetic field measurements. Since Eq. (2) corresponds to the first-order moment in $\lambda$, the CoG method is also known as the integral method for measurements of magnetic fields or simply as the first-order moment approach (e.g. Mathys, 1989). Proven to be very useful, the CoG method was also applied in the stellar domain (e.g. Mathys, 1991) to measure the mean longitudinal magnetic field, integrated over the visible hemisphere of the star, also referred to as the effective magnetic field ($H_{\rm eff}$). The CoG method was initially applied using the so-called photographic technique; however, with the development of new instrumentation (CCDs and high-resolution spectrographs with better throughputs), the helpful information contained in the shape of the line profiles became available. Nowadays it is possible to simultaneously obtain a huge number of lines in spectropolarimetric mode with very high resolution. The use of mean polarised profiles resulting from the addition of multiple individual lines in combination with the CoG method to infer $H_{\rm eff}$, made possible through the use of the LSD technique (Donati et al., 1997), was a benchmark in studies related to the stellar magnetism domain. By adding hundreds to thousands of individual lines, the signal-to-noise ratio of the mean circular polarised profile is increased by several orders of magnitude, allowing the detection of extremely weak stellar magnetic fields with intensities of the order of a few Gauss (e.g. Marsden et al., 2014). The use of LSD led to very interesting results in many types of stars and it also gave the opportunity to shape our current knowledge of stellar magnetism by observational methods using multi-line spectropolarimetric data analysis (see e.g. Donati & Landstreet, 2009). For the addition of lines it is convenient to apply a variable transformation from wavelength to Doppler velocity coordinates ($v$) (Semel, 1995), such that the longitudinal stellar field would be given by (Mathys, 1989; Donati et al., 1997): $$H_{\rm eff}=\frac{-7.145\times 10^{5}}{\lambda_{0}g_{0}}\frac{\int v\,\,\frac{V(v)}{I_{c}(v)}\,\,dv}{\int(1-\frac{I(v)}{I_{c}(v)}\,)\,\,dv},$$ (3) where $H_{\rm eff}$ is expressed in G and $v$ in ${\rm km\,s^{-1}}$. If only weak and unblended lines are considered, $\lambda_{0}$ (expressed in nm) and $g_{0}$ would correspond respectively to the means of the wavelengths and Landé factors of the lines employed for the establishment of the mean profiles. In fact, the CoG and the first-order moment approaches are valid under the following assumptions (Mathys, 1989): 1) an atmospheric Milne-Eddington model, 2) a weak-line formation regime (that is, when the line profile is similar in shape to the absorption coefficient $\eta$, i.e. $\eta\ll 1$), and 3) only weak magnetic fields (i.e. where the Zeeman splitting is much lower than the natural width of the line). For the establishment of the mean profiles, if any of the 3 assumptions listed above is not fulfilled, or if blended lines are included, the value of $\lambda_{0}\,g_{0}$ has to be found by independent calibration methods.
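Numerically, Eq. (3) is evaluated on the discretely sampled mean profiles; the integration range is simply the velocity span handed to the integrals, and the only calibration input is the product $\lambda_{0}\,g_{0}$. A minimal sketch (Python; function and variable names are illustrative):

```python
import numpy as np

def h_eff_first_moment(v, I_over_Ic, V_over_Ic, lambda0_g0):
    """Eq. (3): first-order moment estimate of H_eff (in G) from the mean Stokes
    I and V profiles; v in km/s, lambda0_g0 = lambda_0 * g_0 in nm."""
    numerator = np.trapz(v * V_over_Ic, v)
    denominator = np.trapz(1.0 - I_over_Ic, v)
    return -7.145e5 / lambda0_g0 * numerator / denominator

def h_eff_with_width(v, I_over_Ic, V_over_Ic, lambda0_g0, half_width):
    """Same estimate restricted to a symmetric sub-range [-half_width, half_width]."""
    sel = np.abs(v) <= half_width
    return h_eff_first_moment(v[sel], I_over_Ic[sel], V_over_Ic[sel], lambda0_g0)
```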
This important statement will be in fact the subject of this paper, namely, we estimate $H_{\rm eff}$ through the first-order moment expressed in Eq. (3), and we inspect different criteria used during the establishment of the mean profiles. Example criteria include the line depth limit, the normalisation of the mean profiles and the integration limits. We also investigate the role played by projected rotational velocity of the stars ($vsini$) in the accuracy of the measurements of $H_{\rm eff}$. Finally, all the results are discussed beyond the context of the weak field regime. 2 Numerical tests The employment of the linear relation given by Eq. (3) requires a proper calibration regulated solely by the product of the normalisation parameters $g_{0}$ and $\lambda_{0}$. In this section we will present a series of tests in which we obtained an optimal calibration through the use of theoretical spectra. We will denote these values by ${\mathbf{\lambda_{0}}\,\mathbf{g_{0}}}$ to indicate that they were found by the methodology described below. We have used the cossam code (Stift, 2000) to synthesise a sample of 50 polarised spectra considering an oblique centred magnetic dipolar model (Stibbs, 1950; Stift, 1975). We have employed a solar atmospheric model: $T_{\rm eff}=5750$ K, [M/H]=0, log (g) = 4.5 ${\rm cm\,s^{-2}}$, and microturbulence of zero, covering a wavelength range from 365 to 1010 nm in steps of 1 ${\rm km\,s^{-1}}$. For our first test, we adopted a slow rotator model in which we assigned to $vsini$ a value of 5 ${\rm km\,s^{-1}}$. For the synthesis of each spectrum, we have randomly varied the inclination between the 3 principal axis of the reference system, namely, the rotation axis, the magnetic dipolar axis and the line-of-sight direction. Considering only as free parameter these 3 angles that determine the orientation of the system, and setting the magnetic dipolar moment to 30 G, we obtained that in the synthetic sample the $H_{\rm eff}$ varies between -20 and 20 G. For the establishment of the LSD profiles we have obtained from the vald database (Ryabchikova et al., 2015) the information required to create the mask, i.e. for each line we retrieved from vald the Lander factor, the line depth ($d$) and the wavelength. Using a line depth limit of 0.1 with respect the continuum as a threshold criteria, the total number of lines amounts to 8314. Of course, the same line list used in the mask was employed for the synthesis of the theoretical spectra in cossam. Since the synthetic sample of spectra is noiseless, we employed a cross-correlation between the mask and each spectrum to establish the sample of synthetic LSD profiles, and the mask weights that we assigned for the Stokes I and V parameters are those included in the original LSD-paper of Donati et al. (1997): $$w_{\textsc{i}\,i}=d_{i}\,;\\ w_{\textsc{v}\,i}=d_{i}\,\lambda_{i}\,{\bar{g}_{i}}\,,$$ (4) where the index $i$ runs over the total number of lines. The spectral resolution at which the theoretical spectra was synthesised (in wavelength steps of 1 ${\rm km\,s^{-1}}$), has to be comparable to the instrumental one. Current observing facilities in spectropolarimetric mode have resolving powers (R) between 55,000 (caos) to 115,000 (harps). We thus decided to use the intermediate resolution of R=65,000 that corresponds to the twin spectrographs espadons and narval (and similar to the one in boes, R=60,000). 
In consequence, we have decreased the resolution in the synthetic spectra to constant wavelength steps of 1.8 ${\rm km\,s^{-1}}$ to be consistent with the adopted resolution of these two spectrographs, reducing the total number of lines to 8088. Finally, to account for the instrumental broadening we convolved the spectra with a Gaussian kernel in which we considered a standard deviation in the Gaussian profile of 4.4 ${\rm km\,s^{-1}}$. In Fig. 1 we show some examples of the LSD profiles: one for the Stokes I (upper panel) and two for Stokes V in which the respective input magnetic models are such that the $H_{\rm eff}$ are -3.0 G (middle panel) and 8.5 G (lower panel). Even in this ideal case where no noise was added, by visual inspection it is not clear if the width of the two circular polarised profiles is the same. In other words, regarding the shown V profiles, we must consider whether to use the same width in both cases, and if yes, how to find it. Before tackling this issue in detail, let us note that many studies have employed a visual inspection to determine the width of the observed profiles (e.g. Wade et al., 2000; Silvester et al., 2009), and also that very early on Mathys (1988) was remarked the importance of a proper determination of the line width in an analysis under a $n$-order moment approach even in the case of one single line. 2.1 Profile width We thus proceed to inspect how the considered integration limits in the doppler space –i.e. the LSD profiles width– can affect the inference of $H_{\rm eff}$ when using the first-order moment technique. For this purpose, we have varied the width of the profiles from 7.2 to 97.2 ${\rm km\,s^{-1}}$ around the line centre, in steps of 3.6 ${\rm km\,s^{-1}}$. The adopted width variation at each step corresponds to considering one more point at each side of the line profile, i.e., the minimal possible difference when the profile width is varied symmetrically around the centre. For each considered profile width, we performed a linear regression over the 50 synthetic spectra to obtain the optimal value of ${\mathbf{\lambda_{0}}\,\mathbf{g_{0}}}$ that gives the best results to determine $H_{\rm eff}$ through Eq. (3). Note that it is not possible to obtain separately the values ${\mathbf{\lambda_{0}}}$ and ${\mathbf{g_{0}}}$, but only their product. The results are shown in Fig. 2. In the upper panel we show the Mean Absolute Percental Error (MAPE) obtained by comparing the original values of $H_{\rm eff}$ and the ones derived by the regressions111$MAPE(\%)=\left|\frac{H_{\rm eff}^{original}-\,H_{\rm eff}^{regression}}{H_{\rm eff% }^{original}}\right|$, while the lower panel shows the respective fitted values of ${\mathbf{\lambda_{0}}}\,{\mathbf{g_{0}}}$, both as function of the integration limits (width profiles). The first remark of this test is that for all considered profile widths, it is always possible to fit a value of ${\mathbf{\lambda_{0}}}\,{\mathbf{g_{0}}}$ that allows us to infer very accurately $H_{\rm eff}$: the MAPE remains inferior to 1.0% in the big majority of the cases. This result is quite unexpected since even underestimating the width of the profiles up to the extreme case in which the profiles consist of only 5 central points (from -3.6 to 3.6 ${\rm km\,s^{-1}}$), one can obtain very precise estimations of $H_{\rm eff}$. Analogously, the same behaviour is obtained when the profiles are highly overestimated, with very small MAPE values. 
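In code, the calibration described above is a one-parameter least-squares problem for each profile width: the raw first-order moments of the 50 synthetic profiles are regressed against the known input $H_{\rm eff}$, the best-fitting ${\mathbf{\lambda_{0}}\,\mathbf{g_{0}}}$ follows, and the MAPE quantifies the quality of the inversion. A sketch of that loop (Python; `h_eff_first_moment` is the illustrative function sketched after Eq. (3), and the regression through the origin is an assumption about the exact form of the fit):

```python
import numpy as np

def calibrate_lambda0_g0(v, I_list, V_list, h_true, half_width):
    """Fit the single factor lambda_0*g_0 for one profile width so that the
    first-order moments best reproduce the known H_eff of the synthetic sample."""
    h_true = np.asarray(h_true, float)          # assumed to contain no exact zeros
    sel = np.abs(v) <= half_width
    # moments evaluated with lambda_0*g_0 = 1; the estimate scales as 1/(lambda_0*g_0)
    raw = np.array([h_eff_first_moment(v[sel], I[sel], V[sel], 1.0)
                    for I, V in zip(I_list, V_list)])
    lam0_g0 = np.sum(raw * h_true) / np.sum(h_true ** 2)   # least squares through the origin
    h_fit = raw / lam0_g0
    mape = 100.0 * np.mean(np.abs((h_true - h_fit) / h_true))
    return lam0_g0, mape
```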
The fact that it is possible to consider deliberately only a part of the profiles to infer $H_{\rm eff}$ can be useful in some practical applications, as for example in binary systems or in stars surrounded by circumstellar envelopes with strong stellar winds that can generate shocks visible as bumps. The bump produced by the shock is in turn blended with the intensity profile of the star (e.g. Sabin et al., 2015), so to consider only a fraction of the intensity profile of the star could be of interest. Another important result of this test is that it shows that the value of $\mathbf{\lambda_{0}\,g_{0}}$ is very sensitive to the integration range, see lower panel of Fig. 2. The fitted values of $\mathbf{\lambda_{0}\,g_{0}}$ start at 329 nm (profile width of 7.2 ${\rm km\,s^{-1}}$) and they increase very quickly to reach a maximum of 918 nm (profile width of 21.6 ${\rm km\,s^{-1}}$), to then decrease following a likely exponential-type curve, finishing at 286 nm for the broadest profile width of 97.2 ${\rm km\,s^{-1}}$. Additionally, to illustrate the relative changes in the values of $\mathbf{\lambda_{0}\,g_{0}}$, we take as reference a width of 39.6 ${\rm km\,s^{-1}}$ (from -19.8 to 19.8 ${\rm km\,s^{-1}}$), which seems a plausible selection by visual inspection of the profiles in Fig. 1. In the right Y-axis of the lower panel in Fig. 2 are shown the percentage variations of $\mathbf{\lambda_{0}\,g_{0}}$: as example, if one or two more points are considered at each side of the profiles, then the respective errors will overestimate the inferred $H_{\rm eff}$ by 7.5% and 16.0 %. Similarly, an underestimation of $H_{\rm eff}$ of 6.5% and 12.5 % will be induced if the width of the profiles is reduced by one and two points respectively. Note that the polarised V profiles are almost zero around $\pm$ 20 ${\rm km\,s^{-1}}$, and at first glance it could be surprising the fact that ${\mathbf{\lambda_{0}}}\,{\mathbf{g_{0}}}$ does not remain constant when the profile width continues to increase (integration range > |20| ${\rm km\,s^{-1}}$). The reason for this is due to the fact that the value of the integral in the denominator of Eq. (3) continues to vary even in the regions where the polarised signal is zero, and in consequence also ${\mathbf{\lambda_{0}}}\,{\mathbf{g_{0}}}$ varies to get an optimal fit. One more interesting aspect to look at is if it is possible to consider asymmetric ranges of integration for the inference of $H_{\rm eff}$. To answer this question, we have resized the sample of synthetic profiles from -19.8 to 7.2 ${\rm km\,s^{-1}}$, and then we repeated the test. We found that in this case the errors are considerably highers: MAPE of 67%. Nevertheless, we verified that the inversion errors decrease as the asymmetry in the profiles decreases, reaching a value of 0.5% for the fully symmetric case (from -19.8 to 19.8 ${\rm km\,s^{-1}}$). The conclusion is thus that the integration ranges have to be symmetric around the centre of profiles, but, as we showed above, the integration ranges do not necessarily have to include the full width of the profiles. The results of Fig. 2 allows to infer $H_{\rm eff}$ considering different profile widths, which in turn can be used for a check of the self-consistency of the measurements and to derive an associated uncertainty. Table 1, shows the obtained values of $H_{\rm eff}$ considering 7 different profile widths for the two LSD profiles shown in bottom of Fig. 1. 
The headers of the central columns indicate the values of $\mathbf{\lambda_{0}\,g_{0}}$ and the respective profile widths. The extremely high precision of $H_{\rm eff}$ reported in Table 1 is due to the fact that the sample of LSD profiles is noiseless. However, we consider that the mean and standard deviation reflect a realistic value of the measurement of $H_{\rm eff}$ and its associated error, and could be especially useful for real observational data. Let us note that previous studies have shown that it is very probable that the errors reported in LSD-based studies are underestimated (Carroll & Strassmeier, 2014; Ramírez Vélez et al., 2018). In this sense, the multi-inversion strategy presented in Table 1 could be a good alternative for estimating the uncertainties. Note that, as is customary, the calibration presented for the given values of $T_{\rm eff}$ and log (g), and for the given line list, can tolerate small variations; for example, a variation of $\pm$125 K in temperature is still considered acceptable. Given that the profiles used in this test are noiseless, we now turn to real data, not to continue the topic of this section but to discuss two other important considerations: the line depth limit adopted when establishing the LSD profiles, and the inclusion of noise-weighted masks. 2.2 Line depth and signals weighted by noise When the analysis is applied to real data, the noise associated with the observations can be taken into account in different ways. In this section we present some of them, which we consider to be the most commonly employed. Let us first introduce the so-called mean weights (MW) defined as (Marsden et al., 2014): $$MW_{\textsc{i}}=\frac{\sum_{i}S_{i}^{2}\,\,w_{\textsc{i}\,i}^{2}}{\sum_{i}S_{i}^{2}\,\,w_{\textsc{i}\,i}}\,;\quad MW_{\textsc{v}}=\frac{\sum_{i}S_{i}^{2}\,\,w_{\textsc{v}\,i}^{2}}{\sum_{i}S_{i}^{2}\,\,w_{\textsc{v}\,i}}\,,$$ (5) where $S_{i}$ is the inverse of the uncertainty derived from the data reduction process for the $i$th line (i.e. the signal-to-noise ratio of the $i$th line), and the weights $w_{\textsc{i}\,i}$ and $w_{\textsc{v}\,i}$ are given by: $$w_{\textsc{i}\,i}=\frac{d_{i}}{d_{0}^{n}}\,;\quad w_{\textsc{v}\,i}=\frac{d_{i}\,\lambda_{i}\,{\bar{g}_{i}}}{d_{0}^{n}\,\lambda_{0}^{n}\,{g_{0}^{n}}}\,,$$ (6) where the parameters $d_{0}^{n}$, $\lambda_{0}^{n}$ and $g_{0}^{n}$ are referred to as the normalisation values (e.g. Kochukhov et al., 2010) or scaling factors (e.g. Petit et al., 2014). We have adopted here a different notation for the normalisation values, which are normally also written as $\lambda_{0}$ and ${g}_{0}$. The reason for our notation is to avoid confusion, because the normalisation values $\lambda_{0}^{n}$ and $g_{0}^{n}$ of Eq. (6) are not always the same as those used to derive $H_{\rm eff}$ in Eq. (3), i.e., in general $\lambda_{0}^{n}g_{0}^{n}\neq\lambda_{0}g_{0}$. Nowadays, when the line mask of the LSD profiles is established, it is normally required that the mean weights be numerically equal or very close to unity. It is then assumed that the amplitude of the resulting LSD profiles is properly normalised by the definition adopted for the mean weights, and consequently the normalisation parameters are used directly to measure $H_{\rm eff}$ in Eq. (3), i.e. $\lambda_{0}^{n}g_{0}^{n}=\lambda_{0}g_{0}$. As we mentioned, the mean weights are not the only way to normalise the LSD profiles. In fact, in the original LSD paper of Donati et al. 
(1997), the authors proposed to scale the amplitudes of the profiles with $\lambda_{0}^{n}g_{0}^{n}d_{0}^{n}$ = 500 nm and to use the mean values of $\lambda_{i}$ and $\bar{g_{i}}$ to measure $H_{\rm eff}$ in Eq. (3), i.e., $\lambda_{0}$ = <$\lambda_{i}$> and $g_{0}$ = <$\bar{g_{i}}$>. This way of normalising the LSD profiles was used for some years, but the normalisation values were later changed by introducing a new constraint: $d_{0}^{n}=0.7$ together with $\lambda_{0}^{n}g_{0}^{n}d_{0}^{n}$ = 500 nm (e.g. Wade et al., 2000; Shorlin et al., 2002; Donati et al., 2003, and others). It is worth mentioning that this approach to normalising the LSD profiles is no longer used. Alternatively, a noise-weighted (NW) definition of the average values of $\lambda_{0}$ and $g_{0}$ has also been used, in which the signal-to-noise ratio of the lines enters through the following expressions (e.g. Kochukhov et al., 2010; Grunhut et al., 2013): $$\lambda_{0}=\frac{\sum_{i}S_{i}^{2}\,\,\lambda_{i}}{\sum_{i}S_{i}^{2}}\,;\quad \bar{g}_{0}=\frac{\sum_{i}S_{i}^{2}\,\,\bar{g}_{i}}{\sum_{i}S_{i}^{2}}\,.$$ (7) In this noise-weighted approach the average values are used both to normalise the amplitude of the LSD profiles and to measure $H_{\rm eff}$, i.e., $\lambda_{0}$ = $\lambda_{0}^{n}$ and $g_{0}$ = $g_{0}^{n}$. We next compare two of the normalisation strategies presented above, namely the mean-weighted and the noise-weighted ones. For the latter, the normalisation values, denoted by $\lambda_{0}^{\textsc{nw}}\,g_{0}^{\textsc{nw}}$, are directly obtained through Eq. (7). For the former, given that $MW_{\textsc{i}}$ and $MW_{\textsc{v}}$ are by construction equal to 1, the normalisation values, denoted by $\lambda_{0}^{\textsc{mw}}\,g_{0}^{\textsc{mw}}$, are given by: $$d_{0}^{\textsc{mw}}=\frac{\sum_{i}S_{i}^{2}\,\,d_{i}^{2}}{\sum_{i}S_{i}^{2}d_{i}}\,;\quad \lambda_{0}^{\textsc{mw}}g_{0}^{\textsc{mw}}=\frac{\sum_{i}S_{i}^{2}\,\,d_{i}^{2}\lambda_{i}^{2}\bar{g}_{i}^{2}}{d_{0}^{\textsc{mw}}\sum_{i}S_{i}^{2}d_{i}\lambda_{i}\bar{g}_{i}}\,.$$ (8) Additionally, we include different values of the line depth limit, which is another important criterion when establishing the LSD profiles. In published LSD-based studies, different line depth thresholds have been employed, from 5% to 40% with respect to the continuum, the most commonly used being 10%, 20% and 40%. We now quantify how much the product $\lambda_{0}\,g_{0}$ varies when different line depth limits are considered. For this purpose it is necessary to consider noise-affected data. Although it is always possible to model the noise using random or Poisson distributions, here we prefer to use real data. We have therefore obtained from PolarBase (Petit et al., 2014) the files associated with the solar twin star HD63433, observed with the ESPaDOnS spectrograph at the CFHT telescope on 10 January 2010 (the block reference number of this observation in the PolarBase database is 8450). In the files associated with the data reduction we find the intensity uncertainties, whose inverses are the $S_{i}$ values required in Eqs. (7) and (8). We applied a linear interpolation to the wavelength sampling of the observed data to match the exact $w_{i}$ values of the mask, which is a standard procedure. By considering the same spectral region in the synthetic sample and the observed data, the number of lines reduces to 7757 (with a line depth > 0.1). 
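Before presenting the comparison, here is a minimal sketch of Eqs. (5)–(8) for a line mask given as arrays of depths $d_i$, wavelengths $\lambda_i$, effective Landé factors $\bar g_i$ and signal-to-noise ratios $S_i$; the array and function names are ours.

```python
import numpy as np

# Minimal sketch of Eqs. (5)-(8): noise-weighted (NW) averages and mean-weight (MW)
# normalisation values computed from a line mask.
# d: line depths, lam: wavelengths, g: effective Lande factors, s: S/N of each line.

def noise_weighted(lam, g, s):
    """Eq. (7): S/N-weighted averages lambda_0 and g_0."""
    w = s**2
    return np.sum(w * lam) / np.sum(w), np.sum(w * g) / np.sum(w)

def mean_weight_norm(d, lam, g, s):
    """Eq. (8): normalisation values that make the mean weights of Eq. (5) equal to 1."""
    w = s**2
    d0 = np.sum(w * d**2) / np.sum(w * d)
    lam0_g0 = np.sum(w * (d * lam * g)**2) / (d0 * np.sum(w * d * lam * g))
    return d0, lam0_g0

def mean_weights(d, lam, g, s, d0n, lam0n_g0n):
    """Eqs. (5)-(6): check that MW_I and MW_V are close to unity for given normalisations."""
    wI = d / d0n
    wV = d * lam * g / (d0n * lam0n_g0n)
    mwI = np.sum(s**2 * wI**2) / np.sum(s**2 * wI)
    mwV = np.sum(s**2 * wV**2) / np.sum(s**2 * wV)
    return mwI, mwV
```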
In addition, we adopted a value of $vsini$ = 7.0 ${\rm km\,s^{-1}}$ in the synthetic sample of spectra, consistent with the projected rotational velocity reported by Valenti & Fischer (2005) for HD63433. Finally, in the previous section we showed that $\lambda_{0}\,g_{0}$ depends on the integration range in Doppler space (the profile width). For this numerical test we therefore fixed the profile width to 28.8 ${\rm km\,s^{-1}}$, as indicated in the reduction log files of this observation. The integration limits considered thus go from -14.4 to 14.4 ${\rm km\,s^{-1}}$ in the rest frame of the star. The results are shown in Fig. 3. The first remark about this figure is that both the mean-weight and the noise-weighted normalisation values remain almost constant. When the line depth threshold is changed from 0.1 to 0.4, $\lambda_{0}^{\textsc{mw}}\,g_{0}^{\textsc{mw}}$ decreases by only $\sim$ 1% (from 703 to 697 nm), while for $\lambda_{0}^{\textsc{nw}}\,g_{0}^{\textsc{nw}}$ the decrease is $\sim$ 5% (from 659 to 629 nm). On the contrary, the optimal fit obtained for $\mathbf{\lambda_{0}\,g_{0}}$ increases by 32% (from 835 to 1104 nm). Consequently, as shown from left to right in the lower panels of Fig. 3, the errors increase from 18% to 56% for the mean-weight estimates of $H_{\rm eff}$, while for the noise-weighted approach the errors are even higher, going from 26% to 74%. The conclusion of this test is that for each observation, or equivalently for each given set of $S_{i}$ values, there is only one line depth value that fulfils $\lambda_{0}\,g_{0}$ = $\lambda_{0}^{\textsc{MW}}\,g_{0}^{\textsc{MW}}$, and analogously only one other such that $\lambda_{0}\,g_{0}$ = $\lambda_{0}^{\textsc{NW}}\,g_{0}^{\textsc{NW}}$. For the case presented here, those two values lie below a line depth limit of 0.1. Therefore, rather than searching for the line depth value at which the mean-weight or noise-weighted normalisations are adequate, one should instead, for any given line depth limit, determine the values of $\lambda_{0}\,g_{0}$ through synthetic spectra, as we have done here. Currently, some studies state that $\lambda_{0}\,g_{0}$ is derived through synthetic spectra; however, the procedure is not described and the values of $\lambda_{0}\,g_{0}$ are in general only quoted. 2.3 Vsini In this section we inspect whether the estimates of the stellar longitudinal magnetic field can be affected by the rotation of the star, an effect normally ignored, and if so, how important it is to include the $vsini$ value in the synthetic sample of spectra. In fact, as the projected rotational velocity of the star increases, two important effects appear: first, the blending of the lines increases, and second, the weak-line regime becomes less justified, up to the case in which the shape of the line profile is rotationally dominated. With the aim of investigating the resulting interplay of these two effects, we present an estimation of the $\lambda_{0}\,g_{0}$ values as a function of $vsini$. We considered the same solar atmospheric model as in the previous tests, with the only difference that the projected rotational velocity is now varied from 0 to 50 ${\rm km\,s^{-1}}$ in steps of 5 ${\rm km\,s^{-1}}$. For each of these $vsini$ values we synthesised a sample of 50 spectra and then fitted a linear regression to obtain $\mathbf{\lambda_{0}\,g_{0}}$ in each case. 
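As an aside, the growth of the profile width with $vsini$ can be illustrated with the classical rotational-broadening convolution. This is only a sketch under the usual linear limb-darkening approximation (with an assumed coefficient of 0.6), not the spectral synthesis actually used to build the sample of spectra.

```python
import numpy as np

# Sketch: broaden a zero-rotation intensity profile by convolution with the
# standard rotational broadening kernel (linear limb darkening, coefficient eps).
# It illustrates how the profile width grows with the projected rotational velocity.

def rotation_kernel(v, vsini, eps=0.6):
    """Classical rotational broadening function, normalised to unit area."""
    x = v / vsini
    k = np.zeros_like(v)
    m = np.abs(x) < 1.0
    k[m] = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x[m]**2)
            + 0.5 * np.pi * eps * (1.0 - x[m]**2)) / (np.pi * vsini * (1.0 - eps / 3.0))
    return k / np.trapz(k, v)

def broaden(v, residual_profile, vsini, eps=0.6):
    """Convolve a residual profile (1 - I) sampled on a uniform grid v [km/s]."""
    dv = v[1] - v[0]
    kern = rotation_kernel(v, vsini, eps) * dv
    return np.convolve(residual_profile, kern, mode="same")

# Example: a Gaussian residual profile broadened for vsini = 10 and 50 km/s.
v = np.arange(-59.4, 59.5, 1.8)                  # symmetric km/s grid including 0
depth = 0.4 * np.exp(-0.5 * (v / 4.4)**2)        # 1 - I at vsini ~ 0
for vs in (10.0, 50.0):
    print(vs, broaden(v, depth, vs).max())       # central depth shrinks as the width grows
```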
Before continuing, it is essential to define the integration limits for each of the $vsini$ values. We visually inspected both Stokes profiles, I and V, simultaneously to define their widths. Strategies other than visual inspection could be adopted, but for the main purpose of this test (to determine the variation of $\mathbf{\lambda_{0}\,g_{0}}$ as a function of $vsini$) we consider this strategy sufficient. Fig. 4 shows in colour the selected profile widths, and in black the remaining part of the profiles, for all the rotational values. For comparison purposes, in the establishment of the LSD profiles we considered two different values of the line depth limit, 0.1 and 0.4. The results shown in Fig. 5 are presented as before: the upper panel shows the precision of the inversions quantified through the MAPE values and the lower one shows the fitted values $\mathbf{\lambda_{0}\,g_{0}}$, both as a function of $vsini$. For completeness, below the curve for the line depth limit of 0.1 in the lower panel, we have included the profile width employed at each rotational value: for example, for a $vsini$ of 10 ${\rm km\,s^{-1}}$ the indicated profile width is 43.2 ${\rm km\,s^{-1}}$, which means that the integration range goes from -21.6 to 21.6 ${\rm km\,s^{-1}}$ in the rest frame of the star. The results in Fig. 5 indicate that for all rotational values the retrieved values of $H_{\rm eff}$ are very accurate: except in one case, the MAPE is always less than 3%. This test therefore shows that even for the highest rotational velocity of 50 ${\rm km\,s^{-1}}$, where the profile shape is clearly dominated by rotation and line blending is strongest, the first-order moment is still a very good tool for measuring the longitudinal magnetic field. However, precise estimates of $H_{\rm eff}$ require an appropriate value of $\mathbf{\lambda_{0}\,g_{0}}$ for each rotational velocity. In fact, the normalisation value changes by a factor close to 3 when passing from one extreme of the $vsini$ range to the other, from 0 to 50 ${\rm km\,s^{-1}}$. In other words, if the $\mathbf{\lambda_{0}\,g_{0}}$ corresponding to zero rotation is used to infer $H_{\rm eff}$ for a moderately fast rotator with a $vsini$ of 50 ${\rm km\,s^{-1}}$, our results indicate that the magnetic field will be underestimated by 287% (the same percentage was found for both line depth limits, 0.1 and 0.4). The conclusion of this test is therefore that it is mandatory to consider the projected rotational velocity of the stars when determining the normalisation values $\lambda_{0}\,g_{0}$, something that has been neglected until now. Note that similar results have been obtained not on the basis of the integral form (Eq. 3) but rather of its derivative form. Recently, using the slope method, Scalia et al. (2017) have also shown that the errors of the longitudinal magnetic field increase when $vsini$ increases, while Leone et al. (2017) indicate that $H_{\rm eff}$ is properly estimated only for rotational velocities lower than 12 ${\rm km\,s^{-1}}$. 2.4 Magnetic field intensities In the previous sections we purposely performed the tests in a regime where the weak magnetic field assumption holds: $|H_{\rm eff}|$ $<$ 20 G, with the magnetic moment fixed at 30 G. We now consider stronger magnetic field intensities. 
For the next numerical exercise, we synthesised a sample of 200 stellar spectra with the same atmospheric model as before, adopting a $vsini$ value of 5 ${\rm km\,s^{-1}}$, but now the magnetic moment was increased by up to two orders of magnitude, varying between 0.1 and 10 kG. The reason for considering a larger number of synthetic spectra is to better sample $H_{\rm eff}$, which now varies over a wider range of intensities, between -6 and 6 kG. For the inference of $H_{\rm eff}$ we used in Eq. (3) the value $\mathbf{\lambda_{0}\,g_{0}}$ = 675.5 nm found previously when only weak magnetic fields were considered, with a line depth limit of 0.1, $vsini$ = 5 ${\rm km\,s^{-1}}$, and integration limits from -19.6 to 19.6 ${\rm km\,s^{-1}}$ (see Table 1). The sample of spectra for this test was consequently inverted using this same integration range. For completeness, we also considered line depth thresholds of 0.2, 0.3 and 0.4. For each of these cases we used the values of $\mathbf{\lambda_{0}\,g_{0}}$ derived previously in the weak magnetic field regime, namely 749.0, 822.6 and 899.6 nm, respectively. The results are shown in Fig. 6. Before discussing them, it is important to note that they do not depend on the integration range: we verified that changing the profile width (integration range) leaves the results essentially the same. The first remark about this test is that the results are the same regardless of the line depth cut-off value considered in the mask (indicated at the top of each column). The second is that it is not possible, as a function of $H_{\rm eff}$, to constrain where the weak magnetic field regime breaks down, given that for both the weakest (< 500 G) and the strongest (> 2 kG) intensities it is possible to achieve very accurate values of $H_{\rm eff}$ in some cases, while the errors can reach 25% to 30% in others. Thus, the conclusion of this test is that with a dipolar magnetic configuration, and considering random orientations of the principal axis of the system, it is unfortunately not possible to set a priori a limit on the validity of the weak-field approximation using $H_{\rm eff}$ as a constraint. A clear interpretation of the results of Fig. 6 can be obtained in terms of the surface magnetic field ($H_{\rm surf}$), defined as the mean of the local magnetic field moduli. In Fig. 7, we show the inversion errors of $H_{\rm eff}$ as a function of $H_{\rm surf}$. In this case it is evident that when the weak magnetic field regime holds, the inversion errors are extremely low. However, around 1 kG the weak magnetic field assumption starts to break down and consequently the inversion errors –which overestimate $H_{\rm eff}$– begin to increase, reaching a maximum of 20% for surface fields around 5 kG. Surprisingly, as the surface intensities continue to increase, the overestimation of $H_{\rm eff}$ starts to decrease, approaches zero around 10 kG, and then the inversion errors begin to underestimate $H_{\rm eff}$. There is no clear explanation for this empirical behaviour, and more tests should be carried out to inspect it in detail, which is beyond the scope of this study. 
Unfortunately, once more, the fact that the inversion errors of the longitudinal magnetic field can be constrained by the surface field intensity is not useful in practice for the analysis of snapshot spectropolarimetric observations, since this requires knowing the distribution of the local magnetic fields over the star. However, the results of Fig. 6 are helpful for the magnetic imaging technique. In order to disentangle the limitations of the LSD profiles from those of the first-order moment approach, we now employ another technique to derive $H_{\rm eff}$, namely machine learning algorithms. In Ramírez Vélez et al. (2018) we showed that these algorithms are indeed very accurate for measurements of longitudinal magnetic fields, with MAPE values similar to the ones found in the previous sections. Using the same sample of 200 synthetic spectra as in Figs. 6 and 7, we trained an Artificial Neural Network (ANN). The performance of the ANN was assessed using a K-fold = 5 validation test, which means that the full sample of 200 spectra is divided into five subsamples. Each subsample (consisting of 40 spectra) is then used to train the ANN, and the inversions are performed over the remaining 160 spectra. This process is repeated for all subsamples, and the validation coefficient obtained was 0.99 (Ramírez Vélez et al., 2018; in particular, all the technical details about the ANN are given in the appendix of that work). The results obtained with the ANN trained on the LSD profiles are shown in Fig. 8. It is clear from the right-hand columns of Fig. 8 that using a sample of LSD profiles to train a machine learning algorithm allows $H_{\rm eff}$ to be determined very precisely for all magnetic intensities, from -6 to 6 kG. This test therefore demonstrates that the inaccuracy of the results in Figs. 6 and 7 is due to the use of the first-order moment approach and not to the LSD profiles. 3 Conclusions The present study is not the first to highlight the crucial role played by the normalisation parameters when measuring the stellar magnetic field through LSD profiles. Kochukhov et al. (2010) already stated that the values used to normalise the LSD profiles must be the same as those used in the first-order moment formula, i.e. $\lambda_{0}\,g_{0}$ = $\lambda_{0}^{n}g_{0}^{n}$. That is in fact what we found in the controlled tests of the previous section. We stress that it is always possible to use any of the mean strategies – the simple mean $<\lambda_{i}><\bar{g_{i}}>$, the mean-weighted or the noise-weighted one – but these approaches require finding a priori the line depth value for which the mean values equal $\lambda_{0}\,g_{0}$. Regardless of the normalisation strategy employed, and assuming that $\lambda_{0}\,g_{0}$ is found through synthetic spectra, there are still two important considerations that are normally set aside. The first one, already discussed, is that the normalisation values depend on $vsini$. One of the reasons the dependence on $vsini$ was ignored is that all lines were assumed to be self-similar, which is clearly not the case for different rotational velocities with different degrees of line blending. 
The second one is that many recent publications have analysed samples of stars in which the mask of each star is carefully established but the normalisation values are the same for all the stars, even though the stars in the sample have different atmospheric parameters such as $T_{\rm eff}$, log (g), $vsini$, etc. (e.g. Alecian et al., 2013; Villebrun et al., 2019; Hill et al., 2019). Our results concerning the integration range of Eq. (3) are contrary to what previous studies stated, namely that there is only one appropriate width of the LSD profiles that allows $H_{\rm eff}$ to be correctly determined (e.g. Neiner et al., 2012). Here we have shown that taking only a small portion of the full profile width, or even strongly overestimating it, still allows $H_{\rm eff}$ to be determined correctly (provided that $\lambda_{0}\,g_{0}$ is found beforehand for each profile width and that the integration limits are symmetric around the profile centre). Moreover, given that it is possible to use different profile widths, i.e. different integration limits in Eq. (3), we proposed a method to estimate the uncertainty of the measurements using the multi-inversion strategy (see Table 1). In fact, the main conclusion of this work is that the first-order moment technique for the measurement of $H_{\rm eff}$ from multi-line LSD profiles is a very robust approach, provided that the parameter $\lambda_{0}\,g_{0}$ of Eq. (3) is properly determined and that the weak magnetic field regime holds. We showed that a sound methodology for finding $\lambda_{0}\,g_{0}$ is to use a small sample of theoretical spectra calculated with physical parameters as close as possible to those of the data one wishes to analyse. In this respect, note that we have inspected only those physical parameters that we consider to have the largest impact on the first-order approach, leaving out many others such as micro- and macro-turbulence, metallicity, log (g), etc., which we consider to have a smaller impact on the linear relation given by Eq. (3). There is no doubt that, in the light of the results shown in this work, some of the previously reported measurements of stellar magnetic fields obtained through a combined analysis of LSD profiles and the first-order moment method deserve to be revised. More importantly, new studies using the first-order moment technique must properly calibrate $\lambda_{0}\,g_{0}$ in order to improve the accuracy of the results. This is likely crucial for fast-rotating stars, for which the reported magnetic fields seem to have been systematically underestimated. In the case study presented here, where a solar atmospheric model was considered, the underestimation reached almost 300% around $vsini\,\sim\,50\,{\rm km\,s^{-1}}$. The final conclusion is that, in general, the magnetic field intensities of fast-rotating stars for which $H_{\rm eff}$ has been measured through the first-order moment and LSD profiles are expected to be higher than previously believed. Finally, we also showed that very good measurements of $H_{\rm eff}$ are no longer possible if the weak magnetic field assumption is not valid. With the dipolar magnetic model employed to produce the polarised synthetic samples, we could not find a critical value of $H_{\rm eff}$ above which the weak-field approximation breaks down. In other words, extremely weak measured values of $H_{\rm eff}$ do not guarantee the weak-field regime. 
This fact is a consequence of the well known effect of attenuation of circular polarised signals due to the balance of positive and negative polarities of the magnetic field over the visible hemisphere of the star. We also showed how to overcome this problem by using alternative methods as are the machine learning algorithms (Ramírez Vélez et al., 2018). Using the same sample of LSD profiles, we could properly infer the values of $H_{\rm eff}$ for the full sample of LSD profiles, including strong intensities in the order of kG (Fig. 8). This demonstrates that the main constrain when deriving the stellar longitudinal magnetic fields are not the LSD profiles, but the use of the first-order moment approach, which is based in assumptions that can be very restrictives in practice given that the value of $H_{\rm eff}$ does not allow a piori determine if the weak field approximation is assured. Acknowledgements The author thanks to Franco Leone for helpful discussions that helped to improve the content of the manuscript. This study has been supported by UNAM through the PAPIIT grant number IN103320. References Alecian et al. (2013) Alecian E., et al., 2013, MNRAS, 429, 1001 Carroll & Strassmeier (2014) Carroll T. A., Strassmeier K. G., 2014, A&A, 563, A56 Donati & Landstreet (2009) Donati J.-F., Landstreet J. D., 2009, ARA&A, 47, 333 Donati et al. (1997) Donati J.-F., Semel M., Carter B. D., Rees D. E., Collier Cameron A., 1997, MNRAS, 291, 658 Donati et al. (2003) Donati J. F., et al., 2003, MNRAS, 345, 1145 Grunhut et al. (2013) Grunhut J. H., et al., 2013, MNRAS, 428, 1686 Hill et al. (2019) Hill C. A., Folsom C. P., Donati J. F., Herczeg G. J., Hussain G. A. J., Alencar S. H. P., Gregory S. G., Matysse Collaboration 2019, MNRAS, 484, 5810 Kochukhov et al. (2010) Kochukhov O., Makaganiuk V., Piskunov N., 2010, A&A, 524, A5 Leone et al. (2017) Leone F., Scalia C., Gangi M., Giarrusso M., Munari M., Scuderi S., Trigilio C., Stift M. J., 2017, ApJ, 848, 107 Marsden et al. (2014) Marsden S. C., et al., 2014, MNRAS, 444, 3517 Mathys (1988) Mathys G., 1988, A&A, 189, 179 Mathys (1989) Mathys G., 1989, Fundamentals Cosmic Phys., 13, 143 Mathys (1991) Mathys G., 1991, A&AS, 89, 121 Neiner et al. (2012) Neiner C., Alecian E., Briquet M., Floquet M., Frémat Y., Martayan C., Thizy O., Mimes Collaboration 2012, A&A, 537, A148 Petit et al. (2014) Petit P., Louge T., Théado S., Paletou F., Manset N., Morin J., Marsden S. C., Jeffers S. V., 2014, PASP, 126, 469 Ramírez Vélez et al. (2018) Ramírez Vélez J. C., Yáñez Márquez C., Córdova Barbosa J. P., 2018, A&A, 619, A22 Rees & Semel (1979) Rees D. E., Semel M. D., 1979, A&A, 74, 1 Ryabchikova et al. (2015) Ryabchikova T., Piskunov N., Kurucz R. L., Stempels H. C., Heiter U., Pakhomov Y., Barklem P. S., 2015, Phys. Scr., 90, 054005 Sabin et al. (2015) Sabin L., Wade G. A., Lèbre A., 2015, MNRAS, 446, 1988 Scalia et al. (2017) Scalia C., Leone F., Gangi M., Giarrusso M., Stift M. J., 2017, MNRAS, 472, 3554 Semel (1967) Semel M., 1967, Annales d’Astrophysique, 30, 513 Semel (1995) Semel M., 1995, in Comte G., Marcelin M., eds, Astronomical Society of the Pacific Conference Series Vol. 71, IAU Colloq. 149: Tridimensional Optical Spectroscopic Methods in Astrophysics. p. 340 Shorlin et al. (2002) Shorlin S. L. S., Wade G. A., Donati J. F., Land street J. D., Petit P., Sigut T. A. A., Strasser S., 2002, A&A, 392, 637 Silvester et al. (2009) Silvester J., et al., 2009, MNRAS, 398, 1505 Stibbs (1950) Stibbs D. W. N., 1950, MNRAS, 110, 395 Stift (1975) Stift M. 
J., 1975, MNRAS, 172, 133 Stift (2000) Stift M. J., 2000, A Peculiar Newsletter, 33, 27 Valenti & Fischer (2005) Valenti J. A., Fischer D. A., 2005, ApJS, 159, 141 Villebrun et al. (2019) Villebrun F., et al., 2019, A&A, 622, A72 Wade et al. (2000) Wade G. A., Donati J. F., Landstreet J. D., Shorlin S. L. S., 2000, MNRAS, 313, 851
Fractional regularity for conservation laws with discontinuous flux Shyam Sundar Ghoshal ghoshal@tifrbng.res.in Stéphane Junca stephane.junca@univ-cotedazur.fr Akash Parmar akash@tifrbng.res.in Centre for Applicable Mathematics, Tata Institute of Fundamental Research, Post Bag No 6503, Sharadanagar, Bangalore - 560065, India. Université Côte d’Azur, LJAD, Inria & CNRS, Parc Valrose, 06108 Nice, France. Abstract This article deals with the regularity of the entropy solutions of scalar conservation laws with discontinuous flux. It is well-known [Adimurthi et al., Comm. Pure Appl. Math. 2011] that the entropy solution for such equation does not admit $\displaystyle BV$ regularity in general, even when the initial data belongs to $\displaystyle BV$. Due to this phenomenon fractional $\displaystyle BV^{s}$ spaces wider than $\displaystyle BV$ are required, where the exponent $\displaystyle 0<s\leq 1$ and $\displaystyle BV=BV^{1}$. It is a long standing open question to find the optimal regularizing effect for the discontinuous flux with $\displaystyle L^{\infty}$ initial data. The optimal regularizing effect in $\displaystyle BV^{s}$ is proven on an important case using control theory. The fractional exponent $\displaystyle s$ is at most $\displaystyle 1/2$ even when the fluxes are uniformly convex. keywords: Conservation laws, Interface, Discontinuous flux, Cauchy problem, Regularity, $\displaystyle BV$ functions, Fractional $\displaystyle BV$ spaces. MSC: [2020] 35B65, 35L65, 35F25, 35L67, 26A45, 35B44. \xpatchcmd\MaketitleBox  \xpatchcmd\MaketitleBox  Contents 1 Introduction 1.1 Optimal regularity results in $\displaystyle BV^{s}$ spaces for a smooth flux: $\displaystyle f=g$ 1.2 Previous regularity results for discontinuous flux 1.3 Questions on the $\displaystyle BV^{s}$ regularity for discontinuous flux 2 Main Results 3 Preliminaries 4 Proof of main results 4.1 Regularity when traces are far from critical values 4.2 Spatial $\displaystyle BV^{s}$ estimates for values originating from the interface 4.3 Smoothing effect for restricted nonlinear fluxes 4.4 Generalization for $\displaystyle BV^{s}$ initial data 4.5 Non restricted fluxes 4.6 Propagation of the initial regularity outside the interface 5 Construction of counter-example A Hölder continuity of singular maps B $\displaystyle BV^{s}$ embedding C Backward construction 1 Introduction This article deals with the regularity aspects of the solution for the following scalar conservation law with discontinuous flux: $$\displaystyle\displaystyle\left\{\begin{array}[]{rlll}u_{t}+f(u)_{x}&=0,&\mbox{ if }&x>0,t>0,\\ u_{t}+g(u)_{x}&=0,&\mbox{ if }&x<0,t>0,\\ u(x,0)&=u_{0}(x),&\mbox{ if }&x\in\mathbb{R},\end{array}\right.$$ (1.4) where $\displaystyle u:\mathbb{R}\times[0,\infty)\rightarrow\mathbb{R}$ is unknown, $\displaystyle u_{0}(\cdot)\in L^{\infty}(\mathbb{R})$ is the initial data and the fluxes $\displaystyle f$, $\displaystyle g$ are $\displaystyle C^{1}(\mathbb{R})$ and strictly convex (that means $\displaystyle f^{\prime},g^{\prime}$ are increasing functions). The conservation law (1.4) arises in several frameworks of real-life phenomena, physical situations and applied subjects. For example, the equation (1.4) occurs naturally in the two-phase flow of a heterogeneous porous medium in the petroleum reservoir petro . The equation (1.4) is also useful to understand the ideal clarifier thickener burger2 , traffic flow model with varying road surface conditions traffic and ion etching accustomed for semiconductor industry ion . 
The above examples are just a little glance at the broad applicability of the equation (1.4) in the fields of applied sciences. For more details, one can see burger2 ; burger4 ; diehl ; diehl4 . The equation (1.4) does not have a global classical solution even for smooth initial data, so one needs to consider the following notion of a weak solution: Definition 1.1 (Weak solution). A function $\displaystyle u\in C(0,T;L^{1}_{loc}(\mathbb{R}))$ is said to be a weak solution of the problem (1.4) if $$\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}}u\frac{\partial\phi}{\partial t}+F(x,u)\frac{\partial\phi}{\partial x}\mathrm{d}x\ \mathrm{d}t+\int\limits_{\mathbb{R}}u_{0}(x)\phi(x,0)\mathrm{d}x=0,$$ for all $\displaystyle\phi\in C^{\infty}_{c}(\mathbb{R}\times\mathbb{R}^{+})$, where the flux $\displaystyle F(x,u)$ is given as $\displaystyle F(x,u)=H(x)f(u)+(1-H(x))g(u)$, and $\displaystyle H(x)$ is the Heaviside function. From the above defined weak formulation it can be derived that if interface traces $\displaystyle u^{\pm}(t)=\lim\limits_{x\rightarrow 0\pm}u(x,t)$ exist then at $\displaystyle x=0$, $\displaystyle u$ satisfies Rankine-Hugoniot condition, namely, for almost all $\displaystyle t$ $$f(u^{+}(t))=g(u^{-}(t)).$$ (1.5) For equation (1.4), the left and right traces $\displaystyle u^{-},u^{+}$ play important roles in well-posedness theory and also in determining the regularity of solutions. In Kyoto , authors proved the existence of the interface traces via Hamilton-Jacobi type equation. Because of the non-uniqueness of weak solutions, one needs some extra condition called the “entropy condition” to get the unique solution even for the case $\displaystyle f=g$ in (1.4). For $\displaystyle f=g$, Kružkov Kruzkov gave a generalized entropy condition and proved the uniqueness. But due to the discontinuity of flux at the interface, the Kružkov entropy is not good enough to prove the uniqueness of (1.4). Hence another condition is needed near the interface, called the “interface entropy condition”. Throughout this article, we use the following notion of the entropy solution. Definition 1.2 (Entropy solution, Kyoto ). A weak solution $\displaystyle u\in L^{\infty}(\mathbb{R}\times[0,T])$ of the problem (1.4) is said to be an entropy solution if the following holds. 1. $\displaystyle u$ satisfies Kruzkov entropy conditions on each side of the interface $\displaystyle x=0$, that is, in $\displaystyle\mathbb{R}\setminus\{0\}$. 2. The interface traces $\displaystyle u^{\pm}(t)=\lim\limits_{x\rightarrow 0\pm}u(x,t)$ exist for almost all $\displaystyle t>0$ and they satisfy the following “interface entropy condition” for almost all $\displaystyle t>0$, $$|\{t:f^{\prime}(u^{+}(t))>0>g^{\prime}(u^{-}(t))\}|=0.$$ (1.6) Uniqueness has been proved in Kyoto when interface traces exist for a weak solution and it satisfies the entropy condition (1.6). In the same article, the authors obtained the useful Lax-Oleinik type explicit formulas for (1.4). The notion of ‘A-B entropy solution’ is introduced in ASG and it coincides with (1.6) when $\displaystyle A=\theta_{g},B=\theta_{f}$. The number $\displaystyle\theta_{f}$ is defined by $\displaystyle f(\theta_{f})=\min f$ when $\displaystyle f$ admits a minimum and $\displaystyle g(\theta_{g})=\min g$. Lax-Oleinik type formula is also available Explicit for the ‘A-B-entropy solutions’. 
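For readers who wish to experiment numerically with the interface coupling just defined, the following is a minimal finite-volume sketch (ours, not one of the schemes of the references cited below). It uses the Godunov-type interface flux $F_{\rm int}(a,b)=\max\big(g(\max(a,\theta_{g})),\,f(\min(b,\theta_{f}))\big)$, which is the formula commonly associated with the $A=\theta_{g}$, $B=\theta_{f}$ connection; we take this expression as an assumption here, and the example fluxes are ours.

```python
import numpy as np

# Minimal Godunov-type step for (1.4) with the A = theta_g, B = theta_f connection.
# The grid is assumed to place a cell edge exactly at x = 0, and dt must satisfy
# a CFL condition dt <= dx / max|flux'| over the range of the data.

def f(u):  return 0.5 * (u - 1.0)**2   # example flux for x > 0, minimum at theta_f = 1
def g(u):  return 0.5 * u**2           # example flux for x < 0, minimum at theta_g = 0
theta_f, theta_g = 1.0, 0.0

def godunov_flux(flux, theta, a, b):
    """Godunov flux for a single convex flux with minimum at theta."""
    return max(flux(max(a, theta)), flux(min(b, theta)))

def step(u, x, dt, dx):
    """One forward-Euler Godunov step; x holds the cell centres."""
    F = np.empty(len(u) + 1)
    for j in range(1, len(u)):
        edge = x[j] - 0.5 * dx
        if abs(edge) < 1e-12:          # the interface edge at x = 0
            F[j] = max(g(max(u[j - 1], theta_g)), f(min(u[j], theta_f)))
        elif edge < 0.0:
            F[j] = godunov_flux(g, theta_g, u[j - 1], u[j])
        else:
            F[j] = godunov_flux(f, theta_f, u[j - 1], u[j])
    F[0], F[-1] = F[1], F[-2]          # crude outflow boundary fluxes
    return u - dt / dx * (F[1:] - F[:-1])

# Example grid with an edge at x = 0:  N = 400; dx = 0.02; x = dx*(np.arange(N) - N/2 + 0.5)
```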
It has been observed AS that for the case $\displaystyle A<\theta_{g}$ or $\displaystyle B>\theta_{f}$, ‘A-B-entropy solutions’ belong to BV space for BV initial data and for $\displaystyle A=\theta_{g},B=\theta_{f}$ total variation of entropy solution can blow up at finite time $\displaystyle t_{0}>0$ even for some BV initial data (see section 1.2 for more details). Therefore, we work with the choice $\displaystyle A=\theta_{g},B=\theta_{f}$. In this article, we rely on the interface entropy condition (1.6), and we use the analysis of characteristics developed as in Kyoto . The well-posed theory from the numerical and theoretical aspects has been extensively studied. We refer to ASG ; boris2 ; BGG ; KT ; P09 and the references therein. The existence of a solution of (1.4) has been proved by several numerical schemes AJG ; boris1 ; SAT ; tower . Due to the absence of total variation bound of solutions even for BV data, the singular mapping technique becomes useful to show the convergence of numerical schemes (see AJG ; tower ). Very recently, in SAT authors generalize the Godunov type scheme in the case when discontinuities of flux may have limit point even when the set of discontinuities is dense. Due to the lack of the BV regularity of the entropy solution of (1.4), one needs to study the regularity aspects of the solution in some bigger space than $\displaystyle BV$. More precisely, in this paper, we quantify the sharp regularity of entropy solution of (1.4) in suitable fractional spaces. Structure of the paper This paper is organized as follows: in sections 1.1 and 1.2, we have discussed regularity results for scalar conservation laws and for (1.4) respectively. Then, it leads to section 1.3 where we precisely state the regularity problems corresponding to the equation (1.4). In section 2 we describe our main results with some remarks. To make this article self-contained, in section 3 some definitions and preliminary results have been recalled from Kyoto ; junca1 . The detailed proofs of main results are written in section 4. Proofs of these results utilize the Hopf-Lax type formula and some results from Kyoto and techniques from AS ; S . Proofs here are a little more involved technically. In the last section, the construction of a counter-example shows that the main results of the present article cannot be improved. Two appendixes contain basic useful lemmas and explanations regarding our adaptation of the result from control theory AG . 1.1 Optimal regularity results in $\displaystyle BV^{s}$ spaces for a smooth flux: $\displaystyle f=g$ In this subsection, we discuss the entropy solution of (1.4) for $\displaystyle f=g$. Even for Lipschitz flux, the theory is well-posed godunov ; Kruzkov ; lax1 ; ol in the $\displaystyle L^{\infty}$ setting and many methodologies are available to understand the regularity of entropy solutions finer ; junca1 ; CJLO ; CJJ ; SAJJ ; SA ; GJC ; lax1 ; P05 ; P07 ; ol . Natural function space for scalar conservation law is $\displaystyle BV$ since the fundamental work of A. I. Volpert Volpert in 1967. This space allows to get compactness and it makes convenient to describe the structure of a shock wave with traces on each side of the singularity AFP . Information on trace helps to study finer qualitative properties of solutions. The occurrence of the $\displaystyle BV$ regularity for entropy solution appeared for the first time in lax1 ; ol independently by P. D. Lax and O. Oleinik. 
The entropy solution becomes instantaneously $\displaystyle BV$ even when the data is only in $\displaystyle L^{\infty}$, provided the flux is uniformly convex, i.e., $\displaystyle\inf f^{\prime\prime}>0$. This is the well known smoothing effect, a consequence of the one-sided Lipschitz-Oleinik inequality ol . Unfortunately, the ‘$\displaystyle BV$ space is not enough’ Cheng83 when the flux is not uniformly convex. There are many examples of entropy solutions that are not in $\displaystyle BV$ finer ; CJ1 ; SAJJ . Although the non-vanishing of the second derivative of the flux is proved SA to be a necessary and sufficient condition for BV regularisation, a smoothing effect still occurs in fractional Sobolev spaces JaX ; LPT for a nonlinear flux. To keep the advantages of the space $\displaystyle BV$, namely regularity and the existence of traces, the fractional $\displaystyle BV$ spaces were introduced for conservation laws in junca1 . The Lax-Oleinik smoothing effect was generalized in $\displaystyle BV^{s}$ for a flux with a power-law nonlinearity like $\displaystyle|u|^{p+1}$ with $\displaystyle p=1/s\geq 1$, for $\displaystyle C^{1}$ or strictly convex fluxes, in junca1 ; CJLO ; GJC . Fractional BV spaces, denoted by $\displaystyle BV^{s}$, $\displaystyle 0<s\leq 1$, were first defined for all $\displaystyle s\in(0,1)$ in love ; MO ; MO1 . Let $\displaystyle I$ be a non-empty interval of $\displaystyle\mathbb{R}$ and $\displaystyle s\in(0,1]$. The space of functions of fractional bounded variation, denoted $\displaystyle BV^{s}(I)$, is a generalization of the space of functions of bounded variation on $\displaystyle I$, denoted $\displaystyle BV(I)$. In the sequel, we denote by $\displaystyle S(I)$ the set of subdivisions of $\displaystyle I$, that is, the set of finite subsets $\displaystyle\sigma=({x_{0},x_{1},...,x_{n}})$ of $\displaystyle I$ with $\displaystyle x_{0}{<}x_{1}{<}x_{2}{<}...{<}x_{n}$. Definition 1.3 ($\displaystyle BV^{s}$ love ; MO ; MO1 ). Let $\displaystyle\sigma=(x_{0},x_{1},...,x_{n})$ be in $\displaystyle S(I)$ and let $\displaystyle u$ be a real function on $\displaystyle I$. The s-total variation of $\displaystyle u$ with respect to $\displaystyle\sigma$ is $$TV^{s}u({\sigma})=\sum_{i=1}^{n}{|u(x_{i})-u(x_{i-1})|}^{1/s},$$ and we then define $$TV^{s}u(I)=\sup_{\sigma\in S(I)}TV^{s}u({\sigma}).$$ The set $\displaystyle BV^{s}(I)$ is the set of functions $\displaystyle u:I\rightarrow\mathbb{R}$ such that $\displaystyle TV^{s}u(I)<\infty$. 1.2 Previous regularity results for discontinuous flux To study the convergence and the existence of traces of the solution, BV regularity plays a very important role: without a total variation bound, the convergence of numerical schemes is difficult. It is to be noted that one cannot expect the total variation of the solution to (1.4) to be diminishing, since non-constant solutions can arise from constant initial data. Despite extensive study of the equation (1.4) over several decades, optimal regularity results were missing for the solution of (1.4). Few results are known about the regularity of the solution to (1.4), and we describe them in the following paragraph. Although the solution has been proved burger1 to be BV in space away from the interface, the regularity of the solution near the interface was unknown for a long time. The first breakthrough result AS came in 2009. By constructing an explicit example with $\displaystyle\min f\neq\min g$, the authors of AS showed that the total variation of the entropy solution to (1.4) can blow up at a time $\displaystyle t_{0}>0$ for BV initial data. 
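Before describing how that example is built, here is a small numerical illustration (ours) of Definition 1.3: the $\displaystyle s$-total variation of a sampled function. Since $\displaystyle 1/s\geq 1$, the supremum over subdivisions is not simply the sum over adjacent samples, but a simple dynamic programme over the sample points computes it exactly.

```python
import numpy as np

# Exact s-total variation of a finite sequence u (samples of a function):
#   TV^s(u) = sup over subdivisions of sum |u(x_i) - u(x_{i-1})|^{1/s}.
# best[j] is the largest sum over subdivisions whose last point is j (O(n^2)).

def tv_s(u, s):
    r = 1.0 / s
    n = len(u)
    best = np.zeros(n)
    for j in range(1, n):
        best[j] = max(best[i] + abs(u[j] - u[i])**r for i in range(j))
    return best.max()

# Quick check with an oscillating sequence of peak heights k^(-3/4) separated by
# zeros: its ordinary total variation (s = 1) keeps growing as oscillations are
# added, while TV^{1/2} stays bounded.
k = np.arange(1, 200)
u = np.zeros(2 * len(k) + 1)
u[1::2] = k**-0.75
print(tv_s(u, 1.0), tv_s(u, 0.5))
```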
To build the example, they have utilized the failure of Lipschitz continuity of $\displaystyle f^{-1}g$ near the critical point of $\displaystyle f$. Here $\displaystyle g_{-}^{-1}$, $\displaystyle f_{+}^{-1}$ are the inverse of $\displaystyle g,f$ in appropriate domains, more precisely, they are defined as $$g_{-}^{-1}:\;\;](g^{\prime})^{-1}(-\infty),(g^{\prime})^{-1}(0)]\rightarrow\mathbb{R}\qquad\&\qquad f_{+}^{-1}:[(f^{\prime})^{-1}(0),(f^{\prime})^{-1}(+\infty)[\rightarrow\mathbb{R}.$$ (1.7) The key functions $\displaystyle f^{-1}_{+}g(\cdot)$ and $\displaystyle g^{-1}_{-}f(\cdot)$ transmit information via the interface from left-to-right and right-to-left respectively. On the other hand in S ; S2 the author proved several regularity results. More surprisingly, the author showed that one can prove that solution of (1.4) belongs to $\displaystyle BV$ if fluxes have the same minimum value, i.e., $\displaystyle\min f=f(\theta_{f})=\min g=g(\theta_{g})$. The author also proved that if $\displaystyle f(\theta_{f})\not=g(\theta_{g})$ and initial data is compactly supported then there exists a time $\displaystyle T$ such that for all $\displaystyle t>T$ solution of (1.4) admits the BV regularity. The assumption of compact support can not be relaxed as it has been shown by example that there exists a sequence of time, $\displaystyle T_{n}$, for which the total variation of solution to (1.4) blows up. Earlier referred publications have uniform convexity assumption on the fluxes, in S2 it has been proved that even for non-uniform convex flux (but with a special structure when the flux losses its uniform convexity) any $\displaystyle L^{\infty}$ initial data gives the solution which is $\displaystyle BV_{loc}$ near the interface when the connection $\displaystyle(A,B)$ as in Explicit are far from the critical point. This discussion leads to conclude that working in the BV space setup is not enough for scalar conservation law with discontinuous flux (1.4). Hence, working in larger space appears appropriate and we work in the space of functions of fractional bounded variation, denoted as $\displaystyle BV^{s}$, which is more generalised space than the BV space. In the following subsection we discuss the questions which are answered in the present paper. 1.3 Questions on the $\displaystyle BV^{s}$ regularity for discontinuous flux As we discussed so far, the entropy solution of (1.4) lacks the following properties: 1. If $\displaystyle u_{0}\in BV(\mathbb{R})$ then $\displaystyle u(\cdot,t)\in BV(\mathbb{R})$, for any $\displaystyle t>0$. 2. If $\displaystyle f,g$ are uniformly convex fluxes, $\displaystyle\min f\neq\min g$ and $\displaystyle u_{0}\in L^{\infty}(\mathbb{R})$ then for any $\displaystyle t>0$, $\displaystyle u(\cdot,t)\in BV_{\mbox{loc}}$. By motivating the subject on the basis of the above facts, we settle the following issues regarding regularity of the solution of (1.4). Question 1.1. Can we expect that for a well chosen $\displaystyle 0<s\leq 1$, if the given initial data belongs to $\displaystyle BV^{s}$ then the solution of (1.4) stays in $\displaystyle BV^{s}$? Question 1.2. Can we expect that for any $\displaystyle 0<s\leq 1$ there exists $\displaystyle 0<s_{1}$ such that if the given initial data belongs to $\displaystyle BV^{s}$ then the solution of (1.4) belongs to $\displaystyle BV^{s_{1}}$? In particular, when $\displaystyle u_{0}$ is in $\displaystyle BV=BV^{1}$, $\displaystyle s=1$, in which $\displaystyle BV^{s_{1}}$, $\displaystyle 0<s_{1}\leq 1$, is the solution? 
Question 1.3. What is the Lax-Oleinik type regularizing effect for uniformly convex fluxes $\displaystyle f$ and $\displaystyle g$? In other words, does the entropy solution of (1.4) belong to $\displaystyle BV^{s}$ for some $\displaystyle s\in(0,1)$ for any given $\displaystyle L^{\infty}$ initial data? Question 1.4. Can we choose $\displaystyle 0<s<1$ sharply and an initial datum $\displaystyle u_{0}\in BV^{s}$ for which the generalized total variation of the corresponding solution of (1.4) blows up? We answer Questions 1.1-1.4 affirmatively under certain assumptions on the fluxes $\displaystyle f,g$. We then show by counter-examples that the assumptions of our main results are optimal. Moreover, we provide explicit estimates of the $\displaystyle s$-total variation of the solution with respect to the time variable $\displaystyle t$ under some sufficient conditions on the initial data. 2 Main Results Throughout the paper, $\displaystyle f$ and $\displaystyle g$ are $\displaystyle C^{1}$ strictly convex functions admitting a critical point. Let $\displaystyle\theta_{f}$ and $\displaystyle\theta_{g}$ be the unique critical points of $\displaystyle f$ and $\displaystyle g$ respectively, i.e., $\displaystyle f^{\prime}(\theta_{f})=0$ and $\displaystyle g^{\prime}(\theta_{g})=0$, and let $\displaystyle g_{-}^{-1}$, $\displaystyle f_{+}^{-1}$ denote the inverses of $\displaystyle g,f$ restricted to the domains where $\displaystyle g^{\prime}(u)\leq 0$ and $\displaystyle f^{\prime}(u)\geq 0$ respectively. Notice that the existence of a minimum for $\displaystyle f$ and $\displaystyle g$ is always assumed in this paper, as it allows the critical behaviour of admissible solutions. If $\displaystyle f$ and $\displaystyle g$ have no minimum but are both strictly increasing or decreasing, the situation is simpler AS . Thus, throughout the paper, it is assumed that $$f(\theta_{f})=\min f\neq\min g=g(\theta_{g}).$$ (2.1) In the best case, when $\displaystyle f$ and $\displaystyle g$ are uniformly convex and (2.1) is satisfied, we obtain a smoothing in $\displaystyle BV^{1/2}$ instead of $\displaystyle BV$. In the non-uniformly convex case the situation is worse: the smoothing depends on the nonlinear flatness of the fluxes. Let us introduce the following non-degeneracy flux condition, which is a power-law degeneracy condition junca1 : there exist two numbers $\displaystyle p\geq 1$, $\displaystyle q\geq 1$ such that, for any compact set $\displaystyle K$, there exist positive numbers $\displaystyle C_{1},C_{2}$ such that for all $\displaystyle u\not=v$, $\displaystyle u,v\in K$, $$\displaystyle\displaystyle\frac{|f^{\prime}(u)-f^{\prime}(v)|}{|u-v|^{p}}>C_{1}>0$$ and $$\displaystyle\displaystyle\frac{|g^{\prime}(u)-g^{\prime}(v)|}{|u-v|^{q}}>C_{2}>0.$$ (2.2) For $\displaystyle p=1$ this is the classical uniform convexity condition for $\displaystyle f$, and for $\displaystyle p>1$ it corresponds to a less nonlinear convex flux such as $\displaystyle f(u)=|u|^{p+1}$. An interesting subcase is when the loss of uniform convexity of the fluxes occurs only at the minimum, that is, when $\displaystyle f$ belongs to $\displaystyle C^{2}$ and $\displaystyle f^{\prime\prime}$ vanishes only at $\displaystyle\theta_{f}$; the convex power laws $\displaystyle f(u)=|u|^{p+1}$, $\displaystyle p>1$, are the typical example. The same assumption can be made for the other flux $\displaystyle g$. 
$$f^{\prime\prime},g^{\prime\prime}\mbox{ vanish only at $\displaystyle\theta_{f}$ and $\displaystyle\theta_{g}$ respectively.}$$ (2.3) The assumption (2.3) combined with the previous one (2.2) is also called the restricted non-degeneracy condition, and such fluxes are called restricted fluxes. In the subcase where (2.3) is satisfied by both $\displaystyle f$ and $\displaystyle g$, stronger results are obtained; they are presented first, in Theorem 2.1 for $\displaystyle L^{\infty}$ initial data and in Theorem 2.2 for $\displaystyle BV^{s}$ initial data. Two quantities, $\displaystyle\gamma$ and $\displaystyle\nu$, are fundamental to express the fractional regularity of the solutions: $$\displaystyle\gamma=\left\{\begin{array}{ll}\dfrac{1}{q+1}&\mbox{ if }\min f<\min g,\\[2mm] \dfrac{1}{p+1}&\mbox{ if }\min f>\min g,\end{array}\right.\qquad\nu=\left\{\begin{array}{ll}\dfrac{1}{p}&\mbox{ if }\min f<\min g,\\[2mm] \dfrac{1}{q}&\mbox{ if }\min f>\min g.\end{array}\right.$$ (2.11) The constant $\displaystyle\gamma\leq 1/2$ can be understood as a loss of regularity due to the interface and $\displaystyle\nu\leq 1$ as the smoothing effect outside the interface. More precisely, $\displaystyle\gamma$ comes from the singular mapping technique, as explained in the following remark. Remark 2.1. Let $\displaystyle f$ and $\displaystyle g$ be fluxes satisfying the non-degeneracy condition (2.2) and $\displaystyle f(\theta_{f})\not=g(\theta_{g})$. Then one of $\displaystyle f^{-1}_{+}g(\cdot)$ and $\displaystyle g^{-1}_{-}f(\cdot)$ is Lipschitz continuous and the other one is Hölder continuous with exponent $\displaystyle\gamma$ depending on the exponents $\displaystyle p,q$ of the non-degeneracy condition (2.2) and given by (2.11). The proof of this fact is given in Lemma A.3. Theorem 2.1 (Smoothing effect for restricted nonlinear fluxes and $\displaystyle L^{\infty}$ initial data). Let $\displaystyle f$ and $\displaystyle g$ be two $\displaystyle C^{2}$ fluxes satisfying (2.1), i.e. $\displaystyle f(\theta_{f})\not=g(\theta_{g})$, and the restricted non-degeneracy condition (2.2)-(2.3). Let $\displaystyle u(\cdot,t)$ be the entropy solution of (1.4) corresponding to an initial data $\displaystyle u_{0}\in L^{\infty}(\mathbb{R})$. Then $\displaystyle u(\cdot,t)\in BV^{s}(-M,M)$ for each $\displaystyle t>0,\,M>0$, where $\displaystyle s$ is determined by $$s=\min(\gamma,\nu)$$ (2.12) and the following estimate holds with a positive constant $\displaystyle C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}$ depending only on the fluxes and the range of the initial data: $$TV^{s}(u(\cdot,t),[-M,M])\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+3(2||u_{0}||_{\infty})^{1/s}+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}.$$ (2.13) Remark 2.2 (Uniformly convex fluxes and $\displaystyle BV^{1/2}$). If the fluxes $\displaystyle f$ and $\displaystyle g$ are uniformly convex then the solution belongs to $\displaystyle BV^{1/2}$. So even in the uniformly convex case the solution only lies in a fractional $\displaystyle BV$ space. The following theorem states the corresponding result for $\displaystyle BV^{s}$ initial data, $\displaystyle 0<s\leq 1$. Indeed, Theorem 2.1 can be seen as the limiting case of Theorem 2.2 with $\displaystyle s=0$, interpreting $\displaystyle BV^{0}=L^{\infty}$. Theorem 2.2 (Smoothing effect for restricted nonlinear fluxes and $\displaystyle BV^{s}$ initial data). 
Let $\displaystyle f$ and $\displaystyle g$ be two $\displaystyle C^{2}$ fluxes such that $\displaystyle f(\theta_{f})\not=g(\theta_{g})$ and fluxes satisfy the restricted non-degeneracy condition (2.2) and (2.3). Let $\displaystyle u(\cdot,t)$ be the entropy solution of (1.4) corresponding to an initial data $\displaystyle u_{0}\in BV^{s}(\mathbb{R})$ for $\displaystyle s\in(0,1)$. Then, $\displaystyle u(\cdot,t)\in BV^{s_{1}}(-M,M)$ for each $\displaystyle t>0,\,M>0$ with $$s_{1}:=\min\{\gamma,\max\{\nu,s\}\}$$ (2.14) the following estimate holds with a positive constant $\displaystyle C_{f,g,||u_{0}||_{\infty}}$ depending only on fluxes and the range of the initial data and a constant $\displaystyle D>0$, $$TV^{s_{1}}(u(\cdot,t),[-M,M])\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}+2\left|\left|2u_{0}\right|\right|_{\infty}^{\frac{1}{s_{1}}}+D\cdot TV^{s}(u_{0}).$$ (2.15) We note that the assumption on vanishing points of $\displaystyle f^{\prime\prime},g^{\prime\prime}$ is restrictive. We can relax this assumption at the cost of smaller $\displaystyle s_{1}$. More precisely, we have the following result. Theorem 2.3 (Smoothing effect for $\displaystyle L^{\infty}$ initial data). Let $\displaystyle f$ and $\displaystyle g$ be two $\displaystyle C^{2}$ fluxes such that $\displaystyle f(\theta_{f})\not=g(\theta_{g})$ satisfying the non-degeneracy condition (2.2) with exponent $\displaystyle p,q$ respectively. Let $\displaystyle u(\cdot,t)$ be the entropy solution of (1.4) corresponding to an initial data $\displaystyle u_{0}\in L^{\infty}(\mathbb{R})$. Then, for each $\displaystyle t>0,M>0$ and there exists positive constant $\displaystyle C_{f,g,||u_{0}||_{\infty}}$ such that $$TV^{s}(u(\cdot,t),[-M,M])\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+3(2||u_{0}||_{\infty})^{1/s}+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}$$ where $\displaystyle s$ is determined as follows $$s=\gamma\,\nu.$$ (2.16) Theorem 2.4. With the same assumption as Theorem 2.3, if $\displaystyle u_{0}\in BV^{s_{0}}$. Then, $\displaystyle u(\cdot,t)\in BV^{s_{2}}$ and there exists positive constants $\displaystyle C_{f,g,||u_{0}||_{\infty}}$ and $\displaystyle D$ such that $$\displaystyle\displaystyle TV^{s_{2}}(u(\cdot,t),[-M,M])\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}+2\left|\left|2u_{0}\right|\right|_{\infty}^{\frac{1}{s_{2}}}+D\cdot TV^{s_{0}}(u_{0}),$$ (2.17) where, $$\displaystyle\displaystyle s_{2}=\gamma\max(s_{0},\nu).$$ (2.18) In general, away from the interface, the expected fractional regularity is $\displaystyle\min(1/p,1/q)$ junca1 which is always bigger than $\displaystyle s$ in (2.12). In particular, near the interface, for $\displaystyle BV$ initial data, a $\displaystyle BV$ regularity for the entropy solution cannot be expected AS . At most, a $\displaystyle BV^{1/2}$ regularity is possible. Getting $\displaystyle BV$ regularity of entropy solution can be impossible near the interface. The situation is better far from the interface. Far from the interface, the constant $\displaystyle\gamma$ plays no role. The following theorem gives estimates which are sharp for small time. Theorem 2.5 (Regularity outside the interface). Let $\displaystyle f$ and $\displaystyle g$ be the fluxes with $\displaystyle f(\theta_{f})\not=g(\theta_{g})$. 
Let $\displaystyle u(\cdot,t)$ be the entropy solution of (1.4) corresponding to an initial data $\displaystyle u_{0}\in BV^{s}(\mathbb{R})$ for $\displaystyle s\in(0,1)$. Then the following holds. 1. If $\displaystyle f,g$ satisfy (2.2) with exponents $\displaystyle p,q$ respectively, then for any $\displaystyle t>0$, $\displaystyle\epsilon>0$, there exists a constant $\displaystyle C_{f,g,||u_{0}||_{\infty}}>0$ such that $$TV^{s_{1}}(u(\cdot,t),(-\infty,-\epsilon]\cup[\epsilon,\infty))\leq\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}t}{\epsilon}+2TV^{s_{1}}(u_{0})+2(2||u_{0}||_{\infty})^{1/s_{1}}$$ (2.19) for $\displaystyle s_{1}=\min\{p^{-1},q^{-1},s\}$. 2. If we assume in addition (2.3), i.e. that $\displaystyle f^{\prime\prime},g^{\prime\prime}$ vanish only at $\displaystyle\theta_{f}$ and $\displaystyle\theta_{g}$ respectively, then we have $$\displaystyle\displaystyle TV^{s}(u(\cdot,t),(-\infty,-\epsilon]\cup[\epsilon,\infty))$$ $$\displaystyle\displaystyle\leq\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}}{\min\{f^{\prime\prime}(v):\,v\in[(f^{\prime})^{-1}(\epsilon/t),S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}]\}}\frac{t}{\epsilon}+TV^{s}(u_{0})$$ $$\displaystyle\displaystyle+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}}{\min\{g^{\prime\prime}(v):\,v\in[-S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}},(g^{\prime})^{-1}(-\epsilon/t)]\}}\frac{t}{\epsilon}+2(2||u_{0}||_{\infty})^{1/s}$$ (2.20) for any $\displaystyle t>0,\epsilon>0$, where $\displaystyle S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}$ is defined as $$\displaystyle S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}=\max\left\{\|u_{0}\|_{\infty},\sup_{|v|\leq\|u_{0}\|_{\infty}}|f^{-1}_{+}(g(v))|,\sup_{|v|\leq\|u_{0}\|_{\infty}}|g^{-1}_{-}(f(v))|\right\}.$$ Remark 2.3. All the regularity results in Theorems 2.1, 2.2, 2.3, 2.5 extend to the fractional Sobolev spaces $\displaystyle W^{s,p}$ with the same exponent $\displaystyle s$, up to any $\displaystyle\varepsilon>0$, thanks to the embedding $\displaystyle BV^{s}\subset W^{s-\varepsilon,1/s}$ for all $\displaystyle\varepsilon\in(0,s)$ junca1 . We now discuss the optimality result. The assumption $\displaystyle\min f\not=\min g$ excludes the favourable case $\displaystyle f=g$, that is, the case without an interface. Here, the optimality of Theorem 2.2 is proved in the best case, with uniformly convex fluxes. For this purpose, examples are built that attain the optimal regularity and no more. The same construction is valid with a power-law flux on one side of the interface. These examples highlight the sharpness of Theorem 2.2. Theorem 2.6 (Blow-up for critical $\displaystyle BV^{s}$ semi-norms). Let $\displaystyle p\geq 1$ and $\displaystyle\epsilon>0$. Then there exist fluxes $\displaystyle f,g$ and an initial datum $\displaystyle u_{0}\in BV(\mathbb{R})$ such that 1. the flux $\displaystyle f$ satisfies the non-degeneracy condition (2.2) with exponent $\displaystyle p$, 2. the function $\displaystyle g$ is uniformly convex, 3. the corresponding entropy solution $\displaystyle u(\cdot,T)\notin BV^{s}_{loc}(\mathbb{R})$ for some $\displaystyle T>0$ and $\displaystyle s=\frac{1}{p+1}+\epsilon$. The proof of Theorem 2.6 is postponed to Section 5 and Appendix C. 3 Preliminaries The fundamental paper used here is Kyoto , where Adimurthi and Gowda laid an important foundation of the theory of scalar conservation laws with an interface and two convex fluxes. 
In that paper the authors proposed the natural entropy condition (1.6) at the interface, which means that no information emanates from the interface alone: information only crosses the interface or moves towards it. Such an entropy condition is in the spirit of the Lax entropy condition for shock waves. To make this paper self-contained, we recall some definitions and results. The following theorem can be found in Kyoto , Lemma 4.9, page 51. It is a Lax-Oleinik or Lax-Hopf formula for the initial value problem (1.4). Theorem 3.1 (Kyoto ). Let $\displaystyle u_{0}\in L^{\infty}(\mathbb{R})$. Then there exists an entropy solution $\displaystyle u(\cdot,t)$ of (1.4) corresponding to the initial data $\displaystyle u_{0}$. Furthermore, there exist Lipschitz curves $\displaystyle R_{1}(t)\geq R_{2}(t)\geq 0$ and $\displaystyle L_{1}(t)\leq L_{2}(t)\leq 0$, monotone functions $\displaystyle z_{\pm}(x,t)$ non-decreasing in $\displaystyle x$ and non-increasing in $\displaystyle t$, and $\displaystyle t_{\pm}(x,t)$ non-increasing in $\displaystyle x$ and non-decreasing in $\displaystyle t$, such that the solution $\displaystyle u(x,t)$ is given by the following explicit formula for almost all $\displaystyle t>0$: $$\displaystyle\displaystyle u(x,t)=\left\{\begin{array}[]{lllll}(f^{\prime})^{-1}\left(\frac{x-z_{+}(x,t)}{t}\right)&\mbox{ if }&x\geq R_{1}(t),\\ (f^{\prime})^{-1}\left(\frac{x}{t-t_{+}(x,t)}\right)&\mbox{ if }&0\leq x<R_{1}(t),\\ (g^{\prime})^{-1}\left(\frac{x-z_{-}(x,t)}{t}\right)&\mbox{ if }&x\leq L_{1}(t),\\ (g^{\prime})^{-1}\left(\frac{x}{t-t_{-}(x,t)}\right)&\mbox{ if }&L_{1}(t)<x<0.\end{array}\right.$$ Furthermore, if $\displaystyle f(\theta_{f})\geq g(\theta_{g})$ then $\displaystyle R_{1}(t)=R_{2}(t)$, and if $\displaystyle f(\theta_{f})\leq g(\theta_{g})$ then $\displaystyle L_{1}(t)=L_{2}(t)$. The regularity of the inverse maps $\displaystyle(f^{\prime})^{-1}$ and $\displaystyle(g^{\prime})^{-1}$ entering this formula plays a key role in the estimates below; a short numerical illustration is sketched next. 
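As a purely illustrative aside (this sketch is not part of the paper), the inverse maps $\displaystyle(f^{\prime})^{-1}$ and $\displaystyle(g^{\prime})^{-1}$ appearing in the formula above can be evaluated numerically. The Python snippet below does so for the pair of fluxes later used in Section 5, $\displaystyle f(u)=|u|^{p+1}$ and $\displaystyle g(u)=u^{2}-1$; the choice $\displaystyle p=3$ and the sampled values of $\displaystyle h$ are arbitrary assumptions made only for this illustration. It shows that $\displaystyle(g^{\prime})^{-1}$ is Lipschitz while $\displaystyle(f^{\prime})^{-1}$ is only $\displaystyle 1/p$-Hölder near the critical value, which is the mechanism behind the $\displaystyle BV^{1/p}$ (rather than $\displaystyle BV$) smoothing discussed in this paper.

```python
import numpy as np

# Hypothetical illustration only (not part of the paper's construction).
# Fluxes taken from the counter-example of Section 5:
#   f(u) = |u|^(p+1)  =>  f'(u) = (p+1)|u|^p sign(u),  critical point theta_f = 0
#   g(u) = u^2 - 1    =>  g'(u) = 2u,                  critical point theta_g = 0
p = 3  # arbitrary choice for the illustration

def f_prime_inv(y):
    # (f')^{-1}(y) = sign(y) (|y|/(p+1))^{1/p}: only 1/p-Hoelder near y = 0
    return np.sign(y) * (np.abs(y) / (p + 1)) ** (1.0 / p)

def g_prime_inv(y):
    # (g')^{-1}(y) = y/2: globally Lipschitz, since g is uniformly convex
    return y / 2.0

# The difference quotient of (f')^{-1} at the degenerate value blows up as
# h -> 0, while the one of (g')^{-1} stays bounded.
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    qf = (f_prime_inv(h) - f_prime_inv(0.0)) / h
    qg = (g_prime_inv(h) - g_prime_inv(0.0)) / h
    print(f"h = {h:.0e}:  (f')^-1 quotient = {qf:10.3f}   (g')^-1 quotient = {qg:.3f}")
```

The blow-up of the first quotient is exactly the Hölder-only behaviour quantified later in Lemma A.1.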
In addition, only three cases can occur, with the following formulas to compute the solution: Case 1: $\displaystyle L_{1}(t)=0$ and $\displaystyle R_{1}(t)=0$, $$\displaystyle\displaystyle u(x,t)=\left\{\begin{array}[]{lllll}u_{0}(z_{+}(x,t))&\mbox{ if }&x>0,\\ u_{0}(z_{-}(x,t))&\mbox{ if }&x<0.\end{array}\right.$$ Case 2: $\displaystyle L_{1}(t)=0$ and $\displaystyle R_{1}(t)>0$, then $$\displaystyle\displaystyle u(x,t)=\left\{\begin{array}[]{lllll}f_{+}^{-1}g(u_{0}(z_{+}(x,t)))&\mbox{ if }&0<x<R_{2}(t),\\ f_{+}^{-1}g(\theta_{g})&\mbox{ if }&R_{2}(t)\leq x\leq R_{1}(t),\\ u_{0}(z_{-}(x,t))&\mbox{ if }&x<0.\end{array}\right.$$ Case 3: $\displaystyle L_{1}(t)<0$, $\displaystyle R_{1}(t)=0$, then $$\displaystyle\displaystyle u(x,t)=\left\{\begin{array}[]{lllll}g_{-}^{-1}f(u_{0}(z_{-}(x,t)))&\mbox{ if }&L_{2}(t)<x<0,\\ u_{0}(z_{-}(x,t))&\mbox{ if }&x\leq L_{1}(t),\\ g_{-}^{-1}f(\theta_{f})&\mbox{ if }&L_{1}(t)<x<L_{2}(t).\end{array}\right.$$ (Figure: characteristic picture for these cases, showing the curves $\displaystyle R_{1}(t)$, $\displaystyle R_{2}(t)$, $\displaystyle L_{1}(t)=L_{2}(t)=0$ and the quantities $\displaystyle z_{\pm}(x,t)$ and $\displaystyle t_{+}(x,t)$.) There is a maximum principle for such entropy solutions, but it is more complicated than in the case $\displaystyle f=g$: $$\|u\|_{\infty}\leq\max\left(\|u_{0}\|_{\infty},\sup_{|v|\leq\|u_{0}\|_{\infty}}|f^{-1}_{+}(g(v))|,\sup_{|v|\leq\|u_{0}\|_{\infty}}|g^{-1}_{-}(f(v))|\right)=:S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}.$$ (3.5) (Figure 3.5: the two fluxes $\displaystyle f$ and $\displaystyle g$, together with the critical points $\displaystyle\theta_{f}$, $\displaystyle\theta_{g}$ and the values $\displaystyle\tilde{\theta}_{f}$ and $\displaystyle\bar{\theta}_{f}$.) Without loss of generality, as shown in Figure 3.5, we can assume that $\displaystyle\min f<\min g$ for the proofs of all main results below. This choice forces the values of the entropy solution at the interface to lie outside $\displaystyle(\tilde{\theta}_{f},\bar{\theta}_{f})$. Thus the function $\displaystyle f^{\prime}$ stays away from $\displaystyle 0$ at the interface. Moreover, the function $\displaystyle f^{\prime-1}$ is Lipschitz outside $\displaystyle(\tilde{\theta}_{f},\bar{\theta}_{f})$. For restricted fluxes the function $\displaystyle f_{+}^{-1}$ is also Lipschitz outside $\displaystyle(\tilde{\theta}_{f},\bar{\theta}_{f})$. Figure 3.5 also illustrates that the singular maps $\displaystyle f_{+}^{-1}g$ and $\displaystyle g_{-}^{-1}f$ are Lipschitz and Hölder continuous respectively, which is proved in Appendix A, Lemma A.3. 4 Proof of main results This long section is devoted to proving the fractional $\displaystyle BV$ regularity of the entropy solution, depending on the degeneracy of the fluxes. A key point is first to estimate the regularity of the traces at the interface. In Subsection 4.1 we start by studying the fractional regularity in the favourable case when the traces at the interface are not near the critical values $\displaystyle\theta_{f}$ or $\displaystyle\theta_{g}$. Spatial $\displaystyle BV^{s}$ estimates for values issued from the interface are studied in Subsection 4.2, where only traces issuing from the initial data are considered. The crossing of the interface is studied later in Subsection 4.3. 4.1 Regularity when traces are far from critical values We first prove fractional $\displaystyle BV$ estimates when the traces at $\displaystyle x=0$ are far from the critical values $\displaystyle\theta_{f}$ or $\displaystyle\theta_{g}$. All of the estimates in this section bound sums of the form $\displaystyle\sum_{i}|u(x_{i})-u(x_{i+1})|^{1/s}$ over arbitrary partitions; a short numerical sketch of this quantity is given below. 
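The following short Python sketch is not part of the paper; it shows how such a sum $\displaystyle\sum_{i}|u(x_{i})-u(x_{i+1})|^{1/s}$ can be evaluated on one fixed partition for a hypothetical sample function, and checks on that example the elementary embedding inequality of Lemma B.1 in Appendix B. The sample function, the partition and the exponents are arbitrary choices made only for this illustration.

```python
import numpy as np

def frac_tv(values, s):
    """Sum of |u(x_i) - u(x_{i+1})|^(1/s) along one fixed partition."""
    diffs = np.abs(np.diff(values))
    return float(np.sum(diffs ** (1.0 / s)))

# Hypothetical sample: u(x) = |x|^(1/3) on a uniform partition of [-1, 1].
x = np.linspace(-1.0, 1.0, 2001)
u = np.abs(x) ** (1.0 / 3.0)

s_small, s_large = 0.25, 0.75          # two exponents with s_small < s_large
tv_small = frac_tv(u, s_small)          # corresponds to p = 1/s_small
tv_large = frac_tv(u, s_large)          # corresponds to q = 1/s_large
osc = float(u.max() - u.min())          # oscillation of u on [-1, 1]

print(f"TV^{s_small} over this partition: {tv_small:.4f}")
print(f"TV^{s_large} over this partition: {tv_large:.4f}")

# Lemma B.1: TV^s(u) <= osc(u)^(1/s - 1/t) * TV^t(u) whenever s < t.
bound = osc ** (1.0 / s_small - 1.0 / s_large) * tv_large
print(f"embedding bound from Lemma B.1: {bound:.4f}")
assert tv_small <= bound + 1e-12
```

Since each increment $\displaystyle|u(x_{i})-u(x_{i+1})|$ is at most $\displaystyle osc(u)$, the checked inequality reduces to the termwise comparison $\displaystyle|u(x_{i})-u(x_{i+1})|^{p}\leq osc(u)^{p-q}|u(x_{i})-u(x_{i+1})|^{q}$, exactly as in the proof of Lemma B.1.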
Lemma 4.1 (Fractional $\displaystyle BV$ estimate for the traces of the solution). Let $\displaystyle f,g$ satisfy (2.2) with exponents $\displaystyle p,q$ respectively. Let $\displaystyle 0<a<b<\infty$. Then the following holds: 1. If $\displaystyle u(0-,t)>\theta_{g}$ for a.e. $\displaystyle t\in(a,b)$, then we have $$TV^{\frac{1}{q}}(u(0-,\cdot),(a,b))\leq C_{g}\frac{b}{a},$$ (4.1) where $\displaystyle C_{g}>0$ is a constant depending only on $\displaystyle g$. 2. If $\displaystyle u(0+,t)<\theta_{f}$ for a.e. $\displaystyle t\in(a,b)$, then we have $$TV^{\frac{1}{p}}(u(0+,\cdot),(a,b))\leq C_{f}\frac{b}{a},$$ (4.2) where $\displaystyle C_{f}>0$ is a constant depending only on $\displaystyle f$. Proof. Since $\displaystyle u(0-,t)>\theta_{g}$ and $\displaystyle g^{\prime}\geq 0$ on $\displaystyle(\theta_{g},+\infty)$, the value of the left trace comes from the left. From Theorem 3.1, $\displaystyle u(0-,t)=(g^{\prime})^{-1}\left(\frac{-z_{-}(0-,t)}{t}\right)$ for $\displaystyle t\in(a,b)$, where $\displaystyle t\mapsto z_{-}(0-,t)$ is non-increasing. Since $\displaystyle g$ satisfies the non-degeneracy condition (2.2), from Lemma A.1 $\displaystyle(g^{\prime})^{-1}$ is a $\displaystyle 1/q$-Hölder function and there exists a constant $\displaystyle H_{g}$ such that $$\left|u(0-,t_{1})-u(0-,t_{2})\right|\leq H_{g}\left|\frac{z_{-}(0-,t_{1})}{t_{1}}-\frac{z_{-}(0-,t_{2})}{t_{2}}\right|^{\frac{1}{q}}.$$ We observe that $$\left|\frac{z_{-}(0-,t_{1})}{t_{1}}-\frac{z_{-}(0-,t_{2})}{t_{2}}\right|\leq\left|z_{-}(0-,t_{1})\right|\left|\frac{1}{t_{1}}-\frac{1}{t_{2}}\right|+\frac{1}{t_{2}}\left|z_{-}(0-,t_{1})-z_{-}(0-,t_{2})\right|.$$ For any partition $\displaystyle a\leq t_{1}<t_{2}<\cdots<t_{m}\leq b$, $$\displaystyle\displaystyle\sum\limits_{j=1}^{m-1}\left|u(0-,t_{j})-u(0-,t_{j+1})\right|^{q}$$ $$\displaystyle\displaystyle\leq{H_{g}}^{q}\sum\limits_{j=1}^{m-1}\left[\left|z_{-}(0-,t_{j})\right|\left|\frac{1}{t_{j}}-\frac{1}{t_{j+1}}\right|+\frac{1}{t_{j+1}}\left|z_{-}(0-,t_{j})-z_{-}(0-,t_{j+1})\right|\right]$$ $$\displaystyle\displaystyle\leq{H_{g}}^{q}\left[\left|z_{-}(0-,b)\right|\sum\limits_{j=1}^{m-1}\left|\frac{1}{t_{j}}-\frac{1}{t_{j+1}}\right|+\frac{1}{a}\sum\limits_{j=1}^{m-1}\left|z_{-}(0-,t_{j})-z_{-}(0-,t_{j+1})\right|\right]$$ $$\displaystyle\displaystyle\leq{H_{g}}^{q}\left[\frac{\left|z_{-}(0-,b)\right|(b-a)}{ab}+\frac{\left|z_{-}(0-,a)-z_{-}(0-,b)\right|}{a}\right].$$ Since $\displaystyle\left|z_{-}(0-,a)-z_{-}(0-,b)\right|\leq\left|z_{-}(0-,b)\right|$ and $\displaystyle b-a\leq b$ we have, $$\sum\limits_{j=1}^{m-1}\left|u(0-,t_{j})-u(0-,t_{j+1})\right|^{q}\leq 2{H_{g}}^{q}\frac{\left|z_{-}(0-,b)\right|}{a}.$$ From the finite speed of propagation we have $\displaystyle\left|z_{-}(0-,b)\right|\leq K_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\,b$, where $\displaystyle K_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}=\sup\{\left|g^{\prime}(v)\right|;\,\left|v\right|\leq\left|\left|u_{0}\right|\right|_{\infty}\}$. Hence, with a new constant $\displaystyle C_{g}$, $$\sum\limits_{j=1}^{m-1}\left|u(0-,t_{j})-u(0-,t_{j+1})\right|^{q}\leq C_{g}\frac{b}{a}.$$ This proves (4.1). Similarly, one can prove (4.2). ∎ Better fractional $\displaystyle BV$ estimates for the traces of the solution are available for less singular fluxes. Lemma 4.2 (Fractional $\displaystyle BV$ estimate for traces away from critical values). Let $\displaystyle r>0$ and let $\displaystyle f,g$ satisfy (2.2) with exponents $\displaystyle p,q$ respectively. Let $\displaystyle 0<a<b<\infty$. 1. 
If $\displaystyle u(0-,t)\geq\theta_{g}+r$ and $\displaystyle g^{\prime\prime}$ vanishes only at $\displaystyle\theta_{g}$ (2.3), then there exists a constant $\displaystyle C_{g}>0$ independent of $\displaystyle r$ such that the following inequality holds, $$\displaystyle\displaystyle TV(u(0-,\cdot),(a,b))\leq\frac{C_{g}}{\min\{g^{\prime\prime}(v)|v\in[\theta_{g}+r,||u_{0}||_{\infty}]\}}\frac{b}{a}.$$ (4.3) 2. If $\displaystyle u(0+,t)\leq\theta_{f}-r$ and $\displaystyle f^{\prime\prime}$ vanishes only at $\displaystyle\theta_{f}$ (2.3), then there exists a constant $\displaystyle C_{f}>0$ independent of $\displaystyle r$ such that the following inequality holds, $$\displaystyle\displaystyle TV(u(0+,\cdot),(a,b))\leq\frac{C_{f}}{\min\{f^{\prime\prime}(v)|v\in[-||u_{0}||_{\infty},\theta_{f}-r]\}}\frac{b}{a}.$$ (4.4) Lemma 4.2 will be used later with the constant $\displaystyle r$ given by either $\displaystyle\theta_{f}-\tilde{\theta}_{f}$ or $\displaystyle\bar{\theta}_{f}-{\theta}_{f}$, as shown in Figure 3.5. The fact that $\displaystyle r$ is a positive constant is crucial to get uniform estimates later. Proof. For any $\displaystyle x,y\in\mathbb{R}$ consider $$\displaystyle\displaystyle\left|x-y\right|=\left|g^{\prime}(g^{\prime-1}(x))-g^{\prime}(g^{\prime-1}(y))\right|=g^{\prime\prime}(\xi)\left|g^{\prime-1}(x)-g^{\prime-1}(y)\right|,$$ where $\displaystyle\xi$ lies between $\displaystyle g^{\prime-1}(x)$ and $\displaystyle g^{\prime-1}(y)$. Now for (4.3), Theorem 3.1 gives $\displaystyle u(0-,t)=(g^{\prime})^{-1}\left(\frac{-z_{-}(0-,t)}{t}\right)$ for $\displaystyle t\in(a,b)$, where $\displaystyle t\mapsto z_{-}(0-,t)$ is non-increasing. Thus, $$\displaystyle\displaystyle\left|u(0-,t_{1})-u(0-,t_{2})\right|$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\left|g^{\prime-1}\left(\frac{-z_{-}(0-,t_{1})}{t_{1}}\right)-g^{\prime-1}\left(\frac{-z_{-}(0-,t_{2})}{t_{2}}\right)\right|$$ $$\displaystyle\displaystyle\leq$$ $$\displaystyle\displaystyle\min\{g^{\prime\prime}(v);v\in[\theta_{g}+r,\left|\left|u_{0}\right|\right|_{\infty}]\}^{-1}\left|\frac{z_{-}(0-,t_{1})}{t_{1}}-\frac{z_{-}(0-,t_{2})}{t_{2}}\right|.$$ Now the same calculation as in the proof of (4.1) gives (4.3). By similar arguments (4.4) can be proven for $\displaystyle f$. ∎ 4.2 Spatial $\displaystyle BV^{s}$ estimates for values originating from the interface Now, far from the interface and for restricted fluxes, when the values of the solution are far from the critical values of $\displaystyle f$ and $\displaystyle g$, a $\displaystyle BV$ estimate is available. The following inequalities are also valid in $\displaystyle BV^{s}$ for free and are used later with other $\displaystyle BV^{s}$ estimates. Lemma 4.3 ($\displaystyle BV$ and $\displaystyle BV^{s}$ estimates for the solution). Let $\displaystyle u$ be an entropy solution and $\displaystyle R_{1}(t)>0$ for some fixed $\displaystyle t>0$. Let $\displaystyle 0<a<b<R_{1}(t)$ and $\displaystyle S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}$ be as in (3.5). Let $\displaystyle r>0$, let $\displaystyle f$ satisfy (2.2) and let $\displaystyle f^{\prime\prime}$ vanish only at $\displaystyle\theta_{f}$ (2.3). 
If $\displaystyle u(x,t)\geq\theta_{f}+r$ for $\displaystyle a\leq x\leq b$, then there exists a constant $\displaystyle C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}>0$ such that $$TV^{s}(u(\cdot,t),[a,b])\leq\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}}{\min\{f^{\prime\prime}(v);v\in[\theta_{f}+r,S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}]\}^{\frac{1}{s}}}\left(\frac{t-t_{+}(b,t)}{t-t_{+}(a,t)}\right)^{\frac{1}{s}},$$ (4.5) for all $\displaystyle 0<s\leq 1$. The same result holds for the left side of the interface as follows: Lemma 4.4 ($\displaystyle BV$ and $\displaystyle BV^{s}$ estimate for the solution). Let $\displaystyle u$ be an entropy solution and $\displaystyle L_{1}(t)<0$ for some $\displaystyle t>0$. Let $\displaystyle L_{1}(t)<a<b<0$ and $\displaystyle S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}$ be as in (3.5). Let $\displaystyle r>0$, flux $\displaystyle g$ satisfies (2.2) and $\displaystyle g^{\prime\prime}$ vanishes only on $\displaystyle\theta_{g}$. If $\displaystyle u(x,t)\leq\theta_{g}-r$ for $\displaystyle a\leq x\leq b$, then there exists a constant $\displaystyle C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}>0$ such that $$TV^{s}(u(\cdot,t),[a,b])\leq\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}}{\min\{g^{\prime\prime}(v);v\in[-S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}},\theta_{g}-r]\}^{\frac{1}{s}}}\left(\frac{t-t_{-}(b,t)}{t-t_{-}(a,t)}\right)^{\frac{1}{s}},$$ for all $\displaystyle 0<s\leq 1$. Proof. Theorem 3.1 gives, $$u(x,t)=(f^{\prime})^{-1}\left(\frac{x}{t-t_{+}(x,t)}\right)\mbox{ for }x\in(0,R_{1}(t)).$$ Fix a partition $\displaystyle a\leq x_{1}<x_{2}<\cdots<x_{m}\leq b$. Then, as in the proof of inequality (4.3), it follows, $$\displaystyle\displaystyle\sum\limits_{j=1}^{m-1}\left|u(x_{j},t)-u(x_{j+1},t)\right|^{\frac{1}{s}}$$ $$\displaystyle\displaystyle=\sum\limits_{j=1}^{m-1}\left|(f^{\prime})^{-1}\left(\frac{x_{j}}{t-t_{+}(x_{j},t)}\right)-(f^{\prime})^{-1}\left(\frac{x_{j+1}}{t-t_{+}(x_{j+1},t)}\right)\right|^{\frac{1}{s}}$$ $$\displaystyle\displaystyle\leq\frac{1}{\min\{f^{\prime\prime}(v);v\in[\theta_{f}+r,S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}]\}^{\frac{1}{s}}}\sum\limits_{j=1}^{m-1}\left|\frac{x_{j}}{t-t_{+}(x_{j},t)}-\frac{x_{j+1}}{t-t_{+}(x_{j+1},t)}\right|^{\frac{1}{s}}.$$ We calculate $$\displaystyle\displaystyle\left|\frac{x_{j}}{t-t_{+}(x_{j},t)}-\frac{x_{j+1}}{t-t_{+}(x_{j+1},t)}\right|$$ $$\displaystyle\displaystyle\leq\left|x_{j}\right|\left|\frac{1}{t-t_{+}(x_{j},t)}-\frac{1}{t-t_{+}(x_{j+1},t)}\right|+\frac{1}{t-t_{+}(x_{j+1},t)}\left|x_{j}-x_{j+1}\right|$$ $$\displaystyle\displaystyle\leq b\left|\frac{1}{t-t_{+}(x_{j},t)}-\frac{1}{t-t_{+}(x_{j+1},t)}\right|+\frac{1}{t-t_{+}(a,t)}\left|x_{j}-x_{j+1}\right|.$$ Hence, by the convexity yields, $\displaystyle(a+b)^{\frac{1}{s}}\leq 2^{\frac{1-s}{s}}\left(a^{\frac{1}{s}}+b^{\frac{1}{s}}\right)$ and we get $$\displaystyle\displaystyle\sum\limits_{j=1}^{m-1}\left|\frac{x_{j}}{t-t_{+}(x_{j},t)}-\frac{x_{j+1}}{t-t_{+}(x_{j+1},t)}\right|^{\frac{1}{s}}$$ $$\displaystyle\displaystyle\leq\frac{1}{2^{\frac{s-1}{s}}}\left(\sum\limits_{j=1}^{m-1}b^{\frac{1}{s}}\left|\frac{1}{t-t_{+}(x_{j},t)}-\frac{1}{t-t_{+}(x_{j+1},t)}\right|^{\frac{1}{s}}+\sum\limits_{j=1}^{m-1}\frac{1}{(t-t_{+}(a,t))^{\frac{1}{s}}}\left|x_{j}-x_{j+1}\right|^{\frac{1}{s}}\right)$$ 
$$\displaystyle\displaystyle\leq\frac{1}{2^{\frac{s-1}{s}}}\left(b^{\frac{1}{s}}\left|\frac{1}{t-t_{+}(a,t)}-\frac{1}{t-t_{+}(b,t)}\right|^{\frac{1}{s}}+\left(\frac{b-a}{t-t_{+}(a,t)}\right)^{\frac{1}{s}}\right)$$ $$\displaystyle\displaystyle\leq 2^{\frac{1}{s}}\left(\frac{b}{t-t_{+}(a,t)}\right)^{\frac{1}{s}}.$$ In the last step we have used $\displaystyle b-a\leq b$ and $\displaystyle(t-t_{+}(b,t))-(t-t_{+}(a,t))\leq t-t_{+}(b,t)$. Note that $\displaystyle b\leq K_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}(t-t_{+}(b,t))$, where $\displaystyle K_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}=\sup\{\left|f^{\prime}(v)\right|;\left|v\right|\leq S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\}$ and $\displaystyle S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}$ is defined as in (3.5). ∎ The following lemma deals with the spatial regularity of the entropy solution on the right side of the interface. Unlike inequality (4.5), it does not use the restricted non-degeneracy condition (2.3). Lemma 4.5. Let $\displaystyle u$ be an entropy solution and $\displaystyle R_{1}(t)>0$ for some fixed $\displaystyle t>0$. Let $\displaystyle 0<a<b<R_{1}(t)$ and $\displaystyle S_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}$ be as in (3.5). If $\displaystyle f$ only satisfies (2.2) with exponent $\displaystyle p$ then we have $$TV^{\frac{1}{p}}(u(\cdot,t),[a,b])\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\frac{t-t_{+}(b,t)}{t-t_{+}(a,t)}.$$ (4.6) The same result holds for the left side of the interface as follows. Lemma 4.6. Let $\displaystyle u$ be an entropy solution and $\displaystyle L_{1}(t)<0$ for some $\displaystyle t>0$. Let $\displaystyle L_{1}(t)<a<b<0$. If $\displaystyle g$ satisfies (2.2) with exponent $\displaystyle q$ then we have $$TV^{\frac{1}{q}}(u(\cdot,t),[a,b])\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\frac{t-t_{-}(b,t)}{t-t_{-}(a,t)}.$$ (4.7) By an argument similar to that of Lemma 4.3, the inequality (4.6) of Lemma 4.5 can be proven, so the proof is not written here. 4.3 Smoothing effect for restricted nonlinear fluxes Now we are ready to prove Theorem 2.1. To this end, an arbitrary partition is fixed and divided into several parts. Some parts are far from the interface, where the generalized variation is estimated by the regularizing effect for scalar conservation laws without a boundary. Others are near the interface, where the Lax-Oleinik formula for the solution Kyoto is used together with the previous lemmas. Proof of Theorem 2.1. Since $\displaystyle f(\theta_{f})\not=g(\theta_{g})$, without loss of generality assume that $\displaystyle f(\theta_{f})<g(\theta_{g})$, as in Figure 3.5. It is enough to consider the following two cases; the other cases are similar. Case (i): $\displaystyle L_{1}(t)=0$ and $\displaystyle R_{1}(t)\geq 0$. Consider an arbitrary partition $\displaystyle\{-M=x_{-n}<\cdots<x_{-1}<x_{0}\leq 0<x_{1}<\cdots<x_{l}\leq R_{2}(t)<x_{l+1}<\cdots<x_{m}\leq R_{1}(t)<x_{m+1}<\cdots<x_{n}=M\}$. Then, $$\sum_{i=-n}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}=\sum_{i=-n}^{-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}+\sum_{i=m+1}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}\\ +\sum_{i=1}^{l-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}+\sum_{i=l+1}^{m-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}\\ +|u(x_{0},t)-u(x_{1},t)|^{1/s}+|u(x_{l},t)-u(x_{l+1},t)|^{1/s}+|u(x_{m},t)-u(x_{m+1},t)|^{1/s}.$$ From Theorem 3.1, the solution $\displaystyle u$ is constant between $\displaystyle R_{2}(t)$ and $\displaystyle R_{1}(t)$, so the variation on this interval is zero. 
Now from the Lax-Oleinik formula in Theorem 3.1 and bounding the last three terms yield, $$\sum_{i=-n}^{n}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}\leq\underbrace{\sum_{i=-n}^{-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}}_{\text{I}}+\underbrace{\sum_{i=m+1}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}}_{\text{III}}\\ +\underbrace{\sum_{i=1}^{l-1}|f^{-1}_{+}g(u_{0}(z_{+}(x_{i},t)))-f^{-1}_{+}g(u_{0}(z_{+}(x_{i+1},t)))|^{1/s}}_{\text{II}}+3(2||u_{0}||_{\infty})^{1/s}.$$ Now we wish to estimate the terms I, II, and III. The simplest terms I, III are estimated as in junca1 ; CJLO . First taking the I into the account. Since $\displaystyle f$ and $\displaystyle g$ are satisfying the flux non-degeneracy condition (2.2), by Lemma A.1, the maps $\displaystyle u\mapsto(g^{\prime})^{-1}(u)$ and $\displaystyle u\mapsto(f^{\prime})^{-1}(u)$ are Hölder continuous with exponents $\displaystyle q^{-1}$ and $\displaystyle p^{-1}$ respectively. From Theorem 3.1, $$\displaystyle\displaystyle u(x,t)=(g^{\prime})^{-1}\left(\frac{x-z_{-}(x,t)}{t}\right),$$ $$\displaystyle\displaystyle\mbox{ for }x<0,$$ then for $\displaystyle-M\leq x_{i}<x_{i+1}\leq 0$, from Lemma A.1 $$\displaystyle\displaystyle|u(x_{i},t)-u(x_{i+1},t)|^{q}$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\left|(g^{\prime})^{-1}\left(\frac{x_{i}-z_{-}(x_{i},t)}{t}\right)-(g^{\prime})^{-1}\bigg{(}\frac{x_{i+1}-z_{-}(x_{i+1},t)}{t}\bigg{)}\right|^{q}$$ $$\displaystyle\displaystyle\leq$$ $$\displaystyle\displaystyle\bigg{(}C_{2}^{-q^{-1}}\bigg{|}\frac{x_{i}-z_{-}(x_{i},t)}{t}-\frac{x_{i+1}-z_{-}(x_{i+1},t)}{t}\bigg{|}^{q^{-1}}\bigg{)}^{q},$$ using triangle inequality we obtain, $$|u(x_{i},t)-u(x_{i+1},t)|^{q}\leq C_{2}^{-1}\bigg{|}\frac{x_{i}-x_{i+1}}{t}\bigg{|}+C_{2}^{-1}\bigg{|}\frac{z_{-}(x_{i},t)-z_{-}(x_{i+1},t)}{t}\bigg{|}.$$ Since $\displaystyle|x_{i}|,|x_{i+1}|\leq M$ and $\displaystyle x=z_{-}(x,t)+g^{\prime}(u(x,0))t$ hence, we get $$TV^{q^{-1}}u(\sigma\cap[-M,0])\leq\frac{4M}{C_{2}t}+\frac{1}{C_{2}}\sup\left\{\left|g^{\prime}(v)\right|;\,\left|v\right|\leq\left|\left|u_{0}\right|\right|_{L^{\infty}(\mathbb{R})}\right\}.$$ (4.8) In similar fashion, for the term III we have, $$TV^{p^{-1}}u(\sigma\cap[R_{1}(t),M])\leq\frac{4M}{C_{1}t}+\frac{1}{C_{1}}\sup\left\{\left|f^{\prime}(v)\right|;\,\left|v\right|\leq\left|\left|u_{0}\right|\right|_{L^{\infty}(\mathbb{R})}\right\}.$$ (4.9) Now we will estimate the II term. From the definition of $\displaystyle s$, $\displaystyle s\leq 1/p$ and $\displaystyle s\leq 1/(q+1)$. Rest of the proof for this case is divided into two sub-cases. 1. Consider the situation when $\displaystyle t_{+}^{{min}}(t)=\inf\{t_{+}(x,t);x\in(0,R_{1}(t))\}\geq t/2$. The fact $\displaystyle t_{+}^{min}>t/2>0$ implies that the characteristics reaching the left side of the interface at $\displaystyle(0-,t_{+})$ has a positive speed, hence $\displaystyle u(0-,t_{+}(x,t))>\theta_{g}$ for all $\displaystyle x\in(0,R_{1}(t))$ (Figure 3.5). Therefore, the inequality (4.1) of Lemma 4.1 gives $\displaystyle TV^{\frac{1}{q}}(u(0-,\cdot)(t_{+}^{min},t))\leq C_{g}\frac{t}{t/2}=2C_{g}$. Since $\displaystyle s\leq\frac{1}{q+1}<\frac{1}{q}$, Lemma B.1 yields $\displaystyle TV^{s}(u(0-,\cdot)(t_{+}^{min},t))\leq\mathcal{O}(1)$ and then $\displaystyle\textup{II}\leq\mathcal{O}(1)$. 2. Next focus on the sub-case when $\displaystyle t_{+}^{{min}}(t)=\inf\{t_{+}(x,t);x\in(0,R_{1}(t))\}<t/2$. As previous subcase we already have $\displaystyle TV^{s}(u(0-,\cdot)(t/2,t))\leq 2C_{g}$. 
Let $\displaystyle j_{0}>0$ such that $\displaystyle t_{+}(x_{j},t)\geq t/2$ for $\displaystyle 0<j\leq j_{0}$ and $\displaystyle t_{+}(x_{j},t)<t/2$ for $\displaystyle j_{0}<j\leq l-1$. Since $\displaystyle u(x_{j},t)=u(0+,t_{+}(x_{j},t))=f_{+}^{-1}g(u(0-,t_{+}(x_{j},t))$ for $\displaystyle 0<j<l-1$, from Lemma A.3, $\displaystyle f_{+}^{-1}g$ is Lipschitz function, hence $$\sum\limits_{j=1}^{j_{0}}\left|u(x_{j},t)-u(x_{j-1},t)\right|^{\frac{1}{s}}\leq\mathcal{O}(1).$$ Let $\displaystyle\bar{\theta}_{f}>\theta_{f}$ be such that $\displaystyle f(\bar{\theta}_{f})=g(\theta_{g})$ as shown in Figure 3.5. Then by RH condition (1.5) observe that $\displaystyle u(x_{j},t)\geq\bar{\theta}_{f}$. From the inequality (4.5) of Lemma 4.3 we get $$\sum\limits_{j=j_{0}+1}^{l-2}\left|u(x_{j},t)-u(x_{j+1},t)\right|^{\frac{1}{s}}\leq\mathcal{O}(1).$$ Subsequently, we get $$\textup{II}\leq\mathcal{O}(1).$$ (4.10) Hence combining the estimates on I, II and III for constant $\displaystyle C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}>0$ we have $$\sum_{i=-n}^{n}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\left(1+\frac{1}{t}\right).$$ $\displaystyle R_{1}(t)=0$, $\displaystyle L_{1}(t)<0$. Unlike previous case, this case is not as good due to the fact that $\displaystyle g_{-}^{-1}f$ is only Hölder continuous and not Lipschitz. Let us consider the partition $\displaystyle\sigma=\{-M=x_{-n}<\cdots<x_{m}\leq L_{2}(t)=L_{1}(t)<x_{m+1}<\cdots<x_{0}\leq R_{2}(t)=R_{1}(t)=0<x_{1}<\cdots\leq x_{n}=M\}$ and then $$\displaystyle\displaystyle\sum_{i=-n}^{n}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\sum_{i=-n}^{m-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}+\sum_{i=1}^{n}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle\sum_{i=m+1}^{-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}+|u(x_{0},t)-u(x_{1},t)|^{1/s}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle|u(x_{m},t)-u(x_{m+1},t)|^{1/s}.$$ From Theorem 3.1 we get, $$\displaystyle\displaystyle\sum_{i=-\infty}^{\infty}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\underbrace{\sum_{i=-n}^{m-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}}_{\text{I}}+2(2||u_{0}||_{\infty})^{1/s}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle\underbrace{\sum_{i=m+1}^{-1}|g^{-1}_{-}(f(u_{0}(z_{-}(x_{i},t))))-g^{-1}_{-}(f(u_{0}(z_{-}(x_{i+1},t))))|^{1/s}}_{\text{II}}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle\underbrace{\sum_{i=1}^{n}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}}_{\text{III}}.$$ Similarly to Case (i) we bound I, III as in (4.8), (4.9) to get $$\textup{I}+\textup{III}\leq\frac{C_{f,g,||u_{0}||_{\infty}}M}{t}.$$ Now the term II consider as previous term II and divide into two sub-cases. 1. We first consider the situation when $\displaystyle t_{-}^{{min}}(t)=\inf\{t_{-}(x,t);x\in(L_{1}(t),0)\}\geq t/2$. The RH condition (1.5) implies that $\displaystyle u(0+,\cdot)\leq\tilde{\theta}_{f}$, see Figure 3.5, the inequality (4.4) of Lemma 4.2 gives $$TV(u(0+,\cdot)(t_{-}^{min},t))\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}.$$ (4.11) Note that $\displaystyle g_{-}^{-1}\circ f$ is Hölder continuous function with exponent $\displaystyle\frac{1}{q+1}$. Hence we have $$\textup{II}=\sum\limits_{j=m+1}^{-1}\left|u(x_{j},t)-u(x_{j+1},t)\right|^{\frac{1}{q+1}}\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}.$$ (4.12) 2. 
Next we focus on the sub-case when $\displaystyle t_{-}^{{min}}(t)=\inf\{t_{-}(x,t);x\in(L_{1}(t),0)\}<t/2$. Let $\displaystyle j_{0}<0$ such that $\displaystyle t_{+}(x_{j},t)\geq t/2$ for $\displaystyle j_{0}\leq j<0$ and $\displaystyle t_{+}(x_{j},t)<t/2$ for $\displaystyle{m+1}<j<j_{0}$. In previous sub-case we have $$\sum\limits_{j=j_{0}}^{-1}\left|u(x_{j},t)-u(x_{j+1},t)\right|^{\frac{1}{q+1}}\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}.$$ (4.13) Note that for $\displaystyle m+1<j<j_{0}$, $\displaystyle u(x_{j},t)=u(0-,t_{-}(x_{j},t))\leq\theta_{g}$. From the inequality (4.7) of Lemma 4.6 we have $$\sum\limits_{j=m+1}^{j_{0}-1}\left|u(x_{j},t)-u(x_{j+1},t)\right|^{\frac{1}{q}}\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}.$$ (4.14) Subsequently, we get $$\textup{II}\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+\left|\left|2u_{0}\right|\right|_{\infty}^{\frac{1}{q+1}}.$$ (4.15) Hence, from the estimates on I, II and III we get $$\sum_{i=-n}^{n}|u(x_{i},t)-u(x_{i+1},t)|^{\frac{1}{q+1}}\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+3(2||u_{0}||_{\infty})^{\frac{1}{q+1}}+\frac{C_{f,g}M}{t}.$$ (4.16) ∎ 4.4 Generalization for $\displaystyle BV^{s}$ initial data Now we are able to prove Theorem 2.2. For this, again we divide the domain in several parts. Here initial data belongs to $\displaystyle BV^{s}$. If $\displaystyle s$ is very small then far from the interface estimates comes from the regularizing effect. If $\displaystyle s$ is near to $\displaystyle 1$ then outside interface initial data regularity propagates. For the estimate on the solution near interface again we use Lax-Oleinik formula from Kyoto . Proof of Theorem 2.2. Since $\displaystyle f(\theta_{f})\not=g(\theta_{g})$, without loss of generality we assume that $\displaystyle f(\theta_{f})<g(\theta_{g})$, see Figure 3.5 because other case can be done in a similar way. Hence, from Theorem 3.1 we have $\displaystyle L_{2}(t)=L_{1}(t)$ then it is enough to consider the following two cases. Case (i): If $\displaystyle L_{1}(t)=0$ and $\displaystyle R_{1}(t)\geq 0$. Consider the partition $\displaystyle\sigma=\{-M=x_{-n}\leq\cdots<x_{-1}<x_{0}\leq 0<x_{1}<\cdots<x_{l}\leq R_{2}(t)<x_{l+1}<\cdots<x_{m}\leq R_{1}(t)<x_{m+1}<\cdots\leq x_{n}=M\}$ and $$\displaystyle s_{1}=\min\{\gamma,\max\{\nu,s\}\}\in(0,1).$$ Then $$\displaystyle\displaystyle\sum_{i=-n}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\sum_{i=-n}^{-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}+\sum_{i=m+1}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle\sum_{i=1}^{l-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}+\sum_{i=l+1}^{m-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle|u(x_{0},t)-u(x_{1},t)|^{1/s_{1}}+|u(x_{l},t)-u(x_{l+1},t)|^{1/s_{1}}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle|u(x_{m},t)-u(x_{m+1},t)|^{1/s_{1}}.$$ From Theorem 3.1, the entropy solution is constant between $\displaystyle R_{2}(t)$ and $\displaystyle R_{1}(t)$ which means variation is zero for this interval. 
Hence, $$\displaystyle\displaystyle\sum_{i=-n}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}$$ $$\displaystyle\displaystyle=\underbrace{\sum_{i=-n}^{-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}}_{\text{I}}+3(2||u_{0}||_{\infty})^{1/s_{1}}$$ $$\displaystyle\displaystyle+\underbrace{\sum_{i=1}^{l-1}|f^{-1}_{+}g(u_{0}(z_{+}(x_{i},t)))-f^{-1}_{+}g(u_{0}(z_{+}(x_{i+1},t)))|^{1/s_{1}}}_{\text{II}}$$ $$\displaystyle\displaystyle+\underbrace{\sum_{i=m+1}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}}_{\text{III}}.$$ From the choice of $\displaystyle s_{1}$, we get $\displaystyle s_{1}\leq\max\{s,1/q\}$. If $\displaystyle 1/q>s$, then $\displaystyle s_{1}<1/q$. By a similar argument as in (4.8) we have $$\sum_{i=-n}^{-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}\leq\frac{4M}{C_{2}t}+\frac{1}{C_{2}}\sup\left\{\left|g^{\prime}(v)\right|;\,\left|v\right|\leq\left|\left|u\right|\right|_{L^{\infty}(\mathbb{R}\times[0,T])}\right\}.$$ If $\displaystyle s>1/q$ then $\displaystyle s_{1}<s$, and we use the regularity of the initial data to estimate I; from Lemma B.1, $\displaystyle\textup{I}\leq D\cdot TV^{s}(u_{0})$. Combining both estimates we can write $$\textup{I}\leq TV^{s}(u_{0})+\frac{4M}{C_{2}t}+\frac{1}{C_{2}}\sup\left\{\left|g^{\prime}(v)\right|;\,\left|v\right|\leq\left|\left|u\right|\right|_{L^{\infty}(\mathbb{R}\times[0,T])}\right\}.$$ (4.17) Similarly we have $$\textup{III}\leq TV^{s}(u_{0})+\frac{4M}{C_{1}t}+\frac{1}{C_{1}}\sup\left\{\left|f^{\prime}(v)\right|;\,\left|v\right|\leq\left|\left|u\right|\right|_{L^{\infty}(\mathbb{R}\times[0,T])}\right\}.$$ (4.18) From Lemma A.3 we know that $\displaystyle f_{+}^{-1}g(\cdot)$ is Lipschitz continuous. Hence, the term II can be estimated as II $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\sum_{i=1}^{l-1}|f^{-1}_{+}g(u_{0}(z_{+}(x_{i},t)))-f^{-1}_{+}g(u_{0}(z_{+}(x_{i+1},t)))|^{1/s_{1}}$$ $$\displaystyle\displaystyle\leq$$ $$\displaystyle\displaystyle C\cdot\sum_{i=1}^{l-1}|u_{0}(z_{+}(x_{i},t))-u_{0}(z_{+}(x_{i+1},t))|^{1/s_{1}}.$$ If $\displaystyle s>1/q$ then we have $\displaystyle s_{1}<s$ and, from Lemma B.1, $\displaystyle\textup{II}\leq D\cdot TV^{s}(u_{0})$. For the case $\displaystyle s<1/q$ we do not know whether $\displaystyle s_{1}<s$ holds or not, but we surely have $\displaystyle s_{1}<1/q$. In this case we use the regularizing effect for solutions of conservation laws due to the non-degeneracy of $\displaystyle g$ junca1 ; for the term II we then use the estimate (4.10) from the proof of Theorem 2.1. Hence, combining the estimates on I, II and III we get $$\sum_{i=-n}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s_{1}}\leq D\cdot TV^{s}(u_{0})+3(2||u_{0}||_{\infty})^{1/s_{1}}+\frac{C_{f,g}M}{t}.$$ Case (ii): $\displaystyle R_{1}(t)=0$, $\displaystyle L_{1}(t)<0$. This case can be handled in a similar fashion as the previous one; the only difference is the estimation of II, which can be done as in (4.15). Hence, we have proven that $\displaystyle u(\cdot,t)\in BV^{s_{1}}(-M,M)$. To show that $\displaystyle u(\cdot,t)\in BV^{s_{1}}(\mathbb{R})$, we consider a partition $\displaystyle-\infty<x_{-n}<\cdots<x_{n}<\infty$ which is not necessarily contained in $\displaystyle[-M,M]$. We can choose $\displaystyle M=t\sup\{\left|f^{\prime}(v)\right|,\left|g^{\prime}(v)\right|;\left|v\right|\leq\left|\left|u_{0}\right|\right|_{\infty}\}$. Suppose $\displaystyle\left|x_{j}\right|\leq M$ for $\displaystyle-m_{1}\leq j\leq m_{2}$ for some $\displaystyle 0<m_{1},m_{2}\leq n$. 
From (4.16) we get $$\sum_{i=-m_{1}}^{m_{2}}|u(x_{i},t)-u(x_{i+1},t)|^{\frac{1}{q+1}}\leq C_{f,g}+2(2||u_{0}||_{\infty})^{\frac{1}{q+1}}.$$ From the choice of $\displaystyle M$, we can see that $\displaystyle R_{1}(t)\leq M,L_{1}(t)\geq-M$. Hence for $\displaystyle i\leq-m_{1}$, $\displaystyle u(x_{i},t)=u_{0}(z_{-}(x_{i},t))$ and for $\displaystyle i\geq m_{2}$, $\displaystyle u(x_{i},t)=u_{0}(z_{+}(x_{i},t))$. Subsequently, $$\sum_{i=-n}^{-m_{1}-2}|u(x_{i},t)-u(x_{i+1},t)|^{\frac{1}{s}}+\sum_{i=m_{2}+1}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{\frac{1}{s}}\leq TV^{s}(u_{0}).$$ Therefore, we obtain $$\sum_{i=-n}^{n-1}|u(x_{i},t)-u(x_{i+1},t)|^{\frac{1}{q+1}}\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+4(2||u_{0}||_{\infty})^{\frac{1}{q+1}}+TV^{s}(u_{0}).$$ This completes the proof of Theorem 2.2. ∎ 4.5 Non restricted fluxes We now assume only the weaker non-degeneracy condition (2.2) on the fluxes, so the estimates on the solution near the interface from Lemmas 4.2, 4.3 and 4.4 cannot be used here; consequently, the regularity obtained is weaker. Proof of Theorem 2.3. Fix a time $\displaystyle t>0$. We only treat the case $\displaystyle R_{1}(t)>0$; note that in this case $\displaystyle L_{1}(t)=L_{2}(t)=0$. Suppose $\displaystyle t_{0}=\lim\limits_{x\rightarrow R_{1}(t)-}t_{+}(x,t)$. First consider $\displaystyle t_{0}>t/2$. From Lemma 4.1, we have $$TV^{\frac{1}{q}}(u(0-,\cdot),(t_{0},t))\leq\frac{C_{g}t}{t_{0}}\leq 2C_{g}.$$ Since $\displaystyle u\mapsto f_{+}^{-1}(g(u))$ is Hölder continuous with exponent $\displaystyle\frac{1}{p+1}$, we get $$\left|u(0+,t_{1})-u(0+,t_{2})\right|\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\left|u(0-,t_{1})-u(0-,t_{2})\right|^{\frac{1}{p+1}}.$$ Subsequently, we have $$TV^{s}(u(0+,\cdot),(t_{0},t))\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\mbox{ where }s=\frac{1}{q(p+1)}.$$ Note that for $\displaystyle x\in(0,R_{1}(t))$ we have $\displaystyle u(x,t)=u(0+,t_{+}(x,t))$. Therefore, $$TV^{s}(u(\cdot,t),(0,R_{1}(t)))\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}.$$ (4.19) For $\displaystyle x>R_{1}(t)$ we have $\displaystyle u(x,t)=(f^{\prime})^{-1}\left(\frac{x-z_{+}(x,t)}{t}\right)$ for a non-decreasing $\displaystyle x\mapsto z_{+}(x,t)$. By using the flux condition (2.2) for $\displaystyle f$, we obtain $$TV^{\frac{1}{p}}(u(\cdot,t),(R_{1}(t),M))\leq\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}.$$ (4.20) Hence, $$\displaystyle\displaystyle TV^{s}(u(\cdot,t),(0,M))$$ $$\displaystyle\displaystyle\leq TV^{s}(u(\cdot,t),(0,R_{1}(t)))+\left|\left|2u\right|\right|_{L^{\infty}(\mathbb{R})}^{\frac{1}{s}}+TV^{s}(u(\cdot,t),(R_{1}(t),M))$$ $$\displaystyle\displaystyle\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+\left|\left|2u_{0}\right|\right|_{L^{\infty}(\mathbb{R})}^{\frac{1}{s}}+TV^{\frac{1}{p}}(u(\cdot,t),(R_{1}(t),M))$$ $$\displaystyle\displaystyle\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+\left|\left|2u_{0}\right|\right|_{L^{\infty}(\mathbb{R})}^{\frac{1}{s}}+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}.$$ Next we consider the case when $\displaystyle t_{0}<t/2$. Let $\displaystyle x_{0}=\sup\{x;\,t_{+}(x,t)\geq t/2\}$. 
By Lemma 4.5 we have $$TV^{\frac{1}{p}}(u(\cdot,t);(x_{0},M))\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}.$$ (4.21) Similarly to (4.19) we get $$TV^{s}(u(\cdot,t),(0,x_{0}))\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}\mbox{ with }s=\frac{1}{q(p+1)}.$$ Subsequently, we obtain $$TV^{s}(u(\cdot,t),(0,M))\leq C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}+\left|\left|2u_{0}\right|\right|_{L^{\infty}(\mathbb{R})}^{\frac{1}{s}}+\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}.$$ Note that for $\displaystyle x<0$ we have $\displaystyle u(x,t)=(g^{\prime})^{-1}\left(\frac{x-z_{-}(x,t)}{t}\right)$. Then by using the flux condition (2.2) we can show that $$TV^{\frac{1}{q}}(u(\cdot,t);(-M,0))\leq\frac{C_{f,g,\left|\left|u_{0}\right|\right|_{\infty}}M}{t}.$$ (4.22) The other case, when $\displaystyle L_{1}(t)<0$, follows from a similar argument. This completes the proof of Theorem 2.3. ∎ 4.6 Propagation of the initial regularity outside the interface The regularity of entropy solutions outside the interface is better than at the interface; this is proven in this subsection. Proof of Theorem 2.5. We consider the partition $\displaystyle\epsilon\leq x_{0}<x_{1}<\cdots<x_{l}\leq R_{1}(t)\leq x_{l+1}<\cdots$. Then $$\displaystyle\displaystyle\sum_{i=0}^{\infty}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\sum_{i=0}^{l-1}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}+\sum_{i=l}^{\infty}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}.$$ Now from Theorem 3.1 we get, $$\displaystyle\displaystyle\sum_{i=0}^{\infty}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}$$ $$\displaystyle\displaystyle\leq$$ $$\displaystyle\displaystyle\sum_{i=0}^{l-1}\bigg{|}(f^{\prime})^{-1}\bigg{(}\frac{x_{i}}{t-t_{+}(x_{i},t)}\bigg{)}-(f^{\prime})^{-1}\bigg{(}\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\bigg{)}\bigg{|}^{1/s}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle\big{|}u(x_{l},t)-u(x_{l+1},t)\big{|}^{1/s}+\sum_{i=l+1}^{\infty}\big{|}u_{0}(y(x_{i},t))-u_{0}(y(x_{i+1},t))\big{|}^{1/s}.$$ Since $\displaystyle t_{+}(x,t)$ is a bounded monotone function and the infimum of $\displaystyle t-t_{+}(x,t)$ is positive, we get $$\frac{\epsilon}{T}\leq\frac{x}{t-t_{+}(x,t)}\leq\frac{M}{h(\epsilon,T)},$$ where $\displaystyle h(\epsilon,T)=\inf\{t-t_{+}(x,t):\epsilon\leq x\leq R_{1}(t),0<t\leq T\}$; this also implies that $\displaystyle(f^{\prime})^{-1}$ is a Lipschitz continuous function on the interval $\displaystyle\left[\frac{\epsilon}{T},\frac{M}{h(\epsilon,T)}\right]$. 
Then, $$\displaystyle\displaystyle\sum_{i=0}^{\infty}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}$$ $$\displaystyle\displaystyle\leq$$ $$\displaystyle\displaystyle C(\epsilon,t)\sum_{i=0}^{l-1}\bigg{|}\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\bigg{|}^{1/s}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle\big{|}u(x_{l},t)-u(x_{l+1},t)\big{|}^{1/s}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle\sum_{i=l+1}^{\infty}\big{|}u_{0}(y(x_{i},t))-u_{0}(y(x_{i+1},t))\big{|}^{1/s}.$$ The estimate on first sum follow from, $$\sum_{i=0}^{l-1}\bigg{|}\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\bigg{|}^{1/s}\\ =\sum_{i=0}^{l-1}\left|\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i}}{t-t_{+}(x_{i+1},t)}+\frac{x_{i}}{t-t_{+}(x_{i+1},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\right|^{1/s},\\ $$ from triangle inequality we get, $$\sum_{i=0}^{l-1}\bigg{|}\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\bigg{|}^{1/s}\\ \leq\sum_{i=0}^{l-1}\left(\left|\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i}}{t-t_{+}(x_{i+1},t)}\right|+\left|\frac{x_{i}}{t-t_{+}(x_{i+1},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\right|\right)^{1/s},$$ now from the inequality $\displaystyle a^{1/s}+b^{1/s}\leq(a+b)^{1/s}$ we get, $$\sum_{i=0}^{l-1}\bigg{|}\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\bigg{|}^{1/s}\\ \leq\left(\sum_{i=0}^{l-1}\left|\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i}}{t-t_{+}(x_{i+1},t)}\right|+\left|\frac{x_{i}}{t-t_{+}(x_{i+1},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\right|\right)^{1/s}.$$ Therefore, we get the following estimate, $$\sum_{i=0}^{l-1}\bigg{|}\frac{x_{i}}{t-t_{+}(x_{i},t)}-\frac{x_{i+1}}{t-t_{+}(x_{i+1},t)}\bigg{|}^{1/s}\leq\left(\frac{R_{1}(t)-\epsilon}{|t-t_{+}(\epsilon,t)|}+\frac{R_{1}(t)|t_{+}(\epsilon,t)-t_{+}(R_{1}(t),t)|}{|t-t_{+}(\epsilon,t)|^{2}}\right)^{1/s}.$$ Thus we have, $$\displaystyle\displaystyle\sum_{i=0}^{\infty}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}$$ $$\displaystyle\displaystyle\leq$$ $$\displaystyle\displaystyle C\sup_{0\leq t\leq T}\bigg{(}\frac{R_{1}(t)-\epsilon}{|t-t_{+}(\epsilon,t)|}+\frac{R_{1}(t)|t_{+}(\epsilon,t)-t_{+}(R_{1}(t),t)|}{|t-t_{+}(\epsilon,t)|^{2}}\bigg{)}^{1/s}$$ $$\displaystyle\displaystyle+$$ $$\displaystyle\displaystyle TV^{s}(u_{0})+(2||u_{0}||)^{1/s},$$ $$\displaystyle\displaystyle\leq$$ $$\displaystyle\displaystyle C(\epsilon,t)+TV^{s}(u_{0})+(2||u_{0}||)^{1/s}.$$ In a similar way the other case $\displaystyle x\leq-\epsilon$ can be handle, $$\sum_{i=0}^{\infty}|u(x_{i},t)-u(x_{i+1},t)|^{1/s}\leq C(\epsilon,t)+TV^{s}(u_{0})+2(2||u_{0}||)^{1/s}.$$ ∎ 5 Construction of counter-example Next we proceed to construct a counter-example so that initial data is in $\displaystyle BV$ but the corresponding solution is not in $\displaystyle BV^{s}$ at a fixed positive time $\displaystyle T>0$ and for some specific choice of flux. We refer to the backward construction for conservation laws with discontinuous flux introduced in AG . In order to apply the method of backward construction we need to recall some notations and results from AG . Next result is borrowed from AG which says that given $\displaystyle h_{+},z$ functions we can construct an entropy solution satisfying Hopf-Lax type formula for (1.4) with $\displaystyle h_{+},z$. Proposition 5.1 (Backward construction, AG ). Let $\displaystyle f,g$ are $\displaystyle C^{1}$ strictly convex functions. 
Let $\displaystyle R>0$ and $\displaystyle z:[0,R]\rightarrow(-\infty,0]$ be a non-decreasing function with $\displaystyle z_{0}=z(0+)$ and $\displaystyle z_{1}=z(R-)$. Suppose $$\displaystyle\displaystyle h_{+}\left(\frac{R}{T-t_{1}}\right)$$ $$\displaystyle\displaystyle=-\frac{z_{1}}{t_{1}},$$ $$\displaystyle\displaystyle g^{\prime}(u_{-})$$ $$\displaystyle\displaystyle=\frac{z_{0}}{T},\,g^{\prime}(v_{-})=-\frac{z_{1}}{t_{1}},\,\bar{v}_{-}=f_{+}^{-1}(g(v_{-})),$$ (5.1) where $\displaystyle h_{+}$ is defined as $$h_{+}:=g^{\prime}\circ g_{+}^{-1}\circ f\circ(f^{\prime})^{-1}.$$ (5.2) We additionally assume that $\displaystyle h_{+}$ is a locally Lipschitz function. Then there exist an initial data $\displaystyle u_{0}\in L^{\infty}(\mathbb{R})$ and a corresponding entropy solution $\displaystyle u$ to (1.4) such that $$u(x,T)=(f^{\prime})^{-1}\left(\frac{x}{T-t_{+}(x)}\right)\mbox{ where }-\frac{z(x)}{t_{+}(x)}=h_{+}\left(\frac{x}{T-t_{+}(x)}\right)\mbox{ for }x\in[0,R]$$ (5.3) and, additionally, $\displaystyle u(x,T)=u_{-}$ for $\displaystyle x<0$ and $\displaystyle u(x,T)=\bar{v}_{-}$ for $\displaystyle x>R$. (Figure: the fluxes $\displaystyle f(u)$ and $\displaystyle g(u)$ used in the counter-example, with the critical points $\displaystyle\theta_{f}$ and $\displaystyle\theta_{g}$.) To be self-contained, the main ingredients of the proof are given in Appendix C. Now the proof of Theorem 2.6 can be carried out. Proof of Theorem 2.6. Let $\displaystyle f(u)=\left|u\right|^{p+1}$ and $\displaystyle g(u)=u^{2}-1$. Note that by Lemma A.4 $\displaystyle f$ satisfies the non-degeneracy condition (2.2) with exponent $\displaystyle p$ and $\displaystyle g$ is uniformly convex. Let $\displaystyle\{a_{k}\}_{k\geq 1}$ be the sequence defined as $\displaystyle a_{2i}=i^{-\beta}$ and $\displaystyle a_{2i+1}=i^{-\alpha}$ with $\displaystyle\beta>\alpha>0$, which will be chosen later. Consider an increasing sequence $\displaystyle\{t_{k}\}$ such that $\displaystyle t_{k}\rightarrow 1$ and $$1-t_{2k+1}=\frac{1}{k^{\beta-\alpha}}(1-t_{2k})\mbox{ and }t_{2k+2}-t_{2k+1}=k^{-\lambda}$$ (5.4) where $\displaystyle\lambda>1$ will be chosen later. Then we have $$\frac{t_{2k+2}-t_{2k+1}}{t_{2k+1}}=\frac{1}{k^{\lambda}}\frac{1}{t_{2k+1}}\geq\frac{1}{k^{\lambda}}.$$ (5.5) We define $\displaystyle\{x_{i}\}$ as follows $$x_{i}=(1-t_{2i})a_{2i}=(1-t_{2i+1})a_{2i+1}.$$ (5.6) Since $\displaystyle\{t_{2i}\}_{i\geq 1}$ is increasing and $\displaystyle\{a_{2i}\}_{i\geq 1}$ is decreasing, $\displaystyle\{x_{i}\}_{i\geq 1}$ is a decreasing sequence. Let $\displaystyle h:[0,\infty)\rightarrow\mathbb{R}$ be defined as $$\displaystyle h(u)=2\sqrt{1+(p+1)^{-1-\frac{1}{p}}u^{1+\frac{1}{p}}}$$ for $\displaystyle u\geq 0$. Observe that $$\frac{h(a_{2i+1})}{h(a_{2i+2})}-1=\frac{\sqrt{1+\left(\frac{i^{-\alpha}}{p+1}\right)^{1+\frac{1}{p}}}-\sqrt{1+\left(\frac{(i+1)^{-\beta}}{p+1}\right)^{1+\frac{1}{p}}}}{\sqrt{1+\left(\frac{(i+1)^{-\beta}}{p+1}\right)^{1+\frac{1}{p}}}}\leq\frac{1}{i^{\frac{p+1}{p}\alpha}}-\frac{1}{i^{\frac{p+1}{p}\beta}}.$$ (5.7) Then if $\displaystyle\lambda<\frac{p+1}{p}\alpha$ we get $$\frac{h(a_{2i+1})}{h(a_{2i+2})}-1<\frac{t_{2i+2}}{t_{2i+1}}-1.$$ (5.8) Therefore, we have $$t_{2i+1}h(a_{2i+1})<t_{2i+2}h(a_{2i+2}).$$ (5.9) Note that $$\displaystyle\displaystyle\frac{1-t_{2i+1}}{1-t_{2i}}=\frac{1}{i^{\beta-\alpha}}<1.$$ (5.10) Hence, $\displaystyle t_{2i+1}>t_{2i}$. Since $\displaystyle h(a_{2i+1})>h(a_{2i})$ we have $\displaystyle t_{2i+1}h(a_{2i+1})>h(a_{2i})t_{2i}$. 
Let $\displaystyle\xi(x)$ be solving the following problem $$\displaystyle\displaystyle\left(\frac{x}{1-\xi(x)}\right)^{1+\frac{1}{p}}$$ $$\displaystyle\displaystyle=\left(\frac{C}{\xi(x)+d}\right)^{2}-1$$ (5.11) $$\displaystyle\displaystyle\xi(x_{i})$$ $$\displaystyle\displaystyle=t_{2i+1},$$ (5.12) $$\displaystyle\displaystyle\xi(x_{i+1})$$ $$\displaystyle\displaystyle=t_{2i+2}.$$ (5.13) Note that $\displaystyle C,d>0$ is determined by (5.12) and (5.13). Next we show that $\displaystyle\xi^{\prime}<0$. To this end we differentiate both side of (5.11) and get the following $$\begin{array}[]{rl}0<\left(1+\frac{1}{p}\right)x^{\frac{1}{p}}&=-\xi^{\prime}(x)\left(1+\frac{1}{p}\right)(1-\xi(x))^{\frac{1}{p}}\left[\left(\frac{C}{\xi(x)+d}\right)^{2}-1\right]\\ &-\xi^{\prime}(x)(1-\xi(x))^{1+\frac{1}{p}}\frac{2C}{(\xi(x)+d)^{2}}\left(\frac{C}{\xi(x)+d}\right).\end{array}$$ (5.14) Therefore, we get $\displaystyle\xi^{\prime}(x)<0$. Let $\displaystyle\Phi(x)$ be defined as $$\Phi(x):=\xi(x)\sqrt{1+\left(\frac{x}{1-\xi(x)}\right)^{1+\frac{1}{p}}}=\frac{C\xi(x)}{\xi(x)+d}.$$ (5.15) Observe that $$\Phi^{\prime}(x)=\xi^{\prime}(x)\left[\frac{C}{\xi(x)+d}-\frac{C\xi(x)}{(\xi(x)+d)^{2}}\right]=\xi^{\prime}(x)\frac{Cd}{(\xi(x)+d)^{2}}<0.$$ (5.16) Finally we define the function $\displaystyle t(x)$ such that $\displaystyle t(x_{i}+)=t_{2i}$ and $\displaystyle t(x_{i}-)=t_{2i+1}$ for $\displaystyle i\geq i_{0}$ and $\displaystyle t$ satisfies (5.11)–(5.13) for $\displaystyle x\in(x_{i+1},x_{i})$. Let $\displaystyle\rho:(0,\infty)\rightarrow\mathbb{R}$ be defined as $$\rho(x)=-t(x)h\left(\frac{x}{1-t(x)}\right).$$ (5.17) By (5.9) and (5.16), $\displaystyle x\mapsto\rho(x)$ is increasing. By Proposition 5.1 with $\displaystyle R=x_{1}$, there exists an entropy solution $\displaystyle u$ such that $$u(x_{i}+,1)=\left(\frac{x_{i}}{(p+1)(1-t_{2i})}\right)^{\frac{1}{p}}\mbox{ and }u(x_{i}-,1)=\left(\frac{x_{i}}{(p+1)(1-t_{2i+1})}\right)^{\frac{1}{p}}.$$ (5.18) By (5.6) we get $$u(x_{i}+,1)=\left(\frac{a_{2i}}{p+1}\right)^{\frac{1}{p}}\mbox{ and }u(x_{i}-,1)=\left(\frac{a_{2i+1}}{p+1}\right)^{\frac{1}{p}}.$$ (5.19) Therefore, $$\displaystyle\displaystyle\left|u(x_{i}-,1)-u(x_{i}+,1)\right|$$ $$\displaystyle\displaystyle=\left|\left(\frac{a_{2i}}{p+1}\right)^{\frac{1}{p}}-\left(\frac{a_{2i+1}}{p+1}\right)^{\frac{1}{p}}\right|$$ $$\displaystyle\displaystyle=(1+p)^{-\frac{1}{p}}\left[i^{-\frac{\alpha}{p}}-i^{-\frac{\beta}{p}}\right].$$ (5.20) Let $\displaystyle\epsilon>0$. Then, we have $$\left|u(x_{i}-,1)-u(x_{i}+,1)\right|^{\frac{p+1}{1+\epsilon}}\geq C(p)\left[i^{-\frac{\alpha(p+1)}{p(1+\epsilon)}}-i^{-\frac{\beta(p+1)}{p(1+\epsilon)}}\right].$$ (5.21) Now, we set $$\lambda=1+\frac{2p}{3(2p+1)}\epsilon,\,\frac{p+1}{p}\alpha=1+\frac{4p+2}{3(2p+1)}\epsilon\mbox{ and }\frac{p+1}{p}\beta=1+\frac{2(3p+2)}{3(2p+1)}\epsilon.$$ (5.22) We check that $\displaystyle\beta-\alpha=\lambda-1$ and $\displaystyle\frac{p+1}{p}\beta>1+\epsilon$. Hence, $\displaystyle u(\cdot,1)\notin BV^{s}_{loc}(\mathbb{R})$ for $\displaystyle s=\frac{1}{p+1}+\frac{\epsilon}{p+1}$. Note that by Proposition 5.1 initial data $\displaystyle u_{0}\in L^{\infty}(\mathbb{R})$. Now we find a data which is in $\displaystyle BV(\mathbb{R})$. From the construction we have $\displaystyle x_{1}<R_{2}(1)$ where $\displaystyle R_{2}(t)$ is as in Theorem 3.1. Choose a point $\displaystyle x_{0}\in(x_{1},R_{2}(1))$. Note that $\displaystyle 0<t_{+}(x_{0},1)<1$ and $\displaystyle u(x,t_{+}(x_{0},1))=\bar{v}_{-}$ for $\displaystyle x\geq 0$. 
We also observe that $\displaystyle L_{1}(t)=0$ and $\displaystyle R_{2}(t)>0$ for $\displaystyle t=t_{+}(x_{0},1)$. Therefore, for $\displaystyle t=t_{+}(x_{0},1)$ we have $$u(x,t)=(g^{\prime})^{-1}\left(\frac{x-z_{-}(x,t)}{t}\right)\mbox{ for }x<0.$$ (5.23) Since $\displaystyle g$ is uniformly convex we have $\displaystyle u(\cdot,t_{+}(x_{0},1))\in BV((-\infty,0))$. To conclude Theorem 2.6 we set $\displaystyle v_{0}(x):=u(x,t_{+}(x_{0},1))$. Let $\displaystyle v(x,t)$ be the entropy solution to (1.4) with initial data $\displaystyle v_{0}$. Note that $\displaystyle v(x,1-t_{+}(x_{0},1))=u(x,1)$ for all $\displaystyle x\in\mathbb{R}$. Hence, the proof of Theorem 2.6 is completed. ∎ Appendix A Hölder continuity of singular maps In this section useful lemmas on Hölder exponents and the non-degeneracy of fluxes are collected; they are used throughout the paper. Some commentary is added for each lemma. The following lemma recalls that the non-uniform convexity of a flux function corresponds to a loss of Lipschitz regularity for the reciprocal function of its derivative. This key point enforces $\displaystyle BV^{s}$ regularity (or generalized $\displaystyle BV$ regularity CJLO ; GJC ) instead of $\displaystyle BV$ regularity lax1 ; ol for the entropy solutions. Lemma A.1. Let $\displaystyle g\in C^{1}(\mathbb{R})$ satisfy the non-degeneracy condition (2.2) with exponent $\displaystyle q$. Then $\displaystyle(g^{\prime})^{-1}$ is Hölder continuous with exponent $\displaystyle 1/q$. Proof. Fix a compact set $\displaystyle K$. Let $\displaystyle x$ and $\displaystyle y$ be in $\displaystyle g^{\prime}(K)$. There exist $\displaystyle\tilde{x},\tilde{y}$ such that $\displaystyle\tilde{x}=(g^{\prime})^{-1}(x)$ and $\displaystyle\tilde{y}=(g^{\prime})^{-1}(y)$. Then, $$\frac{|(g^{\prime})^{-1}(x)-(g^{\prime})^{-1}(y)|}{|x-y|^{1/q}}=\frac{|\tilde{x}-\tilde{y}|}{|g^{\prime}(\tilde{x})-g^{\prime}(\tilde{y})|^{1/q}}\leq\frac{1}{C_{2}^{1/q}}.$$ This proves Lemma A.1. ∎ The interface condition (1.5) requires the use of some reciprocal functions of the fluxes $\displaystyle g$ and $\displaystyle f$. The fact that the reciprocal function of $\displaystyle g$ is never Lipschitz near $\displaystyle\min g$ forbids the classical Lax-Oleinik $\displaystyle BV$ smoothing effect for a uniformly convex flux. Lemma A.2. Let $\displaystyle g$ be a $\displaystyle C^{2}$ function satisfying (2.2) with exponent $\displaystyle q$. Then $\displaystyle g_{+}$ satisfies (2.2) with exponent $\displaystyle q+1$ on the domain $\displaystyle(\theta_{g},\infty)$. Proof. Since $\displaystyle\theta_{g}$ is the critical point of $\displaystyle g$, we have $\displaystyle g^{\prime}(\theta_{g})=0$, and we consider $$\displaystyle\displaystyle g(x)-g(y)$$ $$\displaystyle\displaystyle=(x-y)\int_{0}^{1}g^{\prime}(\lambda x+(1-\lambda)y)d\lambda,$$ $$\displaystyle\displaystyle=(x-y)\int_{0}^{1}(g^{\prime}(\lambda x+(1-\lambda)y)-g^{\prime}(\theta_{g}))d\lambda.$$ We know that $\displaystyle g^{\prime}(\cdot)$ is an increasing function and $\displaystyle g$ satisfies the non-degeneracy condition (2.2). 
Let $\displaystyle x>y\geq\theta_{g}$, then $$\displaystyle\displaystyle|g(x)-g(y)|$$ $$\displaystyle\displaystyle=|x-y|\int_{0}^{1}(g^{\prime}(\lambda x+(1-\lambda)y)-g^{\prime}(\theta_{g}))d\lambda$$ $$\displaystyle\displaystyle\geq C_{2}|x-y|\int_{0}^{1}(\lambda x+(1-\lambda)y)-\theta_{g})^{q}d\lambda$$ $$\displaystyle\displaystyle\geq\frac{1}{q+1}C_{2}\big{(}(x+(1-\lambda)y)-\theta_{g})^{q+1}\big{)}\Bigg{|}_{0}^{1}$$ $$\displaystyle\displaystyle\geq\frac{1}{q+1}C_{2}((x-\theta_{g})^{q+1}-(y-\theta_{g})^{q+1})$$ $$\displaystyle\displaystyle\geq\frac{C_{2}}{q+1}|x-y|^{q+1}.$$ (A.1) ∎ The previous commentary of Lemma A.2 is even more important for the non Lipschitz regularity of the singular map. Lemma A.3. Suppose fluxes $\displaystyle f$ and $\displaystyle g$ are $\displaystyle C^{1}(\mathbb{R})$ and convex functions with $\displaystyle f(\theta_{f})<g(\theta_{g})$ which additionally satisfies the non-degeneracy condition (2.2) and let $\displaystyle K$ is any compact set of $\displaystyle\mathbb{R}$. Then for $\displaystyle x\in K$, $\displaystyle f^{-1}_{+}g(\cdot)$ is a Lipschitz continuous function and $\displaystyle g^{-1}_{-}f(\cdot)$ is a Hölder continuous function. Proof. Since $\displaystyle f(\theta_{f})<g(\theta_{g})$, there exist $\displaystyle a_{1}<\theta_{f}<a_{2}$ such that $\displaystyle f(a_{1})=g(\theta_{g})=f(a_{2})$. Hence, we have $$\bar{c}:=\min\left\{\left|f^{\prime}(a)\right|;\,a\in(-\infty,a_{1}]\cup[a_{2},\infty)\right\}>0.$$ (A.2) Without loss generality we can assume that $\displaystyle g(x)\not=g(y)$ because if $\displaystyle g(x)=g(y)$ then result holds anyway. There exist $\displaystyle\tilde{x},\tilde{y}>\theta_{f}$ such that $\displaystyle f(\tilde{x})=g(x)$ and $\displaystyle f(\tilde{y})=g(y)$. As $\displaystyle f^{-1}_{+}$ is increasing, we get $\displaystyle\tilde{x},\tilde{y}>a_{2}$. Consider the following $$\displaystyle\displaystyle\frac{|f^{-1}_{+}g(x)-f^{-1}_{+}g(y)|}{|x-y|}$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\frac{|f^{-1}_{+}g(x)-f^{-1}_{+}g(y)|}{|g(x)-g(y)|}\cdot\frac{|g(x)-g(y)|}{|x-y|},$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\frac{|f^{-1}_{+}f(\tilde{x})-f^{-1}_{+}f(\tilde{y})|}{|f(\tilde{x})-f(\tilde{y})|}\cdot\frac{|g(x)-g(y)|}{|x-y|},$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\frac{|\tilde{x}-\tilde{y}|}{|f(\tilde{x})-f(\tilde{y})|}\cdot\frac{|g(x)-g(y)|}{|x-y|},$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\frac{1}{f^{\prime}(c_{0})}\cdot\frac{|g(x)-g(y)|}{|x-y|},$$ for some $\displaystyle c_{0}$ in between $\displaystyle\tilde{x},\tilde{y}$. Note that $\displaystyle{c}_{0}\geq a_{2}$ and $\displaystyle f^{\prime}\geq\bar{c}$. As $\displaystyle g$ is Lipschitz continuous function, we have $\displaystyle\left|g(x)-g(y)\right|\leq c_{1}\left|x-y\right|$ where $\displaystyle c_{1}$ depends on $\displaystyle g$ and $\displaystyle K$. Therefore we get, $$\frac{|f^{-1}_{+}g(x)-f^{-1}_{+}g(y)|}{|x-y|}\leq C.$$ (A.3) We know that for $\displaystyle f(x)\geq g(\theta_{g})$ there exists $\displaystyle\tilde{x}$ such that $\displaystyle f(x)=g(\tilde{x})$ and $\displaystyle g^{\prime}(\tilde{x})>0$, without loss of generality we can assume that $\displaystyle g(x)\not=g(y)$ because if $\displaystyle g(x)=g(y)$ then result holds. 
$$\displaystyle\displaystyle\frac{|g^{-1}_{-}f(x)-g^{-1}_{-}f(y)|^{q+1}}{|x-y|}$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\frac{|g^{-1}_{-}f(x)-g^{-1}_{-}f(y)|^{q+1}}{|f(x)-f(y)|}\cdot\frac{|f(x)-f(y)|}{|x-y|},$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\frac{|g^{-1}_{-}g(\tilde{x})-g^{-1}_{-}g(\tilde{y})|^{q+1}}{|g(\tilde{x})-g(\tilde{y})|}\cdot\frac{|f(x)-f(y)|}{|x-y|},$$ $$\displaystyle\displaystyle=$$ $$\displaystyle\displaystyle\frac{|\tilde{x}-\tilde{y}|^{q+1}}{|g(\tilde{x})-g(\tilde{y})|}\cdot\frac{|f(x)-f(y)|}{|x-y|}.$$ Now, from the Lipschitz continuity of $\displaystyle f$ and (A.1), $$\frac{|g^{-1}_{-}f(x)-g^{-1}_{-}f(y)|^{q+1}}{|x-y|}\leq C.$$ (A.4) Hence, it follows that $$|g^{-1}_{-}f(x)-g^{-1}_{-}f(y)|\leq C|x-y|^{\frac{1}{q+1}}.$$ ∎ The next lemma shows that power-law fluxes satisfy the non-degeneracy condition (2.2). Lemma A.4. Let $\displaystyle M>0$ and $\displaystyle g:[-M,M]\rightarrow\mathbb{R}$ be defined as $\displaystyle g(u)=\left|u\right|^{p}$ for $\displaystyle p\geq 2$. Then $\displaystyle g$ satisfies the non-degeneracy condition (2.2) with exponent $\displaystyle p-1$. This is the simplest example with power-law degeneracy $\displaystyle p-1$ junca1 ; CJ1 . Proof. We calculate $\displaystyle g^{\prime}(u)=\operatorname{sign}(u)p|u|^{p-1}$. Then we check that the non-degeneracy condition (2.2) is satisfied for $\displaystyle g$ case by case. Case (I): If $\displaystyle u,v\geq 0$ with $\displaystyle u\geq v$, since $\displaystyle(a+b)^{p-1}\geq a^{p-1}+b^{p-1}$ for $\displaystyle a,b\geq 0$ (applied with $\displaystyle a=u-v$ and $\displaystyle b=v$), we get $$\frac{|g^{\prime}(u)-g^{\prime}(v)|}{|u-v|^{p-1}}=p\frac{|u^{p-1}-v^{p-1}|}{|u-v|^{p-1}}\geq p.$$ (A.5) The case $\displaystyle u,v\leq 0$ can be handled similarly to Case (I). Case (II): If $\displaystyle u\leq 0$ and $\displaystyle v\geq 0$, then we also get $$\frac{|g^{\prime}(u)-g^{\prime}(v)|}{|u-v|^{p-1}}=p\frac{||u|^{p-1}+|v|^{p-1}|}{|u-v|^{p-1}}\geq p.$$ (A.6) The case $\displaystyle u\geq 0$ and $\displaystyle v\leq 0$ can be handled in a similar way to Case (II). ∎ Appendix B $\displaystyle BV^{s}$ embedding The continuous embedding between fractional $\displaystyle BV$ spaces is made explicit in the next lemma using the $\displaystyle L^{\infty}$ norm or, more precisely, the oscillation. Recall that the oscillation of the function $\displaystyle u$ on $\displaystyle I$ is $$\displaystyle osc(u):=\sup_{x<y}\{|u(x)-u(y)|\}\leq 2\|u\|_{\infty}.$$ Lemma B.1. Let $\displaystyle u:I\subset\mathbb{R}\rightarrow\mathbb{R}$ be a bounded function on a given interval $\displaystyle I$ and let $\displaystyle 0<s<t$ be such that $\displaystyle u\in BV^{t}\subset BV^{s}$. Let $\displaystyle p=\frac{1}{s}\geq q=\frac{1}{t}$. Then $$TV^{s}u(I)\;\leq\;osc(u)^{p-q}\;TV^{t}u(I).$$ (B.1) Proof. When $\displaystyle osc(u)\leq 1$, the inequality $\displaystyle y^{p}\leq y^{q}$ for all $\displaystyle y\in[0,1]$ gives a direct estimate. More precisely, let $\displaystyle\sigma=(x_{1},\cdots,x_{n})$ be any partition of $\displaystyle I$, $$\displaystyle\displaystyle\sum\limits_{i=1}^{n-1}|u(x_{i})-u(x_{i+1})|^{p}\leq\sum\limits_{i=1}^{n-1}|u(x_{i})-u(x_{i+1})|^{q}\leq TV^{t}u(I).$$ This inequality can be improved as follows if $\displaystyle u$ is non-constant, that is, $\displaystyle osc(u)>0$. For this purpose, consider $\displaystyle v=u/osc(u)$, so that $\displaystyle osc(v)\leq 1$. 
Now, on a subdivision, we have $$osc(u)^{-p}\sum\limits_{i=1}^{n-1}|u(x_{i})-u(x_{i+1})|^{p}=\sum\limits_{i=1}^{n-1}|v(x_{i})-v(x_{i+1})|^{p}$$ $$\leq\sum\limits_{i=1}^{n-1}|v(x_{i})-v(x_{i+1})|^{q}=osc(u)^{-q}\sum\limits_{i=1}^{n-1}|u(x_{i})-u(x_{i+1})|^{q}.$$ That is to say, we obtain the following inequality, which is also valid when $osc(u)=0$: $$\sum\limits_{i=1}^{n-1}|u(x_{i})-u(x_{i+1})|^{p}\leq osc(u)^{p-q}\sum\limits_{i=1}^{n-1}|u(x_{i})-u(x_{i+1})|^{q}.$$ This is enough to conclude the lemma. ∎ Appendix C Backward construction The proof of the optimality presented in section 5 requires the construction of initial data and a corresponding solution, borrowing ideas and techniques from control theory. We only give a sketch of the existence of such a solution, along with the initial data, as stated in Proposition 5.1. The complete construction can be found in AG. Proof of Proposition 5.1. We first approximate $z(x)$ by a piecewise constant increasing function as follows: $$\left\{\begin{array}[]{r}z_{0}=w_{0}<w_{1}<\cdots<w_{k}=z_{1},\\ \left|w_{i+1}-w_{i}\right|<\frac{1}{N},\\ 0=x_{0}<x_{1}<\cdots<x_{k}=R,\\ z(x_{i})=w_{i}\mbox{ for }1\leq i\leq k-1,\\ \mbox{ with }z_{0}=z(0)\mbox{ and }z_{1}=z(R-).\end{array}\right.$$ (C.1) We set $t_{0}=T$ and define $t_{i}$, $1\leq i\leq 2k$, and $c_{i},d_{i}$, $1\leq i\leq k$, as follows: $$\begin{array}[]{rl}h_{+}\left(\frac{x_{i}}{T-t_{2i-1}}\right)=-\frac{w_{i-1}}{t_{2i-1}},\,h_{+}\left(\frac{x_{i}}{T-t_{2i}}\right)=-\frac{w_{i}}{t_{2i}},\\ f^{\prime}(c_{2i-1})=\frac{x_{i}}{T-t_{2i-1}},\,f^{\prime}(c_{2i})=\frac{x_{i}}{T-t_{2i}}\mbox{ and }d_{i}=g_{+}^{-1}(f(a_{i})).\end{array}$$ (C.2) Then we observe that $c_{2i-1}>c_{2i}$, $d_{2i-1}>d_{2i}$, and $T=t_{0}>t_{1}>\cdots>t_{2k}=T_{1}$.
Consider Lipschitz curves $\displaystyle r_{i},\tilde{r}_{i},a_{i},b_{i}$ defined as follows $$\displaystyle\displaystyle\begin{array}[]{rll}s_{i}=\frac{f(c_{2i-1})-f(c_{2i})}{c_{2i-1}-c_{2i}},&S_{i}=\frac{g(d_{2i-1})-g(d_{2i})}{d_{2i-1}-d_{2i}},&1\leq i\leq k,\\ r_{i}(t)=g^{\prime}(d_{i})(t-t_{i}),&\tilde{r}_{i}(t)=f^{\prime}(c_{i})(t-t_{i}),&1\leq i\leq 2k,\\ a_{i}(t)=x_{i}+s_{i}(t-T),&b_{i}(t)=S_{i}(t-q_{i}),\,a_{i}(q_{i})=0,&1\leq i\leq 2k,\end{array}$$ (C.6) $$\displaystyle\displaystyle r_{0}(t)=g^{\prime}(b_{0})(t-T)=g^{\prime}(u_{-})(t-t_{0}).$$ (C.7) Now, we define $\displaystyle u_{0}^{N}$ as below $$u_{0}^{N}:=\left\{\begin{array}[]{ll}u_{-}&\mbox{ if }x<w_{0},\\ d_{2i-1}&\mbox{ if }w_{i-1}<x<b_{i}(0),1\leq i\leq k,\\ d_{2i}&\mbox{ if }b_{i}(0)<x<w_{i},1\leq i\leq k,\\ v_{-}&\mbox{ if }w_{2k}<x<0,\\ \bar{v}_{-}&\mbox{ if }x>0.\end{array}\right.$$ (C.8) Let $\displaystyle\tilde{t}_{i}(x)$ be the unique solution to $$h_{+}\left(\frac{x}{T-\tilde{t}_{i}(x,t)}\right)=-\frac{z_{i}}{\tilde{t}_{i}(x,t)}\mbox{ for }x\in(x_{i},x_{i+1}),\,1\leq i\leq k-1.$$ (C.9) Corresponding entropy solution $\displaystyle u^{N}$ is the following $$u^{N}(x,t)=\left\{\begin{array}[]{ll}u_{-}&\mbox{ if }x<r_{0}(t),\\ (g^{\prime})^{-1}\left(\frac{x-z_{i}}{t}\right)&\mbox{ if }r_{2i}(t)<x<\min\{r_{2i+1}(t),0\},\\ (f^{\prime})^{-1}\left(\frac{x}{t-\tilde{t}_{i}(x,t)}\right)&\mbox{ if }\max\{\tilde{r}_{2i+1}(t),0\}<x<\tilde{r}_{2i-1}(t),\\ d_{2i-1}&\mbox{ if }r_{2i-1}(t)<x<\min\{S_{i}(t),0\},1\leq i\leq k,\\ d_{2i}&\mbox{ if }S_{2i}(t)<x<\min\{r_{2i}(t),0\},1\leq i\leq k,\\ c_{2i-1}&\mbox{ if }\max\{\tilde{r}_{2i-1}(t),0\}<x<s_{i}(t),1\leq i\leq k,\\ c_{2i}&\mbox{ if }\max\{s_{i}(t),0\}<x<\tilde{r}_{2i},1\leq i\leq k,\\ v_{-}&\mbox{ if }r_{2k}(t)<x<0,\\ \bar{v}_{-}&\mbox{ if }x>\max\{\tilde{r}_{2k},0\}.\end{array}\right.$$ (C.10) By assumption we have $\displaystyle h_{+}$ is a locally Lipschitz continuous function and we can prove TV bound of $\displaystyle g^{\prime}(u_{0}^{N})$ (see AG for more details). Then, by applying Helly’s Theorem we can find a $\displaystyle u_{0}\in L^{\infty}(\mathbb{R})$ and corresponding entropy solution $\displaystyle u$ satisfying (5.3). This completes the proof of Proposition 5.1. ∎ Acknowledgement. Authors thank IFCAM project “Conservation laws: $\displaystyle BV^{s}$, interface and control”. SSG and AP thank the Department of Atomic Energy, Government of India, under project no. 12-R&D-TFR-5.01-0520 for support. SSG acknowledges Inspire faculty-research grant DST/INSPIRE/04/2016/00-0237. References (1) Adimurthi, R. Dutta, S. S. Ghoshal and G. D. Veerappa Gowda, Existence and nonexistence of TV bounds for scalar conservation laws with discontinuous flux, Comm. Pure Appl. Math., 64 (2011), no. 1, 84-115. (2) Adimurthi, S. S. Ghoshal and G. D. Veerappa Gowda, Finer regularity of an entropy solution for 1-d scalar conservation laws with non uniform convex flux, Rend. Semin. Mat. Univ. Padova, 132 (2014), 1-24. (3) Adimurthi and S. S. Ghoshal, Exact and optimal controllability for scalar conservation laws with discontinuous flux, to appear in Commun. Contemp. Math., (arxiv preprint arXiv:2009.13324). (4) Adimurthi, J. Jaffré and G. D. Veerappa Gowda, Godunov type methods for scalar conservation laws with flux function discontinuous in the space variable, SIAM J. Numer. Anal., 42(1) (2004), 179-208. (5) Adimurthi, S. Mishra and G. D. Veerappa Gowda, Optimal entropy solutions for conservation laws with discontinuous flux-functions, J. Hyperbolic Differ. Equ., 2 (2005), no. 
4, 783-837. (6) Adimurthi, S. Mishra and G. D. Veerappa Gowda, Explicit Hopf-Lax type formulas for Hamilton-Jacobi equations and conservation laws with discontinuous coefficients, J. Differential Equations, 241 (2007), no. 1, 1-31. (7) Adimurthi and G. D. Veerappa Gowda, Conservation laws with discontinuous flux, J. Math. Kyoto Univ., 43 (2003), no. 1, 27-70. (8) L. Ambrosio, N. Fusco and D. Pallara, Functions of bounded variation and free discontinuity problems, Oxford Mathematical Monographs, xviii, 434 p. (2000). (9) B. Andreianov and C. Cancès, The Godunov scheme for scalar conservation laws with discontinuous bell-shaped flux functions, Appl. Math. Lett., 25 (2012), no. 11, 1844-1848. (10) B. Andreianov, K. H. Karlsen and N. H. Risebro, A theory of $\displaystyle L^{1}$-dissipative solvers for scalar conservation laws with discontinuous flux, Arch. Ration. Mech. Anal. 201 (2011), no. 1, 27-86. (11) C. Bourdarias, M. Gisclon and S. Junca, Fractional BV spaces and applications to scalar conservation laws, J. Hyperbolic Differ. Equ. 11 (2014), no. 4, 655-677. (12) A. Bressan, G. Guerra and W. Shen, Vanishing viscosity solutions for conservation laws with regulated flux. J. Differ. Equ. 266 (2019) 312–351. (13) R. Bürger, A. García, K. H. Karlsen and J. D. Towers, A family of numerical schemes for kinematic flows with discontinuous flux, J. Engrg. Math., 60 (2008), no. 3-4, 387-425. (14) R. Bürger, K. H. Karlsen, N. H. Risebro and J. D. Towers, Well-posedness in $\displaystyle BV_{t}$ and convergence of a difference scheme for continuous sedimentation in ideal clarifier-thickener units, Numer. Math., 97 (2004), no. 1, 25-65. (15) R. Bürger, K. H. Karlsen and J. D. Towers, A model of continuous sedimentation of flocculated suspensions in clarifier-thickener units, SIAM J. Appl. Math., 65 (2005), no. 3, 882-940. (16) P. Castelli and S. Junca, Oscillating waves and the maximal smoothing effect for one dimensional nonlinear conservation laws, AIMS on Applied Mathematics, 8, 709-716, (2014). (17) P. Castelli and S. Junca, Smoothing effect in $\displaystyle BV-\Phi$ for entropy solutions of scalar conservation laws, J. Math. Anal. Appl., 451 (2), 712–735, (2017). (18) P. Castelli, P. E. Jabin and S. Junca, Fractional spaces and conservation laws, Theory, numerics and applications of hyperbolic problems I, Aachen, Germany, August 2016. Springer Proceedings in Mathematics & Statistics, 236, 285-293 (2018). (19) K. S. Cheng, The space $\displaystyle BV$ is not enough for hyperbolic conservation laws, J. Math. Anal. Appl., 91 (2), 559–561, (1983). (20) S. Diehl, Dynamic and steady-state behavior of continuous sedimentation, SIAM J. Appl. Math., 57 (1997), no. 4, 991-1018. (21) S. Diehl, A conservation law with point source and discontinuous flux function modeling continuous sedimentation, SIAM J. Appl. Math., 56 (1996), no. 2, 388–419. (22) S. S. Ghoshal, Optimal results on TV bounds for scalar conservation laws with discontinuous flux, J. Differential Equations, 258 (2015), no. 3, 980-1014. (23) S. S. Ghoshal, BV regularity near the interface for nonuniform convex discontinuous flux, Netw. Heterog. Media, 11 (2016), no. 2, 331-348. (24) S. S. Ghoshal, B. Guelmame, A. Jana and S. Junca, Optimal regularity for all time for entropy solutions of conservation laws in $\displaystyle BV^{s}$, Nonlinear Differential Equations and Applications NoDEA, 27 (2020), article number 46, 29 p. (25) S. S. Ghoshal and A. 
Jana, Non existence of the BV regularizing effect for scalar conservation laws in several space dimension for $\displaystyle C^{2}$ fluxes, SIAM J. Math. Anal. 53 (2021), no. 2, 1908–1943. (26) S. S. Ghoshal, A. Jana and J. D Towers, Convergence of a Godunov scheme to an Audusse-Perthame adapted entropy solution for conservation laws with BV spatial flux, Numer. Math. 146 (3), (2020), 629-659. (27) B. Guelmame, S. Junca and D. Clamond, Regularizing effect for conservation laws with a Lipschitz convex flux, Commun. Math. Sci., 17 (8), 2223-2238, (2019). (28) P. E. Jabin, Some regularizing methods for transport equations and the regularity of solutions to scalar conservation laws, Séminaire: Equations aux Dérivées Partielles, Ecole Polytech. Palaiseau, 2008-2009, Exp. No. XVI, (2010). (29) J. Jaffré and S. Mishra, On the upstream mobility flux scheme for the simulating two phase flow in heterogeneous porous media, Comput. Geosci., 2009. (30) S. K. Godunov, A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics. (Russian) Mat. Sb. (N.S.) 47 (1959) no. 89, 271-306. (31) K. H. Karlsen and J. D. Towers, Convergence of a Godunov scheme for conservation laws with a discontinuous flux lacking the crossing condition. J. Hyperbolic Differ. Equ., 14 (2017), no. 4, 671–701. (32) S. N. Kružkov, First-order quasilinear equations with several space variables, Mat. Sbornik, 123 (1970), 228-255; Math. USSR Sbornik, 10, (1970), 217-273 (in English). (33) P. D. Lax, Hyperbolic systems of conservation laws. II, Comm. Pure Appl. Math., 10 (1957) 537-566. (34) P.-L. Lions, B. Perthame, and E. Tadmor. A kinetic formulation of multidimensional scalar conservation laws and related equations. J. Amer. Math. Soc. 7, 169-192, (1994). (35) E. R. Love and L. C. Young, Sur une classe de fonctionnelles linéaires, Fund. Math., 28 (1937), 243-257. (36) S. Mochon, An analysis for the traffic on the highways with changing surface condition, Math. Model., 9 (1987), no. 1, 1-11. (37) J. Musielak and W. Orlicz, On space of functions of finite generalized variation, Bull. Acad. Pol. Sc., 5 (1957), 389-392. (38) J. Musielak and W. Orlicz, On generalized variations I, studia mathematica XVIII, (1959), 11-41. (39) E. Y. Panov., Existence of strong traces for generalized solutions of multidimensional scalar conservation laws, J. Hyperbolic Differ. Equ. 2, no. 4, 885–908, (2005). (40) E. Y. Panov, Existence of strong traces for quasi-solutions of multidimensional conservation laws, J. Hyperbolic Differ. Equ. 4 (4), 729–770, (2007). (41) E. Y. Panov, On existence and uniqueness of entropy solutions to the Cauchy problem for a conservation law with discontinuous flux. J. Hyperbolic Differ. Equ. 6, No. 3, 525-548 (2009) (42) O. A. Oleĭnik, Discontinuous solutions of non-linear differential equations, (Russian), Uspehi Mat. Nauk (N.S.), 12 (1957) no. 3, (75), 3-73. (43) D. S. Ross, Two new moving boundary problems for scalar conservation laws, Comm. Pure Appl. Math., 41 (1988), no.5, 725-737. (44) J. D. Towers, Convergence of a difference scheme for conservation laws with a discontinuous flux, SIAM J. Numer. Anal., 38 (2000), no. 2, 681-698. (45) A. I., Vol’pert. Spaces $\displaystyle BV$ and quasilinear equations. (Russian)Mat. Sb. (N.S.) 73 (115) 1967, 255–302.
Abstract Renormalization of two-loop divergent corrections to the vacuum expectation values ($v_{1}$, $v_{2}$) of the two Higgs doublets in the minimal supersymmetric standard model, and their ratio $\tan\beta=v_{2}/v_{1}$, is discussed for general $R_{\xi}$ gauge fixings. When the renormalized ($v_{1}$, $v_{2}$) are defined to give the minimum of the loop-corrected effective potential, it is shown that, beyond the one-loop level, the dimensionful parameters in the $R_{\xi}$ gauge fixing term generate gauge dependence of the renormalized $\tan\beta$. Additional shifts of the Higgs fields are necessary to realize the gauge-independent renormalization of $\tan\beta$. TU-641 hep-ph/0112251 Two-loop renormalization of $\tan\beta$ and its gauge dependence Youichi Yamada Department of Physics, Tohoku University, Sendai 980-8578, Japan PACS: 11.10.Gh; 11.15.-q; 14.80.Cp; 12.60.Jv Several extensions of the standard model have more than one Higgs boson doublets. For example, the minimal supersymmetric (SUSY) standard model (MSSM) [1, 2] has two Higgs doublets $$H_{1}=(H_{1}^{0},H_{1}^{-}),\;\;\;H_{2}=(H_{2}^{+},H_{2}^{0}).$$ (1) Both $H_{1}^{0}$ and $H_{2}^{0}$ acquire the vacuum expectation values (VEVs) $v_{i}$ ($i=1,2$) which spontaneously break the SU(2) $\times$ U(1) gauge symmetry. $H_{i}^{0}$ are then expanded about the minimum of the Higgs potential as $$H_{i}^{0}=\frac{v_{i}}{\sqrt{2}}+\phi_{i}^{0}.$$ (2) $\phi_{i}^{0}$ are shifted Higgs fields with vanishing VEVs. I assume that CP violation in the Higgs sector is negligible and take $v_{i}$ as real and positive. A lot of physical quantities of the theory depend on the Higgs VEVs. In calculating radiative corrections to these quantities, the VEVs have to be renormalized. In the minimal standard model with only one Higgs doublet, the renormalization of the Higgs VEV is usually substituted by that of the weak boson masses [3, 4]. However, this is not enough for extended theories with two or more Higgs VEVs. For example, the renormalization of $v_{i}$ in the MSSM is usually performed [5, 6, 7] by specifying the weak boson masses, which are proportional to $v_{1}^{2}+v_{2}^{2}$, and the ratio $\tan\beta\equiv v_{2}/v_{1}$. Since $\tan\beta$ itself is not a physical observable, however, a lot of renormalization schemes for $\tan\beta$ have been proposed in the studies of the radiative corrections in the MSSM. Some of them are listed in Ref. [8]. In this letter, I concentrate on process-independent definitions of $\tan\beta$, which are given by the ratio of the renormalized VEVs $v_{i}$. I discuss the renormalization of the ultraviolet (UV) divergent corrections to $v_{i}$ and $\tan\beta$, working in the modified minimal subtraction schemes with dimensional reduction [9] ($\overline{\rm DR}$ scheme). The results are presented as the renormalization group equations (RGEs) for $v_{i}$ and $\tan\beta$. Since they are not physical observables, they may depend on the gauge fixing in general. I therefore investigate their gauge dependence in the general $R_{\xi}$ gauge fixing [10]. Although I show the results for the MSSM, the results for the gauge dependence can be generalized for other models with two or more Higgs doublets. Even within the $\overline{\rm DR}$ scheme, there still remains an ambiguity of the way how to cancel the radiative shifts of the Higgs VEVs, $\Delta v_{i}$, by the one-point functions of $\phi^{0}_{i}$ by tadpole diagrams. One way is to cancel $\Delta v_{i}$ entirely by the shift of $\phi_{i}^{0}$. 
As a result, the tadpole contributions have to be added to all quantities which depend on $v_{i}$. The renormalized $v_{i}$ give the minimum of the tree-level Higgs potential and are just tree-level functions of the gauge-symmetric quadratic and quartic couplings in the Higgs potential. These $v_{i}$ are therefore independent of the gauge fixing parameters [11]. This renormalization scheme for $v_{i}$ is sometimes used [12, 13, 14, 15] to show manifest gauge independence of physical quantities. However, since the running of $v_{i}$ in this scheme is very rapid [16], and the tadpole contributions appear in almost any corrections, this scheme is often inconvenient in practical calculations. Another, more popular way [16, 5, 6, 7, 15] is to absorb $\Delta v_{i}$ by the shift of quadratic terms in the Higgs potential. The renormalized $v_{i}$ then give the minimum of the loop-corrected effective potential $V_{\rm eff}(H_{1},H_{2})$. This scheme is very convenient in practical calculation, because the explicit forms of the tadpole diagrams are necessary only for two-point functions of the Higgs bosons. However, the effective potential is generally dependent on the gauge fixing parameters [17, 18, 19, 20]. The gauge dependence of the renormalized $v_{i}$ and their ratio $\tan\beta$ then might be a serious problem in calculating radiative corrections. I will therefore discuss the gauge dependence of the running $\tan\beta$ in this definition, in general $R_{\xi}$ gauges and to the two-loop order. The RGE for $v_{i}$ can be obtained from the UV divergent corrections to $v_{i}$-dependent masses or couplings of particles. For simplicity, I use the corrections to two quark masses $m_{b}$ and $m_{t}$, ignoring the masses of all other quarks and leptons. These mass terms are generated from the $b\bar{b}H_{1}$ and $t\bar{t}H_{2}$ Yukawa couplings, respectively, as $$L_{\rm int}=-h_{b}\bar{b}_{R}b_{L}(v_{1}/\sqrt{2}+\phi_{1}^{0})-h_{t}\bar{t}_{% R}t_{L}(v_{2}/\sqrt{2}+\phi_{2}^{0})+{\rm h.c.}$$ (3) The $R_{\xi}$ gauge fixing term takes the form $$\displaystyle L_{GF}$$ $$\displaystyle=$$ $$\displaystyle-\frac{1}{2\xi_{Z}}(\partial^{\mu}Z_{\mu}-\rho_{Z}G_{Z})^{2}-% \frac{1}{\xi_{W}}|\partial^{\mu}W^{+}_{\mu}-i\rho_{W}G_{W}^{+}|^{2}$$ (4) $$\displaystyle-\frac{1}{2\xi_{\gamma}}(\partial^{\mu}\gamma_{\mu})^{2}-\frac{1}% {2\xi_{g}}\sum_{a=1}^{8}(\partial^{\mu}g^{a}_{\mu})^{2}.$$ The would-be Nambu-Goldstone bosons $G_{V}$ for $V=(Z,W)$ appear in Eq. (4). The parameters $\rho_{V}\equiv\xi_{V}m_{V}$, where $m_{V}^{2}=g_{V}^{2}(v_{1}^{2}+v_{2}^{2})/4$ ($g_{W}^{2}=g_{2}^{2}$, $g_{Z}^{2}=g_{2}^{2}+g_{Y}^{2}$) are masses of $Z$ and $W^{\pm}$, are introduced in Eq. (4). This is to emphasize that the gauge symmetry breaking terms $\xi_{V}m_{V}$ in $L_{GF}$, and also in the accompanied Fadeev-Popov ghost term, has very different nature from $v_{i}$ generated by the shifts (2), as shown later. The terms $\rho_{V}G_{V}$ in Eq. (4) are expressed in the gauge basis (1) of the Higgs bosons as $$\rho_{Z}G_{Z}=\xi_{Z}m_{Z}G_{Z}\equiv-{\sqrt{2}}{\rm Im}(\rho_{1Z}\phi_{1}^{0}% -\rho_{2Z}\phi_{2}^{0}),$$ (5) $$\rho_{W}G_{W}^{\pm}=\xi_{W}m_{W}G_{W}^{\pm}\equiv-(\rho_{1W}H_{1}^{\pm}-\rho_{% 2W}H_{2}^{\pm}),$$ (6) with parameters $\rho_{iV}$. 
The usual form of the $R_{\xi}$ gauge fixing in the MSSM is recovered by the substitution [6, 2] $$(\rho_{1V},\rho_{2V})=\xi_{V}g_{V}(v_{1},v_{2})/2=\xi_{V}m_{V}(\cos\beta,\sin% \beta).$$ (7) The UV divergent corrections to $m_{b}$ contain one source for the SU(2)$\times$U(1) gauge symmetry breaking. It is either $v_{1}$ originated from the shift (2) of $H_{1}^{0}$, or $\rho_{1V}$ in the $R_{\xi}$ gauge fixing term (4) and the Fadeev-Popov ghost term. The former contribution is obtained from that to the $\bar{b}_{R}b_{L}\phi_{1}^{0}$ Yukawa coupling $h_{b}$ by replacing external $\phi_{1}^{0}$ by $v_{1}/\sqrt{2}$, except for the wave function correction of $H_{1}^{0}$ to $h_{b}$. Similar argument holds for the UV divergent corrections to $m_{t}$ and to the $\bar{t}_{R}t_{L}\phi_{2}^{0}$ Yukawa coupling $h_{t}$. As a result, if the $\rho_{iV}$ contributions are absent, the runnings of $v_{i}$ are the same as those of the wave functions of $H_{i}^{0}$, namely $$\displaystyle\frac{dv_{1}}{dt}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{h_{b}}\left[\sqrt{2}\frac{d}{dt}(m_{b})-\frac{dh_{b}}{dt% }v_{1}\right]=-\gamma_{1}v_{1},$$ $$\displaystyle\frac{dv_{2}}{dt}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{h_{t}}\left[\sqrt{2}\frac{d}{dt}(m_{t})-\frac{dh_{t}}{dt% }v_{2}\right]=-\gamma_{2}v_{2},$$ (8) where $t\equiv\ln Q_{{\overline{\rm DR}}}$ is the $\overline{\rm DR}$ renormalization scale. The anomalous dimensions of $H_{i}^{0}$ are denoted as $\gamma_{i}$, which generally depend on the gauge fixing parameters $\xi$. The RGEs (8) for $v_{i}$ have been widely used in the Landau gauge $\xi=\rho_{iV}=0$. However, in general $R_{\xi}$ gauges, $\rho_{iV}$ in the gauge fixing terms (4) may give additional contributions to the quark mass running, as $\bar{b}b\rho_{1V}$ and $\bar{t}t\rho_{2V}$. Since they have no corresponding contributions to the $\bar{b}b\phi_{1}$ and $\bar{t}t\phi_{2}$ couplings, the RGEs for $v_{i}$ deviate [21, 6] from Eq. (8). Their general forms are then $$\frac{dv_{i}}{dt}=-\gamma_{i}v_{i}+Y_{iV}\rho_{iV},$$ (9) where $Y_{iV}$ are polynomials of dimensionless couplings. Therefore, the RGE for $\tan\beta$ becomes, using Eq. (7), $$\frac{d}{dt}\tan\beta=\tan\beta\left(-\gamma_{2}+\gamma_{1}+\frac{\xi_{V}g_{V}% }{2}Y_{2V}-\frac{\xi_{V}g_{V}}{2}Y_{1V}\right).$$ (10) I then give explicit form of the RGE for $\tan\beta$ in the MSSM, to the two-loop order. First, one-loop RGEs for $v_{i}$ ($i=1,2$) are $$\displaystyle\left.\frac{dv_{i}}{dt}\right|_{\rm 1loop}$$ $$\displaystyle=$$ $$\displaystyle-\gamma_{i}^{(1)}v_{i}+\frac{1}{(4\pi)^{2}}(g_{Z}\rho_{iZ}+2g_{2}% \rho_{iW})$$ (11) $$\displaystyle=$$ $$\displaystyle v_{i}\left[-\gamma_{i}^{(1)}+\frac{1}{(4\pi)^{2}}\left(\frac{\xi% _{Z}g_{Z}^{2}}{2}+\xi_{W}g_{2}^{2}\right)\right]\,,$$ with the one-loop anomalous dimensions $\gamma_{i}^{(1)}$, $$(4\pi)^{2}\gamma_{i}^{(1)}=N_{c}h_{q}^{2}-\frac{3}{4}g_{2}^{2}\left(1-\frac{2}% {3}\xi_{W}-\frac{1}{3}\xi_{Z}\right)-\frac{1}{4}g_{Y}^{2}(1-\xi_{Z}),$$ (12) where $h_{q}^{2}=(h_{b}^{2},h_{t}^{2})$ for $i=(1,2)$, respectively, and $N_{c}=3$. The $\rho_{1Z}$ contribution to $m_{b}$ is obtained from the diagram in Fig. 1. All other contributions of $\rho_{iV}$ to $m_{q}$ come from similar diagrams. Eq. (11) is consistent with the result in Refs. [6, 7] for $\xi=1$. Since the gauge dependence of $\gamma_{i}$, as well as the contribution from ($\rho_{iZ}$, $\rho_{iW}$) satisfying Eq. (7), cancels in the ratio (10), the one-loop running $\tan\beta$ is gauge parameter independent in the $R_{\xi}$ gauge. 
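To make the one-loop cancellation concrete, the following minimal Python sketch evaluates Eqs. (11)-(12), with the $\rho_{iV}$ fixed by Eq. (7), for several gauge choices. The coupling values are illustrative placeholders rather than fitted MSSM inputs, and the helper names are of course not part of the paper.

```python
import math

# One-loop running of v_1, v_2 and tan(beta) from Eqs. (11)-(12), with the
# rho_iV fixed by Eq. (7).  All numerical inputs are illustrative placeholders.
g2, gY = 0.65, 0.36                      # SU(2) and U(1)_Y gauge couplings
gZ2 = g2**2 + gY**2                      # g_Z^2 = g_2^2 + g_Y^2
hb, ht = 0.13, 0.95                      # bottom and top Yukawa couplings
Nc = 3
loop = 1.0 / (4.0 * math.pi)**2

def gamma_1loop(hq2, xiW, xiZ):
    """One-loop anomalous dimension of H_i^0, Eq. (12)."""
    return loop * (Nc * hq2
                   - 0.75 * g2**2 * (1.0 - 2.0 * xiW / 3.0 - xiZ / 3.0)
                   - 0.25 * gY**2 * (1.0 - xiZ))

def dlnv_dt(hq2, xiW, xiZ):
    """d ln v_i / dt from Eq. (11)."""
    return -gamma_1loop(hq2, xiW, xiZ) + loop * (0.5 * xiZ * gZ2 + xiW * g2**2)

for xiW, xiZ in [(0.0, 0.0), (1.0, 1.0), (3.0, 7.0)]:
    dlnv1 = dlnv_dt(hb**2, xiW, xiZ)
    dlnv2 = dlnv_dt(ht**2, xiW, xiZ)
    print(f"xi_W={xiW}, xi_Z={xiZ}: d ln v1/dt = {dlnv1:+.5e}, "
          f"d ln tan(beta)/dt = {dlnv2 - dlnv1:+.5e}")
# d ln v1/dt changes with the gauge choice, but the difference
# d ln tan(beta)/dt = Nc*(hb**2 - ht**2)/(4*pi)**2 does not.
```

The runnings of the individual $v_{i}$ shift with $(\xi_{W},\xi_{Z})$, but their ratio does not: the printed $d\ln\tan\beta/dt$ is the same for every gauge choice, as stated above.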
I next proceed to the two-loop corrections. The two-loop anomalous dimensions $\gamma_{i}^{(2)}$ are obtained from the general formula [22] in the $\overline{\rm MS}$ scheme (the modified minimal subtraction schemes with dimensional regularization), after conversion into the $\overline{\rm DR}$ scheme [23], as $$\displaystyle(4\pi)^{4}\gamma_{1}^{(2)}$$ $$\displaystyle=$$ $$\displaystyle-N_{c}(3h_{b}^{4}+h_{b}^{2}h_{t}^{2})+2N_{c}h_{b}^{2}\left(\frac{% 8}{3}g_{3}^{2}-\frac{1}{9}g_{Y}^{2}\right)+L(g),$$ $$\displaystyle(4\pi)^{4}\gamma_{2}^{(2)}$$ $$\displaystyle=$$ $$\displaystyle-N_{c}(3h_{t}^{4}+h_{b}^{2}h_{t}^{2})+2N_{c}h_{t}^{2}\left(\frac{% 8}{3}g_{3}^{2}+\frac{2}{9}g_{Y}^{2}\right)+L(g).$$ (13) The last term $L(g)$ is a gauge-dependent ${\cal O}(g^{4})$ polynomial and is common both for $\gamma_{1}^{(2)}$ and $\gamma_{2}^{(2)}$. The $\xi$ dependence of the ${\cal O}(h_{q}^{2}g^{2})$ terms completely cancels out [22]. Note also that the ${\cal O}(h_{q}^{4})$ and ${\cal O}(h_{q}^{2}g^{2})$ terms agree with the result in the $\xi=0$ gauge [24] and with the superfield calculation [25] which uses manifestly supersymmetric gauge fixing. The two-loop $\rho_{iV}$ contributions to $dv_{i}/dt$ have ${\cal O}(h_{q}^{2}g\rho_{iV})$ and ${\cal O}(g^{3}\rho_{iV})$ terms. The latter is common for both $i=1$ and 2, and cancels out in the ratio $\tan\beta$ if Eq. (7) is satisfied. Therefore, only the former ${\cal O}(h_{q}^{2}g\rho_{iV})$ contributions are explicitly calculated. For example, the ${\cal O}(h_{b}^{2}g_{Z}\rho_{1Z})$ contribution to $v_{1}$ comes from the diagram (a) in Fig. 2, while other diagrams (b,c) cancel each other. The RGEs for $v_{i}$ are finally $$\left.\frac{dv_{i}}{dt}\right|_{\rm 2loop}=-\gamma_{i}^{(2)}v_{i}-\frac{N_{c}h% _{q}^{2}}{(4\pi)^{4}}(g_{Z}\rho_{iZ}+2g_{W}\rho_{iW})+P_{V}(g)\rho_{iV},$$ (14) where again $h_{q}^{2}=(h_{b}^{2},h_{t}^{2})$ for $i=(1,2)$, respectively. $P_{V}(g)$ are possibly gauge-dependent ${\cal O}(g^{3})$ functions which are common for both $\rho_{1V}$ and $\rho_{2V}$. It is therefore seen that, due to the $\rho_{iV}$ contributions in Eq. (14), the running $\tan\beta$ has the ${\cal O}(h_{q}^{2}g_{2}^{2},h_{q}^{2}g_{Y}^{2})$ gauge parameter dependence. Although existing higher-order calculations of the corrections to the MSSM Higgs sector [26, 27, 28, 29] have not included the contributions of these orders yet, the gauge dependence of $\tan\beta$ may cause theoretical problem in future studies of the higher-order corrections in the MSSM. One way to restore the gauge independence of renormalized running $\tan\beta$ is to introduce gauge-dependent shifts of $\phi_{i}^{0}$ such as to cancel the $\rho_{iV}$ contributions to the effective action. This modification corresponds to the addition of extra shifts of $v_{i}$ to all diagrams. The running $v_{i}$ in this new definition then obey the same RGEs as those for $H_{i}$, namely Eq. (8). The modified renormalized $\tan\beta$ becomes gauge independent to the two-loop order. However, an extra two-loop shift $\delta(v_{2}/v_{1})$ has to be added to any quantities which depend on $\tan\beta$. Before leaving, I briefly comment on two related issues in the process-independent on-shell renormalization of $v_{i}$ and $\tan\beta$ which is used in Refs. [6, 7]. First, they cancel the one-loop $\rho_{iV}$ contributions by extra counterterms for $v_{i}$, $\delta v_{i}$, and determine their finite parts by imposing the condition $\delta v_{1}/v_{1}=\delta v_{2}/v_{2}$. It is clear from Eq. 
(14) that this condition has to be modified beyond the one-loop. Second, the gauge dependence already appears in the one-loop finite part of the on-shell counterterm $\delta(\tan\beta)$. This is similar to the gauge dependence of the on-shell renormalized mixing matrices for other particles [30]. In conclusion, I discussed the UV renormalization of the ratio $\tan\beta=v_{2}/v_{1}$ of the Higgs VEVs in the MSSM, to the two-loop order and in general $R_{\xi}$ gauges. When renormalized $v_{i}$ are given by the minimum of the loop-corrected effective potential, the contributions of $\rho_{iV}$ in the $R_{\xi}$ gauge fixing term cause two-loop gauge dependence of the RGE for $\tan\beta$. To avoid this gauge dependence, the contributions of $\rho_{iV}$ have to be cancelled by extra shift of the Higgs boson fields $\phi_{i}^{0}$. Acknowledgements: This work was supported in part by the Grant-in-aid for Scientific Research from Japan Society for the Promotion of Science, No. 12740131. References [1] H. P. Nilles, Phys. Rep. 110 (1984) 1; H. E. Haber and G. L. Kane, Phys. Rep. 117 (1985) 75; R. Barbieri, Riv. Nuov. Cim. 11 (1988) 1; S. P. Martin, hep-ph/9709356, in Perspectives on Supersymmetry, edited by G.L. Kane (World Scientific, 1998). [2] J. F. Gunion and H. E. Haber, Nucl. Phys. B272 (1986) 1; B402 (1993) 567(E). [3] A. Sirlin, Phys. Rev. D 22 (1980) 971. [4] K.-I. Aoki, Z. Hioki, R. Kawabe, M. Konuma, and T. Muta, Prog. Theor. Phys. Suppl. 73 (1982) 1; M. Böhm, H. Spiesberger, and W. Hollik, Fortschr. Phys. 34 (1986) 687; A. Denner, Fortschr. Phys. 41 (1993) 307. [5] A. Yamada, Phys. Lett. B 263 (1991) 233; Z. Phys. C 61 (1994) 247; A. Brignole, Phys. Lett. B 281 (1992) 284; D. Pierce and A. Papadopoulos, Phys. Rev. D 47 (1993) 222; H. E. Haber and R. Hempfling, Phys. Rev. D 48 (1993) 4280. [6] P. H. Chankowski, S. Pokorski, and J. Rosiek, Phys. Lett. B 274 (1992) 191; Nucl. Phys. B423 (1994) 437; 497. [7] A. Dabelstein, Z. Phys. C 67 (1995) 495; Nucl. Phys. B456 (1995) 25. [8] Y. Yamada, hep-ph/9608382, in DPF ’96: The Minneapolis Meeting, edited by H. Keller, J. K. Nelson, and D. Reeder (World Scientific, 1998). [9] W. Siegel, Phys. Lett. 84B (1979) 193; D. M. Capper, D. R. T. Jones, and P. van Nieuwenhuizen, Nucl. Phys. B167 (1980) 479. [10] K. Fujikawa, B. W. Lee and A. I. Sanda, Phys. Rev. D 6 (1972) 2923. [11] D. J. Gross and F. Wilczek, Phys. Rev. D 8 (1973) 3633; W. E. Caswell and F. Wilczek, Phys. Lett. 49B (1974) 291; H. Kluberg-Stern and J. B. Zuber, Phys. Rev. D 12 (1975) 467; 482. [12] G. Degrassi and A. Sirlin, Nucl. Phys. B383 (1992) 73; R. Hempfling and B. A. Kniehl, Phys. Rev. D 51 (1995) 1386. [13] P. Gambino and P. A. Grassi, Phys. Rev. D 62 (2000) 076002. [14] M. Hirsch, M. A. Díaz, W. Porod, J. C. Romão, and J. W. F. Valle, Phys. Rev. D 62 (2000) 113008. [15] P. H. Chankowski and P. Wasowicz, hep-ph/0110237. [16] G. Gamberini, G. Ridolfi, and F. Zwirner, Nucl. Phys. B331 (1990) 331. [17] R. Jackiw, Phys. Rev. D 9 (1974) 1686; L. Dolan and R. Jackiw, Phys. Rev. D 9 (1974) 2904. [18] N. K. Nielsen, Nucl. Phys. B101 (1975) 173. [19] I. J. R. Aitchison and C. M. Fraser, Ann. Phys. (N.Y.) 156 (1984) 1; D. Johnston, Nucl. Phys. B253 (1985) 687; B283 (1987) 317. [20] O. M. Del Cima, D. H. T. Franco, and O. Piguet, Nucl. Phys. B551 (1999) 813. [21] M. Okawa, Prog. Theor. Phys. 60 (1978) 1175; A. Schilling and P. van Nieuwenhuizen, Phys. Rev. D 50 (1994) 967. [22] M. E. Machacek and M. T. Vaughn, Nucl. Phys. B222 (1983) 83. [23] S. P. Martin and M. T. Vaughn, Phys. Lett. 
B 318 (1993) 331. [24] S. P. Martin, hep-ph/0111209. [25] P. West, Phys. Lett. B 137 (1984) 371; D. R. T. Jones and L. Mezincescu, Phys. Lett. B 138 (1984) 293; Y. Yamada, Phys. Rev. D 50 (1994) 3537. [26] R. Hempfling and A. H. Hoang, Phys. Lett. B 331 (1994) 99; R. Zhang, Phys. Lett. B 447 (1999) 89. [27] M. Carena, M. Quirós, and C. E. M. Wagner, Nucl. Phys. B461 (1996) 407; H. E. Haber, R. Hempfling, and A. H. Hoang, Z. Phys. C 75 (1997) 539. [28] S. Heinemeyer, W. Hollik, and G. Weiglein, Phys. Rev. D 58 (1998) 091701; Phys. Lett. B 440 (1998) 296; Eur. Phys. J. C 9 (1999) 343; 16 (2000) 139. [29] J. Espinosa and R. Zhang, JHEP 0003 (2000) 026; Nucl. Phys. B586 (2000) 3; M. Carena, H. E. Haber, S. Heinemeyer, W. Hollik, C. E. M. Wagner, and G. Weiglein, Nucl. Phys. B580 (2000) 29; G. Degrassi, P. Slavich, and F. Zwirner, Nucl. Phys. B611 (2001) 403; A. Brignole, G. Degrassi, P. Slavich, and F. Zwirner, hep-ph/0112177. [30] P. Gambino, P. A. Grassi, and F. Madricardo, Phys. Lett. B 454 (1999) 98; B. A. Kniehl, F. Madricardo, and M. Steinhauser, Phys. Rev. D 62 (2000) 073010; A. Barroso, L. Brücher, and R. Santos, Phys. Rev. D 62 (2000) 096003; Y. Yamada, Phys. Rev. D 64 (2001) 036008.
Abstract Fabry-Perot fiber etalons (FPE) built from three or more reflectors are attractive for a variety of applications including communications and sensing. To accelerate research and development work, one often desires to use off-the-shelf components to build an FPE with a required transmission profile for a particular application. Usually, multistage FPEs are designed with equal lengths of cavities, followed by determination of the reflectivities required to realize a desired transmission profile. As seen in previous works, fabricated reflectors are usually slightly different from the designed ones, leading to a departure from the desired transmission profile of the FPE. Here, we present a novel digital synthesis of multistage etalons with off-the-shelf reflectors and unequal lengths of the involved cavities. We find that, in contrast to equal cavity lengths, unequal lengths of cavities provide a larger number of poles in the $z$-domain to achieve a desired multicavity FPE transmission response. For given reflectivities, by determining the correct unequal lengths of cavities with our synthesis technique, we demonstrate a design example of increasing the free spectral range (FSR), followed by its experimental validation. This work is generalizable to ring resonator, mirror, and fiber Bragg grating based cavities, enabling the design and optimization of cavity systems for a wide range of applications including lasers, sensors, and filters. Digital synthesis of multistage etalons for enhancing the FSR Faiza Iftikhar${}^{1}$, Usman Khan${}^{2}$, and M. Imran Cheema${}^{1,*}$ ${}^{1}$Department of Electrical Engineering, Syed Babar Ali School of Science and Engineering, LUMS, Sector U, DHA, Lahore, Pakistan ${}^{2}$Electrical and Computer Engineering, Tufts University, Medford, MA 02155, USA ${}^{*}$imran.cheema@lums.edu.pk 1 Introduction The transmission profile of multistage Fabry-Perot etalons (FPEs) can be manipulated as a function of the involved reflectivities and lengths of cavities. This control of the transfer function has enabled researchers to utilize these multicavity structures in various communications applications. Traditionally, researchers determine reflectivities for an FPE system by assuming equal lengths of cavities for a desired application [14, 10, 4, 2, 13, 12]. The reverse problem, i.e., the determination of the lengths of cavities for given reflectivities to realize a particular application, has not been fully explored yet. In previous works, the concept of unequal cavity FPEs and its implications for the transmission response were first introduced by van de Stadt and Muller using a matrix approach [11]. In order to deal with the complexity of the non-linear transfer function of multistage FPEs, researchers developed a $z$-domain technique for the design and analysis of optical structures [6]. In the aforementioned works, the lengths of cavities were selected randomly to demonstrate the techniques. The $z$-domain techniques have been utilized for designing a variety of multistage FPE systems including interferometers [15, 4], interleavers [3, 14], and filters [7, 5, 1]. Multistage FPEs with equal cavities require determination of the reflectivities for realizing a required transmission response. The implemented FPE system always has a somewhat different transmission response due to fabrication tolerances of the reflectors [14]. During the implementation stage, although one cannot do much about the fabricated reflectors, one can still control the lengths of cavities, especially in mirror and FBG based FPEs.
Considering all these issues, we propose a novel digital synthesis technique for determining optimum lengths of cavities for a given set of reflectivities to achieve a desired transmission response of multistage FPEs. Our design process starts with a given transmission profile. We then estimate a $z$-domain transfer function from the given transmission response by using a vector fitting technique [8]. With our developed algorithm, we then map the estimated transfer function to an FPE system with a desired number of reflectors and given reflectivities. This process provides lengths of cavities to achieve the desired transmission response with the given set of reflectivities. We show the application of our proposed synthesis technique by providing modeling and experimental results of FPEs comprising of two, three, and four cavities to achieve a desired free spectral range (FSR) and peak rejection ratio (PR) of side bands in a given transmission response. We now describe the rest of the paper. In Section 2, we provide an analytical expression for finding $z$-transform of an $n$ cavity FPE. We provide our digital synthesis technique in Section 3 followed by theoretical and experimental validation of the technique with a design example in Section 4. Finally, we provide concluding remarks in Section 5. 2 Generalized $z$-domain transmission function of a multistage FPE The schematics of a multistage FPE is shown in the Fig.1. Each cavity is assumed to be formed by connecting fiber Bragg gratings (FBGs) through optical fibres of unequal lengths. After invoking the Stokes parameters, $r_{n}^{-}=-r_{n}^{+}$ , $t_{n}^{+}t_{n}^{-}+r_{n}^{+}r_{n}^{-}=1$ and the matrix approach outlined in [11], it can be shown for the multistage FPE system in Fig.1, $$\displaystyle\begin{bmatrix}E_{n+1}^{+}\\ E_{n+1}^{-}\end{bmatrix}=\frac{e^{j\sum\limits_{m=1}^{n}\phi_{m}}}{t_{n}...t_{% 3}.t_{2}.t_{1}.t_{0}}\begin{bmatrix}1&r_{n}e^{-j2\phi_{n}}\\ r_{n}&e^{-j2\phi_{n}}\end{bmatrix}........$$ (1) $$\displaystyle\begin{bmatrix}1&r_{2}e^{-j2\phi_{2}}\\ r_{2}&e^{-j2\phi_{2}}\end{bmatrix}.\begin{bmatrix}1&r_{1}e^{-j2\phi_{1}}\\ r_{1}&e^{-j2\phi_{1}}\end{bmatrix}.\begin{bmatrix}1&r_{0}\\ r_{0}&1\end{bmatrix}\begin{bmatrix}E_{0}^{+}\\ E_{0}^{-}\end{bmatrix}$$ $$\displaystyle\begin{bmatrix}E_{n+1}^{+}\\ E_{n+1}^{-}\end{bmatrix}=\frac{e^{j\sum\limits_{m=1}^{n}\phi_{m}}}{\prod% \limits_{i=0}^{n}t_{i}}\begin{bmatrix}A_{n}&B_{n}\\ C_{n}&D_{n}\end{bmatrix}\begin{bmatrix}E_{0}^{+}\\ E_{0}^{-}\end{bmatrix}$$ (2) where $\phi_{n}=\dfrac{2\pi n_{n}l_{n}}{\lambda_{o}}$ represents phase length of the $n^{th}$ cavity, $\lambda_{o}$ is the source wavelength, $n_{n}l_{n}$ is the optical path length of the $n^{th}$ cavity, $r_{n}$ and $t_{n}$ are amplitude reflection and transmission coefficients at the $n^{th}$ FBG, respectively and ‘$+$‘, ‘$-$’ indicate directions of fields towards the right and left, respectively. 
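As a quick numerical check of the matrix formulation in Eqs. (1)-(2), the minimal Python sketch below builds the $2\times 2$ matrix product and reads the transmission off its $(1,1)$ element, anticipating the boundary-condition step carried out below. It assumes lossless, symmetric reflectors, so that $t_{i}=\sqrt{1-r_{i}^{2}}$; the function name and numerical values are illustrative only.

```python
import numpy as np

def fpe_transmission(r, phi):
    """Transmission amplitude of an n-cavity FPE via the matrix product of
    Eqs. (1)-(2) and the boundary condition used below (Eq. (3)).
    r   : [r_0, ..., r_n], amplitude reflection coefficients of the n+1 FBGs
    phi : [phi_1, ..., phi_n], one-way phase lengths of the n cavities
    Lossless symmetric reflectors are assumed, i.e. t_i = sqrt(1 - r_i^2)."""
    M = np.array([[1.0, r[0]], [r[0], 1.0]], dtype=complex)    # rightmost factor
    for rm, ph in zip(r[1:], phi):
        Mm = np.array([[1.0, rm * np.exp(-2j * ph)],
                       [rm,  np.exp(-2j * ph)]], dtype=complex)
        M = Mm @ M                                             # stack the next cavity
    A_n = M[0, 0]                                              # denominator of the transmission
    t_prod = np.prod(np.sqrt(1.0 - np.asarray(r)**2))
    return t_prod * np.exp(-1j * np.sum(phi)) / A_n

# Two-cavity example with power reflectivities R = (0.87, 0.99, 0.91) and
# 90 cm optical path lengths, scanned around 1550 nm.
r = np.sqrt([0.87, 0.99, 0.91])
L = [0.90, 0.90]                                               # optical paths n*l in metres
lam = np.linspace(1549.995e-9, 1550.005e-9, 4001)
T = [abs(fpe_transmission(r, [2.0 * np.pi * Li / w for Li in L])) ** 2 for w in lam]
print(f"maximum transmission over the scan: {max(T):.3f}")
```

The $(1,1)$ element computed this way coincides with the closed-form denominator $a_{n}$ derived next.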
By applying the boundary conditions, we then determine the transmission amplitude of the $n$ cavity FPE system, $$t_{n}=\frac{E_{0}^{+}}{E_{n+1}^{+}}\mid_{E_{0}^{-}=0}=\frac{e^{-j\sum\limits_{% m=1}^{n}\phi_{m}}\prod\limits_{i=0}^{n}t_{i}}{a_{n}}$$ (3) After analyzing above equations, we see a definite pattern of terms appearing in $a_{n}$ and its deduced expression is given by $$a_{n}=1+\sum_{k=1}^{2^{n}-1}C_{{P}(n)[k]}e^{-j2\sum\limits_{m\in{P}(n)[k]}^{\#% P(n)[k]}\phi_{m}}$$ (4) where $P(n)$ is the power set of $1$ to $n$ cavities excluding the empty set, $P(n)[k]$ is $k^{th}$ subset of the $P(n)$, $\#P(n)[k]$ is the cardinality of the subset $P(n)[k]$, and $C$ is a constant associated with each subset. The constant $C$ is made-up of reflectivities of an individual cavity or combination of cavities. As an example for a three cavity FPE (i.e., 4 FBGs), $P(n=3)=\left\{\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\right\}$, $P(n)[k=4]=\{1,2\}$, $\#P(n)[k=4]=2$, $C_{{P}(3)[1]}=r_{o}r_{1}$, $C_{{P}(3)[2]}=r_{1}r_{2}$, $C_{{P}(3)[3]}=r_{2}r_{3}$, $C_{{P}(3)[4]}=r_{o}r_{2}$, $C_{{P}(3)[5]}=r_{o}r_{1}r_{2}r_{3}$, $C_{{P}(3)[6]}=r_{1}r_{3}$, $C_{{P}(3)[7]}=r_{o}r_{3}$. The expressions for $C$ follow a specific set of rules: a)The $C$ of one cavity will have product of reflection coefficients spanned by that cavity b)The $C$ of two joint cavities will have product of reflection coefficients at their ends c)The $C$ of two disjoint cavities will have product of all reflection coefficients spanned by each cavity. Physically, these rules make perfect sense as all individual cavities and their combinations (coupling) will dictate the overall transmission response of the multistage FPE. On the same lines as outlined in [6], we can transform the transmission response given by Eq. (3) and our derived Eq. (4) in the $z$-domain as $$t_{n}(z)=\frac{z^{-\sum\limits_{m=1}^{n}x_{m}}\prod\limits_{i=0}^{n}t_{i}}{A_{% n}}$$ (5) $$A_{n}=1+\sum_{k=1}^{2^{n}-1}C_{{P}(n)[k]}.z^{\scriptstyle-2\sum\limits_{m\in{P% }(n)[k]}^{\#P(n)[k]}x_{m}}$$ (6) where $z^{-2x_{m}}=e^{-j\dfrac{4\pi\text{n}_{m}\text{l}_{m}}{\lambda_{o}}}$. By using equations (5) and (6), we can directly get the $z$-domain transmission function of an FPE system for any number of reflectors and lengths without invoking any algebraic manipulations. It is evident from Eq. (6) that $n$ equal cavities are equivalent to adding $n$ number of poles to the system whereas $n$ unequal cavities are equivalent to $2^{n}-1$ poles. This makes the unequal cavity system more powerful in achieving the desired transmission response. 3 Digital synthesis procedure The overall procedure for digitally synthesizing a required FPE is shown in Fig. 2. Starting from a desired transmission response (T${}_{d}$), we use discrete system identification via vector fitting [8] to estimate its transfer function T${}_{est}(z)$ in the $z$-domain. The obtained T${}_{est}(z)$ is generally a higher order function with typically more than 100 poles and zeros. With the aim to map T${}_{est}(z)$ to $n$ cavity system with unequal lengths of cavities, we refine the T${}_{est}(z)$ by applying constraints on coefficients of the denominator and numerator polynomials. For this purpose, we determine the $z$-domain transmission function, t${}_{n}(z)$, of the $n$ cavity etalon by using equations (5)-(6) for a given set of reflectivities. We calculate coefficient constraints based on fixed reflectivities of reflectors. We elaborate on the constraint procedure in the design example and appendix. 
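The power-set structure of Eqs. (4) and (6) is straightforward to generate programmatically, which is also one way to tabulate the coefficient constraints for a candidate set of cavity lengths. The short Python sketch below is illustrative only (the helper name and the reflection-coefficient values are not from the paper); it reproduces, for the two-cavity case with $x_{1}=90$ and $x_{2}=1$, the constraint pattern worked out in the appendix.

```python
from itertools import combinations

def denominator_terms(r, x):
    """Coefficients of A_n(z) in Eq. (6), returned as {exponent: coefficient},
    where each entry stands for coefficient * z**(-exponent).
    r : [r_0, ..., r_n] amplitude reflection coefficients
    x : [x_1, ..., x_n] integer cavity lengths in sampling units."""
    n = len(x)
    terms = {0: 1.0}
    for k in range(1, n + 1):
        for subset in combinations(range(1, n + 1), k):
            # split the chosen cavities into maximal runs of adjacent cavities
            runs, run = [], [subset[0]]
            for m in subset[1:]:
                if m == run[-1] + 1:
                    run.append(m)
                else:
                    runs.append(run)
                    run = [m]
            runs.append(run)
            # rules (a)-(c): each run contributes the reflectivities at its two ends
            coeff = 1.0
            for run in runs:
                coeff *= r[run[0] - 1] * r[run[-1]]
            expo = 2 * sum(x[m - 1] for m in subset)
            terms[expo] = terms.get(expo, 0.0) + coeff
    return terms

# Two-cavity case of the appendix: x1 = 90, x2 = 1 (illustrative r values).
r = [0.93, 0.995, 0.95]
print(denominator_terms(r, [90, 1]))
# -> {0: 1.0, 180: r0*r1, 2: r1*r2, 182: r0*r2}; every other denominator
#    coefficient of the estimated transfer function is forced to zero.
```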
The estimated transfer function T${}_{est}(z)$ is refined iteratively using the predictive error method (PEM) [9] to achieve the same functional form of t${}_{n}(z)$. The refined function is called as T${}_{r}(z)$. During the refinement process at each iteration, we determine the mean square error (MSE) and peak stopband rejection (PR) by comparing T${}_{est}(z)$ and T${}_{r}(z)$. If we do not get a required MSE and PR in a given constraint set of $n$ cavity system, one more reflector is added in the middle of the system to increase the number of cavities. When we get the required T${}_{r}(z)$ with the minimum MSE and maximum PR, we convert it into the analogue form, t${}_{n}$, using equations (3)-(4). We then determine the respective lengths of $n$ cavities. 4 Design example We now present modelling and experimental results of enhancing the FSR and PR of 2, 3, and 4 cavity FPE systems by determining the lengths of cavities using the digital synthesis procedure described in Section 3. 4.1 Modelling results Consider that we are given a two cavity system at 1550nm with fixed reflectivities of $R_{0}=0.87,R_{1}=0.99,R_{2}=0.91$ and equal optical path lengths of $n_{1}l_{1}$ = $n_{2}l_{2}$=90cm. Although these specifications can be picked somewhat arbitrarily however, we pick these numbers by keeping our experiments in mind (see Section 4.2). We choose these reflectivities due to availability of respective fiber Bragg gratings (FBGs) in our lab. The optical path lengths are selected due to easiness of splicing of two FBGs. By using equations (3)-(4), we can obtain the transmission of the given structure as shown in Fig. 3a. The free spectral range (FSR) of the given structure is 1.344pm. Now consider that we require to increase the FSR by 15 times i.e., to 20pm with the peak stopband rejection of better than -30dB. We process the two equal cavity transmission response to suppress peaks in such a way that the FSR is enhanced by 15 times as shown in Fig. 3b. During the processing, we take 15 FSRs of the given transmission profile and apply a digital bandpass filter with PR of -40dB to the central peak in Matlab. After generating the desired transmission profile, t${}_{d}$, we use tfest method in Matlab [8] to estimate its $z$-domain transfer function, T${}_{est}(z)$. While estimating the transfer function we keep on increasing the number of poles until the maximum percentage fit and minimum mean square error(MSE) are achieved as shown in Fig. 4. For the present example, the estimated transfer function, T${}_{est}(z)$, with 99.96% match is of order 600. Now let us try to map T${}_{est}(z)$ on a 2 cavity FPE system with unequal lengths whose $z$-domain transfer function, t${}_{n}(z)$, is given by equations (5)-(6). As more than 2${}^{2}$=4 terms are non-zero in denominator polynomial of the transfer function, T${}_{est}(z)$, we refine it by imposing fixed constraints on the required four coefficients, $C_{{P}(2)[1]}$,$C_{{P}(2)[2]}$,$C_{{P}(2)[3]}$, in the denominator and one coefficient, $t_{0}t_{1}t_{2}$ in the numerator. These constraints depend on values of $x_{1}$ and $x_{2}$ as elaborated in the appendix. This process gives us a refined transfer function, T${}_{r}(z)$. During the refinement process, we use the predictive error method (PEM) [9] which records MSE between T${}_{est}(z)$ and T${}_{r}(z)$ for each set of constrained coefficients. 
During the refinement procedure, we impose constraints on the aforementioned selected coefficients while forcing the rest of coefficients towards zero and keep updating the cavity lengths, $x_{m}$. We scan cavity lengths from ($x_{1},x_{2}$)=(1cm,1cm) to ($x_{1},x_{2}$)=(90cm,90cm) during the refinement. After the refinement step completion, we transform T${}_{r}(z)$ to t${}_{n}$ by using equations (3)-(4) and determine lengths of cavities. Some of results obtained from the refinement process are shown in Table 1. For calculating lengths $l_{1}$ and $l_{2}$ we assume refractive index of 1.445 due to usage of SMF-28 fibers for our experimental results. Although the second combination in Table 1 provides maximum PR however, we pick the shaded combination due to easiness of splicing. From the two cavity results, it is clear that the desired PR is not achievable. Therefore, we increase the number of cavities to achieve the required PR. Tables 2 and 3 show improvements in the PR corresponding to three and four cavity FPEs, respectively. Again we pick the shaded combinations to avoid splicing challenges while connecting FBGs during experiments. The modelling results for transmission of two, three, and four cavity FPEs are shown in Fig. 5 for the obtained unequal lengths from our synthesis technique. For the three cavity FPE, we assume $R_{0}=0.87$, $R_{1}=R_{2}=0.99$, and $R_{3}=0.91$. For the four cavity FPE, we assume $R_{0}=0.87$, $R_{1}=R_{2}=R_{3}=0.99$ and $R_{4}=0.91$. We have picked these reflectivities as per availability in our lab and for easier progression from two to three and then to the four cavity FPE system during experiments. 4.2 Experimental results The experimental schematics are shown in Fig. 6. In our experimental setup, we used an Eblana Photonics Inc. 1550nm laser diode (EP1550-0-NLW-B26-100FM) mounted on a Thorlabs driver (CLD1015). We tuned the laser diode using current modulation by employing a 0.5Hz triangular wave of 100mV${}_{pp}$. We applied the laser signal to two, three, and four multicavity structures whose parameters are given in Section 4.1. The output is displayed on an oscilloscope via Thorlabs detector (DET08CFC) followed by the collected data analysis in Matlab. The experimental results are shown in Fig. 7 which are in excellent agreement with the modeling results provided in Fig. 5. 5 Conclusion The modeling and experimental results clearly show that our proposed digital synthesis technique is achieving the desired transmission response using multistage FPEs. We find that the use of unequal cavities provide more number of poles in achieving a desired FPE transmission response as compared to the equal cavity approach. In the present work, we have demonstrated a controlled enhancement of the FSR by finding optimum lengths of cavities in multistage FPEs. We find that the FSR can be increased by any factor with two cavity etalon however, arbitrary increase in the FSR occurs at the expense of reduction in PR. For achieving both high FSR and PR, we need to add more FBGs. Our digital synthesis approach can be highly useful in applications where one can employ off-the-shelf reflectors and vary inter-cavity lengths to achieve a desired transmission response. In the present work, although we present centimeter range cavities for demonstrating experimental results however, our digital synthesis technique is applicable to any length scale. 
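For completeness, a compact numerical screen of this kind can be written directly from the $z$-domain form of Eqs. (5)-(6). The sketch below is only illustrative: the unequal pair $(x_{1},x_{2})=(90,6)$ is a hypothetical choice, not an entry of Table 1, and the $-3$ dB threshold is simply one convenient way of counting which resonances survive the suppression.

```python
import numpy as np
from scipy.signal import find_peaks

def two_cavity_T(r0, r1, r2, x1, x2, n_pts=2**17):
    """|t_2|^2 over one period of the normalized frequency axis, using
    A_2(z) = 1 + r0*r1*z**(-2*x1) + r1*r2*z**(-2*x2) + r0*r2*z**(-2*(x1+x2))."""
    w = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    z = np.exp(1j * w)
    A = (1.0 + r0 * r1 * z**(-2 * x1) + r1 * r2 * z**(-2 * x2)
         + r0 * r2 * z**(-2 * (x1 + x2)))
    t_prod = np.sqrt((1 - r0**2) * (1 - r1**2) * (1 - r2**2))
    return w, np.abs(t_prod / A)**2

r0, r1, r2 = np.sqrt([0.87, 0.99, 0.91])       # amplitude coefficients from R_i
for x1, x2 in [(90, 90), (90, 6)]:             # equal vs a hypothetical unequal pair
    w, T = two_cavity_T(r0, r1, r2, x1, x2)
    TdB = 10.0 * np.log10(T / T.max())
    strong, _ = find_peaks(TdB, height=-3.0)   # resonances within 3 dB of the maximum
    print(f"x1={x1:2d}, x2={x2:2d}: {len(strong)} strong peaks per period")
```

Fewer surviving peaks per period correspond to a larger effective FSR; selecting the length pair that also keeps the suppressed peaks below the required rejection level is, in spirit, the search carried out by the refinement procedure above.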
We anticipate that the present work will find wide applications in developing filters, interleavers, sensors, and single mode lasers. Funding Higher Education Commission of Pakistan (NRPU-5927). Appendix Let us consider a case of the two cavity system with $x_{1}=90$cm and $x_{2}=1$cm. Using equations (5) and (6) we get $$t_{2}(z)=\frac{t_{0}t_{1}t_{2}z^{-91}}{1+C_{{P}(2)[1]}z^{-180}+C_{{P}(2)[2]}z^% {-2}+C_{{P}(2)[3]}z^{-182}}$$ (7) where $C_{{P}(2)[1]}=r_{0}r_{1}$, $C_{{P}(2)[2]}=r_{1}r_{2}$, $C_{{P}(2)[3]}=r_{0}r_{2}$. In the design example, T${}_{est}(z)$ has 600 poles therefore, denominator constraints for the 600 coefficients, $a_{1}-a_{600}$, are $a_{1}=1$, $a_{2}=C_{{P}(2)[2]}$, $a_{180}=C_{{P}(2)[1]}$, $a_{182}=C_{{P}(2)[3]}$, and the rest of coefficients in the denominator are forced to zero. Similarly, constraints on coefficients of the numerator, $b_{1}-b_{300}$, are such that only $b_{91}=t_{0}t_{1}t_{2}$ is non-zero while remaining coefficients are equal to zero. This procedure is repeated by iteratively changing values of $x_{1}$ and $x_{2}$ until the desired MSE and PR are achieved. References [1] Osman S Ahmed, Mohamed A Swillam, Mohamed H Bakr, and Xun Li. Efficient design optimization of ring resonator-based optical filters. Journal of Lightwave Technology, 29(18):2812–2817, 2011. [2] Jinho Bae, Joohwan Chun, and Thomas Kailath. The schur algorithm applied to the design of optical multi-mirror structures. Numerical linear algebra with applications, 12(2-3):283–292, 2005. [3] S. Cao, J. Chen, J. N. Damask, C. R. Doerr, L. Guiziou, G. Harvey, Y. Hibino, H. Li, S. Suzuki, K.-Y. Wu, and P. Xie. Interleaver technology: Comparisons and applications requirements. J. Lightwave Technol., 22(1):281, Jan 2004. [4] Chi-Hao Cheng and Shasha Tang. Michelson interferometer based interleaver design using classic iir filter decomposition. Opt. Express, 21(25):31330–31335, Dec 2013. [5] E. M. Dowling and D. L. Macfarlane. Lightwave lattice filters for aptically multiplexed communication systems. Journal of Lightwave Technology, 12:471–486, March 1994. [6] Duncan L. MacFarlane and Eric M. Dowling. Z-domain techniques in the analysis of fabry–perot étalons and multilayer structures. J. Opt. Soc. Am. A, 11(1):236–245, Jan 1994. [7] C. K. Madsen. Efficient architectures for exactly realizing optical filters with optimum bandpass designs. IEEE Photonics Technology Letters, 10:1136–1138, August 1998. [8] Ahmet Arda Ozdemir and Suat Gumussoy. Transfer function estimation in system identification toolbox via vector fitting. IFAC-PapersOnLine, 50(1):6232–6237, 2017. [9] Rik Pintelon and Johan Schoukens. System identification: a frequency domain approach. John Wiley & Sons, 2012. [10] Marcel W Pruessner, Todd H Stievater, Peter G Goetz, William S Rabinovich, and Vincent J Urick. Cascaded integrated waveguide linear microcavity filters. Applied Physics Letters, 103(1):011105, 2013. [11] Herman van de Stadt and Johan M. Muller. Multimirror fabry–perot interferometers. J. Opt. Soc. Am. A, 2(8):1363–1370, Aug 1985. [12] Seongmin Yim and Henry F Taylor. Spectral slicing optical waveguide filters for dense wavelength division multiplexing. Optics communications, 233(1-3):113–117, 2004. [13] Seongmin Yim and Henry F Taylor. Design and spectral characteristics of multireflector etalons. Journal of lightwave technology, 23(3):1419, 2005. [14] Juan Zhang, Dong Hua, Yipeng Ding, and Yang Wang. Digital synthesis of optical interleaver based on a solid multi-mirror fabry-perot interferometer. Appl. 
Opt., 56(36):9976–9983, Dec 2017. [15] Juan Zhang and Xiaowei Yang. Universal Michelson Gires-Tournois interferometer optical interleaver based on digital signal processing. Opt. Express, 18(5):5075–5088, Mar 2010.
Locally connected spanning trees on graphs Ching-Chi Lin Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan.Email: d91018@csie.ntu.edu.tw.Institute of Information Science, Academia Sinica, Nankang, Taipei 11507, Taiwan.    Gerard J. Chang Department of Mathematics, National Taiwan University, Taipei 10617, Taiwan. Email: gjchang@math. ntu.edu.tw. Supported in part by the National Science Council under grant NSC93-2213-E-002-028. Member of Mathematics Division, National Center for Theoretical Sciences at Taipei.    Gen-Huey Chen${}^{*}$ Email: ghchen@csie.ntu.edu.tw. (November 20, 2020) Abstract A locally connected spanning tree of a graph $G$ is a spanning tree $T$ of $G$ such that the set of all neighbors of $v$ in $T$ induces a connected subgraph of $G$ for every $v\in V(G)$. The purpose of this paper is to give linear-time algorithms for finding locally connected spanning trees on strongly chordal graphs and proper circular-arc graphs, respectively. Keywords: algorithm, circular-arc graph, directed path graph, interval graph, locally connected spanning tree, proper circular-arc graph, strongly chordal graph. 1 Introduction Communication networks or power transmission networks are often modelled as graphs $G$. The vertices in $V(G)$ represent sites in the network and the edges in $E(G)$ represent communication lines or power transmission lines. When delivering a message to a remote site on a communication network, it is transferred by a path consisting of many communication lines. Similarly, power transmission between source site and destination site is accomplished by a serial of power transmission lines. It is inexpensive to construct such networks as tree networks. However, one single site failure would influence the whole network. In order to guarantee the quality of service, Farley [5, 6] proposes isolated failure immune (IFI) networks. Two site failures are isolated if the sites are not adjacent. A network is immune to a set of failures if transmission between operative sites can be completed under such failures. A graph is a $2$-tree if it is either a 2-clique or it can be produced by adding a new vertex $v$ and two edges $vx$ and $vy$ to a $2$-tree such that $xy$ is an edge of the 2-tree. It has been shown that an IFI network is minimum if and only if it is a $2$-tree [5, 14]. Cai [2, 3] introduced the concept of locally connected spanning tree and showed that a network containing a locally connected spanning tree is an IFI network. A locally connected spanning tree of a graph $G$ is a spanning tree $T$ such that the set of all neighbors of $v$ in $T$ induce a connected subgraph of $G$ for every $v\in V(G)$. Figure 1 shows a graph $G$ with a locally connected spanning tree $T_{1}$ and a non-locally connected spanning tree $T_{2}$. Notice that the set of all neighbors of $u$ (respectively, $v$) in $T_{2}$ induces a disconnected subgraph in $G$. Cai [3] proved that determining whether a graph contains a locally connected spanning tree is NP-complete for planar graphs and split graphs. Furthermore, he also gave a linear-time algorithm for finding a locally connected spanning tree of a directed path graph, and a linear-time algorithm for adding fewest edges to a graph to make a given spanning tree of the graph a locally connected spanning tree of the augmented graph. Since split graphs are chordal, determining whether a graph contains a locally connected spanning tree is NP-complete for chordal graphs. 
It is well known that the family of strongly chordal graphs is a proper subfamily of the family of chordal graphs, and is a proper superfamily of the family of directed path graphs. In this paper, we give linear-time algorithms for finding locally connected spanning trees on strongly chordal graphs and proper circular-arc graphs, respectively. The former answers an open problem proposed by Cai [3]. The remainder of the paper is organized as follows. Section 2 describes and analyzes our algorithm for strongly chordal graphs. Section 3 describes and analyzes our algorithm for proper circular-arc graphs. Section 4 concludes the paper with an open question. We conclude this section at the following two lemmas which are useful in this paper. A separating set $S$ of a graph $G$ is a set $S\subseteq V(G)$ such that $G-S$ has more than one component. A cut-vertex is a vertex that forms a separating set. A graph is $k$-connected if it contains no separating set of size less than $k$. For $S\subseteq V(G)$, the subgraph induced by $S$ is the graph $G[S]$ whose vertex set is $S$ and edge set $\{xy\in E(G):x,y\in S\}$. Lemma 1 ([3]) If $G$ has a locally connected spanning tree $T$ and $S$ is a separating set of $G$, then $G[S]$ contains at least one edge of $T$. Consequently, a graph having a locally connected spanning tree is $2$-connected. Lemma 2 Suppose $\{{x,y}\}$ is a separating set of $G$ and $H$ is a component of $G-\{{x,y}\}$. If $H$ contains no common neighbor of $x$ and $y$, then $G$ has no locally connected spanning tree. Proof. Suppose to the contrary that $G$ has a locally connected spanning tree $T$. Then there exists a vertex of $H$ connecting $x$ or $y$, say $x$, through an edge in $T$. Notice that $\{{x,y}\}$ is a separating set of $G$, so $T$ contains the edge $xy$. Since the neighborhood of $x$ in $T$ induces a connected subgraph in $G$, $y$ is connected to $H[N_{T}(x)]$, which implies $x$ and $y$ have a common neighbor in $H$, a contradiction.       2 Algorithm for strongly chordal graphs This section establish a linear-time algorithm for determining whether a strong chordal graph has a locally connected spanning tree, and producing one if the answer is positive. First, some preliminaries on strongly chordal graphs. A graph is chordal (or triangulated) if every cycle of length greater than three has a chord, which is an edge joining two noncontiguous vertices in the cycle. The neighborhood $N_{G}(v)$ of a vertex $v$ is the set of all vertices adjacent to $v$ in $G$; and the closed neighborhood $N_{G}[v]=N_{G}(v)\cup\{v\}$. A vertex $v$ is simplicial if $N_{G}[v]$ is a clique. For any ordering $(v_{1},v_{2},\ldots,v_{n})$ of $V(G)$, let $G_{i}$ denote the subgraph of $G$ induced by $\{v_{i},v_{i+1},\ldots,v_{n}\}$. It is well known [7] that a graph $G$ is chordal if and only if it has a perfect elimination order which is an ordering $(v_{1},v_{2},\ldots,v_{n})$ of $V(G)$ such that each $v_{i}$ is a simplicial vertex of $G_{i}$. A strongly chordal graph is a chordal graph such that every cycle of even length at least six has a chord that divides the cycle into two odd length paths. Farber [4] proved that a graph is strongly chordal if and only if it has a strong elimination order which is an ordering $(v_{1},v_{2},\ldots,v_{n})$ of $V(G)$ such that $N_{G_{i}}[v_{j}]\subseteq N_{G_{i}}[v_{k}]$ for $i\leq j\leq k$ and $v_{j},v_{k}\in N_{G_{i}}[v_{i}]$. Notice that a strong elimination order is also a perfect elimination order. 
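Farber's characterization can be checked by brute force on small instances, which is handy when experimenting with the algorithm described later in this section. The following Python sketch (an illustrative helper, not part of the paper) tests whether a given ordering is a strong elimination order directly from the definition.

```python
from itertools import combinations

def is_strong_elimination_order(adj, order):
    """Brute-force test of Farber's condition: (v_1, ..., v_n) is a strong
    elimination order iff N_{G_i}[v_j] is a subset of N_{G_i}[v_k] whenever
    i <= j <= k and v_j, v_k lie in N_{G_i}[v_i].
    adj   : dict mapping each vertex to the set of its neighbours
    order : list of the vertices in the candidate ordering."""
    pos = {v: i for i, v in enumerate(order)}
    n = len(order)
    for i in range(n):
        vi = order[i]
        tail = set(order[i:])                              # vertex set of G_i
        closed = lambda v: ({v} | adj[v]) & tail           # N_{G_i}[v]
        candidates = [v for v in order[i:] if v in closed(vi)]
        for vj, vk in combinations(candidates, 2):
            if pos[vj] > pos[vk]:
                vj, vk = vk, vj
            if not closed(vj) <= closed(vk):
                return False
    return True

# Example: the 4-cycle 1-2-3-4 with chord 2-4 (a strongly chordal graph).
adj = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
print(is_strong_elimination_order(adj, [1, 3, 2, 4]))      # True
print(is_strong_elimination_order(adj, [2, 1, 3, 4]))      # False: 2 is not simplicial
```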
Anstee and Farber [1] and Hoffman, Kolen, and Sakarovitch [8] presented $O(n^{3})$-time algorithms, Lubiw [9] an $O(m\log^{2}m)$-time algorithm, Paige and Tarjan [12] an $O(m\log m)$-time algorithm, and Spinrad [13] an $O(n^{2})$-time algorithm for finding a strong elimination order of a strongly chordal graph with $n$ vertices and $m$ edges. According to Lemma 1, for a graph to have a locally connected spanning tree it is necessary that the graph is $2$-connected. We now give a necessary and sufficient condition for a chordal graph, and hence a strongly chordal graph, to be $k$-connected. Lemma 3 Suppose $\sigma=(v_{1},v_{2},\ldots,v_{n})$ is a perfect elimination order of a chordal graph $G$ and $k<n$ is a positive integer. Then, $G$ is $k$-connected if and only if $|N_{G_{i}}(v_{i})|\geq k$ for $1\leq i\leq n-k$. Proof. Suppose $P=(v_{i_{1}},v_{i_{2}},\ldots,v_{i_{r}})$ is a shortest $v_{i}$-$v_{n}$ path. We claim that $i=i_{1}<i_{2}<\ldots<i_{r}=n$. Assume to the contrary that $i_{s-1}>i_{s}$ for some $2\leq s\leq r$. We may choose $s$ to be as large as possible. As $i_{r}=n\geq i_{s-1}>i_{s}$, we have that $s\leq r-1$. By the choice of $s$ we also have $i_{s+1}>i_{s}$. Since $v_{i_{s}}$ is a simplicial vertex of $G_{i_{s}}$, we have $v_{i_{s-1}},v_{i_{s+1}}\in N(v_{i_{s}})$ implying $v_{i_{s-1}}v_{i_{s+1}}\in E(G)$, contradicting that $P$ is a shortest path. ($\Rightarrow$) Suppose $G$ is $k$-connected, but $|N_{G_{i}}(v_{i})|<k$ for some $1\leq i\leq n-k$. Then $\sigma^{\prime}$, obtained from $\sigma$ by deleting all vertices in $N_{G_{i}}(v_{i})$, is a perfect elimination order of $G^{\prime}=G-N_{G_{i}}(v_{i})$, which is connected. By the claim above, there is a shortest $v_{i}$-$v_{n^{\prime}}$ path $(v_{i_{1}},v_{i_{2}},\ldots,v_{i_{r}})$ in $G^{\prime}$ with $i=i_{1}<i_{2}<\ldots<i_{r}=n^{\prime}$, where $v_{n^{\prime}}$ is the vertex of $G^{\prime}$ with the largest index. This is impossible, as $v_{i_{2}}$ would be a neighbor of $v_{i}$ with larger index in $G-N_{G_{i}}(v_{i})$. ($\Leftarrow$) Suppose $|N_{G_{i}}(v_{i})|\geq k$ for $1\leq i\leq n-k$. For any subset $S\subseteq V(G)$ of size less than $k$, let $v_{n^{\prime}}$ be the vertex of $V(G)-S$ with largest index. Then, any vertex $v_{i}$ of $G-S$ other than $v_{n^{\prime}}$ has at least one neighbor $v_{i^{*}}$ not in $S$ with $i<i^{*}$ (for $i\leq n-k$ this follows from $|N_{G_{i}}(v_{i})|\geq k>|S|$; for $i>n-k$, note that $N_{G_{n-k}}(v_{n-k})=\{v_{n-k+1},\ldots,v_{n}\}$ is a clique since $v_{n-k}$ is simplicial in $G_{n-k}$, so $v_{i}$ is adjacent to the surviving vertex $v_{n^{\prime}}$). Consequently, every vertex $v_{i}$ in $G-S$ has a path connecting it to $v_{n^{\prime}}$. Therefore, $G-S$ is connected. This gives the $k$-connectivity of $G$.       Now, suppose $(v_{1},v_{2},\ldots,v_{n})$ is a strong elimination order of $G$. For any neighbor $v_{j}$ of $v_{i}$ with $j>i$, let $\ell(v_{i},v_{j})=k$ be the minimum index such that $v_{k}\in N_{G}[v_{i}]\cap N_{G}[v_{j}]$. Notice that $\ell(v_{i},v_{j})$ always exists. For the case when $v_{i}$ and $v_{j}$ have no common neighbors with indices smaller than $i$, we have $\ell(v_{i},v_{j})=i$. The closest neighbor of a vertex $v_{i}$ is the vertex $v_{i^{*}}\in N_{G_{i}}(v_{i})$ such that $\ell(v_{i},v_{i^{*}})\leq\ell(v_{i},v_{j})$ for all $v_{j}\in N_{G_{i}}(v_{i})$, with ties broken by choosing $i^{*}$ minimum. In the following, we give an algorithm that determines whether a strongly chordal graph has a locally connected spanning tree, and produces one when the answer is positive. The algorithm first chooses $v_{n-1}v_{n}$ as an edge of the desired tree. It then iterates for $i$ from $n-2$ back to $1$, adding the edge $v_{i}v_{i^{*}}$ to the tree. To ensure the $2$-connectivity of the graph $G$, according to Lemma 3, we check whether $|N_{G_{i}}(v_{i})|\geq 2$.
When the answer is negative, the graph is not $2$-connected and so has no locally connected spanning tree. Algorithm Strongly-Chordal. Input: A strongly chordal graph $G$ of order $n\geq 3$ with a strong elimination order $(v_{1},v_{2},\ldots,v_{n})$. Output: A locally connected spanning tree $T_{1}$ of $G$ if it exists, and “NO” otherwise. 1. For $i=1$ to $n$ do Sort $N_{G}(v_{i})$ into $v_{i_{1}},v_{i_{2}},\ldots,v_{i_{d_{i}}}$, where $i_{1}<i_{2}<\dots<i<i_{p_{i}}<\dots<i_{d_{i}}$. 2. For $i=1$ to $n$ do $v_{i^{*}}=0$. 3. For $j=1$ to $n$ do If $v_{j^{*}}=0$, then $v_{j^{*}}=v_{j_{p_{j}}}$. For $k=p_{j}$ to $d_{j}-1$ do If $v_{(j_{k})^{*}}=0$, then $v_{(j_{k})^{*}}=v_{j_{k+1}}$. 4. If $v_{n-1}$ is adjacent to $v_{n}$, then let $T_{n-1}=v_{n-1}v_{n}$, else return “NO”. 5. For $i=n-2$ to $1$ step by $-1$ do If $|N_{G_{i}}(v_{i})|\leq 1$, then return “NO”, else let $T_{i}=T_{i+1}+v_{i}v_{i^{*}}$. 6. Return $T_{1}$. Notice that we may use “$T_{n-1}=v_{n-1}v_{n}$” in step 4 of the algorithm, as $|N_{G_{n-2}}(v_{n-2})|\geq 2$ implying that $v_{n-1}$ is adjacent to $v_{n}$. Also, $v_{(n-1)^{*}}=v_{n}$, and so we can interpret step 4 as “$T_{n}=\phi$ and $T_{n-1}=T_{n}+v_{n-1}v_{(n-1)^{*}}$”. Theorem 4 For a strongly chordal graph $G$ with a strong elimination order provided, Algorithm Strongly-Chordal determines in linear-time whether $G$ has a locally connected spanning tree, and produces one if the answer is positive. Proof. We first claim that steps 1 to 3 give the closest neighbor $v_{i^{*}}$ of each $v_{i}$. Notice that step 1 sorts the neighbors of each vertex first. For the case when there are no $i_{r}<i<i_{s}$ with $v_{i_{r}}$ adjacent to $v_{i_{s}}$, the closest neighbor $v_{i^{*}}$ is the neighbor of $v_{i}$ of minimum index which is larger than $i$, namely $v_{i_{p_{i}}}$. For the other case, $v_{i^{*}}$ is obtained by finding a minimum index $j$ such that $i=j_{k}$ and $i^{*}=j_{k+1}$ for some $k$ with $p_{j}\leq k\leq d_{j}-1.$ These are taken care of in steps 2 and 3. When the algorithm returns a “NO”, according to Lemmas 3 and 1, the graph $G$ has no locally connected spanning tree. We now assume that the algorithm returns $T_{1}$. In this case, $|N_{G_{i}}(v_{i})|\geq 2$ for all $i\leq n-2$. By Lemma 3, $G$ is $2$-connected. We first claim that $T_{i+1}$ has an edge $v_{j}v_{i^{*}}$ whose end vertices are neighbors of $v_{i}$ for $i\leq n-2$. When $v_{i^{*}}=v_{i_{d_{i}}}$, let $v_{j}=v_{i_{d_{i}-1}}$. Since $v_{j}$ and $v_{i^{*}}$ are neighbors of $v_{i}$ with $i<j<i^{*}$, we have $v_{j}v_{i^{*}}\in E(G)$ and $\ell(v_{j},v_{i^{*}})\leq i$. If $v_{j^{*}}=v_{i^{*}}$, then $T_{i+1}$ has an edge $v_{j}v_{i^{*}}=v_{j}v_{j^{*}}$ whose end vertices are neighbors of $v_{i}$ as desired. Now, suppose $v_{j^{*}}\neq v_{i^{*}}$. By the choice of $v_{j}$, we have that $v_{j^{*}}$ is not a neighbor of $v_{i}$. And so $\ell(v_{j},v_{j^{*}})<\ell(v_{j},v_{i^{*}})\leq i$. Let $k=\ell(v_{j},v_{j^{*}})$. Then $v_{i}\in N_{G_{k}}[v_{j}]\subseteq N_{G_{k}}[v_{j^{*}}]$, which violates that $v_{j^{*}}$ is not a neighbor of $v_{i}$. When $v_{i^{*}}=v_{i_{r}}$ with $r<d_{i}$, consider the closest neighbor $v_{i^{**}}$ of $v_{i^{*}}$. Let $k=\ell(v_{i^{*}},v_{i^{**}})$. Then, we have $k=\ell(v_{i^{*}},v_{i^{**}})=\ell(v_{i_{r}},v_{(i_{r})^{*}})\leq\ell(v_{i_{r}}% ,v_{i_{d_{i}}})\leq i$. It follows that $v_{i}\in N_{G_{k}}[v_{i^{*}}]\subseteq N_{G_{k}}[v_{i^{**}}]$, and so $T_{i+1}$ has an edge $v_{i^{*}}v_{i^{**}}$ whose end vertices are neighbors of $v_{i}$ as desired. 
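For concreteness, the listing above translates almost line-by-line into code. The following minimal sketch (an illustration, not the authors' implementation) assumes the vertices are already numbered $1,\ldots,n$ according to a strong elimination order and that the graph is given as a dictionary of neighbor sets; $0$ plays the role of the sentinel in step 2, and a guard is added for $v_{n}$, which has no later neighbor.

def strongly_chordal_lcst(G, n):
    # Vertices are 1..n in a strong elimination order; G[v] is the set of
    # neighbors of v.  Returns the edge list of T_1, or None in place of "NO".
    # Step 1: neighbors of each vertex, sorted by position in the order.
    nbr = {v: sorted(G[v]) for v in range(1, n + 1)}
    # p[v]: 0-based index of the first neighbor of v that comes after v.
    p = {v: next((k for k, w in enumerate(nbr[v]) if w > v), len(nbr[v]))
         for v in range(1, n + 1)}
    # Steps 2 and 3: compute the closest neighbors v -> v*.
    star = {v: 0 for v in range(1, n + 1)}
    for j in range(1, n + 1):
        if star[j] == 0 and p[j] < len(nbr[j]):   # guard: v_n has no later neighbor
            star[j] = nbr[j][p[j]]
        for k in range(p[j], len(nbr[j]) - 1):
            if star[nbr[j][k]] == 0:
                star[nbr[j][k]] = nbr[j][k + 1]
    # Step 4.
    if n - 1 not in G[n]:
        return None
    T = [(n - 1, n)]
    # Step 5.
    for i in range(n - 2, 0, -1):
        if len([w for w in G[i] if w > i]) <= 1:   # |N_{G_i}(v_i)| <= 1
            return None
        T.append((i, star[i]))
    # Step 6.
    return T

# Example: on the complete graph K4 (any numbering is a strong elimination order)
# the sketch returns the path 1-2-3-4, which is locally connected in K4.
K4 = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(strongly_chordal_lcst(K4, 4))   # [(3, 4), (2, 3), (1, 2)]

The output can be checked against the definition with the is_locally_connected sketch given in the introduction.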
We shall prove that $T_{i}$ is a locally connected spanning tree of $G_{i}$ for each $i$ by induction on $i$ from $n$ back to $1$. The assertion is clearly true for $i\geq n-1$. Suppose $T_{i+1}$ is a locally connected spanning tree of $G_{i+1}$. To see $T_{i}$ is a locally connected spanning tree of $G_{i}$, we only need to verify that $G_{i}[N_{T_{i}}(v_{i})]$ and $G_{i}[N_{T_{i}}(v_{i^{*}})]$ are connected as $T_{i}=T_{i+1}+v_{i}v_{i^{*}}$. Since $v_{i}$ is a leaf in $T_{i}$, $G_{i}[N_{T_{i}}(v_{i})]$ is connected. According to the facts that $G_{i+1}[N_{T_{i+1}}(v_{i^{*}})]$ is connected and that $T_{i+1}$ has an edge connecting $v_{i^{*}}$ and a neighbor of $v_{i}$ in $G_{i+1}$ for $i\leq n-2$, it follows that $G_{i}[N_{T_{i}}(v_{i^{*}})]$ is connected. These prove the correctness of the algorithm. We finally argue that the time complexity for the algorithm is linear. In step $1$, we may sort the neighbors of each $v_{i}$ by adding $v_{i}$ into the adjacent list of each neighbor of $v_{i}$ from $i=1$ to $n$. So totally, step $1$ takes $O(n+m)$ time. It is easy to see that the other steps also take linear time.       Corollary 5 If $G$ is a strongly chordal graph, then $G$ has a locally connected spanning tree $T$ if and only if it is $2$-connected. 3 Algorithm for proper circular-arc graphs This section establishes a linear-time algorithm for determining whether a proper circular-arc graph has a locally connected spanning tree, and producing one if the answer is positive. First, some preliminaries on circular-arc graphs. A circular-arc graph $G$ is the intersection graph of a family $F$ of arcs in a circle, with vertices of $G$ corresponding to arcs in $F$ and two vertices in $G$ are adjacent if and only if their corresponding arcs in $F$ overlap. McConnell [10, 11] gave a linear-time algorithm to recognize circular-arc graphs. As a byproduct, an intersection model $F$ of circular-arc graph $G$ can be constructed in linear time. A family $F$ is said to be proper if no arc in $F$ is contained in another. For a vertex $v$ of $G$, let $a(v)$ denote the corresponding arc in $F$. An arc $a(v)$ that begins at endpoint $h(v)$ and ends at endpoint $t(v)$ in a counterclockwise traversal is denoted by $[h(v),t(v)]$, where $h(v)$ is the head of $a(v)$ and $t(v)$ is the tail of $a(v)$. Assume without loss of generality that all arc endpoints are distinct and no arc covers the entire circle. A segment $(s,t)$ of a circle is the continuous part that begins at endpoint $s$ and ends at endpoint $t$ in a counterclockwise traversal. The segment $(s,t)$ is considered as not containing points $s$ and $t$ and segment $[s,t]$ is considered as containing $s$ and $t$. Similarly, $[s,t)$ and $(s,t]$ are segments containing $s$ but not $t$; and not containing $s$ but $t$. The density $d(v)$ of the arc $a(v)$ is the number of arcs, including $a(v)$, in $F$ that contain $h(v)$. First, a lemma on circular-arc graphs. Lemma 6 If a circular-arc graph $G$ has at least four corresponding arcs with $d(v)\leq 2$ in $F$, then $G$ has no locally connected spanning tree. Proof. Suppose $a(v_{p})$, $a(v_{q})$, $a(v_{r})$ and $a(v_{s})$ are four arcs of $F$ with density at most $2$ in a counterclockwise traversal, see Figure 2. Let $a(v_{p^{\prime}})$, $a(v_{q^{\prime}})$, $a(v_{r^{\prime}})$ and $a(v_{s^{\prime}})$ be the arcs which contain the heads of $a(v_{p})$, $a(v_{q})$, $a(v_{r})$ and $a(v_{s})$, respectively. 
We assume that $a(v_{p^{\prime}})$, $a(v_{q^{\prime}})$, $a(v_{r^{\prime}})$ or $a(v_{s^{\prime}})$ is empty when $d(v_{p})$, $d(v_{q})$, $d(v_{r})$ or $d(v_{s})$ is $1$, respectively. If $a(v_{p^{\prime}})$ exists and contains $h(v_{s})$, i.e., $a(v_{p^{\prime}})=a(v_{s^{\prime}})$, then $v_{p^{\prime}}$ is a cut-vertex of $G$ since no arc in $F-\{a(v_{p^{\prime}})\}$ crosses the points $h(v_{p})$ and $h(v_{s})$. In this case, $G$ has no locally connected spanning tree. So, we may assume that $a(v_{p^{\prime}})$ does not cross $h(v_{s})$. Similarly, we may assume that $a(v_{p^{\prime}})$ does not cross $h(v_{q})$. Then, $a(v_{p^{\prime}})$ is contained in $[h(v_{s}),h(v_{q}))$. Similarly, if $a(v_{r^{\prime}})$ exists then it is contained in $[h(v_{q}),h(v_{s}))$. Therefore, $a(v_{p^{\prime}})$ and $a(v_{r^{\prime}})$ do not overlap, which implies that $v_{p^{\prime}}v_{r^{\prime}}$ is not an edge of $G$. Again, $\{{v_{p^{\prime}},v_{r^{\prime}}}\}$ is a separating set of $G$, since no arc in $F-\{a(v_{p^{\prime}}),a(v_{r^{\prime}})\}$ crosses the points $h(v_{p})$ and $h(v_{r})$. According to Lemma 1, $G$ has no locally connected spanning tree as $v_{p^{\prime}}v_{r^{\prime}}\not\in E(G)$. Notice that the proof covers the case when $a(v_{p^{\prime}})$ or $a(v_{r^{\prime}})$ is empty.       If $G$ is a circular-arc graph in which $d(v)=1$ for some vertex $v$, then $G$ is in fact an interval graph. In this case, the results of the previous section can be used to determine whether $G$ has a locally connected spanning tree, as interval graphs are strongly chordal. Notice that the ordering from left to right of the right endpoints of the intervals in an interval representation of $G$ is a strong elimination order. Therefore, without loss of generality, we may assume that $d(v)\geq 2$ in $F$ for each vertex $v$ of $G$. We now turn our attention to the algorithm for finding locally connected spanning trees on proper circular-arc graphs. In this case, once $a(v_{1})$ is identified, let $(a(v_{2}),a(v_{3}),\ldots,a(v_{n}))$ be the ordering of the arcs in $F-\{a(v_{1})\}$ such that $h(v_{i})$ is encountered before $h(v_{j})$ in a counterclockwise traversal from $h(v_{1})$ if $i<j$. Since $d(v)\geq 2$ for all vertices $v$, the graph $G$ is $2$-connected, as $(v_{1},v_{2},\ldots,v_{n},v_{1})$ is a Hamiltonian cycle. By Lemma 6, if $G$ has at least four corresponding arcs with $d(v)=2$ in $F$, then $G$ has no locally connected spanning tree. Therefore, we only need to treat the case when $G$ has at most three corresponding arcs in $F$ with density equal to $2$. We divide the problem into three cases. For the case when $G$ has at most one corresponding arc in $F$ with density equal to $2$, the algorithm is similar to the one for interval graphs. For the cases when $G$ has two or three such arcs, the graph $G$ has special structure, and we design the algorithm using these properties. Algorithm Proper-Circular-Arc. Input: A proper circular-arc graph $G$ of order $n\geq 3$ and with an intersection model $F$ such that $d(v)\geq 2$ for all vertices $v$. Output: A locally connected spanning tree $T$ of $G$, if it exists, and “NO” otherwise. 1. If $G$ has at least four corresponding arcs in $F$ with density equal to $2$, then return “NO”. 2. If $G$ has at most one corresponding arc in $F$ with density equal to $2$. (a) If $G$ has one corresponding arc in $F$ with density equal to $2$, then choose $a(v_{1})$ to contain its head in $F$. Otherwise, let $a(v_{1})$ be an arbitrary arc in $F$. (b) Let $T_{1}=\phi$.
Let $T_{i}=T_{i-1}+v_{i}v_{i-1}$ for $i=2$ to $n$. 3. If $G$ has exactly two corresponding arcs in $F$ with density equal to $2$. (a) If the two arcs overlap. Let $a(v_{1})$ and $a(v_{2})$ be the two arcs in $F$ such that $a(v_{1})$ contains the head of $a(v_{2})$. Do step $2(b)$. (b) Otherwise, suppose $a(v_{1})$ and $a(v_{k})$ contain the heads of the two arcs, respectively. We may assume that $a(v_{1})$ contains the head of $a(v_{k})$, if they overlap. If $v_{1}$ and $v_{k}$ have no common neighbor $v_{z}$ with $z>k$ or $v_{1}v_{k}\not\in E(G)$, then return “NO”. Otherwise, let $T=\{{v_{1}v_{i}|~{}i=2,\ldots,k\text{ or }i=z}\}\cup\{{v_{z}v_{i}|~{}i=k+1,% \ldots,n\text{ but }i\not=z}\}$. 4. If $G$ has exactly three corresponding arcs in $F$ with density equal to $2$. (a) Suppose $a(v_{1})$, $a(v_{p})$ and $a(v_{q})$ contain the heads of the three arcs in a counterclockwise traversal, respectively. (b) If all of $\{{v_{1},v_{p}}\}$, $\{{v_{1},v_{q}}\}$ and $\{{v_{p},v_{q}}\}$ are separating sets of $G$ or one of the three edges $v_{1}v_{p}$, $v_{1}v_{q}$ and $v_{p}v_{q}$ does not belong to $E(G)$, then return “NO”. Otherwise, we may assume that $\{{v_{p},v_{q}}\}$ is not a separating set of $G$. (c) Let $T=\{{v_{1}v_{i}|~{}i=2,\ldots,n}\}$. 5. Return $T$. Now, we prove the correctness of the above algorithm. If the algorithm outputs a tree $T$, we verify that it is a locally connected spanning tree. Otherwise, we show that $G$ lacks one necessary condition of having a locally connected spanning tree. In the following, let $G_{i}=G[\{v_{1},v_{2},\ldots,v_{i}\}]$. Lemma 7 Algorithm Proper-Circular-Arc outputs a locally connected spanning tree $T$ of $G$, if it exists. Proof. By Lemma 6, if $G$ has at least four corresponding arcs with $d(v)=2$ in $F$, then $G$ has no locally connected spanning tree. Therefore, we need only to consider the cases when $G$ has at most three corresponding arcs with $d(v)=2$ in $F$. We first consider the case when $G$ has at most one corresponding arc in $F$ with density equal to $2$. Notice that if $v_{i}$ has $k$ neighbors in $G_{i}$, then the neighbors of $v_{i}$ are $v_{i-1}\ldots,v_{i-k}$. Since $d(v_{i})\geq 3$, the vertex $v_{i}$ has at least $2$ neighbors $v_{i-1}$ and $v_{i-2}$ in $G_{i}$ for $i\geq 3$. We shall prove that $T_{i}$ is a locally connected spanning tree of $G_{i}$ by induction on $i$. The claim is true for $T_{2}$. By the induction hypothesis, $T_{i-1}$ is a locally connected spanning tree of $G_{i-1}$ and so $G_{i-1}[N_{T_{i-1}}(v_{i-1})]$ is connected. We know that $T_{i-1}$ contains an edge connecting $v_{i-1}$ and $v_{i-2}$ in $G_{i-1}$ for $i\geq 3$ as we have $T_{i-1}=T_{i-2}+v_{i-1}v_{i-2}$. Since $v_{i-1}$ and $v_{i-2}$ are neighbors of $v_{i}$, $G_{i}[N_{T_{i}}(v_{i-1})]$ and $G_{i}[N_{T_{i}}(v_{i})]$ are connected, $T_{i}$ is a locally connected spanning tree of $G_{i}$. Next, consider the case when $G$ has two corresponding arcs in $F$ with density equal to $2$. If the two arcs overlap, we have $d(v_{1})=d(v_{2})=2$ and $d(v_{i})\geq 3$ for $i\geq 3$. The remainder of the proof for this case is similar to the above case. Otherwise, we have $d(v_{2})=d(v_{k+1})=2$ and the two vertices $v_{2}$ and $v_{k+1}$ are in different component of $G-\{{v_{1},v_{k}}\}$. See Figure 3(a) for an example. By Lemma 2, if $v_{1}$ and $v_{k}$ do not have common neighbor $v_{z}$ with $z>k$, then $G$ has no locally connected spanning tree. 
For the case when such $v_{z}$ exists, we prove that the output $T$ is a locally connected spanning tree by showing that each edge of $T$ is also in $E(G)$ and that the two induced subgraphs $G[N_{T}(v_{1})]$ and $G[N_{T}(v_{z})]$ are connected. Consider $v_{1}v_{k}\in E(G)$. Since $a(v_{1})$ contains the head of $a(v_{k})$, the arc $a(v_{i})$ also contains $h(v_{k})$ for $1<i<k$. It follows that $v_{i}v_{1},v_{i}v_{k}\in E(G)$ for $1<i<k$. Notice that $v_{z}$ is a common neighbor of $v_{1}$ and $v_{k}$ with $z>k$. Therefore, each vertex in $N_{T}(v_{1})$ is adjacent to $v_{1}$ in $G$ and $G[N_{T}(v_{1})]$ is connected. Since $a(v_{z})$ contains the tail of $a(v_{k})$ and the head of $a(v_{1})$, the arc $a(v_{i})$ also contains $t(v_{k})$ for $k<i<z$ and $a(v_{i})$ also contains $h(v_{1})$ for $z<i\leq n$. It follows that each vertex in $N_{T}(v_{z})$ is also adjacent to $v_{z}$ in $G$ and that both of the induced subgraphs $G[v_{k+1},\ldots,v_{z-1}]$ and $G[v_{z+1},\ldots,v_{n},v_{1}]$ are connected. Since the density of $a(v_{z+1})$ is at least $3$, vertex $v_{z-1}$ is adjacent to $v_{z+1}$, where $v_{z+1}=v_{1}$ if $z=n$. Therefore, $G[N_{T}(v_{z})]$ is connected. Finally, consider the case when $G$ has three corresponding arcs in $F$ with density equal to $2$. Choose any two vertices of $\{{v_{1},v_{p},v_{q}}\}$; if one does not succeed the other in a counterclockwise traversal, then the two vertices form a separating set of $G$. If all of $\{{v_{1},v_{p}}\}$, $\{{v_{1},v_{q}}\}$ and $\{{v_{p},v_{q}}\}$ were separating sets of $G$, then by Lemma 1 the tree $T$ would have to contain the cycle $(v_{1},v_{p},v_{q})$, which is impossible. Thus, each such pair either forms a separating set of $G$ or consists of arcs that succeed one another in a counterclockwise traversal. It follows that the three edges $v_{1}v_{p}$, $v_{1}v_{q}$ and $v_{p}v_{q}$ must belong to $E(G)$ and at least one of the three sets is not a separating set, if $G$ contains a locally connected spanning tree. To see that the output $T$ is a locally connected spanning tree of $G$ in this case, it suffices to show that $G-v_{1}$ is connected and that $v_{1}v_{i}\in E(G)$ for all $i\geq 2$. Consider the case when $G$ contains the three edges $v_{1}v_{p}$, $v_{1}v_{q}$ and $v_{p}v_{q}$, and $\{{v_{p},v_{q}}\}$ is not a separating set of $G$, i.e., $q=p+1$. See Figure 3(b) for an example. Since $a(v_{p})$ contains the tail of $a(v_{1})$ and $a(v_{q})$ contains the head of $a(v_{1})$, $a(v_{i})$ contains $t(v_{1})$ for $2\leq i\leq p$ and $a(v_{i})$ contains $h(v_{1})$ for $q\leq i\leq n$. Therefore, $v_{i}v_{1}\in E(G)$ for $2\leq i\leq n$ and the two induced subgraphs $G[v_{2},\ldots,v_{p}]$ and $G[v_{q},\ldots,v_{n}]$ are connected. Since $v_{p}$ and $v_{q}$ are adjacent, $G-v_{1}$ is connected.       Now, we prove that the algorithm runs in linear time. Recognizing the corresponding arcs in $F$ whose density equals $2$ and determining the order $(v_{1},v_{2},\dots,v_{n})$ can be done by traversing the intersection model $F$ in counterclockwise order, which takes $O(n)$ time. Therefore, step $1$ takes linear time. Notice that, once the arcs with density equal to $2$ are recognized, constructing a locally connected spanning tree in each of the three cases takes $O(n)$ time. In particular, step $2$ takes linear time. Consider step $3$. It takes $O(1)$ time to check whether $a(v_{1})$ and $a(v_{k})$ overlap and $O(n)$ time to check whether there exists a common neighbor $v_{z}$ with $z>k$. Hence step $3$ takes $O(n)$ time.
It also takes constant time to determine whether any two vertices of $\{{v_{1},v_{p},v_{q}}\}$ is a separating set of $G$ by checking whether one succeeds another in a counterclockwise traversal. Thus, this algorithm runs in linear time. Theorem 8 For a proper circular-arc graph $G$ with an intersection model $F$ provided, Algorithm Proper-Circular-Arc determines in linear-time whether $G$ has a locally connected spanning tree, and produces one if the answer is positive. 4 Conclusion In this paper, we present two algorithms for finding locally connected spanning trees on strongly chordal graphs and proper circular-arc graphs, respectively. The former answers an open problem proposed by Cai [3]. It is an interesting problem to design an algorithm for finding locally connected spanning trees on circular-arc graphs or to prove that it is NP-complete. References [1] R. P. Anstee and M. Farber. Characterizations of totally balanced matrices. Journal of Algorithms, 5(2):215–230, 1984. [2] L. Cai. On spanning 2-trees in a graph. Discrete Applied Mathematics, 74(3):203–216, 1997. [3] L. Cai. The complexity of the locally connected spanning tree problem. Discrete Applied Mathematics, 131(1):63–75, 2003. [4] M. Farber. Characterizations of strongly chordal graphs. Discrete Math., 43(2-3):173–189, 1983. [5] A. M. Farley. Networks immune to isolated failures. Networks, 11:255–268, 1981. [6] A. M. Farley and A. Proskurowski. Networks immune to isolated line failures. Networks, 12:393–403, 1982. [7] M. C. Golumbic. Algorithmic Graph Theory and Perfect Graphs. Academic Press, New York, 1980. [8] A. J. Hoffman, A. W. J. Kolen, and M. Sakarovitch. Totally-balanced and greedy matrices. SIAM J. Algebraic Discrete Methods, 6(4):721–730, 1985. [9] A. Lubiw. Doubly lexical orderings of matrices. SIAM J. Comput., 16(5):854–879, 1987. [10] R. M. McConnell. Linear-time recognition of circular-arc graphs. In 42nd IEEE Symposium on Foundations of Computer Science, pages 386–394. IEEE, 2001. [11] R. M. McConnell. Linear-time recognition of circular-arc graphs. Algorithmica, 37(2):93–147, 2003. [12] R. Paige and R. E. Tarjan. Tree partition refinement algorithms. SIAM J. Comput., 16:973–989, 1987. [13] J. P. Spinrad. Doubly lexical ordering of dense $0$-$1$ matrices. Inform. Process. Lett., 45(5):229–235, 1993. [14] J. A. Wald and C. J. Colbourn. Steiner trees, partial 2-trees, and mimimum IFI networks. Networks, 13:159–167, 1983.
SISSA 21/2021/FISI Four-fermion operators at dimension 6: dispersion relations and UV completions Aleksandr Azatov${}^{a,b,c,1}$, Diptimoy Ghosh${}^{d,2}$, Amartya Harsh Singh${}^{d,3}$ ${}^{a}$ SISSA International School for Advanced Studies, Via Bonomea 265, 34136, Trieste, Italy ${}^{b}$ INFN - Sezione di Trieste, Via Bonomea 265, 34136, Trieste, Italy ${}^{c}$ IFPU, Institute for Fundamental Physics of the Universe, Via Beirut 2, 34014 Trieste, Italy ${}^{d}$ Department of Physics, Indian Institute of Science Education and Research Pune, India Abstract A major task in phenomenology today is constraining the parameter space of SMEFT and constructing models of the fundamental physics that the SM derives from. To this end, we report an exhaustive list of sum rules for 4-fermion operators of dimension 6, connecting low energy Wilson coefficients to cross sections in the UV. Unlike their dimension 8 counterparts, which are amenable to a positivity bound, the discussion here is more involved due to the weaker convergence and indefinite signs of the dispersion integrals. We illustrate this by providing examples with weakly coupled UV completions leading to opposite signs of the Wilson coefficients for both convergent and non-convergent dispersion integrals. We further decompose the dispersion integrals under the weak isospin and color groups, which leads to a tighter relation between IR measurements and UV models. These sum rules can become an effective tool for constructing consistent UV completions for SMEFT following the prospective measurement of these Wilson coefficients. E-mail: ${}^{1}$aleksandr.azatov@sissa.it, ${}^{2}$diptimoy.ghosh@iiserpune.ac.in, ${}^{3}$amartya.harshsingh@students.iiserpune.ac.in Contents 1 Introduction 2 Review of dispersion relations 3 Warm up exercise 3.1 Charge neutral vector exchange 3.2 Charge two scalar 3.3 UV completion at 1-loop 4 Four fermion operators 4.1 Experimental constraints 4.2 FULLY RIGHT HANDED 4.2.1 $O_{ee}$ 4.2.2 $O_{uu},O_{dd}$ 4.2.3 $O^{(1),(8)}_{ud}$ 4.2.4 $O_{eu},O_{ed}$ 4.3 SUM RULES FOR EW DOUBLETS 4.3.1 $O_{le},O_{lu},O_{ld},O_{qe}$ 4.3.2 $O_{qu}^{(1),(8)},O_{qd}^{(1),(8)}$ 4.3.3 $O_{ll}$ 4.3.4 $O^{(3),(1)}_{lq}$ 4.3.5 $O_{qq}$ and $O^{(3)}_{qq}$ 5 Summary A Massless spinor helicity conventions B Details about cross sections and loop amplitudes B.1 $Z^{\prime}$ at tree level B.2 Integrating out color octet B.3 Charge 2 scalar at tree level B.4 Dispersion relation at 1-loop C Decomposition of cross sections in terms of $SU(2)$ and $SU(3)$ irreps C.1 $SU(3)$ decomposition 1 Introduction Testing the Standard Model (SM) and searching for new physics are two essential goals of the current and future experimental programs in particle physics. In this respect, all of the measurements can be classified as low energy (SM scale) and high energy experiments. For low energy observables, the Standard Model Effective Field Theory (SMEFT) provides an excellent tool to consistently parameterize new physical perturbations, classified order by order in the form of non-renormalizable operators of higher dimension. We expect new physics to kick in above at least the weak scale, and as we approach the regime of energies above this scale, the applicability of EFT techniques becomes increasingly questionable.
Reliable calculations then require a discussion of the explicit UV completions, and thus it’s clear that the connection between UV and IR observables and predictions becomes somewhat model dependent, and explicit matching is required to infer useful information. In this direction, dispersion relations provide a model independent way to connect low and high energy measurements, in the form of sum rules for low energy Wilson coefficients and high energy cross sections. This provides a consistent way to match the known and measurable low energy, and speculative high energy quantities(for a recent reappraisal see [1] and for a textbook introduction [2, 3] ). Their power lies in their generality -they follow from the simple and sacred physical requirements of Poincare invariance, unitary and locality. Recently there has been significant attention directed toward the application of the dispersion relations and sum rules for SMEFT [4, 5, 6, 7]. For the four fermion interactions most of the effort so far has been focused on the dimension 8 operators([8, 9, 10, 11]) where the sum rules lead to positivity constraints on the Wilson coefficients in a model independent way. On the other hand, from a phenomenological point of view, dimension eight operators are very hard to measure at experiments; and most likely the new physics will demonstrate itself first via dimension six corrections to the SM. Thus, it becomes crucial to understand similar dispersion relations for the dimension six operators. The situation here is drastically different from the dimension 8 discussion because the relevant dispersion integral, aside from being possibly non-convergent, is of indefinite sign and doesn’t admit any simple model independent positivity bound. However, the situation is far from hopeless, and the dispersion relations turn out to be instructive in a different way: instead of being viewed as a constraint on Wilson coefficients, these sum rules are to be used as a tool to constrain the UV completions of these operators, given signs to be measured in the IR. Therefore, in a way, we are approaching the IR-UV relationship from the opposite standpoint to what is customary. Our emphasis is on model building for a full theory by taking IR measurements as our input, instead of trying to predict these measurements from general inputs from the UV theory. We will show that different signs of the Wilson coefficients will be related to the dominance of the particle collision cross sections in the various channels, and decompose these cross sections as explicitly as possible to indicate the quantum numbers of initial states with dominant cross sections. Moreover, it is crucial to emphasize that sum rules can only be written down for a subspace of the dimension 6 basis, namely the effective 4 fermion operators that can generate forward amplitudes. Based on these sum rules, we will report examples of the weakly coupled UV completions which can lead to either sign of the Wilson coefficients. Such information, which we believe was not consistently summarized before, can become a useful guide for the future measurements in case some of the Wilson coefficients are discovered to be non zero. These measurements, supplemented with the sum rules we derive, will bring us closer to an understanding of the fundamental physics which the SMEFT derives from. The manuscript is organized as follows : in section 2 we briefly review dispersion integrals. 
In section 3, we study in detail the operator $(\bar{e}_{R}\gamma^{\mu}e_{R})^{2}$ and illustrate the relation between UV completions and signs of the effective operator at tree level and at 1 loop. In section 4, we present the whole set of four fermion operators and identify which of them can be constrained by the dispersion relations. Results are summarized in section 5. Most details of the calculations have been relegated to the appendices. 2 Review of dispersion relations In this section, we review dispersion relations and their applications to constraints on EFTs, following the discussion in [1, 12, 4, 13] (readers familiar with the formalism can proceed directly to section 3). It is a general principle that the non-analyticities associated with scattering amplitudes have a physical origin, in the form of poles and branch cuts arising from localized particle states and thresholds. The positivity of the spectral function in the Källén-Lehmann decomposition generalizes to more general cross sections, which can be related to elastic forward scattering amplitudes via a dispersion integral, to be reviewed in a moment. What this means in an EFT context is that, in perturbation theory, one can evaluate the two sides of a dispersion integral to a certain order, allowing us to extract information about the effective IR coupling that contributes to that amplitude at low energies on one side of the relation, from general observations about the UV piece of the dispersion relation, without any explicit matching. While unitarity is reflected in the positivity of the spectral function and of cross sections, we need additional information about the high energy behaviour of the amplitude to control the dispersion integral on the infinite contour. The asymptotics of amplitudes at high energies is a question about the unitarity and locality of the theory. The famous Froissart bound (technically proved only for theories with a mass gap, but believed to hold true more generally) tells us that the behaviour of the amplitude $A(s)$ is such that $A(s)/s^{2}\to 0$ as $s\to\infty$ ([14, 15, 16]). This, in general, allows us to write down a dispersion relation with two subtractions, i.e., a linear polynomial of the form $a(t)+b(t)s$ supplemented by a contour integral picking up the nonanalytic structure of the amplitude. The subtractions $a(t),b(t)$ cannot be determined by unitarity alone, but the nonanalytic structure can be related to manifestly positive cross sections via the optical theorem. We can then differentiate this relation twice with respect to $s$ to get rid of the unknown subtractions, and we are left with a manifestly positive integral on the right and the coefficient of $s^{2}$ in $A(s)$ on the left, thereby leading to what are conventionally called “positivity bounds” [1] on EFT parameters. This prescription, however, cannot be directly applied to dimension $6$ operators. Their contribution to $2\to 2$ amplitudes scales as $p^{2}$, and so $d^{2}A(s)/ds^{2}$ kills the information about their couplings, and we cannot constrain them in this way. The best we can do is to look at $dA(0)/ds$, and be left with a dispersion integral of indefinite sign as well as an undetermined subtraction constant (which we will call $C_{\infty}$, as it captures the pole of the amplitude at infinity). Let us briefly derive this dispersion relation from first principles. Consider a process $ab\to ab$ with the amplitude $A_{ab\to ab}\equiv A_{ab}(s,t)$, and in the forward limit ($t\to 0$).
This amplitude can be expanded as $$A_{ab}(s,0)=\sum_{n}c_{n}(\mu^{2})(s-\mu^{2})^{n},~{}~{}~{}{c_{n}(\mu^{2})=\frac{1}{n!}\frac{\partial^{n}}{\partial s^{n}}A_{ab}(s,0)|_{s=\mu^{2}}}$$ (1) about some arbitrary reference scale $\mu^{2}$ where the amplitude is analytic. We can now use Cauchy’s theorem to write $$\displaystyle\frac{1}{2\pi i}\oint ds\frac{A_{ab}(s,0)}{(s-\mu^{2})^{n+1}}=\sum_{s_{i},\mu^{2}}Res\frac{A_{ab}(s,0)}{(s-\mu^{2})^{n+1}}=c_{n}(\mu^{2})+\sum_{s_{i}}Res\frac{A_{ab}(s,0)}{(s-\mu^{2})^{n+1}},$$ (2) where $s_{i}$ are the physical poles associated with IR stable resonance exchanges in the scattering, and the contour of integration is shown on the Fig. 1. The residues at physical poles are IR structures that we will drop henceforth. This can always be done if the scale $\mu$ is chosen such that $\mu^{2}\gg m_{IR}^{2}$, where $m_{IR}^{2}$ corresponds to the scale of the $s_{i}$ poles. Indeed, the last term in Eq.2 gives corrections of the order $\mathcal{O}(m_{IR}^{2}/\mu^{2})$, which can be safely ignored. The analytic structure of the amplitude allows to decompose the integral as a sum of the contributions along the branch cuts and over infinite circle, so that scehamtically $$\displaystyle\frac{1}{2\pi i}\int ds\frac{A_{ab}(s,0)}{(s-\mu^{2})^{n+1}}=\textrm{integrals along cuts}+\textrm{integral on big circle}=C^{n}_{\infty}+I_{n}$$ $$\displaystyle C^{n}_{\infty}=\int^{2\pi}_{0}\frac{d\theta}{2\pi}\frac{A_{ab}(|s_{\Lambda}|e^{i\theta},0)}{(|s_{\Lambda}|e^{i\theta}-\mu^{2})^{n+1}}\cdot(|s_{\Lambda}|e^{i\theta})$$ (3) The integration over the branch cuts can be written as a sum of the integrals over discontinuities $$2\pi iI_{n}=\int_{4m^{2}}^{\infty}\bigg{(}\frac{A_{ab}(s+i\epsilon,0)-A_{ab}(s-i\epsilon,0)}{(s-\mu^{2})^{n+1}}+(-1)^{n}\frac{A_{ab}(4m^{2}-s-i\epsilon,0)-A_{ab}(4m^{2}-s+i\epsilon,0)}{(s-4m^{2}+\mu^{2})^{n+1}}\bigg{)}.$$ (4) Since $4m^{2}-s=u$ for $t=0$, the second term is just the $u$ channel crossed amplitude for the process $a\bar{b}\to a\bar{b}$ i.e. $A_{a\bar{b}}$(instead of $ab\to ab$)111 Crossing relations for particles with spin become more nontrivial (see for example [17, 18]). However, in the case of the massless spin $1/2$ particles, which are the interest of this paper, the usual crossing relations for the forward amplitude remain valid [17] and we will not worry about these issues in the rest of the paper.. Using the optical theorem, we can rewrite the discontinuity in terms of cross section and in the limit $m\to 0$, and $\mu\to 0$ we obtain : $$I_{n}=\int\frac{ds}{\pi s^{n}}\bigg{(}\sigma_{ab}+(-1)^{n}\sigma_{a\bar{b}}\bigg{)}.$$ (5) For dimension six operators, we will be interested in dispersion relations of Eq. 2 for the case $n=1$. $$\displaystyle c_{1}(\mu^{2})=\int\frac{ds}{\pi s}\bigg{(}\sigma_{ab}-\sigma_{a\bar{b}}\bigg{)}+C_{\infty}^{(n=1)}.$$ (6) Note that the quantity $c_{n}(\mu^{2})$ on the left hand side can be evaluated in IR using the EFT expansion. This introduces an additional source of corrections of the order ${\cal O}(\mu^{2}/\Lambda^{2})$, where $\Lambda$ is the scale suppressing higher dimensional operators. We can see that the dispersion relations are valid up to corrections of the order ${\cal O}(m_{IR}^{2}/\mu^{2},\mu^{2}/\Lambda^{2})$, and these can be ignored if $\Lambda^{2}\gg\mu^{2}\gg m^{2}_{IR}$. 
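As a sanity check of the bookkeeping in Eqs. (2)-(6), the $n=1$ relation can be verified numerically for a toy forward amplitude with a single right-hand cut and, by construction, no crossed-channel singularities: $A(s)=-\log(1-s/M^{2})$ (purely illustrative; this is not one of the SMEFT amplitudes considered below). For this amplitude $\mathrm{Im}\,A(s+i\epsilon)=\pi\,\theta(s-M^{2})$, the crossed cross section vanishes, and $C_{\infty}=0$ since $A$ grows only logarithmically, so with the optical-theorem normalization $\sigma_{ab}(s)=\mathrm{Im}\,A_{ab}(s,0)/s$ and $\mu^{2}\to 0$, Eq. (6) reduces to $c_{1}=\frac{1}{\pi}\int_{M^{2}}^{\infty}ds\,\mathrm{Im}\,A(s)/s^{2}=1/M^{2}$, which indeed equals $dA/ds|_{s=0}$. The short script below (hypothetical variable names, arbitrary units) confirms the agreement.

import numpy as np
from scipy.integrate import quad

M2 = 2.0   # toy scale M^2, arbitrary units

# IR side of the n = 1 sum rule: c_1 = dA/ds at s -> 0 for A(s) = -log(1 - s/M2)
c1_IR = 1.0 / M2

# UV side: (1/pi) * integral_{M2}^infinity ds Im A(s) / s^2, with Im A = pi above
# the cut, vanishing crossed cross section and C_infty = 0 for this toy amplitude
c1_UV, _ = quad(lambda s: 1.0 / s**2, M2, np.inf)   # the factors of pi cancel

print(c1_IR, c1_UV)   # both equal 1/M2 = 0.5 up to quadrature error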
At last, let us mention that the forward limit $t\to 0$ must be taken with care, and is in principle problematic in the presence of massless particles propagating in the t-channel of the UV amplitude (see for example [4, 13]). In fact, we always have the usual SM Coulumb singularities that lead to the bad behaviour in the forward limit. The way out of this problem is by using IR mass regulators to match the known SM contributions to both sides of the dispersion relation, and subtract them away. 3 Warm up exercise We consider the simplest case of a fully right handed operator which is made up of singlet fields $e_{R}$, all of the same generation (the dispersion relation for this operator was presented in [6] too), $$\displaystyle c_{RR}(\bar{e}_{R}\gamma_{\mu}e_{R})(\bar{e}_{R}\gamma^{\mu}e_{R}).$$ (7) Following the strategy outlined in the previous section, we start by considering the amplitude $A_{e\bar{e}}$ and derive the following dispersion relation $$\frac{dA_{e_{R}\overline{e_{R}}}(s,0)}{ds}\bigg{|}_{s=0}=\int\frac{ds}{\pi s}\left(\sigma_{e_{R}\overline{e_{R}}}-\sigma_{e_{R}e_{R}}\right)+C_{\infty},$$ (8) where we have omitted the $(n=1)$ subscript for $C_{\infty}$. The amplitude in the IR ($s\to 0$) limit can be safely calculated using the EFT and we find (we use helicity amplitudes; for notations and for the explicit conventions see appendix A): $$\displaystyle A_{e_{R}\overline{e_{R}}}(s,t)=c_{RR}\cdot 2([2\gamma_{\mu}1\rangle[3\gamma^{\mu}4\rangle-[3\gamma_{\mu}1\rangle[2\gamma^{\mu}4\rangle)=-8c_{RR}[23]\langle 14\rangle$$ $$\displaystyle A_{e_{R}\overline{e_{R}}}(s,t)|_{t\to 0}=-8c_{RR}s$$ (9) so that we arrive at the following sum rule for the $c_{RR}$ Wilson coefficient222In this expression we should take the value of the Wilson coefficient at the scale $\mu\to 0$. The RGE evolution of the Wilson coefficients from the EFT cut off scale to $\mu$ can lead to the modification of the Eq.10 (see [19] for a recent discussion). In this paper we will assume that these running effects are subleading and can be safely ignored. $$-8c_{RR}=\int\frac{ds}{\pi s}\left(\sigma_{e_{R}\overline{e_{R}}}-\sigma_{e_{R}e_{R}}\right)+C_{\infty}$$ (10) Let us see how this equation can be used as guidance for UV completions that lead to the possible signs of the $c_{RR}$ Wilson coefficient. 3.1 Charge neutral vector exchange Let us start with the negative sign for $c_{RR}$. The dispersion relation predicts that this will be generated by the models with resonances in $e\bar{e}$ channel (apart from the $C_{\infty}$ contribution). The simplest model which can enhance the $\sigma_{e\bar{e}}$ cross section is a simple $Z^{\prime}$ with the interaction $$\displaystyle{\cal L}_{Z^{\prime}}=\lambda Z^{\prime}_{\mu}\bar{e}_{R}\gamma^{\mu}e_{R}.$$ (11) Integrating $Z^{\prime}$ at tree level we obtain for the Wilson coefficient $$\displaystyle c_{RR}=-\frac{\lambda^{2}}{2M_{Z^{\prime}}^{2}},$$ (12) where the sign follows the prediction of the dispersion relations. However, inspecting the amplitudes carefully, we see that the massive vector exchange in the $t-$ channel spoils the convergence of the amplitude in the forward region, making the integral over the infinite circle non-vanishing. 
To this end, let us look at the amplitude $A_{\bar{e}_{R}e_{R}}$ in detail- $$\displaystyle iA=-\lambda^{2}\bigg{(}[2\gamma^{\mu}1\rangle[3\gamma_{\mu}4\rangle\frac{-i}{s-M_{Z^{\prime}}^{2}}-[3\gamma^{\mu}1\rangle[2\gamma_{\mu}4\rangle\frac{-i}{t-M_{Z^{\prime}}^{2}}\bigg{)}$$ $$\displaystyle A(s,t)=-2\lambda^{2}[23]\langle 14\rangle\bigg{(}\frac{1}{s-M_{Z^{\prime}}^{2}}+\frac{1}{t-M_{Z^{\prime}}^{2}}\bigg{)}.$$ (13) In the forward limit, this amplitude goes as $$\displaystyle A(s,t)|_{t\to 0}=-2\lambda^{2}s\left(\frac{1}{s-M_{Z^{\prime}}^{2}}+\frac{1}{-M_{Z^{\prime}}^{2}}\right).$$ (14) We can see that the integral over infinite contour becomes non zero and is equal to $$\displaystyle C_{\infty}^{(Z^{\prime})}=\frac{2\lambda^{2}}{M_{Z^{\prime}}^{2}}.$$ (15) We see that even though the contribution from the infinite contour is non-zero, it turns out of the same sign and size as the cross section part of the dispersion relation $$\displaystyle\left[\int\frac{ds}{\pi s}\left(\sigma_{e_{R}\overline{e_{R}}}-\sigma_{e_{R}e_{R}}\right)\right]^{(Z^{\prime})}=\frac{2\lambda^{2}}{M_{Z^{\prime}}^{2}},$$ (16) (see appendix B for details of the calculation). The fact that exchange of the elementary vector boson spoils the convergence of the amplitude in the forward limit at large $s$ is not new and was observed for example in [4] in the discussion of the other dimension six operators. Let us extend the discussion for the operators with two fermion flavours. For example $c_{e\mu}(\bar{e}_{R}\gamma^{\mu}e_{R})(\bar{\mu}_{R}\gamma_{\mu}\mu_{R})$ contributes $e\bar{\mu}\to e\bar{\mu}$ in the IR. This operator can be generated by two kinds of UV completions with a charge neutral vector boson- $$\mathcal{L}_{UV}^{(1)}=\lambda Z_{(1)}^{\mu}(\bar{e_{R}}\gamma^{\mu}\mu_{R}+h.c)\hskip 19.91692pt\mathcal{L}^{(2)}_{UV}=(\lambda_{1}Z_{(2)}^{\mu}\bar{e}_{R}\gamma^{\mu}e_{R}+\lambda_{2}Z_{(2)}^{\mu}\bar{\mu}_{R}\gamma_{\mu}\mu_{R})$$ (17) The analysis in both cases is very similar to the single flavour discussion; however, in the first case (${\cal L}_{UV}^{(1)}$) the integral over infinite contour vanishes, since there is no amplitude with $Z_{(1)}$ in the $t$-channel. Writing down the dispersion relations for the $e\bar{\mu}\to e\bar{\mu}$ scattering we will obtain (note that there is a different numerical prefactor compared to Eq.10 due to combinatorics): $$\displaystyle c_{e\mu}=-\frac{1}{2}\left[\int\frac{ds}{\pi s}\left(\sigma_{e_{R}\bar{\mu}_{R}}-\sigma_{e_{R}\mu_{R}}\right)\right]=-\frac{|\lambda|^{2}}{M_{(1)}^{2}}.$$ (18) In the second case $({\cal L}_{UV}^{(2)})$, we are in the opposite situation since both cross sections $\sigma_{e\bar{\mu}(\mu)}=0$ vanish at leading order in perturbation theory. However there is a forward amplitude for this process, which comes from $t$-channel diagram and it contributes only to $C_{\infty}$. In other words, the pole at infinity saturates the dispersion relation, and even though no corresponding UV cross section can be measured to constrain this coefficient, it can be nonzero because of this pole. In fact, a simple calculation yields $$c_{e\mu}=-\frac{C_{\infty}}{2}=-\frac{\lambda_{1}\lambda_{2}}{M_{(2)}^{2}}$$ (19) which can be either positive or negative depending on the values of the $\lambda_{1},\lambda_{2}$ couplings. Let us continue with our examination of the UV completions for the various signs of the $c_{RR}$ 3.2 Charge two scalar What about the positive sign of $c_{RR}$? The dispersion relation in Eq. 
10 predicts that this happens for UV completions that generate only $\sigma_{ee}$ cross section. The simplest possibility is a charge two scalar with the interaction $$\displaystyle{\cal L}=\kappa\phi\overline{e_{R}^{c}}e_{R}+h.c.$$ (20) Then at the order ${\cal O}(\kappa^{2})$, only $\sigma_{ee}$ will be non-vanishing, so the Wilson coefficient must be positive. Indeed, integrating out the scalar field at tree level gives $$\displaystyle c_{RR}=\frac{|\kappa|^{2}}{2M_{\phi}^{2}}$$ (21) which is manifestly positive. In this case the forward amplitude converges quickly enough, so that $C_{\infty}=0$ -this is just the statement that a scalar cannot be exchanged in the $t$ channel when opposite helicity fermions are scattered. We see that the both signs of the Wilson coefficient are possible with a weakly coupled UV completion. One can still wonder whether the negative sign of the $c_{RR}$ interactions in the Eq. 12 is related to the $t-$ channel pole and non-convergence of the amplitude in the UV. To quell any doubts, in the next subsection we will build a weakly coupled UV completion without new vector bosons and with convergent forward amplitudes. 3.3 UV completion at 1-loop Let us extend the SM with vector-like fermion $\Psi$ of charge $1$ and a charge $-2$ complex scalar $\phi$ with a Yukawa interaction $$\displaystyle{\cal L}=|D\phi|^{2}+i\bar{\Psi}\mathord{\not\mathrel{{\mathrel{D}}}}\Psi+M_{\psi}\bar{\Psi}\Psi-M^{2}_{\phi}|\phi|^{2}+y\bar{e}_{R}\phi\Psi.$$ (22) This generates an effective operator at the order $O(y^{4})$, and at this order the only cross section available is $\sigma_{e\bar{e}}$. The dispersion relation predicts that the Wilson coefficient must be negative. Moreover, $C_{\infty}=0$ here as the amplitude scales slowly enough with $s$. Indeed, integrating out heavy fields at one loop we obtain $$\displaystyle c_{RR}=-\frac{|y|^{4}}{128\pi^{2}M_{\Psi}M_{\phi}}f(x),~{}~{}~{}x\equiv\frac{M_{\Psi}}{M_{\Phi}}$$ $$\displaystyle f(x)=\frac{(x+4x^{3}\log x-x^{5})}{(1-x^{2})^{3}},~{}~{}~{}~{}~{}\lim_{x\to 1}f(x)=1/3$$ (23) where one can see that the function $f(x)$ is always positive. See appendix B for explicit verification of the dispersion integral in the case $M_{\Psi}=M_{\Phi}$. In summary, this warm up exercise shows us that both signs of the Wilson coefficients are possible within weakly coupled theories. Contribution of the infinite contours is important for the t-channel exchange of the vector resonances. Interestingly, both signs of the Wilson coefficient are possible even for the weakly coupled models with vanishing $C_{\infty}$ 333This result for the Wilson coefficients contradicts the findings of the Ref.[20], where the only possible sign of the Wilson coefficient was found to be positive. In the following, we will derive the set of the dispersion relations for the whole set of four fermion operators and identify the UV completions leading to the various signs of the Wilson coefficients. 4 Four fermion operators First of all, let us define a complete basis of the four fermion operators, and we will do this following the notations of the Ref. 
[21], [22]: purely left-handed $$\displaystyle O_{ll}^{ijkm}=\left(\bar{l^{i}}_{L}\gamma_{\mu}l^{j}_{L}\right)\left(\bar{l^{k}}_{L}\gamma^{\mu}l^{m}_{L}\right),~{}~{}~{}O_{qq}^{(1)ijkm}=\left(\bar{q^{i}}_{L}\gamma_{\mu}q^{j}_{L}\right)\left(\bar{q^{k}}_{L}\gamma^{\mu}q^{m}_{L}\right),$$ $$\displaystyle O_{qq}^{(3)ijkm}=\left(\bar{q^{i}}_{L}\gamma_{\mu}\sigma_{a}q^{j}_{L}\right)\left(\bar{q^{k}}_{L}\gamma^{\mu}\sigma_{a}q^{m}_{L}\right),~{}~{}O_{ql}^{(1)ijkm}=\left(\bar{l^{i}}_{L}\gamma_{\mu}l^{j}_{L}\right)\left(\bar{q^{k}}_{L}\gamma^{\mu}q^{m}_{L}\right)$$ $$\displaystyle O_{ql}^{(3)ijkm}=\left(\bar{l^{i}}_{L}\gamma_{\mu}\sigma_{a}l^{j}_{L}\right)\left(\bar{q^{k}}_{L}\gamma^{\mu}\sigma_{a}q^{m}_{L}\right),$$ purely right-handed $$\displaystyle O_{ee}^{ijkm}=\left(\bar{e}_{R}\gamma_{\mu}e_{R}\right)\left(\bar{e}_{R}\gamma^{\mu}e_{R}\right),~{}~{}~{}O_{uu}^{ijkm}=\left(\bar{u}_{R}\gamma_{\mu}u_{R}\right)\left(\bar{u}_{R}\gamma^{\mu}u_{R}\right)$$ $$\displaystyle O_{dd}=\left(\bar{d}_{R}\gamma_{\mu}d_{R}\right)\left(\bar{d}_{R}\gamma^{\mu}d_{R}\right),~{}~{}O_{ud}=\left(\bar{u}_{R}\gamma_{\mu}u_{R}\right)\left(\bar{d}_{R}\gamma^{\mu}d_{R}\right)$$ $$\displaystyle O_{ud}^{(8)}=\left(\bar{u}_{R}\gamma_{\mu}T_{A}u_{R}\right)\left(\bar{d}_{R}\gamma^{\mu}T_{A}d_{R}\right),~{}~{}~{}O_{eu}=\left(\bar{e}_{R}\gamma_{\mu}e_{R}\right)\left(\bar{u}_{R}\gamma^{\mu}u_{R}\right)$$ $$\displaystyle O_{ed}\left(\bar{e}_{R}\gamma_{\mu}e_{R}\right)\left(\bar{d}_{R}\gamma^{\mu}d_{R}\right),$$ (24) left-right $$\displaystyle O_{le}=\left(\bar{l}_{L}\gamma_{\mu}l_{L}\right)\left(\bar{e}_{R}\gamma^{\mu}e_{R}\right),~{}~{}~{}O_{qqee}\left(\bar{q}_{L}\gamma_{\mu}q_{L}\right)\left(\bar{e}_{R}\gamma^{\mu}e_{R}\right)\hskip 91.04872pt$$ $$\displaystyle O_{lu}=\left(\bar{l}_{L}\gamma_{\mu}l_{L}\right)\left(\bar{u}_{R}\gamma^{\mu}u_{R}\right),~{}~{}~{}O_{ld}=\left(\bar{l}_{L}\gamma_{\mu}l_{L}\right)\left(\bar{d}_{R}\gamma^{\mu}d_{R}\right)$$ $$\displaystyle O_{qu}^{(1)}=\left(\bar{q}_{L}\gamma_{\mu}q_{L}\right)\left(\bar{u}_{R}\gamma^{\mu}u_{R}\right),~{}~{}~{}O_{qu}^{(8)}=\left(\bar{q}_{L}\gamma_{\mu}T_{A}q_{L}\right)\left(\bar{u}_{R}\gamma^{\mu}T_{A}u_{R}\right)$$ $$\displaystyle O_{qd}^{(1)}=\left(\bar{q}_{L}\gamma_{\mu}q_{L}\right)\left(\bar{d}_{R}\gamma^{\mu}d_{R}\right),~{}~{}~{}O_{qd}^{(8)}=\left(\bar{q}_{L}\gamma_{\mu}T_{A}q_{L}\right)\left(\bar{d}_{R}\gamma^{\mu}T_{A}d_{R}\right)$$ $$\displaystyle O_{ledq}=\left(\bar{l}_{L}e_{R}\right)\left(\bar{d}_{R}q_{L}\right),~{}~{}~{}O_{quqd}^{(1)}=\left(\bar{q}_{L}u_{R}\right)i\sigma_{2}\left(\bar{q}_{L}d_{R}\right)^{\mathrm{T}}$$ $$\displaystyle O_{lequ}^{(1)}=\left(\bar{l}_{L}e_{R}\right)i\sigma_{2}\left(\bar{q}_{L}u_{R}\right)^{\mathrm{T}},~{}~{}O_{lequ}^{(3)}=\left(\bar{l}_{L}\sigma_{\mu\nu}e_{R}\right)i\sigma_{2}\left(\bar{q}_{L}\sigma^{\mu\nu}u_{R}\right)^{\mathrm{T}}$$ $$\displaystyle O_{quqd}^{(8)}=\left(\bar{q}_{L}T_{A}u_{R}\right)i\sigma_{2}\left(\bar{q}_{L}T_{A}d_{R}\right)^{\mathrm{T}},$$ (25) baryon number violating $$\displaystyle O_{duq}=\epsilon_{ABC}\left(\bar{d}^{c\,A}_{R}u^{B}_{R}\right)\left(\bar{q}^{c\,C}_{L}i\sigma_{2}l_{L}\right),~{}~{}O_{qqu}=\epsilon_{ABC}\left(\bar{q}^{c\,A}_{L}i\sigma_{2}q^{B}_{L}\right)\left(\bar{u}^{c\,C}_{R}e_{R}\right)$$ $$\displaystyle O_{duu}=\epsilon_{ABC}\left(\bar{d}^{c\,A}_{R}u^{B}_{R}\right)\left(\bar{u}^{c\,C}_{R}e_{R}\right),~{}~{}O_{qqq}\epsilon_{ABC}(i\sigma_{2})_{\alpha\delta}(i\sigma_{2})_{\beta\gamma}\left(\bar{q}^{c\,A\alpha}_{L}q^{B\beta}_{L}\right)\left(\bar{q}^{c\,C\gamma}_{L}l^{\delta}_{L}\right).$$ (26) 
The rest of the possible operators can be reduced via some Fierzing to the basis of Eq.4-4-4, using the completeness relations for the SU(2) and SU(3) generators $$\displaystyle\sum_{a=1}^{3}\left(\sigma^{a}\right)_{ij}\left(\sigma^{a}\right)_{kl}=2\left(\delta_{il}\delta_{kj}-\frac{1}{2}\delta_{ij}\delta_{kl}\right)$$ (27) $$\displaystyle\sum_{A=1}^{8}\left(T^{A}\right)_{ij}\left(T^{A}\right)_{kl}=2\left(\delta_{il}\delta_{kj}-\frac{1}{3}\delta_{ij}\delta_{kl}\right)$$ (28) As we have seen in the previous section, the dispersion relations are effective in the case of forward scattering i.e. when the initial and final states are the same 444Recently it was shown that the scattering of the mixed(entangled) flavour states can lead to the additional constraints in the case of the dimension eight operators [8, 9], where strict positivity bounds can be applied. In the case of dimension six operators the measurements of the cross sections for the mixed states looks almost impossible, so we do not investigate this direction further.. Therefore, only the following subspace of operators can be subject to sum rules - $$\displaystyle O_{ll}^{iijj},O_{ll}^{ijji},O_{qq}^{(1,3)iijj},O_{qq}^{(1,3)ijji},O_{ql}^{(1,3)iikk},O_{ee,uu,dd}^{iijj},O_{ee,uu,dd}^{ijji},$$ $$\displaystyle O_{ud}^{(1),(8),iijj},O_{ed}^{iijj},O_{eu}^{iijj},O_{le,qe,lu,ld}^{iijj},O^{(1)(8)iijj}_{qu},O^{(1)(8)iijj}_{qd},$$ (29) which will be the focus of this paper. For fully right-handed operators, the discussion follows closely the results reported for $O_{ee}$ above. Therefore, we will henceforth report only the results and the examples of UV completions leading to various signs. 4.1 Experimental constraints Having defined the operators which we will consider in our discussion, let us briefly mention the status of the experimental bounds based on the discussion in [23, 24]. Current bounds on on four lepton and two lepton two quark operator come from the combinations of the $Z,W$ pole observables, fermion production at LEP, low energy neutrino scatterings , parity violating electron scatterings, and parity violation in atoms. One of the challenges in deriving these bounds comes from the modifications of $W,Z$ vertices which too can contribute to the same low energy observables, so that the global fit including the $W,Z$ pole observables becomes necessary. For example, for two lepton two quark operators Ref. [24] has found nine flat directions unbounded experimentally. Current combinations of the low energy experimental constraints as well as LHC measurements bound the various Wilson coefficients in the range $10^{-2}-10^{-3}$ (where the operators are assumed to be suppressed by the $v_{ew}^{2}$ scale), which means sensitivity to the scales $\mathcal{O}$(few TeV). Just to be specific, for example the four-electron operator discussed in Eq. 7 is bounded by the Bhabha scattering measurements at LEP-2[25] and SLAC E158 experiment for the Møller scattering ($e^{-}e^{-}\to e^{-}e^{-}$)[26], where both experiments are testing the complementary combinations of the Wilson coefficients leading to the net sensitivity of $\sim 4\times 10^{-3}v_{ew}^{-2}$ on the value of the Wilson coefficient. LHC measurements of the dilepton production in $pp$ scattering leads to additional strong constraints on the two quark-two lepton operators [27, 28], where for some operators we will become sensitive to new physics up to the scale of $\sim 50$ TeV. So far, all of the measurements are consistent with SM predictions. 
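The completeness relations in Eqs. (27) and (28) used for these Fierz reductions are easy to verify numerically, taking $\sigma^{a}$ to be the Pauli matrices and $T^{A}$ the Gell-Mann matrices (the normalization in which Eq. (28) is written). A small check:

import numpy as np

sigma = [np.array(m, dtype=complex) for m in [
    [[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]]]

r3 = 1 / np.sqrt(3)
gellmann = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[r3, 0, 0], [0, r3, 0], [0, 0, -2 * r3]]]]

def completeness(gens, N, frac):
    # checks sum_A (T^A)_{ij} (T^A)_{kl} = 2 ( d_il d_kj - (1/frac) d_ij d_kl )
    lhs = sum(np.einsum('ij,kl->ijkl', T, T) for T in gens)
    d = np.eye(N)
    rhs = 2 * (np.einsum('il,kj->ijkl', d, d) - np.einsum('ij,kl->ijkl', d, d) / frac)
    return np.allclose(lhs, rhs)

print(completeness(sigma, 2, 2))     # Eq. (27), SU(2)
print(completeness(gellmann, 3, 3))  # Eq. (28), SU(3) with T^A the Gell-Mann matrices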
4.2 FULLY RIGHT HANDED 4.2.1 $O_{ee}$ This operator has already been discussed in the section 3 and we would just like to emphasize that there are no sum rules for more than two flavours of fermions. Following the notations of Eq. 4-4 the dispersion relations can be summarized as: $$\displaystyle-8c_{ee}^{iiii}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{e}_{i}e_{i}}-\sigma_{{e}_{i}e_{i}}\bigg{)}+C_{\infty}$$ (30) $$\displaystyle-2(c^{iijj}_{ee})|_{i\neq j}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{e}_{i}e_{j}}-\sigma_{\bar{e}_{i}\bar{e}_{j}}\bigg{)}+C_{\infty}$$ (31) Note that in this simple case where the fields are singlets, the operators $O_{ee}^{iijj}$ and $O_{ee}^{ijji}$ are identical after Fierzing; and $O_{ee}^{iijj}$ and $O_{ee}^{jjii}$ are just trivially identical by symmetrization, so we report the dispersion relation only in terms of $c_{ee}^{iijj}$ in order to not double-count. Summarising the discussion about UV completions in the section 3 we have: $c_{ee}<0$ : neutral $Z^{\prime}$ at tree level; Vectorlike singlet fermion $\Psi$ and a heavy singlet comlex scalar $\Phi$ with $Q[\Phi\Psi]=-1$ at 1 loop. $c_{ee}>0$ : Charge 2 scalar; for different flavours ($O_{ee}^{iijj}|_{i\neq j}$), $Z^{\prime}$ can lead to a possibly positive sign as well if the couplings to the different flavours of the fermions are of opposite signs (see Eq.19) 4.2.2 $O_{uu},O_{dd}$ Let us proceed with our investigation of the four fermion quark operators. The discussion proceeds exactly in the same way as for the leptons, except for new color structure. Fierzing them into the basis of Eq.4-4 there are only six structures of the operators $O_{uu,dd}^{iijj},O_{uu,dd}^{ijji}$, which are in this case not related by a Fierz identity because of an implicit contraction of color indices. Let us start with the operators where all of the quarks have the same hypercharge, and focus on the operator $O_{uu}^{iiii}$. Denoting by $\alpha,\beta$ the color indices and considering same and different color scatterings, we will obtain the following relations: $$\displaystyle-8c_{uu}^{iiii}=\int\frac{ds}{\pi s}\left(\sigma_{u_{\alpha}\bar{u}_{\alpha}}-\sigma_{u_{\alpha}u_{\alpha}}\right)+C_{\infty}^{\alpha\alpha}=\int\frac{ds}{\pi s}\left(\frac{2\sigma_{u\bar{u}}^{(8)}+\sigma^{(1)}_{u\bar{u}}}{3}-\sigma_{uu}^{(6)}\right)-C_{\infty}^{uu,(6)}$$ $$\displaystyle-4c_{uu}^{iiii}=\left[\int\frac{ds}{\pi s}\left(\sigma_{u_{\alpha}\bar{u}_{\beta}}-\sigma_{u_{\alpha}u_{\beta}}\right)+C_{\infty}^{\alpha\beta}\right]_{\alpha\neq\beta}=\int\frac{ds}{\pi s}\left(\sigma_{u\bar{u}}^{(8)}-\frac{\sigma_{uu}^{(\bar{3})}+\sigma^{(6)}_{uu}}{2}\right)+C_{\infty}^{u\bar{u},(8)}.$$ (32) In the last step, we have decomposed the various possibilities of the initial state fermions in terms of the $SU(3)$ QCD representations. This is convenient, since the Wigner-Eckart theorem requires the amplitudes to remain the same for all of the components of the irreducible representation. In particular, for the quark antiquark scattering the initial state will always be decomposed as a singlet and octet of $SU(3)$. Even though measuring $\sigma^{(8)}$ and $\sigma^{(1)}$ independently at collider experiment looks practically impossible, such dispersion relations can become very useful for model building if the non-zero values of the Wilson coefficients are found. 
Note that we can calculate the integral over the infinite contour using the amplitude $A_{u\bar{u}}$ or its crossed version $A_{uu}$, and the values of these integrals satisfy (see appendix C for details): $$\displaystyle-C_{\infty}^{uu(6)}=\frac{2C_{\infty}^{u\bar{u}(8)}}{3}+\frac{C_{\infty}^{u\bar{u}(1)}}{3}$$ $$\displaystyle-\frac{C_{\infty}^{uu(6)}+C_{\infty}^{uu(\bar{3})}}{2}=C_{\infty}^{u\bar{u}(8)}.$$ (33) Re-expressing everything in terms of the color-averaged cross sections, we obtain $$\displaystyle-\frac{16}{3}c_{uu}^{iiii}=\int\frac{ds}{\pi s}\left(\sigma_{u\bar{u}}-\sigma_{uu}\right)-\frac{1}{3}C_{\infty}^{uu(6)}+\frac{2}{3}C_{\infty}^{u\bar{u}(8)}$$ (34) Again, $C_{\infty}$ can be non-vanishing, for example, in UV models with a charge-neutral vector resonance exchanged in the $t$ channel, but unlike the four-electron case, here this resonance can be either a singlet or an octet of $SU(3)$ QCD. Extending this analysis to the case of different flavours of up quarks, we obtain: $$\displaystyle-2c_{u}^{iijj}=\int\frac{ds}{\pi s}\left(\frac{2\sigma_{u\bar{u}}^{(8)}+\sigma^{(1)}_{u\bar{u}}}{3}-\sigma_{uu}^{(6)}\right)-C_{\infty}^{uu,(6)}$$ $$\displaystyle-2(c_{u}^{iijj}+c_{u}^{ijji})=\int\frac{ds}{\pi s}\left(\sigma_{u\bar{u}}^{(8)}-\frac{\sigma_{uu}^{(\bar{3})}+\sigma^{(6)}_{uu}}{2}\right)+C_{\infty}^{u\bar{u},(8)},$$ (35) We again mention that the operators $O^{iijj}_{uu}$ and $O_{uu}^{jjii}$ (similarly $O^{ijji}_{uu}$ and $O^{jiij}_{uu}$) are trivially identical, so we must be careful not to double-count them. As before, expressing everything in terms of uncolored cross sections, we find $$\displaystyle-2c^{iijj}_{uu}-\frac{2}{3}c^{ijji}_{uu}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{u\bar{u}}-\sigma_{uu}\bigg{)}+\frac{8}{9}C^{u\bar{u}(8)}_{\infty}+\frac{1}{9}C^{u\bar{u}(1)}_{\infty}$$ (36) and exactly the same relations hold for the down quarks. Let us look at the possible UV completions. In the case of $c_{uu}^{iiii}$, we will have a negative sign of the Wilson coefficient with a $Z^{\prime}$, and a positive sign for the charge $-4/3$ scalar. Similarly to the lepton case, we can generate a negative Wilson coefficient by adding vectorlike fermions and a complex scalar with $Q[\Phi\Psi]=2/3$ and $\Phi\Psi$ in the fundamental of QCD. The discussion of two fermion flavours is almost identical to the lepton case. To demonstrate an explicit verification of these sum rules, in Appendix B.2 we provide an example of a UV completion of the type $gV^{A}_{\mu}(\bar{u}_{i}\gamma^{\mu}T^{A}u_{i})$. This is a flavor-diagonal interaction with a color octet vector and a universal coupling, for which the sum rule for the Wilson coefficients is saturated by the pole at infinity, since no leading-order cross sections are available.
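The color-averaging step from Eq. (32) to Eq. (34) is simple linear bookkeeping: using $\sigma_{uu}=\frac{2}{3}\sigma^{(6)}+\frac{1}{3}\sigma^{(\bar{3})}$ and $\sigma_{u\bar{u}}=\frac{1}{9}\sigma^{(1)}+\frac{8}{9}\sigma^{(8)}$ from Appendix C, Eq. (34) follows by combining the two relations in Eq. (32) with weights $1/3$ and $2/3$. A minimal symbolic sketch of this check (purely illustrative; the channel cross sections and contour terms are treated as free symbols):

import sympy as sp

# SU(3) channel cross sections and contour terms as free symbols
s1, s8, s6, s3b = sp.symbols('sigma_1 sigma_8 sigma_6 sigma_3bar')
C6, C8 = sp.symbols('C6 C8')  # C_inf^{uu(6)} and C_inf^{u ubar(8)}

# Right-hand sides of the two relations in Eq. (32), equal to -8c and -4c respectively
rhs_A = (2*s8 + s1)/3 - s6 - C6
rhs_B = s8 - (s3b + s6)/2 + C8

# Color-averaged cross sections from Appendix C
sig_uubar = s1/9 + 8*s8/9
sig_uu = 2*s6/3 + s3b/3

# Combining with weights 1/3 and 2/3 reproduces the right-hand side of Eq. (34)
combo = sp.Rational(1, 3)*rhs_A + sp.Rational(2, 3)*rhs_B
target = sig_uubar - sig_uu - C6/3 + 2*C8/3
print(sp.simplify(combo - target))                      # -> 0
print(sp.Rational(1, 3)*(-8) + sp.Rational(2, 3)*(-4))  # -> -16/3, the prefactor in Eq. (34)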
4.2.3 $O^{(1),(8)}_{ud}$ Just as in the previous section, we obtain (we omit flavour indices here, as they do not play any role: the two up quarks, and likewise the two down quarks, must have the same flavour to form sum rules) $$\displaystyle-2(c_{ud}^{(1)}-\frac{1}{6}c_{ud}^{(8)}+\frac{1}{2}c_{ud}^{(8)})=\int\frac{ds}{\pi s}\left(\frac{2\sigma^{(8)}_{u\bar{d}}+\sigma^{(1)}_{u\bar{d}}}{3}-\sigma_{ud}^{(6)}\right)+\frac{1}{3}(2C_{\infty}^{u\bar{d}(8)}+C_{\infty}^{u\bar{d}(1)})$$ $$\displaystyle-2(c_{ud}^{(1)}-\frac{1}{6}c_{ud}^{(8)})=\int\frac{ds}{\pi s}\left(\sigma_{u\bar{d}}^{(8)}-\frac{\sigma_{ud}^{(\bar{3})}+\sigma_{ud}^{(6)}}{2}\right)+C_{\infty}^{u\bar{d}(8)}$$ (37) Rewriting the result in terms of the uncolored cross sections, we obtain $$\displaystyle-2c_{ud}^{(1)}=\int\frac{ds}{\pi s}\left(\sigma_{u\bar{d}}-\sigma_{ud}\right)+\frac{8}{9}C_{\infty}^{u\bar{d}(8)}+\frac{1}{9}C_{\infty}^{u\bar{d}(1)}$$ (38) Interestingly, we see that no constraints can be obtained for $c_{ud}^{(8)}$ if we do not have precise information about the color structure of the initial state. Experiments which are sensitive only to the total scattering cross section will be blind to $c_{ud}^{(8)}$. 4.2.4 $O_{eu},O_{ed}$ The only operators with a sum rule are of the form $$(\bar{e}_{Ri}\gamma^{\mu}e_{Ri})(\bar{u}_{Rja}\gamma_{\mu}u_{Rja}),$$ (39) where no summation over $i,j$ is assumed. The sum rule is identical for both $u$ and $d$ quarks and is given by: $$-2c^{iijj}_{eu}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{e}_{i}u_{j}}-\sigma_{\bar{e}_{i}\bar{u}_{j}}\bigg{)}+C_{\infty}^{\bar{e}u}\hskip 8.53581pt\textrm{and}\hskip 8.53581ptu\leftrightarrow d.$$ (40) UV completions are as before, with a positive sign for $u$($d$) coming from a charge $1/3$($4/3$) scalar and a negative sign from a charge $5/3$($2/3$) vector field $V$ (note that in these cases the amplitude is convergent in the forward limit and the integrals at infinity vanish). A neutral $Z^{\prime}$ can lead to an arbitrary sign of the Wilson coefficient; again, in this case the dispersion relations are saturated by the integrals at infinity. 4.3 SUM RULES FOR EW DOUBLETS In the next two subsections we study operators that contribute to doublet-singlet scattering. 4.3.1 $O_{le},O_{lu},O_{ld},O_{qe}$ Let us start with the fully leptonic operator and study the forward scattering of $l^{p}e$, where $p=1,2$ is the isospin index, in which case the sum rules are of the form $$\displaystyle-2c_{le}^{iijj}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{{l}^{ip}_{L}\bar{e}^{j}_{R}}-\sigma_{{l}^{ip}_{L}{e}^{j}_{R}}\bigg{)}+C_{\infty}^{l_{i}e_{j}}$$ $$\displaystyle=\int\frac{ds}{\pi s}\bigg{(}\sigma_{e_{L}^{i}\bar{e}^{j}_{R}}-\sigma_{e^{i}_{L}e^{j}_{R}}\bigg{)}+C_{\infty}^{l_{i}e_{j}}$$ $$\displaystyle=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\nu_{L}^{i}\bar{e}^{j}_{R}}-\sigma_{\nu^{i}_{L}e^{j}_{R}}\bigg{)}+C_{\infty}^{l_{i}e_{j}}$$ (41) Similarly, we can write down the sum rules for the quark-lepton operators: $$\displaystyle-2c^{iijj}_{lu}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{l}^{p}_{i}u_{j}}-\sigma_{{l}^{p}_{i}{u}_{j}}\bigg{)}+C_{\infty}^{l_{i}u_{j}}\hskip 8.53581pt\textrm{and}\hskip 8.53581ptu\leftrightarrow d$$ $$\displaystyle-2c^{iijj}_{qe}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{q}^{p}_{i}e_{j}}-\sigma_{q^{p}_{i}e_{j}}\bigg{)}+C_{\infty}^{q_{i}e_{j}},$$ (42) where again $p$ stands for the $SU(2)_{L}$ index. Note that these sum rules hold for any isospin component of the lepton and any color of the quark.
4.3.2 $O_{qu}^{(1),(8)},O_{qd}^{(1),(8)}$ In this case, the discussion follows closely the one for the quark singlets, and so we arrive at two sum rules(we again suppress the flavour index for brevity) $$\displaystyle-2(c_{qd(u)}^{(1)}-\frac{1}{6}c_{qd(u)}^{(8)}+\frac{1}{2}c_{qd(u)}^{(8)})=\int\frac{ds}{\pi s}\left(\frac{2\sigma^{(8)}_{q\bar{d}(\bar{u})}+\sigma^{(1)}_{q\bar{d}(\bar{u})}}{3}-\sigma_{qd(u)}^{(6)}\right)+\frac{1}{3}(2C_{\infty}^{q\bar{d}(\bar{u})(8)}+C_{\infty}^{q\bar{d}(\bar{u})(1)})$$ $$\displaystyle-2(c_{qd(u)}^{(1)}-\frac{1}{6}c_{qd(u)}^{(8)})=\int\frac{ds}{\pi s}\left(\sigma_{q\bar{d}(\bar{u})}^{(8)}-\frac{\sigma_{qd(u)}^{(\bar{3})}+\sigma_{qd(u)}^{(6)}}{2}\right)+C_{\infty}^{q\bar{d}(\bar{u})(8)}.$$ (43) Note that $\sigma_{q}$ stands for $\sigma_{q^{p}}$ where $p$ is a $SU(2)$ index and cross sections on the right hand side of the Eq.4.3.2 can be taken for any component of the quark doublet. Rewriting the result in terms of uncolored cross section, we will obtain $$\displaystyle-2c_{qd(u)}^{(1)}=\int\frac{ds}{\pi s}\left(\sigma_{q\bar{d}(\bar{u})}-\sigma_{qd(u)}\right)+\frac{8}{9}C_{\infty}^{q\bar{d}(\bar{u})(8)}+\frac{1}{9}C_{\infty}^{q\bar{d}(\bar{u})(1)}.$$ (44) Finally, we now study the left handed operators that contribute to doublet-doublet scattering, where the doublet is that of weak isospin. 4.3.3 $O_{ll}$ Let us start with the four lepton operator $O_{ll}^{(iijj,ijji)}$. Expanding in components, the following sum rules can be derived (we assume $i\neq j$ and we do not write the operators obtained by interchange of $i\leftrightarrow j$ which are identical, just as in the discussion for up quarks; see Eq. B.2) $$\displaystyle-2c^{iijj}_{ll}-2c^{ijji}_{ll}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{e_{i}}e_{j}}-\sigma_{e_{i}e_{j}}\bigg{)}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{\nu}_{i}\nu_{j}}-\sigma_{{\nu}_{i}\bar{\nu}_{j}}\bigg{)}+C^{ee,e\nu}_{\infty}$$ $$\displaystyle-2c^{iijj}_{ll}=\int\frac{ds}{\pi s}\bigg{(}\sigma_{\bar{e_{i}}\nu_{j}}-\sigma_{e_{i}\nu_{j}}\bigg{)}+C_{\infty}^{e\nu}.$$ (45) We can decompose the amplitude into the weak isospin amplitudes (see appendix C for details) to obtain the following dispersion relations $$\displaystyle-2c^{iijj}_{ll}-2c^{ijji}_{ll}=\int\frac{ds}{\pi s}\left[\frac{1}{2}\left(\sigma_{i\bar{j}}^{(1)}+\sigma_{i\bar{j}}^{(3)}\right)-\sigma_{ij}^{(3)}\right]-C_{\infty}^{ij(3)}$$ $$\displaystyle-2c^{iijj}_{ll}=\int\frac{ds}{\pi s}\left[\sigma_{i\bar{j}}^{(3)}-\frac{1}{2}\left(\sigma_{ij}^{(3)}+\sigma_{ij}^{(1)}\right)\right]+C_{\infty}^{(i\bar{j}(3))}$$ (46) where $(i,j)$ and $(i,\bar{j})$ refer to the leptons from $l_{i},l_{j}(\bar{l}_{j})$ doublets and $\sigma^{(3,1)}_{ij,(i\bar{j})}$ refers to cross section from the triplet and singlet initial state formed by $ij$ or $i\bar{j}$. 
In the case of an operator formed by just one lepton family, we will obtain: $$\displaystyle-8c_{ll}=\int\frac{ds}{\pi s}\left[\sigma_{e\bar{e}(\nu\bar{\nu})}-\sigma_{ee,(\nu\nu)}\right]+C_{\infty}^{ee}=\int\frac{ds}{\pi s}\left[\frac{1}{2}\left(\sigma_{l\bar{l}}^{(1)}+\sigma_{l\bar{l}}^{(3)}\right)-\sigma_{ll}^{(3)}\right]-C_{\infty}^{ll(3)}$$ $$\displaystyle-4c_{ll}=\int\frac{ds}{\pi s}\left[\sigma_{e\bar{\nu}}-\sigma_{e\nu}\right]+C_{\infty}^{e\nu}=\int\frac{ds}{\pi s}\left[\sigma_{l\bar{l}}^{(3)}-\frac{1}{2}\left(\sigma_{ll}^{(3)}+\sigma_{ll}^{(1)}\right)\right]+C_{\infty}^{(l\bar{l}(3))}$$ (47) 4.3.4 $O^{(3),(1)}_{lq}$ In this case, only the operators with $iijj$ flavour structure can contribute; and we arrive at the following dispersion relations- $$\displaystyle-2c_{lq}^{(1)}-2c_{lq}^{(3)}=\int\frac{ds}{\pi s}\left[\sigma_{e\bar{d}(\nu\bar{u})}-\sigma_{ed(\nu u)}\right]+C_{\infty}^{e\bar{d}(\nu\bar{u})}$$ $$\displaystyle-2c_{lq}^{(1)}+2c_{lq}^{(3)}=\int\frac{ds}{\pi s}\left[\sigma_{e\bar{u}(\nu\bar{d})}-\sigma_{eu(\nu d)}\right]+C_{\infty}^{e\bar{u}(\nu\bar{d})}$$ (48) As before, decomposing cross section under isospin we will obtain $$\displaystyle-2c_{lq}^{(1)}-2c_{lq}^{(3)}=\int\frac{ds}{\pi s}\left[\frac{1}{2}\left(\sigma_{l\bar{q}}^{(1)}+\sigma_{l\bar{q}}^{(3)}\right)-\sigma^{(3)}_{lq}\right]-C_{\infty}^{lq(3)}$$ $$\displaystyle-2c_{lq}^{(1)}+2c_{lq}^{(3)}=\int\frac{ds}{\pi s}\left[\sigma_{l\bar{q}}^{(3)}-\frac{1}{2}\left(\sigma_{ql}^{(1)}+\sigma_{ql}^{(3)}\right)\right]+C_{\infty}^{l\bar{q}(3)}$$ (49) 4.3.5 $O_{qq}$ and $O^{(3)}_{qq}$ Let us start with one family, in terms of the octet and singlet cross sections, $$\displaystyle-8\left(c_{qq}^{(1)}+c_{qq}^{(3)}\right)=\int\frac{ds}{\pi s}\left[\frac{2\sigma^{(8)}_{u\bar{u}}+\sigma^{(1)}_{u\bar{u}}}{3}-\sigma_{uu}^{(6)}\right]-C_{\infty}^{(6)uu}$$ $$\displaystyle-4\left(c_{qq}^{(1)}+c_{qq}^{(3)}\right)=\int\frac{ds}{\pi s}\left[\sigma^{(8)}_{u\bar{u}}-\frac{\sigma^{\bar{3}}_{uu}+\sigma^{(6)}_{uu}}{2}\right]+C_{\infty}^{(8)u\bar{u}}$$ $$\displaystyle-4(c_{qq}^{(1)}+c_{qq}^{(3)})=\int\frac{ds}{\pi s}\left[\frac{2\sigma^{(8)}_{u\bar{d}}+\sigma^{(1)}_{u\bar{d}}}{3}-\sigma_{ud}^{(6)}\right]-C_{\infty}^{ud(6)}$$ $$\displaystyle-4(c_{qq}^{(1)}-c_{qq}^{(3)})=\int\frac{ds}{\pi s}\left[\sigma^{(8)}_{u\bar{d}}-\frac{\sigma_{ud}^{(6)}+\sigma_{ud}^{(\bar{3})}}{2}\right]+C_{\infty}^{ud(8)}$$ (50) We can proceed further by performing the double decomposition in terms of the $SU(2)_{L}$ multiplets using the relations $$\displaystyle\sigma_{u\bar{u}}=\frac{1}{2}\left(\sigma^{(1)}_{q\bar{q}}+\sigma^{(3)}_{q\bar{q}}\right),~{}~{}~{}\sigma_{u\bar{d}}=\sigma^{(3)}_{q\bar{q}}$$ $$\displaystyle\sigma_{uu}=\sigma^{(3)}_{qq},~{}~{}~{}\sigma_{ud}=\frac{1}{2}\left(\sigma_{qq}^{(1)}+\sigma_{qq}^{(3)}\right).$$ (51) Then we will obtain (the first index will refer now to QCD multiplet and the second one to electroweak). 
$$\displaystyle-8\left(c_{qq}^{(1)}+c_{qq}^{(3)}\right)=\int\frac{ds}{\pi s}\bigg{[}\frac{1}{6}\big{(}(2\sigma^{(8,1)}_{q\bar{q}}+\sigma^{(1,1)}_{q\bar{q}}+2\sigma^{(8,3)}_{q\bar{q}}+\sigma^{(1,3)}_{q\bar{q}})\big{)}-\sigma^{(6,3)}_{qq}\bigg{]}-C^{(6,3)}_{qq\infty}$$ $$\displaystyle-4\left(c_{qq}^{(1)}+c_{qq}^{(3)}\right)=\int\frac{ds}{\pi s}\bigg{[}\frac{1}{2}\big{(}\sigma^{(8,1)}_{q\bar{q}}+\sigma^{(8,3)}_{q\bar{q}}\big{)}-\frac{1}{2}\big{(}\sigma^{(\bar{3},3)}_{qq}+\sigma^{(6,3)}_{qq}\big{)}\bigg{]}+\frac{C_{q\bar{q}\infty}^{(8,1)}+C_{q\bar{q}\infty}^{(8,3)}}{2}$$ $$\displaystyle-4(c_{qq}^{(1)}+c_{qq}^{(3)})=\int\frac{ds}{\pi s}\bigg{[}\frac{1}{3}\big{(}2\sigma^{(8,3)}_{q\bar{q}}+\sigma^{(1,3)}_{q\bar{q}}\big{)}-\frac{1}{2}\big{(}\sigma^{(6,1)}_{qq}+\sigma^{(6,3)}_{qq}\big{)}\bigg{]}-\frac{C_{qq\infty}^{(6,1)}+C_{qq\infty}^{(6,3)}}{2}$$ $$\displaystyle-4(c_{qq}^{(1)}-c_{qq}^{(3)})=\int\frac{ds}{\pi s}\bigg{[}\sigma^{(8,3)}_{q\bar{q}}-\frac{1}{4}\big{(}\sigma^{(\bar{3},1)}_{qq}+\sigma^{(6,1)}_{qq}+\sigma^{(\bar{3},3)}_{qq}+\sigma^{(6,3)}_{qq}\big{)}\bigg{]}+C_{q\bar{q}\infty}^{(8,3)}$$ (52) In terms of the color averaged cross sections, $$\displaystyle\frac{16}{3}\left(c_{qq}^{(1)}+c_{qq}^{(3)}\right)=\int\frac{ds}{\pi s}\left(\frac{\sigma_{q\bar{q}}^{(3)}+\sigma_{q\bar{q}}^{(1)}}{2}-\sigma_{qq}^{(3)}\right)-\frac{C_{qq\infty}^{(6,3)}}{3}+\frac{C_{\bar{q}q\infty}^{(8,1)}+C_{\bar{q}q\infty}^{(8,3)}}{3}$$ $$\displaystyle-4\left(c_{qq}^{(1)}-\frac{c_{qq}^{(3)}}{3}\right)=\int\frac{ds}{\pi s}\left(\sigma^{(3)}_{q\bar{q}}-\frac{\sigma_{qq}^{(1)}+\sigma_{qq}^{(3)}}{2}\right)-\frac{C_{qq\infty}^{(6,1)}+C_{qq\infty}^{(6,3)}}{6}+\frac{2C_{q\bar{q}\infty}^{(8,3)}}{3}$$ (53) In the case of two flavours, the disperion relations become: $$\displaystyle-2(c^{iijj}_{qq}+c^{ijji}_{qq}+c^{(3)iijj}_{qq}+c^{(3)ijji}_{qq})=\int\frac{ds}{\pi s}\bigg{[}\frac{1}{6}\big{(}2\sigma^{(8,1)}_{q\bar{q}}+\sigma^{(1,1)}_{q\bar{q}}+2\sigma^{(8,3)}_{q\bar{q}}+\sigma^{(1,3)}_{q\bar{q}}\big{)}-\sigma^{(6,3)}_{qq}\bigg{]}-C_{qq\infty}^{(6,3)}$$ $$\displaystyle-2(c^{iijj}_{qq}+c^{(3)iijj}_{qq})=\int\frac{ds}{\pi s}\bigg{[}\frac{1}{2}\big{(}\sigma^{(8,1)}_{q\bar{q}}+\sigma^{(8,3)}_{q\bar{q}}\big{)}-\frac{1}{2}\big{(}\sigma^{(\bar{3},3)}_{qq}+\sigma^{(6,3)}_{qq}\big{)}\bigg{]}+\frac{C_{q\bar{q}\infty}^{(8,1)}+C_{q\bar{q}\infty}^{(8,3)}}{2}$$ $$\displaystyle-2(c^{iijj}_{qq}-c^{(3)iijj}_{qq}+2c^{(3)ijji}_{qq})=\int\frac{ds}{\pi s}\bigg{[}\frac{1}{3}\big{(}2\sigma^{(8,3)}_{q\bar{q}}+\sigma^{(1,3)}_{q\bar{q}}\big{)}-\frac{1}{2}\big{(}\sigma^{(6,1)}_{qq}+\sigma^{(6,3)}_{qq}\big{)}\bigg{]}-\frac{C_{qq\infty}^{(6,1)}+C_{qq\infty}^{(6,3)}}{2}$$ $$\displaystyle-2(c^{iijj}_{qq}-c^{(3)iijj}_{qq})=\int\frac{ds}{\pi s}\bigg{[}\sigma^{(8,3)}_{q\bar{q}}-\frac{1}{4}\big{(}\sigma^{(\bar{3},1)}_{qq}+\sigma^{(6,1)}_{qq}+\sigma^{(\bar{3},3)}_{qq}+\sigma^{(6,3)}_{qq}\big{)}\bigg{]}-C_{q\bar{q}\infty}^{(8,3)}$$ The power of these relations relations allows to understand immediately the signs of the Wilson coefficients in the various UV completions. 
For example, for a scalar diquark which is in $(\bar{6},1,-1/3)$ representation under $SU(3)\times SU(2)\times U(1)_{Y}$ we will get: $$\displaystyle c^{iijj}_{qq,\bar{6}}=c^{(3)ijji}_{qq,\bar{6}}=-c^{(3)iijj}_{qq,\bar{6}}=-c_{qq,\bar{6}}^{ijji}>0.$$ (55) Similarly, for a scalar diquark which is in $(3,1,-1/3)$ will get: $$\displaystyle c^{iijj}_{qq,3}=c_{qq,3}^{ijji}=-c^{(3)ijji}_{qq,3}=-c^{(3)iijj}_{qq,3}>0.$$ (56) Finally, we can sum and report these sum rules in terms of color averaged cross sections, which yield 2 equations depending on whether the initial and final state form $SU(2)_{L}$ triplets or singlets. $$\displaystyle-2(c^{iijj}_{qq}+c^{(3)iijj}_{qq}+\frac{1}{3}c^{ijji}_{qq}+\frac{1}{3}c^{(3)ijji}_{qq})=\int\frac{ds}{\pi s}\left[\frac{\sigma_{q\bar{q}}^{(3)}+\sigma_{q\bar{q}}^{(1)}}{2}-\sigma_{qq}^{(3)}\right]-\frac{C_{qq}^{(6,3)}}{3}+\frac{C_{\bar{q}q\infty}^{(8,1)}+C_{\bar{q}q\infty}^{(8,3)}}{3},$$ $$\displaystyle-2(c^{iijj}_{qq}-c^{(3)iijj}_{qq}+\frac{2}{3}c^{(3)ijji}_{qq})=\int\frac{ds}{\pi s}\left[\sigma^{(3)}_{q\bar{q}}-\frac{\sigma_{qq}^{(1)}+\sigma_{qq}^{(3)}}{2}\right]-\frac{C_{qq\infty}^{(6,1)}+C_{qq\infty}^{(6,3)}}{6}+\frac{2C_{q\bar{q}\infty}^{(8,3)}}{3}.$$ 5 Summary In this work, we explored the sum rules for four-fermion operators at dimension six level. As expected, the convergence of the dispersion integrals leading to the dimension six Wilson coefficients is not guaranteed, and in particular is spoiled by the t-channel exchange of the vector bosons. This additional feature can modify the predictions of the dispersion relations for sign and strength of IR interactions, and for some UV completions the value of the Wilson coefficients can be even saturated by the pole at infinity. However we find that this ambiguity of IR couplings is not related to the (non)convergence of the dispersion integrals and as an example, we have constructed, in addition to tree level, 1-loop weakly coupled models (see section 3.3) where both signs become available even when the integral over the infinite circle vanishes. We presented forward dispersion relations for all possible four-fermion dimension six operators. To facilitate the connection between the values of the Wilson coefficients and new physics scenarios, we have performed the decomposition in terms of the $SU(2)$ and $SU(3)$ multiplets. Such relations predict in a model independent way processes with enhanced cross section in the case of discoveries in low energy experiments. We carefully indicate all the relevant quantum numbers of the quantities involved in our dispersion relations in order to provide a convenient dictionary for future measurements, where the precise structure of initial states is often unavailable. This can have interesting consequences; for example, Eq.38 tells us that measuring uncoloured cross sections in the UV clouds any information about $c^{(8)}_{ud,(qu),(qd)}$ Wilson coefficients, despite it contributing formally to sum rules with fixed initial colours. We emphasize that these sum rules are to be interpreted as a model independent link between UV and IR measurements, as opposed to the usual positivity bounds. Even though less constraining on the EFT parameter space, these relations can instead be used as a powerful tool for model building to unearth the underlying, fundamental physics that is to be explored in the coming years. Acknowledgements AA in part was supported by the MIUR contract 2017L5W2PT. 
DG acknowledges support through the Ramanujan Fellowship and MATRICS Grant of the Department of Science and Technology, Government of India. We would like to thank Joan Elias Miro for discussion and comments. Appendix A Massless spinor helicity conventions We will briefly summarize the key results relevant to us (for a pedagogical introduction see [29] ) in the $(+,-,-,-)$ signature (we will follow the conventions discussed in [30, 31, 32]). We have the 2 component spinors $v_{L/R},u_{L/R}$ and their barred versions. They are related by crossing symmetry, $u_{L/R}=v_{R/L},\bar{u}_{L/R}=\bar{v}_{R/L}$. It is important to realise that for antiparticles, the spinor has opposite handedness to the field that describes it. For instance, a right chiral field $e_{R}$ has an antiparticle which has the spinor $v_{L}$, while the particle carries the spinor $u_{R}$. In other words, both $u_{R},v_{L}$ correspond to a right chiral field; whereas $v_{R},u_{L}$ correspond to a left chiral field. To be absolutely clear, we will just refer to the handedness of the relevant spinor as opposed to the helicity of a particle/antiparticle wherever necessary. Operationally, we will assign the brackets $$\bar{v}_{L}=\bar{u}_{R}\equiv[,\hskip 8.53581pt\bar{v}_{R}=\bar{u}_{L}\equiv\langle,\hskip 8.53581ptv_{L}=u_{R}\equiv\rangle,\hskip 8.53581ptv_{R}=u_{L}\equiv].$$ (58) The inner product is antisymmetric-as is expected for grassman-valued quantities- $$\langle pq\rangle=-\langle qp\rangle\hskip 14.22636pt[pq]=-[qp]$$ (59) Note that this also means that $\langle pp\rangle=0=[pp]$. Mixed brackets vanish. The formalism encodes a lot of power-for example, it tells us that a $\langle$ and $]$ type spinor cannot occur at a vertex unless there’s a $\gamma^{\mu}$ involved-a vector connects opposite helicity particles. Similarly, same helicity spinors making up a vertex indicate a scalar is involved. We will not insist on taking all momenta ingoing/outgoing; in our calculations, the momenta labelled $1,2$ are always incoming and $3,4$ are always outgoing. We can freely work with negative momenta via the standard analytic continuation- $$|-p\rangle=i|p\rangle\hskip 14.22636pt|-p]=i|p]$$ (60) These brackets satisfy the property $$\langle 1|\gamma^{\mu}2]=[2|\gamma^{\mu}1\rangle$$ (61) Furthermore, we have $$[i|\gamma_{\mu}|i\rangle=2p_{i}\hskip 14.22636pt\langle ij\rangle[ij]=-2p_{i}\cdot p_{j}=(p_{i}-p_{j})^{2}$$ (62) since $p_{i}^{2}=0$ for massless spinors. We therefore have our mandelstam variables- $$s=2p_{1}\cdot p_{2}=-[12]\langle 12\rangle\hskip 11.38109ptt=-2p_{3}\cdot p_{1}=[13]\langle 13\rangle\hskip 11.38109ptu=-2p_{4}\cdot p_{1}=[14]\langle 14\rangle$$ (63) Finally, we have the all important Fierz rearrangement- $$[1|\gamma^{\mu}|2\rangle[3|\gamma_{\mu}|4\rangle=-2[13]\langle 24\rangle$$ (64) Appendix B Details about cross sections and loop amplitudes In this appendix we will give details about explicit verification of the dispersion relations presented in the text for various models. B.1 $Z^{\prime}$ at tree level Let us start with neutral vector $Z^{\prime}$ coupled to right-handed current via $\lambda Z^{\prime}_{\mu}\bar{e}_{R}\gamma^{\mu}e_{R}$. 
It generates $e_{R}\overline{e_{R}}$ scattering through $s$- and $t$-channel $Z^{\prime}$ exchange diagrams. The full amplitude is given by $$A_{e\bar{e}}=-2\lambda^{2}[14]\langle 23\rangle\bigg{(}\frac{1}{s-m^{2}}+\frac{1}{t-m^{2}}\bigg{)}$$ (65) Matching the IR and UV amplitudes at low energies, we obtain $$-8c^{1111}_{ee}[14]\langle 23\rangle=-2\lambda^{2}[14]\langle 23\rangle\bigg{(}\frac{1}{-m^{2}}+\frac{1}{-m^{2}}\bigg{)}\implies c^{1111}_{ee}=-\frac{\lambda^{2}}{2m^{2}}$$ (66) Let us verify that this is consistent with our dispersion relation. With a vector $Z^{\prime}$ at order $O(\lambda^{2})$ in perturbation theory we have $\sigma_{e\bar{e}}\neq 0$ and $\sigma_{ee}=0$. To calculate the cross sections, note that by the optical theorem we have $$Im(e\bar{e}\to e\bar{e})=s\sigma^{tot}_{e\bar{e}}$$ (67) We use the fact that $Im\bigg{(}\frac{1}{p^{2}-m^{2}+i\epsilon}\bigg{)}=-\pi\delta(p^{2}-m^{2})$, which, when substituted in the amplitude (14), gives us $$Im(e^{+}_{L}e^{-}_{R}\to e^{+}_{L}e^{-}_{R})=2\lambda^{2}\pi s\delta(s-m^{2})$$ (68) Starting from the dispersion relation in Eq. 10 we get $$-8c^{1111}_{ee}=\int\frac{ds}{\pi s}(\sigma_{e\bar{e}}-0)+C_{\infty}=\int\frac{ds}{\pi s^{2}}Im(e\bar{e}\to e\bar{e})+C_{\infty}=\frac{2\lambda^{2}}{m^{2}}+C_{\infty}.$$ (69) Calculating $C_{\infty}$ explicitly, we obtain: $$\displaystyle C_{\infty}=\int^{2\pi}_{0}\frac{d\theta}{2\pi}\frac{A(|s_{\Lambda}|e^{i\theta},0)}{(|s_{\Lambda}|e^{i\theta})^{2}}\cdot(|s_{\Lambda}|e^{i\theta})=\frac{2\lambda^{2}}{m^{2}}$$ $$\displaystyle A|_{t\to 0}=-2\lambda^{2}s\left(\frac{1}{s-m^{2}}+\frac{1}{-m^{2}}\right)$$ (70) This has the same sign as the dispersion integral, and therefore we find $$-8c^{1111}_{ee}=4\lambda^{2}/m^{2}\implies c^{1111}_{ee}=-\lambda^{2}/2m^{2}$$ (71) as claimed in (12), and our dispersion relation is explicitly verified. B.2 Integrating out a color octet In close analogy with the charge-neutral $Z^{\prime}$, we can consider the effects of integrating out a color octet $V$ with zero electric charge. Consider, for example, an octet interacting with the right-handed up-quark current: $$g_{ij}V^{A}_{\mu}(\bar{u}_{i}\gamma^{\mu}T^{A}u_{j})\implies c^{ijkl}_{uu}=\frac{-g_{kj}g_{il}}{M_{V}^{2}}+\frac{g_{ij}g_{kl}}{3M_{V}^{2}}.$$ (72) Let us assume that the octet couplings are universal and flavour diagonal, $g_{ij}=g\delta_{ij}$; then the Wilson coefficients are equal to $$c^{iijj}_{uu}=\frac{2g^{2}}{3M_{V}^{2}},~{}~{}~{}c^{ijji}_{uu}=\frac{-2g^{2}}{M_{V}^{2}}$$ (73) Now let us look at the dispersion relations for $i\neq j$. Then, similarly to the discussion around Eq. 19, the cross sections vanish at $O(g^{2})$ and the right-hand side of Eq. B.2 is controlled by the contribution of the integrals over the infinite contours: $$C_{\infty}^{(8)}=C_{\infty}^{\alpha\neq\beta}=\frac{8g^{2}}{3M^{2}_{V}},~{}~{}~{}~{}C_{\infty}^{\alpha\alpha}=-C_{\infty}^{(6)}=-\frac{4g^{2}}{3M_{V}^{2}}$$ (74) which confirms the dispersion relations $$\displaystyle-2c_{u}^{iijj}=-C_{\infty}^{uu,(6)}$$ $$\displaystyle-2(c_{u}^{iijj}+c_{u}^{ijji})=C_{\infty}^{u\bar{u},(8)}.$$ (75) B.3 Charge 2 scalar at tree level Let us build a model where only $\sigma_{ee(\bar{e}\bar{e})}$ is present at the lowest order in perturbation theory. This can be done with a charge-2 scalar, which interacts as $(\lambda\phi\bar{e}_{R}e_{R}^{c}+h.c.)$, where $c$ denotes charge conjugation.
Matching the amplitudes in the EFT and in the UV theory, we obtain $$-8c^{1111}_{ee}[14]\langle 23\rangle=-2!2!\lambda^{2}\frac{[14]\langle 32\rangle}{-m^{2}}\implies c^{1111}_{ee}=+\frac{\lambda^{2}}{2m^{2}}$$ (76) Then the scattering cross section is equal to: $$\sigma^{tot}_{\bar{e}\bar{e}}=4\lambda^{2}\pi\delta(s-m^{2}).$$ (77) so that the dispersion relation becomes: $$-8c^{1111}_{ee}=\int\frac{ds}{\pi s}\bigg{(}0-\sigma_{++}\bigg{)}=-\frac{4\lambda^{2}}{m^{2}}$$ (78) and, as expected, we find $c^{1111}_{ee}=+\frac{\lambda^{2}}{2m^{2}}$. B.4 Dispersion relation at 1-loop Finally, let us consider the following UV completion for the $(\bar{e}\gamma_{\mu}e)(\bar{e}\gamma_{\mu}e)$ operator. It will demonstrate that it is possible to have a negative Wilson coefficient with vanishing integrals over the infinite circles. Let us extend the SM with a new heavy scalar and fermion with interactions $$\displaystyle\lambda(\Phi\bar{e}_{R}\Psi)+h.c.,$$ (79) where the electric charges of the new fields satisfy $Q[\Phi]+Q[\Psi]=-1$. Let us start by deriving the $c_{ee}$ Wilson coefficient. We consider $e\bar{e}\to e\bar{e}$ scattering; the amplitude is then given by a box diagram and its crossed version. In order to match with the EFT prediction, we can focus on the limit where the external particles have vanishing momenta, in which case the amplitude is given by $$iM=\lambda^{4}[1|\gamma_{\mu}|2\rangle[4|\gamma_{\nu}|3\rangle\int\frac{d^{D}k}{(2\pi)^{D}}\frac{k^{\mu}k^{\nu}}{(k^{2}-m^{2})^{4}}-(2\leftrightarrow 3).$$ (80) Here we have assumed that the masses of the new fields are equal, $m[\Phi]=m[\Psi]=m$; the loop function for arbitrary masses is reported in the main text. Performing the integral, which is finite, and doing the Fierz rearrangements, we obtain: $$M=\frac{1}{3}\frac{\lambda^{4}}{16\pi^{2}m^{2}}[14]\langle 23\rangle\Rightarrow c^{1111}_{ee}=-\frac{1}{3}\frac{\lambda^{4}}{128\pi^{2}m^{2}}$$ (81) So we see that the sign of the Wilson coefficient is indeed negative. By looking at the amplitude at $s\to\infty$ we can see that $A(s)/s\to 0$ on the infinite circle, so all we need to know is the cross section for $e\bar{e}$ scattering to verify the dispersion relations. The total cross section at order $O(\lambda^{4})$ is given by the two processes $e\bar{e}\to\Psi\bar{\Psi}$ and $e\bar{e}\to\Phi\Phi^{*}$, and there are no $ee\to$ anything processes at $O(\lambda^{4})$. Performing the calculation we obtain $$\displaystyle\sigma(e\bar{e}\to\Psi\bar{\Psi})=\frac{\lambda^{4}}{16\pi s^{2}}\sqrt{s(s-4m^{2})}$$ $$\displaystyle\sigma(e\bar{e}\to\Phi\Phi^{*})=\frac{\lambda^{4}}{64\pi s^{2}}\bigg{(}-8\sqrt{s(s-4m^{2})}-4s\log\bigg{(}\frac{s-\sqrt{s(s-4m^{2})}}{s+\sqrt{s(s-4m^{2})}}\bigg{)}\bigg{)}$$ (82) Evaluating the dispersion integral, we obtain $$\int\frac{ds}{\pi s}(\sigma(e\bar{e}\to\Psi\bar{\Psi})+\sigma(e\bar{e}\to\Phi\Phi^{*}))=\frac{\lambda^{4}}{\pi^{2}m^{2}}(1/96+1/96)=\frac{\lambda^{4}}{48m^{2}\pi^{2}}=-8c_{ee}^{1111}$$ (83) satisfying the identity of Eq. 10 (a short numerical cross-check of these integrals is sketched below). Appendix C Decomposition of cross sections in terms of $SU(2)$ and $SU(3)$ irreps In this section, we give the details of the decomposition of amplitudes in terms of the irreducible representations of the electroweak $SU(2)$ and QCD $SU(3)$ groups. The Wigner-Eckart theorem tells us that the resulting amplitudes and cross sections depend only on the representations of the initial state (see [33, 34] for similar decompositions under isospin, [4, 35] for the custodial group, and [5, 13] for other groups).
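As an aside, the one-loop dispersion integral in Eq. (83) above can be cross-checked numerically by integrating the cross sections of Eq. (82) directly. The following minimal sketch (illustrative only, with $\lambda=m=1$) reproduces the expected value $\lambda^{4}/(48\pi^{2}m^{2})$:

import numpy as np
from scipy.integrate import quad

lam, m = 1.0, 1.0  # coupling and common heavy mass set to one for the check

def sigma_psi(s):
    # sigma(e ebar -> Psi Psibar) from Eq. (82)
    return lam**4 / (16*np.pi*s**2) * np.sqrt(s*(s - 4*m**2))

def sigma_phi(s):
    # sigma(e ebar -> Phi Phi*) from Eq. (82)
    root = np.sqrt(s*(s - 4*m**2))
    return lam**4 / (64*np.pi*s**2) * (-8*root - 4*s*np.log((s - root)/(s + root)))

integrand = lambda s: (sigma_psi(s) + sigma_phi(s)) / (np.pi*s)
value, _ = quad(integrand, 4*m**2, np.inf, limit=200)
print(value, lam**4/(48*np.pi**2*m**2))  # both approximately 2.11e-3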
Let us start with two lepton doublet scattering $L_{1}L_{2}\rightarrow L_{1}L_{2}$ where $L_{1},L_{2}$ are $SU(2)_{L}$ doublet leptons, for eg $(\nu_{e},e)^{T}$. Then, the initial state can be decomposed as a singlet and a triplet under SU(2): $2\otimes 2=3\oplus 1$ where the singlet and triplet states are defined as follows: $$\displaystyle S=\textrm{singlet}=\frac{1}{\sqrt{2}}(|\nu e\rangle-|e\nu\rangle)$$ $$\displaystyle T=\textrm{ triplet }=\left\{\begin{array}[]{c}|\nu\nu\rangle\\ \frac{1}{\sqrt{2}}(\left|\nu e\rangle+|e\nu\right\rangle)\\ |ee\rangle\end{array}\right.,$$ (87) where $(\nu,e)$ are the components of EW doublet. Similarly, we can decompose the states for the lepton and anti-lepton scattering, where we will find: $$\displaystyle L_{1}=(\nu_{1},e_{1})^{T},\hskip 14.22636pt\bar{L}_{2}=(-\bar{e}_{2},\bar{\nu}_{2})^{T}$$ (88) $$\displaystyle\tilde{S}=\textrm{singlet}=\frac{1}{\sqrt{2}}(|e\bar{e}\rangle+|\nu\bar{\nu}\rangle)$$ (89) $$\displaystyle\tilde{T}=\text{ triplet }=\left\{\begin{array}[]{c}-|\nu\bar{e}\rangle\\ \frac{1}{\sqrt{2}}(|\nu\bar{\nu}\rangle-|e\bar{e}\rangle)\\ |e\bar{\nu}\rangle\end{array}\right.$$ (93) Using this decomposition we can immediately see that the amplitude for the forward scatterings of the various components of the doublets will be decomposed as $$\displaystyle A_{ee}=A_{LL}^{(3)},\quad A_{e\bar{e}}=\frac{A_{L\bar{L}}^{(3)}+A_{L\bar{L}}^{(1)}}{2},$$ $$\displaystyle A_{\nu e}=\frac{A_{LL}^{(1)}+A_{LL}^{(3)}}{2},~{}~{}A_{\bar{\nu}e}=A_{L\bar{L}}^{(3)},$$ $$\displaystyle A_{\nu\nu}=A_{LL}^{(3)},\quad A_{\nu\bar{\nu}}=\frac{A_{L\bar{L}}^{(3)}+A_{L\bar{L}}^{(1)}}{2}$$ (94) and similarly, we can decompose the cross sections for quark lepton doublet scatterings. Note that forward amplitudes will satisfy the following crossing relations: $$\displaystyle A_{LL}^{(3)}(s,u)=\frac{A_{L\bar{L}}^{(3)}(u,s)+A_{L\bar{L}}^{(1)}(u,s)}{2},~{}~{}~{}\frac{A_{LL}^{(3)}(s,u)+A_{LL}^{(1)}(s,u)}{2}=A_{L\bar{L}}^{(3)}(u,s).$$ (95) Since we are looking at the dispersion relations for dimension six operators and the amplitudes in IR scale linearly with $s$, the integrals over infinite circle contours must satisfy: $$\displaystyle C_{\infty}^{LL(L\bar{L})}\equiv\int_{\rm infinite~{}circle}\frac{ds}{s^{2}}A^{LL(L\bar{L})}(s),$$ $$\displaystyle-C_{\infty}^{LL(3)}=\frac{C_{\infty}^{L\bar{L}(3)}+C_{\infty}^{L\bar{L}(1)}}{2},~{}~{}~{}-\frac{C_{\infty}^{LL(3)}+C_{\infty}^{LL(1)}}{2}=C_{\infty}^{L\bar{L}(3)}.$$ (96) The situation is very similar for the quark quark doublet scattering but there we can decompose the initial state in the representations of the color $SU(3)$ as well (see [5] for an example). C.1 $SU(3)$ decomposition Let us consider for simplicity scattering of the quarks which are singlets under $SU(2)$, in which case $$\displaystyle 3\otimes 3=\bar{3}\oplus 6,~{}~{}~{}3\otimes\bar{3}=1\oplus 8$$ (97) In the case of two particle scattering, the only two possibilities are when initial particles have the same, or different colors. 
For the quark-antiquark scattering, the various initial color states can be decomposed as $$\displaystyle|1\bar{1}\rangle=\frac{S}{\sqrt{3}}+\frac{\lambda_{8}}{\sqrt{6}}+\frac{\lambda_{3}}{\sqrt{2}},~{}~{}|2\bar{2}\rangle=\frac{S}{\sqrt{3}}+\frac{\lambda_{8}}{\sqrt{6}}-\frac{\lambda_{3}}{\sqrt{2}}$$ $$\displaystyle|3\bar{3}\rangle=\frac{S-\sqrt{2}\lambda_{8}}{\sqrt{3}},~{}~{}|1\bar{2}\rangle=\frac{\lambda_{1}+i\lambda_{2}}{\sqrt{2}},~{}~{}|2\bar{1}\rangle=\frac{\lambda_{1}-i\lambda_{2}}{\sqrt{2}}$$ $$\displaystyle|1\bar{3}\rangle=\frac{\lambda_{4}+i\lambda_{5}}{\sqrt{2}},~{}~{}|3\bar{1}\rangle=\frac{\lambda_{4}-i\lambda_{5}}{\sqrt{2}},~{}~{}|2\bar{3}\rangle=\frac{\lambda_{6}+i\lambda_{7}}{\sqrt{2}},~{}~{}|3\bar{2}\rangle=\frac{\lambda_{6}-i\lambda_{7}}{\sqrt{2}}$$ (98) where $S=\frac{|1\bar{1}\rangle+|2\bar{2}\rangle+|3\bar{3}\rangle}{\sqrt{3}}$ is the $SU(3)$ singlet state and $(\lambda_{1}...\lambda_{8})$ are the components of the octet, which can be formed using the Gell-Mann matrices (our normalization is $\langle\lambda_{i}|\lambda_{j}\rangle=\delta_{ij}$). Similarly, we can decompose the quark-quark initial state in terms of the $\bf 6$ and $\bf\bar{3}$. Note that in this case, the same- and different-color initial states can be schematically decomposed as $$\displaystyle|\alpha\alpha\rangle={\bf 6}_{\alpha\alpha},|\alpha\beta\rangle_{\alpha\neq\beta}=\frac{{\bf 6}_{\alpha\beta}\pm\bar{\bf 3}_{\alpha\beta}}{\sqrt{2}}$$ (99) Then, the Wigner-Eckart theorem tells us that the total cross sections and forward scattering amplitudes satisfy the following relations: $$\displaystyle\sigma_{\alpha\alpha}=\sigma^{(6)},~{}~{}\sigma_{\alpha\beta}|_{\alpha\neq\beta}=\frac{1}{2}(\sigma^{(\bar{3})}+\sigma^{(6)}),$$ (100) $$\displaystyle\sigma_{\alpha\bar{\alpha}}=\frac{\sigma^{(1)}+2\sigma^{(8)}}{3},~{}~{}~{}\sigma_{\alpha\bar{\beta}}|_{\alpha\neq\beta}=\sigma^{(8)},$$ (101) where the $\alpha(\bar{\beta})$ indices indicate whether we are looking at same- or different-color scattering in the $qq$ or $q\bar{q}$ channels ($q$ here stands for a quark, which can be either up or down type). If we are interested in the color-averaged cross sections, these are related to the above as follows $$\displaystyle\sigma_{qq}\equiv\left(\sigma_{qq}\right)_{col.aver.}=\frac{2}{3}\sigma^{(6)}+\frac{1}{3}\sigma^{(\bar{3})}$$ $$\displaystyle\sigma_{q\bar{q}}\equiv\left(\sigma_{q\bar{q}}\right)_{col.aver.}=\frac{1}{9}\sigma^{(1)}+\frac{8}{9}\sigma^{(8)}$$ Finally, forward amplitudes decomposed under QCD representations satisfy the following crossing relations: $$\displaystyle A_{qq}^{(6)}(s,u)=\frac{A_{q\bar{q}}^{(1)}(u,s)+2A_{q\bar{q}}^{(8)}(u,s)}{3},~{}~{}~{}\frac{A_{qq}^{(\bar{3})}(s,u)+A_{qq}^{(6)}(s,u)}{2}=A_{q\bar{q}}^{(8)}(u,s)$$ (103) Similarly, the contours over the infinite circles are related as follows: $$\displaystyle-C_{\infty}^{qq(6)}=\frac{C_{\infty}^{q\bar{q}(1)}+2C_{\infty}^{q\bar{q}(8)}}{3},~{}~{}~{}-\frac{C_{\infty}^{qq(\bar{3})}+C_{\infty}^{qq(6)}}{2}=C_{\infty}^{q\bar{q}(8)}.$$ (104) References [1] A. Adams, N. Arkani-Hamed, S. Dubovsky, A. Nicolis, and R. Rattazzi JHEP 10 (2006) 014, [hep-th/0602178]. [2] R. J. Eden, P. V. Landshoff, D. I. Olive, and J. C. Polkinghorne, The analytic S-matrix. Cambridge Univ. Press, Cambridge, 1966. [3] V. N. Gribov, Strong interactions of hadrons at high energies: Gribov lectures on Theoretical Physics. Cambridge University Press, 10, 2012. [4] A. Falkowski, S. Rychkov, and A. Urbano JHEP 04 (2012) 073, [arXiv:1202.1532]. [5] T. Trott arXiv:2011.10058. 
[6] J. Gu and L.-T. Wang JHEP 03 (2021) 149, [arXiv:2008.07551]. [7] C. Zhang and S.-Y. Zhou Phys. Rev. Lett. 125 (2020), no. 20 201601, [arXiv:2005.03047]. [8] G. N. Remmen and N. L. Rodd Phys. Rev. Lett. 125 (2020), no. 8 081601, [arXiv:2004.02885]. [Erratum: Phys.Rev.Lett. 127, 149901 (2021)]. [9] Q. Bonnefoy, E. Gendy, and C. Grojean JHEP 04 (2021) 115, [arXiv:2011.12855]. [10] B. Fuks, Y. Liu, C. Zhang, and S.-Y. Zhou Chin. Phys. C 45 (2021), no. 2 023108, [arXiv:2009.02212]. [11] J. Gu, L.-T. Wang, and C. Zhang arXiv:2011.03055. [12] I. Low, R. Rattazzi, and A. Vichi JHEP 04 (2010) 126, [arXiv:0907.5413]. [13] B. Bellazzini, L. Martucci, and R. Torre JHEP 09 (2014) 100, [arXiv:1405.2960]. [14] M. Froissart Phys. Rev. 123 (1961) 1053–1057. [15] A. Martin Phys. Rev. 129 (1963) 1432–1436. [16] A. Martin Nuovo Cim. A 42 (1965) 930–953. [17] B. Bellazzini JHEP 02 (2017) 034, [arXiv:1605.06111]. [18] C. de Rham, S. Melville, A. J. Tolley, and S.-Y. Zhou JHEP 03 (2018) 011, [arXiv:1706.02712]. [19] M. Chala and J. Santiago arXiv:2110.01624. [20] G. N. Remmen and N. L. Rodd arXiv:2010.04723. [21] B. Grzadkowski, M. Iskrzynski, M. Misiak, and J. Rosiek JHEP 10 (2010) 085, [arXiv:1008.4884]. [22] J. de Blas, J. C. Criado, M. Perez-Victoria, and J. Santiago JHEP 03 (2018) 109, [arXiv:1711.10391]. [23] A. Falkowski and K. Mimouni JHEP 02 (2016) 086, [arXiv:1511.07434]. [24] A. Falkowski, M. González-Alonso, and K. Mimouni JHEP 08 (2017) 123, [arXiv:1706.03783]. [25] ALEPH, DELPHI, L3, OPAL, LEP Electroweak Collaboration, S. Schael et al. Phys. Rept. 532 (2013) 119–244, [arXiv:1302.3415]. [26] SLAC E158 Collaboration, P. L. Anthony et al. Phys. Rev. Lett. 95 (2005) 081601, [hep-ex/0504049]. [27] S. Alioli, M. Farina, D. Pappadopulo, and J. T. Ruderman Phys. Rev. Lett. 120 (2018), no. 10 101801, [arXiv:1712.02347]. [28] M. Farina, G. Panico, D. Pappadopulo, J. T. Ruderman, R. Torre, and A. Wulzer Phys. Lett. B 772 (2017) 210–215, [arXiv:1609.08157]. [29] H. Elvang and Y.-t. Huang arXiv:1308.1697. [30] P. Baratella, C. Fernandez, and A. Pomarol Nucl. Phys. B 959 (2020) 115155, [arXiv:2005.07129]. [31] H. K. Dreiner, H. E. Haber, and S. P. Martin Phys. Rept. 494 (2010) 1–196, [arXiv:0812.1594]. [32] M. L. Mangano and S. J. Parke Phys. Rept. 200 (1991) 301–367, [hep-th/0509223]. [33] M. G. Olsson Phys. Rev. 162 (1967), no. 5 1338. [34] S. L. Adler Phys. Rev. 140 (Nov, 1965) B736–B747. [35] A. Urbano JHEP 06 (2014) 060, [arXiv:1310.5733].
Success and luck in creative careers Milan Janosov ${}^{1}$ Federico Battiston${}^{1}$ Roberta Sinatra${}^{2,3,4}$ (corresponding author: rsin@itu.dk) (January 15, 2021) Abstract Luck is considered to be a crucial ingredient to achieve impact in all creative domains, despite their diversity. For instance, in science, the movie industry, music, and art, the occurrence of the highest impact work and of a hot streak within a creative career are very difficult to predict. Are there domains that are more prone to luck than others? Here, we provide new insights into the role of randomness in impact in creative careers in two ways: (i) we systematically untangle luck and individual ability to generate impact in the movie, music, and book industries, and in science, and compare the luck factor between these fields; (ii) we show the limited predictive power of collaboration networks to predict career hits. Taken together, our analysis suggests that luck consistently affects career impact across all considered sectors and improves our understanding of the key elements in the prediction of success. Keywords: success, dynamics of impact, creative careers, science of science Affiliations: 1) Department of Network and Data Science, Central European University, H-1051 Budapest, Hungary; 2) Department of Computer Science, IT University of Copenhagen, 2300 Copenhagen, Denmark; 3) ISI Foundation, 10126 Torino, Italy; 4) Complexity Science Hub Vienna, 1080 Vienna, Austria 1. Introduction Research in developmental psychology has studied the careers of prominent artists and scientists for decades, advocating the importance of chance for the successful unfolding of careers in various creative domains [1, 2, 3, 4]. In recent years, the availability of big databases on scientific publications [5] and artistic records, from books to movies [6, 7, 8], has made it possible to test a number of previously suggested hypotheses on a large scale. For instance, in previous work [9, 10] the analysis of thousands of creative careers has shown that the biggest hit of an individual occurs randomly within an individual's career, a finding named the equal-odds-rule [3]. This rule explains the variability in the occurrence of creative individuals' best hit. Yet, career hits are not only the result of luck but also of other individual and team properties [11, 12, 13, 14, 15, 16, 17]. While previous literature suggests that luck and individual ability are both necessary to reach the top of art and science charts [18, 19, 20, 21, 22], a quantification of the role of luck across different creative domains is still lacking. In which creative fields are individuals more likely to go from rags to riches and vice-versa? Does the position of an individual in a network predict the occurrence of a hit? In this work, we quantify luck fluctuations in impact across creative careers from movies, music, literature, and science, and create a framework to compare the broad observed differences in impact [23, 7]. Do these random fluctuations have the same magnitude across careers? To address this question we build on the mathematical framework known as the $Q$-model, proposed in Ref. [9], to untangle impact into two components, one encoding fluctuations that can be interpreted as luck, and another depending only on the individual. 
We show that this model is consistent with the classical test theory [24], also known as the true score theory [25], which states that the measured value of a measurable attribute is the sum of its true, error-free score and a stochastic error term. We find that the magnitude of this randomness varies across creative fields. By comparing this stochastic term to the typical impact score associated with each artist and scientist, we identify creative domains where the impact of single creative products is the hardest to predict and fluctuates the most within individual careers. The high importance of luck for achieving success in creative careers is confirmed by the lack of predictive power of the collaboration networks for the best hit of an individual. To carry out these analyses, we rely on a large-scale data set covering more than four million individuals from c. 1902 up to 2017. The outline of this paper is the following. First, we test the validity of the requirements of the Q-model proposed in Ref. [9]. Second, we use the Q-model impact decomposition method to factor impact in creative careers. Third, we apply the classical test theory to quantify the role of luck within each field and discuss the observed differences across fields. Finally, we construct the collaboration network within each domain and compare the time of the best hit of creative individuals to the time at which they reach their highest score in network centrality. 2. Data We compiled four data sets of individual careers across the movie, music, and book industries, and across scientific fields, covering overall 28 different types of creative domains: 1. We mined the Internet Movie Database (IMDb [26]) and compiled a data set of 803,013 individuals in the movie industry working as movie directors, producers, art directors, soundtrack composers, and scriptwriters, altogether contributing to 1,297,275 movies. 2. By using the Discogs [27, 28] and LastFM [29] platforms, we constructed a database of 379,366 musicians who released 31,841,981 songs in the genres of electronic, rock, pop, funk, folk, jazz, hip-hop, and classical music. 3. We extracted data from Goodreads [30] and built a data set containing information about 2,069,891 book authors and 6,604,144 books. 4. We used the Web of Science database [5] to reconstruct the scientific careers of 1,204,688 scientists from the fields of chemistry, mathematics, physics, applied physics, space science and astronomy, zoology, geology, agronomy, engineering, theoretical computer science, biology, environmental science, political science, and health science, altogether authoring approximately 87.4 million papers. See further details about the data sets and data collection in SI Section S1.1. To measure the impact of movies, songs, books, and articles, we use their cumulated impact on large audiences, as captured by the rating counts for movies and books, the play counts for songs, and the number of citations received within the first ten years after publication for scientific papers [31] (SI Section S1.2). The existence of these cumulative impact measures in all data sets allows us to reconstruct individual careers consistently across domains by building the historical time series of each person. In Figure 1a-d we illustrate career examples in the four different databases: movie director Stanley Kubrick, pop singer Michael Jackson, writer Agatha Christie, and mathematician Paul Erdős. 
Alternative impact measures, like the average rating for movies or books, or rescaled citations for papers [32], highly correlate with the cumulative measures used here, indicating that the impact patterns do not depend on the chosen measure (see SI Section S1.3 for details). 3. The $Q$-model: decomposing luck and individual ability in impact Kubrick’s highest impact movie was released 30 years after his career start, while Michael Jackson had his biggest hit earlier in his career. These anecdotal examples suggest that a career’s biggest hit can occur at any time. Indeed, a rigorous analysis of our data sets indicates that any work in a career has an equal chance to be the highest impact work, following the so-called random-impact-rule, consistently with what previously found for large data sets of artists and scientists [9, 10] (SI Section S2.1 for this replication analysis). The magnitude of a career impact is not random though: individual impact distributions differ broadly from each other. These broad differences are reproduced and explained by the $Q$-model [9], a mechanistic stochastic model (SI Section S2.2). According to this model, the impact $S_{i,\alpha}$ of a work $\alpha$ created by an individual $i$ can be decomposed as the product of two independent factors $S_{i,\alpha}=Q_{i}p_{i,\alpha}$, where $Q_{i}$ is an individual variable, depending only on individual $i$, and $p_{i,\alpha}$ is a stochastic variable, independently drawn for every work from a field-specific distribution. The values of $Q_{i}$ and $p_{i,\alpha}$ are obtained by maximizing a likelihood function which takes as input all the impact $S$ of all products of all creative careers in a given field  [9, 33]. Under the main assumption that the covariance $\sigma^{2}_{QN}$ between the distributions of productivity $N$ and parameter $Q$ is negligible compared to the variance of the $p$ and $N$ distributions – an assumption that we verify and validate in SI Section S2.3 – we can write a simple approximated formula for $Q_{i}$: $$\displaystyle Q_{i}$$ $$\displaystyle=$$ $$\displaystyle e^{\left\langle\log{S_{i,\alpha}}\right\rangle-\mu_{p}},$$ (1) where $\mu_{p}$ is the mean of the $p$ distribution within a given field. Eq. (1) indicates that the exponent of $Q_{i}$ is the average of the order of magnitude of the impact of $i$’s works, minus a constant equal for all individuals in a field. To establish whether the $Q$-model reproduces the individual impact distributions in our data sets, we first check the hypothesis that both $S$ and $N$ follow a log-normal distributions (SI Section S2.3). We then estimate the parameters associated with the distributions of $p$ and $Q$, finding that within each creative domain both $Q_{i}$ and $p_{i,\alpha}$ are also log-normally distributed (SI Subsection S2.3.3). The negligible measured covariance $\sigma^{2}_{pN}$ and $\sigma^{2}_{pQ}$ predict that the individual rescaled impact, $p_{i,\alpha}=S_{i,\alpha}/Q_{i}$, should follow a universal distribution, independent of $Q_{i}$. We use this prediction to validate the model in our data sets: we measure the distribution $p_{i,\alpha}=S_{i,\alpha}/Q_{i}$ and show that it collapses roughly on a single curve for different careers (Figures 1e-h). Since this rescaled distribution is independent of individual variables like $N_{i}$ and $Q_{i}$, we can interpret $p$ as a “luck factor" driving impact [9]. 
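To make Eq. (1) concrete, the estimation logic can be illustrated on synthetic careers: generate log-normal $Q$ and $p$, recover each $Q_{i}$ from the mean log-impact of the career, and check that the rescaled impacts $p_{i,\alpha}=S_{i,\alpha}/Q_{i}$ follow one field-wide distribution. The sketch below is only an illustration on simulated data (the parameter values are arbitrary, not those fitted to our data sets; in the empirical analysis $\mu_{p}$ is obtained from the likelihood fit):

import numpy as np

rng = np.random.default_rng(0)
n_careers, mu_p, sigma_p, mu_Q, sigma_Q = 2000, 1.0, 0.8, 0.5, 0.4

# Synthetic careers: S_{i,alpha} = Q_i * p_{i,alpha}, with log-normal Q and p
N = rng.integers(10, 100, size=n_careers)                 # productivity of each individual
Q_true = np.exp(rng.normal(mu_Q, sigma_Q, n_careers))     # individual parameter
careers = [Q_true[i] * np.exp(rng.normal(mu_p, sigma_p, N[i])) for i in range(n_careers)]

# Eq. (1): Q_i = exp(<log S_{i,alpha}> - mu_p); here mu_p is known by construction
mean_logS = np.array([np.log(S).mean() for S in careers])
Q_hat = np.exp(mean_logS - mu_p)

# Rescaled impacts p = S / Q_hat should follow a single field-wide distribution
p_all = np.concatenate([S / q for S, q in zip(careers, Q_hat)])
print(np.corrcoef(np.log(Q_true), np.log(Q_hat))[0, 1])   # high, limited only by career length
print(np.log(p_all).mean(), np.log(p_all).std())          # approximately mu_p and sigma_p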
Finally, we compare the data with the scaling of the highest impact work with productivity as predicted by the $Q$-model, and show that the $Q$-model gives significantly better results than the random model (SI Section S2.4). A single high impact work in a career is not sufficient to have a high $Q_{i}$; rather, an individual needs to perform consistently well throughout her career. For instance, the movie director with the highest $Q_{i}$, Christopher Nolan, has a $Q_{i}=1719.3$, due to his many high impact movies like “Inception" or “Interstellar". In contrast, one-hit wonders, who achieved fame with a single song or movie, and whose success was neither anticipated nor repeated throughout their career with many high impact works, are typically characterized by lower values of $Q_{i}$. An example is Michael Curtiz (1886–1962), director of the all-time classic Casablanca, who has only a modest $Q_{i}=4.8$ as he did not direct any other movies with outstanding impact. In such cases, the large impact of the career’s biggest hit is explained by a lucky draw of a high $p$, rather than by the individual ability to consistently produce work of high impact, encoded in a high $Q$. Taken together, the $Q$-model reproduces well the career impact of individuals in our data sets. 4. From the $Q$-model to classical test theory to compare luck across different domains Here we introduce a quantitative approach, based on the $Q$-model, to compare the fluctuations in luck and the variations in typical impact across different creative fields. Recalling the impact decomposition $S_{i,\alpha}=Q_{i}p_{i,\alpha}$ presented in Section 3, we can write: $$\displaystyle\hat{S}_{i,\alpha}$$ $$\displaystyle=$$ $$\displaystyle\hat{Q}_{i}+\hat{p}_{i,\alpha},$$ (2) where $\hat{S}_{i,\alpha}=\log S_{i,\alpha}$, $\hat{Q_{i}}=\log Q_{i}$ and $\hat{p}_{i,\alpha}=\log p_{i,\alpha}$. Because $p$ and $Q$ are log-normally distributed (SI subsection S2.3.3), $\hat{p}$ and $\hat{Q}$ are normally distributed. In addition, since the covariance $\sigma^{2}_{pQ}\approx 0$, we also have $\sigma^{2}_{\hat{p}\hat{Q}}\approx 0$. Therefore, Eq. (2) takes the form proposed by classical test theory [25, 34, 35, 36, 37, 38] for decomposing the measured value of a certain quantity. Namely, according to this theory, the measurable value of an observed attribute, in this case $\hat{S}$, can be decomposed as the sum of two uncorrelated variables, both following normal distributions. One of these two variables encodes the true score of the quantity, in this case $\hat{Q}$, and the other variable encodes a random error term, $\hat{p}$ (Figure 2a). The two normal distributions of the variables $\hat{Q_{i}}$ and $\hat{p}_{i,\alpha}$ are in line with previous studies, suggesting that individual variables like skill and talent, and global ones such as luck, are typically normally distributed [39, 24, 40, 37, 38, 21]. Building on Eq. (2), on the properties of normal distributions, and on the measured properties of the $Q$ and $p$ variables in our data sets, we can express the variance of $\hat{S}_{i,\alpha}$, $\sigma^{2}_{\hat{S}}$, as: $$\displaystyle\sigma^{2}_{\hat{S}}$$ $$\displaystyle=$$ $$\displaystyle\sigma^{2}_{\hat{Q}}+\sigma^{2}_{\hat{p}},$$ (3) where $\sigma^{2}_{\hat{p}}$ and $\sigma^{2}_{\hat{Q}}$ are the variances of the distributions of $\hat{p}$ and $\hat{Q}$, respectively. This decomposition allows us to measure the relative importance of the luck component compared to the individual component in determining impact. 
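A quick numerical illustration of Eq. (3): for independent normal $\hat{Q}$ and $\hat{p}$, the variance of $\hat{S}=\hat{Q}+\hat{p}$ is, up to sampling noise, the sum of the two variances, and the share of the total variance carried by $\hat{p}$ anticipates the randomness index $R$ defined in Eq. (4) below. The toy values used here are arbitrary:

import numpy as np

rng = np.random.default_rng(1)
sigma_Q, sigma_p, n = 0.3, 0.3, 1_000_000

Q_hat = rng.normal(0.5, sigma_Q, n)  # individual component, log Q
p_hat = rng.normal(1.0, sigma_p, n)  # luck component, log p
S_hat = Q_hat + p_hat                # Eq. (2): log S = log Q + log p

print(S_hat.var(), sigma_Q**2 + sigma_p**2)  # Eq. (3): both ~ 0.18
print(p_hat.var() / S_hat.var())             # share of luck in the impact variance, ~ 0.5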
Building on previous work [37, 38], we define the randomness index $R$, capturing the share of luck in the overall impact variance, as: $$\displaystyle R$$ $$\displaystyle=$$ $$\displaystyle\frac{\sigma^{2}_{\hat{p}}}{\sigma^{2}_{\hat{S}}}.$$ (4) When individuals in a domain have a similar ability, captured by a narrow $\hat{Q_{i}}$ distribution, differences in impact are mainly driven by luck, and we have $R\to 1$. In contrast, when $\hat{p}$ has a low variance compared to $\hat{S}$, then $R\to 0$, and luck plays only a small role. This index allows us to compare the role of randomness across 28 different creative fields (Figure 2b). 5. Randomness in creative careers In which creative domains are inequalities driven more by luck than by individual ability? Using the $Q$-model, we measure $\sigma^{2}_{\hat{Q}}$ and $\sigma^{2}_{\hat{p}}$ for 28 types of creative careers in the movie, music, and book industries, and in science (Figure 2c). We also report the linear regression between $\sigma^{2}_{\hat{p}}$ and $\sigma^{2}_{\hat{Q}}$ (black dashed line in Figure 2c). This figure offers a number of findings. First, we observe that all the fields are placed above the diagonal line ($\sigma^{2}_{\hat{p}}>\sigma^{2}_{\hat{Q}}$), indicating that within each domain fluctuations in luck are broader than those in the typical career impact of individuals. Second, we do not observe any domain-specific clustering on the $\left(\sigma^{2}_{\hat{Q}},\sigma^{2}_{\hat{p}}\right)$ plane, which suggests that the studied domains do not differ from each other in terms of luck. Third, we report that the linear regression has a slope lower than one; therefore, it intercepts the diagonal for high $\sigma^{2}_{\hat{Q}}$. Because $\sigma^{2}_{\hat{S}}=\sigma^{2}_{\hat{p}}+\sigma^{2}_{\hat{Q}}$ and the regression slope is equal to the ratio $\sigma^{2}_{\hat{p}}/\sigma^{2}_{\hat{Q}}$, a value smaller than 1 indicates that as $\sigma^{2}_{\hat{S}}$ increases (illustrated by the shading in Figure 2c), the value of $\sigma^{2}_{\hat{Q}}$ increases faster than $\sigma^{2}_{\hat{p}}$. Hence large fluctuations in impact are dominated by large fluctuations in individual ability, captured by $Q$, rather than by fluctuations in luck. Next, we measure the randomness index $R$ of Eq. (4) to compare the characteristics of career success across domains (Figure 2d). We find that, on the one hand, within the movie industry, producers’ careers are the most driven by luck, followed by composers. On the other hand, being an art director is associated with the lowest $R$ index, suggesting that high impact as an art director happens less by chance than in other careers within the movie industry. It is also interesting to compare the randomness index of scriptwriters ($R=0.528$) and book authors ($R=0.546$), due to the apparently similar nature of these two creative careers. The values of the indexes show that writing for the movie industry is less driven by luck than writing in the book industry. In music, classical and hip-hop are the most robust against luck fluctuations, with the lowest randomness index of our data set, $R=0.507$. This could be explained by classical music being more dependent on skills, experience, and musical training. Regarding hip-hop music, we could speculate that, being largely an underground genre, it is less exposed to the rich-get-richer effect and leaves more space for fresh talent to rise. 
In contrast, the most popular genres, namely electronic music ($R=0.546$) and rock music ($R=0.530$), are on the other side of the spectrum, with the highest $R$. These two genres contain the largest number of one-hit-wonder careers; therefore impact has more pronounced fluctuations. Regarding science, we also find a large spectrum of randomness, with space science and astronomy ($R=0.555$) and political science ($R=0.546$) at one extreme, as the highest $R$-index fields, and theoretical computer science ($R=0.517$) and engineering ($R=0.523$) being among the fields most robust against luck fluctuations. 6. Lack of predictive power of the collaboration network In the previous sections, we have analyzed the randomness and magnitude of impact focusing on individual careers. However, a movie, a song or a paper is rarely the result of the work of only one individual. Therefore, we next ask: Can collaborations between individuals improve our ability to predict the magnitude of success and the occurrence of career big hits? Previous research suggests that scientific career success and network position can be connected [41, 42, 43, 44, 15, 45]. We reconstruct the temporal aggregated network of movie directors, pop musicians, and mathematicians to study the relationship between their network positions and impact. We use a yearly time resolution. In this network, each individual is represented by a node, and the strength of the connection between two nodes at year $T$ is the Jaccard index of the works of the two nodes, that is, the number of works both individuals collaborated on, divided by the total number of works they contributed to until year $T$. Based on this definition, the final aggregated collaboration network of movie directors consists of 8,091,208 links between 184,220 people active between 1927 and 2017 (giant connected component only). In the pop music network, we have 52,366 musicians active between 1926 and 2017 connected by 8,232,349 links, while in mathematics, we have 94,755 links between 27,401 mathematicians between 1944 and 2016. For each individual, we measure her degree centrality, PageRank centrality, and clustering coefficient in the aggregated network at the time she produced a work. We then create individual time-series for each of these network measures, where time points correspond to the works in the individual career. Finally, we study these network-based time-series together with the evolution of individual impact over a career. Our hypothesis is that the dynamics of the network position and the dynamics of impact are correlated over time, though with a delay $\tau$. We measure $\tau$ by shifting the network time-series with respect to the impact time-series, and choose the value for which we obtain the maximum correlation between the two time-series (see Figure 3 and SI Section S3; a minimal sketch of this lag-maximization procedure is given below). By analyzing the time-series of movie directors, pop musicians, and mathematicians, we find that there are two groups of individuals: those for whom the network measures peak before the highest impact work occurs, and those for whom the peak occurs after. For example, the director Francis Ford Coppola ($\tau=5$) belongs to the first category, while George Lucas ($\tau=-1$) belongs to the second (Figure 3a). However, there are no discernible differences between these two groups when we measure impact: the two groups have similar distributions of the $Q$-parameter (Figure 2b) and of the magnitude of the highest success within a career (Figure 2c). 
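The delay $\tau$ mentioned above is obtained from a standard lag-maximization of the correlation between two career time-series. The sketch below illustrates the idea on toy data; it is not our actual analysis pipeline, the function and variable names are ours, and the sign convention for the lag is arbitrary and need not match the one used in the text:

import numpy as np

def best_delay(network_series, impact_series, max_lag=10):
    # Return the shift (in career steps) of the network series that maximizes
    # its Pearson correlation with the impact series.
    best_tau, best_corr = 0, -np.inf
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            a, b = network_series[tau:], impact_series[:len(impact_series) - tau]
        else:
            a, b = network_series[:tau], impact_series[-tau:]
        if len(a) > 2 and a.std() > 0 and b.std() > 0:
            corr = np.corrcoef(a, b)[0, 1]
            if corr > best_corr:
                best_tau, best_corr = tau, corr
    return best_tau, best_corr

# Toy input: the "centrality" series is a copy of the "impact" series shifted by three steps
impact = np.array([1, 2, 8, 3, 2, 1, 1, 1, 1, 1], dtype=float)
centrality = np.array([1, 1, 1, 1, 2, 8, 3, 2, 1, 1], dtype=float)
print(best_delay(centrality, impact))  # -> lag 3 with correlation ~1 for this toy input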
Given the indistinguishable nature of impact in these two groups, we ask whether the observed shift $\tau$ is different from that obtained from reshuffled time-series, in which time correlations are canceled. We measured the distribution of the delay parameter $\tau$ and compared it to the distribution obtained from a randomized data set in which the time-series are randomly reshuffled. The two distributions closely overlap, as confirmed by a two-sided Kolmogorov-Smirnov test (Figure 3d; details about the KS test in SI Section S3). Taken together, these results show that the collaboration network among individuals does not improve our ability to predict the timing of the biggest hit, suggesting that chance matters much more than the collaboration network in determining the timing of the biggest hit within a career.

Conclusion
In this work, we provided a framework to understand and quantify the role of randomness in the success of creative fields across different domains. To understand the emergence of high-impact creative works, we built large-scale data sets and investigated thousands of careers from the movie, book, and music industries, and from science. We built on an existing model, known as the $Q$-model, to decompose the impact of individual creative works into two independent components, one expressing the ability of an individual to have a consistently high or low typical impact, captured by the $Q$-parameter, and one associated with random fluctuations, capturing the role of luck. We also cast the model into the framework of classical test theory, which aims to disentangle the true score of a variable from noisy fluctuations. Using this framework, we found that, on average, fluctuations in the impact of single creative works are more influenced by luck than by individual ability. However, we also conclude that the fluctuations in the individual parameter are more pronounced for fields with large fluctuations in impact. The extrapolated linear trend between fluctuations in the individual parameter and in luck predicts that when impact fluctuations become large ($\sigma^{2}_{\hat{Q}}\approx 0.6$), the fluctuations in the individual parameter become larger than the random ones. In this idealized, unobserved case, the fluctuations in impact would be mainly due to individual differences. Moreover, we found that sub-disciplines within different domains cannot be clustered according to the relative magnitude of these fluctuations. The absence of clustering suggests that the magnitude of luck is not a distinctive feature of domains. We introduced a synthetic randomness index, defined as the ratio of the variance of the random component to that of success, and investigated its value across different domains. We found that the randomness index varies in a relatively narrow range, despite the differences in typical impact. This further confirms the lack of distinct typical scales of random fluctuations associated with the four different domains investigated in the paper. Finally, within this narrow range of randomness, we found that the careers most driven by luck are those of movie producers, electronic music artists, book authors, and scientists working in the fields of space science and political science. On the other hand, randomness has the lowest influence on hip-hop and classical music, theoretical computer science, and movie art directors. Lastly, we also studied the temporal relationship between success and centrality in the collaboration network for movie directors, pop musicians, and mathematicians as a case study.
For each individual, we compared the temporal evolution of their network centrality to the evolution of their impact. We found that these two are correlated, yet with a delay. We computed these delay parameters and found two distinct classes of creative careers, regardless of their creative domain. Individuals belonging to the first group produce their big hit first and become well connected in the network only after the occurrence of the hit, while people falling into the second category first build favorable connections and produce their big hit afterwards. However, we found no correlation between individual impact and the group the individual belongs to. We also showed that the delay between the impact and the network time-series follows the same distribution as in randomized data. Future studies could further untangle the individual $Q$-parameter and pinpoint what $Q$ means, for example in terms of access to resources or early career steps. Also, the variable $p$, interpreted here as luck, could contain more information than just randomness if further data were incorporated into the analysis. Nevertheless, its universal distribution across careers suggests that this information is homogeneously distributed among individuals.

Competing interests
The authors declare that they have no competing financial or non-financial interests.

Author’s contributions
R.S. conceived the study. M.J., F.B., and R.S. collaboratively designed the study, and drafted, revised, and edited the manuscript. M.J. analyzed the data and ran all numerical analyses.

Acknowledgements
Special thanks to Emőke-Ágnes Horváth, János Kertész, Federico Musciotto, Rossano Schifanella, Michael Szell, Gábor Vásárhelyi for their valuable suggestions.

Funding
M.J. and R.S. acknowledge support from Air Force Office of Scientific Research grant FA9550-15-1-0364.

Availability of data and materials
The processed data files and scripts to reproduce the results presented in the figures are available here: https://github.com/milanjanosov/Success-and-randomness-in-creative-careers

References
[1] Harvey C Lehman. Age and achievement. Princeton, NJ, 1953.
[2] Donald T Campbell. Blind variation and selective retentions in creative thought as in other knowledge processes. Psychological review, 67(6):380, 1960.
[3] Dean Keith Simonton. Creative productivity and age: A mathematical model based on a two-step cognitive process. Dev Rev, 4(1):77–111, 1984.
[4] Dean K Simonton. Age and outstanding achievement: What do we know after a century of research? Psychol bull, 104(2):251, 1988.
[5] https://webofknowledge.com. Web of science. Date accessed: 2018.11.06.
[6] Andreas Spitz and Emőke-Ágnes Horvát. Measuring long-term impact based on network centrality: Unraveling cinematic citations. PloS one, 9(10):e108857, 2014.
[7] Burcu Yucesoy, Xindi Wang, Junming Huang, and Albert-László Barabási. Success in books: a big data approach to bestsellers. EPJ Data Science, 7(1):7, 2018.
[8] Oliver E Williams, Lucas Lacasa, and Vito Latora. Quantifying and predicting success in show business. arXiv preprint arXiv:1901.01392, 2019.
[9] Roberta Sinatra, Dashun Wang, Pierre Deville, Chaoming Song, and Albert-László Barabási. Quantifying the evolution of individual scientific impact. Science, 354(6312):aaf5239, 2016.
[10] Lu Liu, Yang Wang, Roberta Sinatra, C Lee Giles, Chaoming Song, and Dashun Wang. Hot streaks in artistic, cultural, and scientific careers. Nature, 559(7714):396, 2018.
[11] Roger Guimera, Brian Uzzi, Jarrett Spiro, and Luis A Nunes Amaral. Team assembly mechanisms determine collaboration network structure and team performance. Science, 308(5722):697–702, 2005. [12] Brian Uzzi, Satyam Mukherjee, Michael Stringer, and Ben Jones. Atypical combinations and scientific impact. Science, 342(6157):468–472, 2013. [13] Dashun Wang, Chaoming Song, and Albert-László Barabási. Quantifying long-term scientific impact. Science, 342(6154):127–132, 2013. [14] You-Na Lee, John P Walsh, and Jian Wang. Creativity in scientific teams: Unpacking novelty and impact. Res Policy, 44(3):684–697, 2015. [15] Olga Zagovora, Katrin Weller, Milan Janosov, Claudia Wagner, and Isabella Peters. What increases (social) media attention: Research impact, author prominence or title attractiveness? Proceedings of the 23rd International Conference on Science and Technology Indicators, pages 1182–1190, 2018. [16] Santo Fortunato, Carl T Bergstrom, Katy Börner, James A Evans, Dirk Helbing, Staša Milojević, Alexander M Petersen, Filippo Radicchi, Roberta Sinatra, Brian Uzzi, et al. Science of science. Science, 359(6379):eaao0185, 2018. [17] Mohsen Jadidi, Fariba Karimi, Haiko Lietz, and Claudia Wagner. Gender disparities in science? dropout, productivity, collaborations and success of male and female computer scientists. Adv Complex Syst, 21(03n04):1750011, 2018. [18] Francis Galton. Hereditary genius. 1869. Natural Inheritance, 1889. [19] John Carl Flugel and Donald J West. A hundred years of psychology. 1964. [20] Alexander M Petersen, Woo-Sung Jung, Jae-Suk Yang, and H Eugene Stanley. Quantitative and empirical demonstration of the matthew effect in a study of career longevity. P Nat Acad Sci, 108(1):18–23, 2011. [21] A Pluchino, AE Biondo, and A Rapisarda. Talent vs luck: the role of randomness in success and failure. arXiv preprint arXiv:1802.07068, 2018. [22] Alessandro Pluchino, Giulio Burgio, Andrea Rapisarda, Alessio Emanuele Biondo, Alfredo Pulvirenti, Alfredo Ferro, and Toni Giorgino. Exploring the role of interdisciplinarity in physics: Success, talent and luck. PloS one, 14(6):e0218793, 2019. [23] Filippo Radicchi, Santo Fortunato, and Claudio Castellano. Universality of citation distributions: Toward an objective measure of scientific impact. Proc Nat Acad Sci, 105(45):17268–17272, 2008. [24] Linda Crocker and James Algina. Introduction to classical and modern test theory. ERIC, 1986. [25] Frederic M Lord. A strong true-score theory, with applications. Psychometrika, 30(3):239–270, 1965. [26] www.imdb.com. Internet movie database. Date accessed: 2017.02.04. [27] www.discogs.com. Discogs music release database. Date accessed: 2017.02.04. [28] Joseph Hartnett. Discogs. com. The Charleston Advisor, 16(4):26–33, 2015. [29] www.last.fm. Lastfm. Date accessed: 2017.02.06. [30] www.goodreads.com. Goodreads book database. Date accessed: 2017.02.04. [31] Eugene Garfield and Robert King Merton. Citation indexing: Its theory and application in science, technology, and humanities, volume 8. Wiley New York, 1979. [32] Filippo Radicchi and Claudio Castellano. Rescaling citations of publications in physics. Phys Rev E, 83(4):046116, 2011. [33] Gábor Vásárhelyi, Csaba Virágh, Gergő Somorjai, Tamás Nepusz, Agoston E Eiben, and Tamás Vicsek. Optimized flocking of autonomous drones in confined environments. Science Robotics, 3(20):eaat3536, 2018. [34] Walter Kristof. Estimation of reliability and true score variance from a split of a test into three arbitrary parts. Psychometrika, 39(4):491–499, 1974. 
[35] Theresa Kline. Psychological testing: A practical approach to design and evaluation. Sage, 2005.
[36] Jacob Kean and Jamie Reilly. Item response theory. Handbook for Clinical Research: Design, Statistics and Implementation.
[37] MJ Mauboussin. Untangling skill and luck: How to think about outcomes—past, present, and future. Legg Mason Capital Management, 2010.
[38] Michael J Mauboussin. The success equation: Untangling skill and luck in business, sports, and investing. Harvard Business Press, 2012.
[39] James Stewart. The distribution of talent. Marilyn Zurmuehlen Working Papers in Art Education, 2(1):21–22, 1983.
[40] Mary J Allen and Wendy M Yen. Introduction to measurement theory. Waveland Press, 2001.
[41] William D Figg, Lara Dunn, David J Liewehr, Seth M Steinberg, Paul W Thurman, J Carl Barrett, and Julian Birkinshaw. Scientific collaboration results in higher citation rates of published articles. Pharmacotherapy: The Journal of Human Pharmacology and Drug Therapy, 26(6):759–767, 2006.
[42] Jiann-Wien Hsu and Ding-Wei Huang. Correlation between impact and collaboration. Scientometrics, 86(2):317–324, 2011.
[43] Filippo Radicchi. In science “there is no bad publicity”: Papers criticized in comments have high scientific impact. Sci Rep, 2:815, 2012.
[44] Emre Sarigöl, René Pfitzner, Ingo Scholtes, Antonios Garas, and Frank Schweitzer. Predicting scientific success based on coauthorship networks. EPJ Data Sci, 3(1):9, 2014.
[45] M Janosov, F Musciotto, F Battiston, and G Iñiguez. Elites, communities and the limited benefits of mentorship in electronic music. arXiv preprint arXiv:1908.10968, 2019.
[46] http://www.metacritic.com. Metacritic database using expert’s evaluations. Date accessed: 2017.02.04.
[47] Burcu Yucesoy and Albert-László Barabási. Untangling performance from success. EPJ Data Sci, 5(1):17, 2016.
[48] Albert-László Barabási. The Formula: The Universal Laws of Success. Hachette UK, 2018.
[49] Lev Muchnik, Sinan Aral, and Sean J Taylor. Social influence bias: A randomized experiment. Science, 341(6146):647–651, 2013.
[50] Nikolaus Hansen and Andreas Ostermeier. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Proceedings of the IEEE International Conference on Evolutionary Computation, pages 312–317. IEEE, 1996.

7. Supplementary information

S.I. 7.1. Data
S.I. 7.1.1 Data sets
Our research was based on four different data sources, which were collected during the period of June-August 2017.
IMDb dataset. We collected information on individuals active in the movie industry based on the Internet Movie Database (IMDb [26]). To this end, we first used the Advanced Title Search function (http://www.imdb.com/search/title) and sent multiple queries to obtain the list of all movie identifiers that had received a vote from at least one user. Using the list of unique movie identifiers ($\sim$1.3 million), we downloaded the HTML source code of each movie’s page. After processing all the raw HTML files about the movies, we extracted $\sim$0.8 million distinct names listed as director, producer, scriptwriter, composer, and art-director, and created the career files by associating each movie to the corresponding individuals, for each profession separately (e.g. directors, producers).
We attached the six different success measures present in the database (average rating, rating count, metascore [46], gross revenue, and the numbers of user and critic reviews) to each career and constructed the individuals’ career trajectories as time series of these quantities.
Discogs and LastFM dataset. To cover individuals active in the music industry we relied on Discogs [27] (https://www.discogs.com/search/), a crowd-sourced music discography website. Via its search functionality, we listed all the master releases from the genres of rock, pop, electronic, folk, funk, hip-hop, classical, and jazz music to obtain a comprehensive list of $\sim$0.4 million artists combined. After crawling their discographies based on their unique identifiers from Discogs and parsing them into tracklists, we used the API of LastFM (www.lastfm.com [29]), an online music service, to extract the play counts used as impact measures. For each artist we queried the complete tracklist and kept only the songs that had been played at least once. This way, we obtained a dataset consisting of $\sim$31 million songs. We then combined the timestamped discography and the song play-count datasets to reconstruct the musicians’ careers for each genre.
Goodreads dataset. We gathered data about book authors using Goodreads (www.goodreads.com [30]), a social network site for readers, by crawling the HTML profile pages of $\sim$2.1 million individuals who authored $\sim$6.6 million books. By extracting information from the authors’ biography profiles we built their career trajectories. Goodreads provides three different ways of measuring impact: the average user rating of a book, the total number of ratings, and the number of editions a book has; of these, we used the rating count for further analysis.
Web of Science dataset. We used the Web of Science [5] database to reconstruct the careers of scientists from 15 scientific disciplines: Agronomy, Applied Physics, Biology, Chemistry, Engineering, Environmental Science, Geology, Health Science, Mathematics, Physics, Political Science, Space Science or Astronomy, Theoretical Computer Science, Zoology. In total, we analyse the careers of 1.55 million scientists, who authored 87.4 million papers. Each paper has been associated with the number of citations it received. The career of a scientist consists of her publication record and the citation impact of each paper.
After collecting these data sets, to limit the analysis to careers with sustained productivity, we set filtering thresholds of 10 movies or papers for individuals with movie and scientific careers (except art-directors, for whom it was 20), 50 books for authors, and 80 songs for musicians. The relatively high threshold for artists active in the music industry is due to releases usually containing multiple songs.

S.I. 7.1.2 Measuring success in artistic domains
Our research premise is that success is a social phenomenon, and as such we aim to capture “a community’s reactions to the performance of the individuals” [47, 48]. For this reason, the movies, songs, books, and scientific papers in our database are associated with measures of success of different nature, based on their social context. On the one hand, there are success measures based on the evaluation of experts of the field, who presumably have more insight into the underlying performance associated with the artistic product.
On the other hand, success measures based on the opinion of the general public rely on larger samples. However, they are also more likely to be biased by external factors, such as the rich-get-richer phenomenon or peer effects [49]. From a statistical perspective, success measures can either be obtained as an average of responses over time or as the result of cumulative activities through time (Figure S.I.1). We based our analysis on the cumulative measures, since these are the only ones present in all the available data sets. This also allowed us to adopt existing techniques and methodologies previously used for the study of papers and scientific careers, which all capture success through different metrics based on the accumulation of citations.

S.I. 7.1.3 Correlations between different success measures
Two of our data sets, covering movies and books, contain more than one type of success measure. Here we compare them by computing the correlations between pairs of measures of different kinds. We find that different cumulative measures show high correlations with each other (see Fig. S.I.2b-c), indicating that results are robust to the choice of the specific cumulative measure. Average measures, like Metascore (Fig. S.I.2a) or average rating (Fig. S.I.2d), do not correlate well with cumulative measures, indicating a different process generating these measures. Since these average measures have a broad distribution, and previous literature offers methods and findings mainly about cumulative measures, we opted to use cumulative measures.

S.I. 7.2. Q-model
S.I. 7.2.1 Testing the random impact rule
The random impact rule states that the chronological rank of the best product ($N^{*}$) over a career of length $N$ is uniformly distributed across a large sample of careers, meaning that the probability distribution $P(N^{*}/N)$ is well approximated by a uniform $U(0,1)$ distribution, as prior work has already shown for scientific fields [9] and other creative domains [10]. To test this hypothesis in our multiple creative domains, we compared the observed cumulative distribution function (CDF) $P(>N^{*}/N)$ with both the CDF of the theoretical $U(0,1)$ distribution and the CDF in a set of synthetic careers. In the synthetic careers, we randomly reshuffled the products, making sure that $N^{*}$ takes a uniformly random position over the career. To obtain statistically reliable results, we repeated this randomization 100 times. We quantified the goodness of the fit by computing the $R^{2}$ deviation and the Kolmogorov–Smirnov distance of the original and the randomized data from the theoretical null model, i.e. the cumulative distribution function of the $U(0,1)$ uniform distribution. Results for four examples from each domain are shown in Figure S.I.3. The goodness of the fit for all the studied professions is measured by the $R^{2}$ value comparing the data to the $U(0,1)$ distribution. These results are summarized in Table S.I. 1.
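As a concrete illustration of this test, the following minimal Python sketch computes $N^{*}/N$ for a set of careers and compares it to the uniform null model with a one-sample Kolmogorov–Smirnov test; the variable careers is a hypothetical list of per-career impact sequences generated at random for the example, not the actual data used here.

import numpy as np
from scipy import stats

def relative_rank_of_best(career_impacts):
    # N*/N: chronological position of the highest-impact work divided by career length
    impacts = np.asarray(career_impacts, dtype=float)
    return (np.argmax(impacts) + 1) / len(impacts)

# careers: hypothetical input, one list of impact values per individual
careers = [np.random.lognormal(size=np.random.randint(10, 50)) for _ in range(1000)]
ratios = np.array([relative_rank_of_best(c) for c in careers])

# Compare the observed N*/N values to the U(0,1) null model
ks_stat, p_value = stats.kstest(ratios, "uniform")
print(ks_stat, p_value)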
S.I. 7.2.2 Q-model
The $Q$-model, proposed in [9], assumes that when the distribution of the impact of scientific papers can be described by log-normal functions, the impact can be expressed through the trivariate log-normal distribution of three variables: (i) the productivity of the individual (e.g. the number of papers they publish, $N$), (ii) an individual-based parameter depending only on the individual’s prior works’ success ($Q$), and (iii) a random parameter representing external factors ($p$). By transforming these variables to logarithmic space ($\hat{N}=\log{N}$, $\hat{Q}=\log{Q}$, $\hat{p}=\log{p}$), the impact distribution $P(\hat{S})$ reads
$$P(\hat{S})=P(\hat{p},\hat{Q},\hat{N})=\frac{1}{\sqrt{(2\pi)^{3}}}\exp\left(-\frac{1}{2}({\bf X}-\mu)^{T}\Sigma^{-1}({\bf X}-\mu)\right),$$ (5)
where ${\bf X}=(\hat{p},\hat{Q},\hat{N})$, $\mu=(\mu_{p},\mu_{Q},\mu_{N})$ is the vector of averages, and $\Sigma$ is the covariance matrix
$$\Sigma=\left(\begin{array}{ccc}\sigma^{2}_{p}&\sigma_{p,Q}&\sigma_{p,N}\\ \sigma_{p,Q}&\sigma^{2}_{Q}&\sigma_{Q,N}\\ \sigma_{p,N}&\sigma_{Q,N}&\sigma^{2}_{N}\end{array}\right).$$
If the cross terms $\sigma_{p,Q}$ and $\sigma_{p,N}$ are close to zero, then the distribution of $p$ does not depend on the variables capturing individual careers. In this case a number of simplifications can be made, and the impact rescaled by the individual parameter $Q$ collapses onto the same distribution for all individuals. To obtain the covariance matrix of the trivariate log-normal distribution of Eq. 5, we fit the theoretical distribution to the data using CMA-ES (Covariance Matrix Adaptation Evolution Strategy) [50, 33], from which we obtained the parameters in Table S.I. 2. These results are consistent with the findings reported for scientific careers in [9].

S.I. 7.2.3 Requirements of the Q-model
To apply the $Q$-model to a creative domain, the data have to fulfill a number of requirements. First, the observed impact distribution should follow a log-normal distribution, as shown in Section S.I. 7.2.3. Second, the random-impact rule should hold (shown in Section S.I. 7.2.1). Third, $p$ should be uncorrelated with $Q$ and $N$ (their pairwise correlations should be negligible compared to their variances), which is shown in Table S.I. 2. Finally, the distributions of $p$, $Q$, and $N$ should be log-normal, which we show in the following.
Fitting the impact distributions. To model the distribution of the success measure in the different fields (rating count for movies and books, play count for songs, and citations for scientific papers) we assumed a log-normal shape and fitted the cumulative distribution function of the data (examples from each data set are shown in Figure S.I.5). We quantified the goodness of the fit by computing $R^{2}$ values, which are reported for all the fields in Table S.I. 3. Since different fields reach different audiences, the impact of creative products across domains spans different ranges. In order to compare the decomposition of impact into the $Q$ and $p$ components across fields, we first apply a min-max scaling to the measured impacts. This transforms $P(S_{a})$, the impact distribution of field $a$, in the following way:
$$P(S_{a})\quad\to\quad\frac{P(S_{a})-\min(P(S_{a}))}{\max(P(S_{a}))-\min(P(S_{a}))}\cdot\max(P(S_{c})),$$ (6)
where $P(S_{c})$ denotes the distribution of all the fields combined. The re-scaled impact distributions of the different fields are visualized in Figure S.I.5.
Career length distributions. Figure S.I.6 shows the log-normal distributions fitted to the career length distributions for four selected, representative fields. The goodness of the fit is summarized in Table S.I. 4.
$P(Q)$ and $P(p)$. The distributions $P(Q)$ and $P(p)$ are well described by log-normal functions, as illustrated by the fits for four representative fields in Figure S.I.7, with the results summarized in Table S.I. 5.
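A minimal illustration of such a log-normal fit, assuming a flat array impacts of impact values for one field (a randomly generated placeholder rather than the actual data), could use scipy's built-in estimator:

import numpy as np
from scipy import stats

# impacts: hypothetical array of impact values (e.g. rating counts) for one field
impacts = np.random.lognormal(mean=3.0, sigma=1.2, size=5000)

# Fit a log-normal with the location fixed at zero and report the fitted parameters
shape, loc, scale = stats.lognorm.fit(impacts, floc=0)
print("sigma =", shape, "mu =", np.log(scale))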
S.I. 7.2.4 Comparison to the data
We tested a simple model for the success of creative products in individual careers, based on the random impact rule, by generating sets of synthetic careers for each field. We then compared the highest-impact work in the synthetic careers to that of the observed data. To ensure that the set of synthetic careers is directly comparable to the data, we constructed them by randomly reshuffling the time events of the careers found in the data; we repeated this random shuffling 100 times and averaged the results to minimize the level of noise. As the random-impact rule holds, the best product within a creative career occurs at random. However, can we say the same about the magnitude of the success of an individual’s best hit? If each artistic product has the same probability of being the most successful, and success does not depend on any intrinsic ability of the individual, success will only be affected by productivity. This hypothesis (black lines, Figure S.I.8), known as the R-model [9], does not capture the observed patterns of impact in artistic domains (colored lines, Figure S.I.8). This finding was first reported in Ref. [9] for scientific careers. We also compared the expected highest impact of the individuals as a function of their productivity based on the $Q$-model. In order to do so, we generated synthetic careers by combining the given career length $N_{i}$ and the measured $Q_{i}$ parameter of each individual $i$, and randomly re-distributed the possible $p_{j}$ parameters among the individuals (picking exactly $N_{i}$ values of $p_{j}$ for individual $i$) to compute the impacts of the synthetic careers using the equation defining the $Q$-model ($S_{i,\alpha}=Q_{i}p_{i,\alpha}$). After repeating this 100 times to minimize the noise level, we arrived at a set of synthetic careers following the $Q$-model. We conducted this comparison for all the studied fields; the results are summarized in Table S.I. 6.
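A minimal Python sketch of the synthetic $Q$-model careers described above follows; the arrays career_lengths, Q_params, and p_pool are hypothetical stand-ins for the fitted quantities, generated at random purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted quantities: a career length N_i and a Q_i per individual,
# plus a global pool of p values to redistribute among the synthetic careers.
career_lengths = rng.integers(10, 50, size=1000)
Q_params = rng.lognormal(mean=0.5, sigma=0.6, size=1000)
p_pool = rng.lognormal(mean=0.0, sigma=1.0, size=int(career_lengths.sum()))

def synthetic_best_hits(career_lengths, Q_params, p_pool, rng):
    # Q-model: S_{i,alpha} = Q_i * p_{i,alpha}; p values are reshuffled across careers
    p_shuffled = rng.permutation(p_pool)
    best_hits, start = [], 0
    for N_i, Q_i in zip(career_lengths, Q_params):
        best_hits.append((Q_i * p_shuffled[start:start + N_i]).max())
        start += N_i
    return np.array(best_hits)

best = synthetic_best_hits(career_lengths, Q_params, p_pool, rng)
print(best.mean())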
S.I. 7.3. Randomness in networking
We tested the relationship between the collaboration network of an individual and her success for a number of creative fields (movie directors, pop musicians, mathematicians). Results show two different types of networking behavior. For one type of individual, their impact peaks first, and an increase in network centrality follows. For the others, the opposite is observed. Figures S.I.9-S.I.10 show, for these two groups of individuals, the distribution of the $Q$ parameter and of the impact $S$, as well as their network relevance measured by a set of standard network features, namely degree, clustering coefficient, and PageRank centrality in the collaboration networks. Results show that there is no significant difference between the success patterns of these two groups (KS statistics in Tables S.I. 7-S.I. 8). In Figure S.I.11 and Table S.I. 9, we also compare the value of $\tau$, the shift parameter determined from the data associated with each individual’s career, to the $\tau$ values obtained from randomized null-model data, which we generate by reshuffling the original time series.

Is Supervised Learning With Adversarial Features Provably Better Than Sole Supervision?

Litu Rout$^{\dagger}$
$^{\dagger}$Optical Data Processing Division, Signal and Image Processing Group, SAC, ISRO, Ahmedabad, India - 380015. e-mail: lr@sac.isro.gov.in

Abstract
Generative Adversarial Networks (GAN) have shown promising results on a wide variety of complex tasks. Recent experiments show that adversarial training provides useful gradients to the generator that help attain better performance. In this paper, we intend to theoretically analyze whether supervised learning with adversarial features can outperform sole supervision, or not. First, we show that supervised learning without adversarial features suffers from a vanishing gradient issue in the near optimal region. Second, we analyze how adversarial learning augmented with a supervised signal mitigates this vanishing gradient issue. Finally, we prove our main result, which shows that supervised learning with adversarial features can be better than sole supervision (under some mild assumptions). We support our main result on two fronts: (i) expected empirical risk and (ii) rate of convergence.
Adversarial Learning, Supervised Learning, Deep Learning, Generative Adversarial Networks, Fast Convergence.

I Introduction
Over the past few years, the advancement of deep neural networks has opened up unprecedented opportunities in complex real world problems. The advent of high end computing infrastructure has played a vital role in this remarkable progress. Of particular interest, supervised learning, a domain of artificial intelligence focusing on learning via paired supervised training samples, has been quite effective in a wide variety of challenging problems [1, 2, 3]. Despite this progress, it is worth heeding the difficulty of acquiring a sufficient amount of paired data for reliable supervised training. In this regard, the discovery of Generative Adversarial Networks (GANs) has provided a mechanism to reduce human effort in the preparation of training data. By bringing this insight to fruition, computer vision problems where it is almost impossible to gather paired data are now being addressed with a fair amount of certainty [4, 5, 6]. In particular, the requirement of paired supervised training samples is relaxed to some extent owing to the vantage of GANs. The adversarial game between generator and discriminator allows the generation of realistic looking artificial samples. Particularly intriguing is the phenomenon of generating samples from a high dimensional distribution without even explicitly estimating its density. From this point of view, an adversarial generator succinctly learns to generate realistic looking samples lying on a compact manifold of a low dimensional space. In recent years, GANs have been used to address problems which were believed to be extremely challenging. The pervasive use of GANs has drawn significant attention from the research community in various domains. Among many applications, some require that a particular sample be generated subject to conditional inputs. For this reason, recent methods propose to regularize the generation process through expert feedback. In photo-realistic image super resolution, the empirical risk of the generator is regularized by a metric that minimizes the distance between the predicted and actual high resolution image [7].
In visual object tracking via adversarial learning, the Euclidean norm is used to regulate the generated mask such that it lies within a small neighbourhood of the actual mask identifying discriminative features [8]. In medical image segmentation, a multi-scale $L_{1}$ loss with adversarial features is shown to achieve better performance in terms of state-of-the-art evaluation metrics [9]. The performance gain in these diverse practical applications provides a clear indication of the better empirical results of adversarial learning. Numerous attempts seek to provide empirical evidence of generative adversarial networks outperforming previously used, solely supervised approaches. Recent studies suggest that images reconstructed by purely supervised learning have inferior visual perceptual quality compared to those obtained with adversarial learning [3]. So far, the theoretical investigation shows that the empirical risk of supervised learning augmented with adversarial features does not become arbitrarily large during training. Hence, there exists a small constant that bounds the total empirical risk from above [9]. However, these benign properties of the loss surface do not necessarily provide enough theoretical evidence on whether supervised learning augmented with adversarial features is better than sole supervision, or not. Furthermore, the regularized generator achieves faster convergence due to efficient flow of useful gradients from the discriminator, but the theoretical understanding remains elusive. Therefore, several questions arise:
• Why do updates take longer to converge in the case of completely supervised learning as compared to regularized adversarial learning?
• Do adversarial features alleviate this slow convergence issue?
• Is supervised learning augmented with adversarial features provably better than sole supervision?
• If so, under which circumstances and on what basis is it better?

I-A Summary of Contributions
The fundamental contributions of this paper are the answers to these aforementioned questions. Specifically, we provide theoretical evidence to corroborate our answers. It is to be noted that we use supervised learning with adversarial features and adversarial learning with expert regularization interchangeably. By expert regularization we mean directly minimizing a distance between predicted and true samples.
• We show that a purely supervised objective suffers from a vanishing gradient issue within the tiny landscape of the empirical risk, provided the trainable parameters fall in the near optimal region.
• Further, we provide mathematical explanations of how the adversarial discriminator is able to mitigate the issue of vanishing gradients under some mild assumptions.
• As a part of our main contribution, we finally prove that supervised learning with adversarial features can be provably better than a purely supervised learning task.
• More broadly, our theoretical investigation suggests that by augmenting adversarial features in a supervised learning framework, the expected empirical risk and rate of convergence are guaranteed to be at least as good as with sole supervision.

II Preliminaries
Here, we briefly explain the architectures under study and summarize our notation. Given positive integers $a$ and $b$, where $a<b$, by $\left[a\right]$ we mean the set $\left\{1,2,\dots,a\right\}$, and by $\left[a,b\right]$, the set $\left\{a,a+1,\dots,b\right\}$. Let $X\subset\mathbb{R}^{d_{x}}$, $Y\subset\mathbb{R}^{d_{y}}$, and $Z\subset\mathbb{R}^{d_{z}}$. Given a vector $x$, $\left\|x\right\|$ represents the Euclidean norm.
Given a matrix $M$, $\left\|M\right\|$ represents the spectral norm. By $f(\theta)|_{\theta_{i}}$, we mean $f(\theta)$ evaluated at $\theta_{i}$. Let $x\in X$ be the input vector. We consider an $L$-block resnet, $f_{\theta}(.)$, as the common architecture for the generator of the adversarial network and for supervised learning. The output is computed as follows:
$$\begin{split}f_{\theta}(x)&=\omega^{T}h_{L}(x),\\ h_{l}(x)&=h_{l-1}(x)+V_{l}\phi_{z}^{l}(U_{l}h_{l-1}(x)),\quad l=1,2,\dots,L,\\ h_{0}(x)&=x.\end{split}$$
Here, $\phi_{z}(.)$ represents a neural network with parameters $z$, and $\theta$ denotes the collection of parameters $\left\{\omega,z,U_{1},V_{1},U_{2},V_{2},\dots,U_{L},V_{L}\right\}$ of appropriate dimensions [10]. The discriminator $g_{\psi}(.)$ of the adversarial network has trainable parameters collected in $\psi$. By $\mathcal{J}_{\theta}\left(f_{\theta}\left(x\right)\right)$, we mean the Jacobian matrix of $f_{\theta}(x)$ evaluated at $\theta$.
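For readers who prefer code, the following NumPy sketch mirrors the $L$-block resnet defined above; the ReLU inner network $\phi$, the dimensions, and the random parameter initialization are illustrative assumptions, not choices made in this paper.

import numpy as np

rng = np.random.default_rng(0)
d, hidden, L = 16, 32, 4  # assumed dimensions and number of residual blocks

# theta = {omega, (U_l, V_l) for l = 1..L}; phi is a one-layer ReLU network here
omega = rng.normal(size=d)
U = [rng.normal(scale=0.1, size=(hidden, d)) for _ in range(L)]
V = [rng.normal(scale=0.1, size=(d, hidden)) for _ in range(L)]

def phi(a):
    # inner nonlinearity phi_z; a ReLU is assumed for illustration
    return np.maximum(a, 0.0)

def f_theta(x):
    # h_0 = x; h_l = h_{l-1} + V_l phi(U_l h_{l-1}); output = omega^T h_L
    h = x
    for U_l, V_l in zip(U, V):
        h = h + V_l @ phi(U_l @ h)
    return omega @ h

print(f_theta(rng.normal(size=d)))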
III Motivation
III-A Vanishing Gradient of the Supervised Objective in the Near Optimal Region
Assumption 3.1 The loss function $l(p;y)$ is a convex and continuously differentiable function of $p$, i.e., $l(p;y)\in{\mathscr{C}}^{\prime}(Y)$. We also assume $l(p;y)$ to be locally $K$-Lipschitz, i.e., given $y\in Y$, $|{l}^{\prime}(p;y)|\leq K$ for all $p$.
Theorem 3.1 Suppose Assumption 3.1 holds. Let $f_{\theta}:X\mapsto Y$ be a differentiable function. Let $\mathcal{P}$ be an empirical distribution over training samples. If (i) $\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\mathcal{J}_{\theta}\left(f_{\theta}\left(x\right)\right)\right\|^{2}\right]\leq M^{2}$ and (ii) the trainable parameters are in the near optimal region, i.e., $\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|f_{\theta}(x)-f_{\theta^{*}}(x)\right\|\right]\leq\epsilon$, then the expected gradient of the purely supervised objective vanishes. That is, $\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)\right]\right\|\leq\lambda M$.
Proof.
$$\begin{split}\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)\right]\right\|^{2}&\leq\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\nabla_{\theta}l\left(f_{\theta}(x);y\right)\right\|^{2}\right]\\ &\leq\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\nabla_{\hat{y}}l\left(f_{\theta}(x);y\right)\nabla_{\theta}f_{\theta}(x)\right\|^{2}\right],\text{ where }\hat{y}=f_{\theta}(x)\\ &\leq\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\nabla_{\hat{y}}l\left(f_{\theta}(x);y\right)\right\|^{2}\left\|\nabla_{\theta}f_{\theta}(x)\right\|^{2}\right]\\ &\leq\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\nabla_{\hat{y}}l\left(f_{\theta}(x);y\right)\right\|^{2}\right]\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\mathcal{J}_{\theta}\left(f_{\theta}\left(x\right)\right)\right\|^{2}\right]\end{split}$$
By the continuously differentiable property, to every $q\in Y$ and to every $\lambda\geq 0$ there corresponds an $\epsilon\geq 0$ such that if $p\in Y$ and $\left\|p-q\right\|\leq\epsilon$, then $\left\|{l}^{\prime}(p;y)-{l}^{\prime}(q;y)\right\|\leq\lambda$. Now, substitute $p=f_{\theta}(x)$ and $q=f_{\theta^{*}}(x)$. The condition $\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|f_{\theta}(x)-f_{\theta^{*}}(x)\right\|\right]\leq\epsilon$ holds. Therefore,
$$\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}(f_{\theta}(x);y)-{l}^{\prime}(f_{\theta^{*}}(x);y)\right\|\right]\leq\lambda.$$ (1)
Since $\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}\left(f_{\theta}(x);y\right)-{l}^{\prime}\left(f_{\theta^{*}}(x);y\right)\right\|\right]\geq\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}\left(f_{\theta}(x);y\right)\right\|\right]-\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}\left(f_{\theta^{*}}(x);y\right)\right\|\right]$, equation (1) implies
$$\begin{split}&\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}\left(f_{\theta}(x);y\right)\right\|\right]-\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}\left(f_{\theta^{*}}(x);y\right)\right\|\right]\leq\lambda\\ &\implies\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}\left(f_{\theta}(x);y\right)\right\|\right]\leq\lambda,~(\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|{l}^{\prime}\left(f_{\theta^{*}}(x);y\right)\right\|\right]=0,~\because\theta^{*}\text{ is optimal})\\ &\implies\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\nabla_{\hat{y}}l\left(f_{\theta}(x);y\right)\right\|^{2}\right]\leq\lambda^{2}\end{split}$$
Now,
$$\begin{split}&\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)\right]\right\|^{2}\leq\lambda^{2}M^{2}\\ &\implies\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)\right]\right\|\leq\lambda M.\end{split}$$
This finishes the proof. $\square$
Theorem 3.1 provides an upper bound on the expected gradient over the empirical distribution $\mathcal{P}$ in the near optimal region. The expected gradient shrinks in proportion to the spectral norm of the Jacobian matrix and the approximation error. Thus, the small gradients in purely supervised learning hamper training progress within this tiny landscape of the empirical risk. Furthermore, the rate of convergence slows down drastically. In other words, the gradient updates become smaller as the training progresses, which resonates with the intuitive understanding of gradient descent. Therefore, a fundamental question arises: can we attain convergence faster without having to lose any empirical risk benefits? We discuss this question in the following section.
III-B Mitigating Vanishing Gradients with Adversarial Features
Wasserstein Objective: The generator cost function of WGAN is given by
$$\arg\min_{\theta}-\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[g_{\psi}\left(f_{\theta}\left(x\right)\right)\right],$$
and the discriminator cost function by
$$\arg\min_{\psi}\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]-\mathbb{E}_{y\sim\mathcal{P}_{Y}}\left[g_{\psi}\left(y\right)\right].$$ (2)
Theorem 3.2 Suppose condition $(i)$ of Theorem 3.1 holds. Let $g_{\psi}:Y\mapsto\mathbb{R}$ be a differentiable discriminator. If $\left\|g-g^{*}\right\|\leq\delta$, where $g^{*}:=g_{\psi^{*}}$ denotes the optimal discriminator, then $\left\|-\nabla_{\theta}\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|\leq\delta M$.
Proof.
$$\begin{split}\left\|-\nabla_{\theta}\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|^{2}&\leq\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[\left\|\nabla_{\theta}g_{\psi}\left(f_{\theta}\left(x\right)\right)\right\|^{2}\right]\\ &\leq\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[\left\|\nabla_{\hat{y}}g_{\psi}\left(f_{\theta}\left(x\right)\right)\right\|^{2}\left\|\nabla_{\theta}f_{\theta}(x)\right\|^{2}\right],\text{ where }\hat{y}=f_{\theta}(x)\\ &\leq\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[\left\|\nabla_{\hat{y}}g_{\psi}\left(f_{\theta}\left(x\right)\right)\right\|^{2}\right]\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[\left\|\nabla_{\theta}f_{\theta}(x)\right\|^{2}\right]\\ &\leq\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[\left(\left\|\nabla_{\hat{y}}g_{\psi^{*}}\left(f_{\theta}\left(x\right)\right)\right\|+\delta\right)^{2}\right]\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\mathcal{J}_{\theta}\left(f_{\theta}\left(x\right)\right)\right\|^{2}\right]\\ &\leq\delta^{2}M^{2},~\left(\left\|\nabla_{\hat{y}}g_{\psi^{*}}\left(f_{\theta}\left(x\right)\right)\right\|=0,~\because\psi^{*}\text{ is optimal}\right)\end{split}$$
Taking the square root, we get $\left\|-\nabla_{\theta}\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|\leq\delta M$, which finishes the proof. $\square$
Theorem 3.2 indicates that the expected gradient of the purely adversarial generator is bounded by the product of the spectral norm of the Jacobian matrix and the convergence error of the discriminator. Put more succinctly, given a generator, the convergence error $\delta\rightarrow 0$ for a sufficiently trained discriminator. Thus, the adversarial discriminator does not produce erroneous gradients in the near optimal region, suggesting a well behaved empirical risk.
Augmented Objective: Unlike sole supervision, the mapping function $f_{\theta}(.)$ in the augmented objective has access to a feedback signal from the discriminator. The optimization carried out in supervised learning with adversarial features is given by
$$\arg\min_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)-g_{\psi}\left(f_{\theta}\left(x\right)\right)\right].$$
The discriminator cost function remains identical to the Wasserstein discriminator given by equation (2).
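As a purely illustrative sketch of this augmented objective (not the paper's implementation), the per-sample generator loss can be written as the supervised loss minus the critic score of the generated sample; the squared error loss, the linear critic, and all dimensions and values below are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)
d = 16
psi = rng.normal(size=d)  # assumed linear critic parameters: g_psi(y) = psi . y

def supervised_loss(y_hat, y):
    # l(f_theta(x); y): a squared error loss, assumed for illustration
    return 0.5 * np.sum((y_hat - y) ** 2)

def critic(y):
    # g_psi(y): a linear critic, assumed for illustration
    return psi @ y

def augmented_generator_loss(y_hat, y):
    # per-sample version of E[ l(f_theta(x); y) - g_psi(f_theta(x)) ]
    return supervised_loss(y_hat, y) - critic(y_hat)

y_true = rng.normal(size=d)   # hypothetical target sample
y_hat = rng.normal(size=d)    # hypothetical generator output f_theta(x)
print(augmented_generator_loss(y_hat, y_true))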
Theorem 3.3 Assume the conditions of Theorem 3.1 and Theorem 3.2 hold. Then the expected gradient of supervised learning augmented with an adversarial discriminator is bounded above by $(\lambda+\delta)M$. That is, $\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)-g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|\leq\left(\lambda+\delta\right)M$.
Proof.
$$\begin{split}\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)-g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|^{2}&\leq\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)\right]\right\|^{2}+\left\|-\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|^{2}\\ &\leq\lambda^{2}M^{2}+\delta^{2}M^{2}\\ &\leq M^{2}(\lambda^{2}+\delta^{2}+2\lambda\delta),~(\because\lambda\delta\geq 0)\\ &\leq M^{2}(\lambda+\delta)^{2}\end{split}$$
Therefore, we get $\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)-g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|\leq\left(\lambda+\delta\right)M$, which finishes the proof. $\square$
According to Theorem 3.3, the expected gradient in augmented adversarial learning does not vanish in the near optimal region, i.e., $\left\|\Delta\theta\right\|\rightarrow\delta M$ as $\lambda\rightarrow 0$. In addition, the mapping function is guided by useful gradients from the discriminator, which allows efficient parametric updates. Furthermore, the upper bound of Theorem 3.2 ensures that the mapping function $f_{\theta}(.)$ remains within a small neighbourhood of the optimal function approximator $f_{\theta^{*}}(.)$.
IV Supervised Learning with Adversarial Features Can Be Better than Sole Supervision
IV-A Expected Empirical Risks
Definition 4.1 We define the empirical risk in supervised learning with adversarial features and in sole supervision as follows:
$$\begin{split}\mathscr{R}_{aug}&:=\inf_{\theta_{N}}\left\{\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|l\left(f_{\theta_{N}}^{aug}(x);y\right)\right\|\right]\right\}\\ \mathscr{R}_{sup}&:=\inf_{\theta_{N}}\left\{\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|l\left(f_{\theta_{N}}^{sup}(x);y\right)\right\|\right]\right\}\end{split}$$
Note that the total number of iterations ($N$) remains unchanged in both approaches. To compare supervised and adversarial learning, it is required that both methods be initialized with the same set of parameters $\theta_{0}$.
Theorem 4.1 Suppose the conditions of Theorem 3.1 and Theorem 3.2 hold. For a fixed number of iterations, the expected empirical risk of supervised learning augmented with adversarial features can be better than that of purely supervised learning. That is, $\mathscr{R}_{aug}\leq\mathscr{R}_{sup}$.
Proof. Let both algorithms be initialized with $\theta_{0}$. From Theorem 3.2 and Theorem 3.3 we get $\left\|-\nabla_{\theta}\mathbb{E}_{x\sim\mathcal{P}_{X}}\left[g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|\leq\delta M$ and $\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)-g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]\right\|\leq\left(\lambda+\delta\right)M$. Since $f_{\theta}(x)$ is differentiable, each parametric update is undertaken in the following manner.
$$\begin{split}f_{\theta_{n}}(x)-f_{\theta_{n+1}}(x)&=\nabla_{\theta}f_{\theta}(x)\rvert_{\theta_{n}}(\theta_{n}-\theta_{n+1}),\quad n=0,1,\dots,N-1\\ \implies f_{\theta_{0}}(x)-f_{\theta_{N}}(x)&=\sum\limits_{i=0}^{N-1}\nabla_{\theta}f_{\theta}(x)\rvert_{\theta_{i}}(\theta_{i}-\theta_{i+1})\end{split}$$
Now,
$$\begin{split}\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert f_{\theta_{0}}(x)-f_{\theta_{N}}^{aug}(x)\rVert]&\leq\mathbb{E}_{(x,y)\sim\mathcal{P}}\Big[\sum\limits_{i=0}^{N-1}\lVert\nabla_{\theta}f_{\theta}(x)\rvert_{\theta_{i}}\rVert\,\lVert(\theta_{i}-\theta_{i+1})\rVert\Big]\\ &\leq M\sum\limits_{i=0}^{N-1}\lVert(\theta_{i}-\theta_{i+1})\rVert,~(\because\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[\left\|\mathcal{J}_{\theta}\left(f_{\theta}\left(x\right)\right)\right\|\right]\leq M)\\ &\leq M\sum\limits_{i=0}^{N-1}\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)-g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]|_{\theta_{i}}\right\|\\ &\leq MN\sup_{i\in[0,N-1]}\left\{\left\|\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{P}}\left[l\left(f_{\theta}(x);y\right)-g_{\psi}\left(f_{\theta}\left(x\right)\right)\right]|_{\theta_{i}}\right\|\right\}\\ &\leq M^{2}N(\lambda+\delta),~(\text{from Theorem 3.3})\end{split}$$
Similarly, for purely supervised learning we get $\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert f_{\theta_{0}}(x)-f_{\theta_{N}}^{sup}(x)\rVert]\leq M^{2}N\lambda$. By the locally $K$-Lipschitz continuity of the loss function,
$$\begin{split}\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{0}}(x);y\right)-l\left(f_{\theta_{N}}^{aug}(x);y\right)\rVert]&\leq K\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert f_{\theta_{0}}(x)-f_{\theta_{N}}^{aug}(x)\rVert]\\ &\leq KM^{2}N(\lambda+\delta).\end{split}$$
Similarly,
$$\begin{split}\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{0}}(x);y\right)-l\left(f_{\theta_{N}}^{sup}(x);y\right)\rVert]&\leq K\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert f_{\theta_{0}}(x)-f_{\theta_{N}}^{sup}(x)\rVert]\\ &\leq KM^{2}N\lambda\\ \implies\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{0}}(x);y\right)\rVert]-\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{N}}^{sup}(x);y\right)\rVert]&\leq KM^{2}N\lambda\\ \implies\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{N}}^{sup}(x);y\right)\rVert]&\geq\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{0}}(x);y\right)\rVert]-KM^{2}N\lambda\end{split}$$
For the augmented objective,
$$\begin{split}\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{N}}^{aug}(x);y\right)\rVert]&\geq\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{0}}(x);y\right)\rVert]-KM^{2}N(\lambda+\delta)\\ &\geq\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{0}}(x);y\right)\rVert]-KM^{2}N\lambda-KM^{2}N\delta\\ &\geq\inf_{\theta_{N}}\left\{\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{N}}^{sup}(x);y\right)\rVert]\right\}-KM^{2}N\delta\end{split}$$
Thus, the expected empirical risk of supervised learning with adversarial features becomes
$$\begin{split}\inf_{\theta_{N}}\left\{\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{N}}^{aug}(x);y\right)\rVert]\right\}&=\inf_{\theta_{N}}\left\{\mathbb{E}_{(x,y)\sim\mathcal{P}}[\lVert l\left(f_{\theta_{N}}^{sup}(x);y\right)\rVert]\right\}-KM^{2}N\delta\\ \implies\mathscr{R}_{aug}&=\mathscr{R}_{sup}-KM^{2}N\delta\\ \implies\mathscr{R}_{aug}&\leq\mathscr{R}_{sup},~(\because KM^{2}N\delta\geq 0).\end{split}$$
This finishes the proof. $\square$
IV-B Rate of Convergence
Definition 4.2 We define $N^{*}$ to be the minimum number of iterations required to achieve the optimal set of parameters, provided it exists.
Theorem 4.2 If the conditions of Theorem 3.1 and Theorem 3.2 are satisfied, then it is cheaper to achieve the optimal set of parameters with the augmented objective than with sole supervision. That is, $N_{aug}^{*}\leq N_{sup}^{*}$.
Proof. Let $N$ denote the number of iterations required to attain the optimal set of parameters $\theta^{*}$. The trainable parameters are updated according to the following rule:
$$\begin{split}&\theta_{n+1}=\theta_{n}-\Delta\theta_{n},~n=0,1,\dots,N-1\\ \implies&\theta^{*}=\theta_{0}-\sum_{i=0}^{N-1}\Delta\theta_{i}\\ \implies&\lVert\theta^{*}-\theta_{0}\rVert=\Big\lVert-\sum_{i=0}^{N-1}\Delta\theta_{i}\Big\rVert\leq\sum_{i=0}^{N-1}\lVert\Delta\theta_{i}\rVert\leq N\sup_{i\in[0,N-1]}\left\{\lVert\Delta\theta_{i}\rVert\right\}\\ \implies&N\geq\frac{\lVert\theta^{*}-\theta_{0}\rVert}{\sup_{i\in[0,N-1]}\left\{\lVert\Delta\theta_{i}\rVert\right\}}\end{split}$$
Therefore, the minimum number of iterations required to achieve the optimal empirical risk is given by
$$\begin{split}N_{aug}^{*}&=\frac{\lVert\theta^{*}-\theta_{0}\rVert}{\left(\lambda+\delta\right)M}~\text{(from Theorem 3.3)}\\ N_{sup}^{*}&=\frac{\lVert\theta^{*}-\theta_{0}\rVert}{\lambda M}~\text{(from Theorem 3.1)}\end{split}$$
Now, taking the ratio of the two,
$$\begin{split}&\frac{N_{aug}^{*}}{N_{sup}^{*}}=\frac{\lambda M}{\left(\lambda+\delta\right)M}\\ \implies&N_{aug}^{*}=N_{sup}^{*}\left(\frac{\lambda}{\lambda+\delta}\right)\\ \implies&N_{aug}^{*}\leq N_{sup}^{*},~(\because\delta\geq 0).\end{split}$$
This finishes the proof. $\square$
V Conclusion
In this study, we investigated the reason behind the slow convergence of purely supervised learning in the near optimal region. Further, our analysis showed how adversarial features contribute towards mitigating this convergence issue. Finally, we provided theoretical proofs to corroborate our main result, which shows that supervised learning with adversarial regularization can be better than purely supervised learning. In closing, we complemented our hypothesis by showing that the expected empirical risk and the rate of convergence are relatively better in regularized adversarial learning than in sole supervision.
References
[1] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708, 2017.
[2] V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
[3] Z. Wang, J.
Chen, and S. C. Hoi, “Deep learning for image super-resolution: A survey,” arXiv preprint arXiv:1902.06068, 2019.
[4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, pp. 2672–2680, 2014.
[5] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134, 2017.
[6] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE international conference on computer vision, pp. 2223–2232, 2017.
[7] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690, 2017.
[8] Y. Song, C. Ma, X. Wu, L. Gong, L. Bao, W. Zuo, C. Shen, R. W. Lau, and M.-H. Yang, “Vital: Visual tracking via adversarial learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8990–8999, 2018.
[9] Y. Xue, T. Xu, H. Zhang, L. R. Long, and X. Huang, “Segan: Adversarial network with multi-scale $L_{1}$ loss for medical image segmentation,” Neuroinformatics, vol. 16, no. 3-4, pp. 383–392, 2018.
[10] C. Yun, S. Sra, and A. Jadbabaie, “Are deep resnets provably better than linear predictors?,” arXiv preprint arXiv:1907.03922, 2019.
March 1997

Are Isocurvature Fluctuations of the M-theory Axion Observable?

M. Kawasaki$^{a,*}$ and T. Yanagida$^{b,\dagger}$
$^{a}$Institute for Cosmic Ray Research, University of Tokyo, Tokyo 188, Japan
$^{b}$Department of Physics, University of Tokyo, Tokyo 133, Japan
$^{*}$e-mail: kawasaki@icrr.u-tokyo.ac.jp
$^{\dagger}$e-mail: yanagida@kanquro.phys.s.u-tokyo.ac.jp

Banks and Dine have recently shown that M theory naturally accommodates the Peccei-Quinn axion. Since the decay constant $F_{a}$ of the axion is as large as $F_{a}\simeq 10^{15}-10^{16}$ GeV, the halo axion is hardly detectable in upcoming axion-search experiments. However, we show that isocurvature fluctuations of the M-theory axion produced at the inflationary epoch are most likely detectable in future satellite experiments on anisotropies of the cosmic microwave background radiation.

If the unification scale ($\sim 10^{16}$ GeV) of the three known gauge groups corresponds to the compactification scale of the extra six-dimensional space, the string theory is in a strong-coupling regime. It has been argued by Horava and Witten [1] that the strongly coupled heterotic string theory is M theory, whose low-energy limit is well described by eleven-dimensional supergravity. In this M theory the fundamental energy scale is the eleven-dimensional Planck mass $M_{11}$ ($\simeq 2\times 10^{16}$ GeV) rather than the four-dimensional one $M_{4}$ ($\simeq 2\times 10^{18}$ GeV) [2]. The four-dimensional Planck mass $M_{4}$ is only an effective parameter at low energies. This M-theory description of the strongly coupled heterotic string theory leads to various interesting new phenomenologies. Recently, it has been pointed out by Banks and Dine [3] that some of the string axions survive at low energies, since the world-sheet instanton effects are suppressed owing to the large compactification radius in string-tension units. Thus, they play the role of the Peccei-Quinn axion [4]. The decay constant $F_{a}$ of the M-theory axion is expected to be [3]
$$F_{a}\simeq 10^{15}-10^{16}\ \mathrm{GeV},$$ (1)
which contradicts the constraint $10^{10}\ \mathrm{GeV}\lesssim F_{a}\lesssim 10^{12}\ \mathrm{GeV}$ derived in standard cosmology [5]. However, this problem may be solved by late-time entropy production through decays of moduli (or Polonyi) fields [6, 7]. Since the late-time decays produce so many LSPs that the universe would be overclosed, the LSP must be unstable [8]. This implies a rather interesting case in which axions are the dark matter in the present universe. Unfortunately, the halo axions are hardly detectable in near-future axion-search experiments, since the decay constant $F_{a}$ is too large, as shown in eq. (1). In this short letter we point out that the inflationary universe produces isocurvature fluctuations of the M-theory axion which are most likely detectable in future satellite experiments on anisotropies of the cosmic microwave background radiation (CMBR). Let us first discuss the inflaton sector. Since the fundamental energy scale is $M_{11}\sim 2\times 10^{16}\ \mathrm{GeV}$ in M theory [2, 3], it is natural to consider that the inflaton takes a value of the order of $M_{11}$ at the beginning of the universe. Therefore, the most natural inflation model seems to be the hybrid inflation model [9].
In the hybrid inflation model the inflation occurs with a constant vacuum energy density which is quickly eaten by another field when the inflaton reaches a critical value. The first guess for the vacuum energy density may be $M_{11}^{4}$. However, it is too large to allow a sufficiently long inflationary epoch. This is because the initial value of the inflaton is about the same as the critical value at which the inflation ends. The next choice is $M_{11}^{6}/M_{4}^{2}\sim(2\times 10^{15}\mathrm{GeV})^{4}$ as argued in ref. [7]. This vacuum energy density is also suggested by a recent analysis [10] in a hybrid inflation model. (Footnote: Linde and Riotto [10] have recently proposed a hybrid inflation model based on supergravity. From COBE data they have derived the vacuum energy density $V\sim(2\times 10^{15}\mathrm{GeV})^{4}$.) Thus, we assume hereafter the vacuum energy density $$V\sim(2\times 10^{15}\mathrm{GeV})^{4}$$ (2) during the inflation, which leads to the Hubble constant $$H_{\mathrm{inf}}\simeq\frac{\sqrt{V}}{\sqrt{3}M_{4}}\sim 10^{12}\mathrm{GeV}.$$ (3) We are now in a position to estimate the isocurvature fluctuations of the M-theory axion. It is known [11] that in the de Sitter universe massless fields $\varphi$ have quantum fluctuations $\delta\varphi$ which are given by $$\delta\varphi\simeq\frac{H_{\mathrm{inf}}}{2\pi}.$$ (4) Since the axion does not have potential energy during inflation, its fluctuations do not give any contribution to those of the total energy density of the universe. Thus the axion fluctuations are of isocurvature type. The isocurvature fluctuations of the dark matter density (provided that the axion is the dark matter) are given by $$\left(\frac{\delta\rho_{a}}{\rho_{a}}\right)_{\mathrm{iso}}\simeq\frac{H_{\mathrm{inf}}}{\pi F_{a}}\simeq(3-0.3)\times 10^{-4}.$$ (5) Here we have used eqs.(1) and (3). On the other hand, the inflaton itself produces adiabatic fluctuations as usual. Thus we have a mixture model of adiabatic and isocurvature fluctuations of the dark matter (axion) density. The observation of the anisotropies of the CMBR by COBE [12] gives a constraint on the isocurvature fluctuations of $$\left(\frac{\delta\rho}{\rho}\right)_{\mathrm{iso}}\lesssim 2\times 10^{-5}.$$ (6) The prediction in eq.(5) already violates this constraint. However, we should not take eq.(5) at face value, since our estimation of $H_{\mathrm{inf}}$ and $F_{a}$ may contain various ambiguities. The above analysis nevertheless shows that the isocurvature fluctuations of the M-theory axion most likely give a non-negligible contribution to the CMBR. (Footnote: Interestingly, it has been pointed out [13] that if isocurvature fluctuations are comparably mixed with adiabatic ones, the matter power spectrum normalized by the COBE data gives a better fit to observations of the large-scale structure of the universe than in the case of pure adiabatic fluctuations. In the standard cold dark matter scenario with pure adiabatic fluctuations, the density fluctuations at the scales of galaxies and clusters are too large if the power spectrum $P(k)$ is normalized by the COBE data. However, since isocurvature fluctuations give a six times larger contribution to the CMBR anisotropies at COBE scales, the admixture of isocurvature fluctuations decreases the amplitude of the matter fluctuations at the scales of galaxies and clusters, which leads to a better fit to the observations.)
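The orders of magnitude in eqs.(3) and (5) can be reproduced directly from the quoted scales. The following short Python sketch (used here purely as a calculator; the input values are those of eqs.(1) and (2)) gives $H_{\mathrm{inf}}\simeq 10^{12}$GeV and an isocurvature amplitude between a few times $10^{-5}$ and a few times $10^{-4}$, consistent with the range in eq.(5):

import numpy as np

M4 = 2e18                 # four-dimensional Planck mass in GeV
V = (2e15) ** 4           # vacuum energy density during inflation, GeV^4 (eq. 2)

H_inf = np.sqrt(V) / (np.sqrt(3.0) * M4)          # eq. (3)
print(f"H_inf ~ {H_inf:.1e} GeV")                 # ~1.2e12 GeV

for F_a in (1e15, 1e16):                          # decay constant range, eq. (1)
    delta_iso = H_inf / (np.pi * F_a)             # eq. (5)
    print(f"F_a = {F_a:.0e} GeV -> (drho_a/rho_a)_iso ~ {delta_iso:.1e}")
# The result, ~4e-4 down to ~4e-5, is to be compared with the COBE bound
# of about 2e-5 in eq. (6).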
If $(\delta\rho_{a}/\rho_{a})_{\mathrm{iso}}\simeq(1-2)\times 10^{-5}$, anisotropies of the CMBR induced by isocurvature fluctuations can be distinguished from those produced by pure adiabatic fluctuations [13], because the shapes of the power spectra of the CMBR anisotropies are quite different at small angular scales. For example, there will be no large Doppler peak in the spectrum of the CMBR, as pointed out in ref. [13], while the pure adiabatic fluctuation models predict a significant peak at degree scales. Since direct searches for the M-theory axion seem impossible, future satellite experiments on CMBR anisotropies are crucial for testing the M-theory axion hypothesis. Acknowledgment One of the authors (T.Y.) thanks M. Hashimoto and K.-I. Izawa for useful discussions. References [1] P. Horava and E. Witten, Nucl. Phys. B460 (1996) 506, hep-th/9510209; Phys. Rev. D54 (1996) 7561, hep-th/9603142. [2] E. Witten, Nucl. Phys. B471 (1996) 135, hep-th/9602070. [3] T. Banks and M. Dine, hep-th/9605136. [4] R.D. Peccei and H.R. Quinn, Phys. Rev. Lett. 38 (1977) 1440. [5] E.W. Kolb and M.S. Turner, The Early Universe, Addison-Wesley, (1990). [6] M. Kawasaki, T. Moroi and T. Yanagida, Phys. Lett. B383 (1996) 313. [7] T. Banks and M. Dine, hep-th/9608197. [8] M. Kawasaki, T. Moroi and T. Yanagida, Phys. Lett. B370 (1996) 52. [9] A. Linde, Phys. Lett. B259 (1991) 38. [10] A. Linde and A. Riotto, hep-ph/9703209. [11] A. Linde, Particle Physics and Inflationary Universe (1990) Harwood Academic Publisher. [12] G.F. Smoot et al., Astrophys. J. 396 (1992) L1. [13] M. Kawasaki, N. Sugiyama and T. Yanagida, Phys. Rev. D 54 (1996) 2442.
Density waves in the shearing sheet III. Disc heating B. Fuchs Astronomisches Rechen–Institut, Mönchhofstr. 12–14, 69120 Heidelberg, Germany (Accepted Received ; in original form 2000 May) Abstract The problem of dynamical heating of galactic discs by spiral density waves is discussed using the shearing sheet model. The secular evolution of the disc is described quantitatively by a diffusion equation for the distribution function of stars in the space spanned by integrals of motion of the stars, in particular the radial action integral and an integral related to the angular momentum. Specifically, disc heating by a succession of transient, ‘swing amplified’ density waves is studied. It is shown that such density waves lead predominantly to diffusion of stars in radial action space. The stochastical changes of angular momenta of the stars and the corresponding stochastic changes of the guiding centre radii of the stellar orbits induced by this process are much smaller. keywords: galaxies: kinematics and dynamics ††pagerange: Density waves in the shearing sheet III. Disc heating–References††pubyear: 2001 1 Introduction The shearing sheet (Goldreich & Lynden–Bell 1965, Julian & Toomre 1966) model has been developed as a tool to study the dynamics of galactic discs and is particularly well suited to describe theoretically the dynamical mechanisms responsible for the formation of spiral arms. For the sake of simplicity the model describes only the dynamics of a patch of a galactic disc. It is assumed to be infinitesimally thin and its radial size is assumed to be much smaller than the disc. Polar coordinates can be therefore rectified to pseudo-cartesian coordinates and the velocity field of the differential rotation of the disc can be approximated by a linear shear flow. These simplifications allow an analytical treatment of the problem, which helps to clarify the underlying physical processes operating in the disc. In two previous papers (Fuchs 2001a, b, referred hereafter to as papers I and II) I considered the stellardynamical model of a shearing sheet and discussed the unbounded sheet and then the dynamical consequences, when inner boundary conditions are applied. The aim was to give a consistent theoretical description of pure swing amplification (Toomre 1981) as well as exponentially growing modes in the framework of the same model. It is well known from numerous studies that, if a star disc is perturbed by a succession of spiral density waves, the stars are scattered randomly by the spiral arms and their velocity dispersion grows steadily so that the background states of the discs are evolving (Julian 1967, Carlberg & Sellwood 1985, Binney & Lacey 1988, Jenkins & Binney 1990). Dynamical disc heating has also been demonstrated in numerical simulations of the dynamical evolution of star discs such as by Sellwood & Carlberg (1984) or Toomre (1990). In the present paper I discuss disc heating of the shearing sheet. I follow in particular the theory of diffusion of stars in the two–dimensional action integral space due to wave – star scattering developed by Dekker (1976). In section (2) I briefly describe the formal derivation of the diffusion equation in the framework of quasi–linear theory (Hall & Sturrock 1967) and in section (3) I calculate diffusion coefficients for scattering of stars by swing amplified density waves. 
This kind of spiral density wave has been shown to be relevant for disc heating by Toomre (1990), whereas mode–like density waves, on the other hand, do not heat effectively (Barbanis & Woltjer 1967, Lynden–Bell & Kalnajs 1972). 2 Derivation of the diffusion equation When describing the disc heating effects of a succession of transient spiral density waves, one has to distinguish the long time scales, on which the overall distribution function of stars in phase space is evolving, i.e. the dynamical heating time scale, from the shorter time scales, on which the individual density waves develop. As is well known from plasma physics (Hall & Sturrock 1967) this concept allows one to derive from the Boltzmann equation, in quasi–linear approximation, a Fokker–Planck equation for the long term evolution of the distribution function. I follow here in particular the adaptation of the formalism to the dynamics of stellar discs by Dekker (1976). The phase space distribution function can be expressed with the aid of the action and angle variables, $J_{\rm 1},J_{\rm 2}$ and $w_{\rm 1},w_{\rm 2}$, respectively, $$f(J_{\rm 1},J_{\rm 2},w_{\rm 1},w_{\rm 2};t)\,.$$ (1) The evolution of the distribution function with time is determined by the collisionless Boltzmann equation, $$\frac{\partial f}{\partial t}+\left[f,H\right]=0\,,$$ (2) where $H$ is the Hamiltonian of the stellar orbits, and the square bracket denotes the usual Poisson bracket. Following Dekker (1976) I define a suitable mean of the distribution function by averaging it over the angle variables, $$\langle f\rangle=\frac{1}{4\pi^{2}}\int^{2\pi}_{0}dw_{\rm 1}\int^{2\pi}_{0}dw_{\rm 2}f(J_{\rm 1},J_{\rm 2},w_{\rm 1},w_{\rm 2};t)\,.$$ (3) This averaged distribution function will evolve slowly on the long time scale. For the rapidly fluctuating part of the distribution function $\delta f=f-\langle f\rangle$ one obtains from the general Boltzmann equation (2) $$\frac{\partial\delta f}{\partial t}+\frac{\partial\langle f\rangle}{\partial t}+\left[\langle f\rangle,H_{\rm 0}\right]+\left[\delta f,H_{\rm 0}\right]+\left[\langle f\rangle,\delta\Phi\right]+\left[\delta f,\delta\Phi\right]=0\,,$$ (4) where the Hamiltonian has been split up as $H=H_{\rm 0}+\delta\Phi$, with $\delta\Phi$ denoting the fluctuations of the gravitational potential. The first Poisson bracket in equation (4) vanishes, because $\langle f\rangle$ depends only on the action integrals. The time derivative of the averaged distribution function is expected to be much smaller than that of the fluctuating part. Thus equation (4) can be cast into a Boltzmann equation for $\delta f$, and, if the quadratic term $[\delta f,\delta\Phi]$ is neglected, into a linearized Boltzmann equation, $$\frac{\partial\delta f}{\partial t}+\left[\delta f,H_{\rm 0}\right]=-\left[\langle f\rangle,\delta\Phi\right]\,,$$ (5) which has been widely used to study the dynamics of galactic discs. $\langle f\rangle$ has taken the role of the axisymmetric, stationary background distribution.
Its slow evolution with time is neglected on short time scales, but taking the average of the Boltzmann equation (2) leads to $$\frac{\partial\langle f\rangle}{\partial t}+\langle\left[\delta f,H_{\rm 0}% \right]\rangle+\langle\left[\langle\delta f\rangle,\delta\Phi\right]\rangle+% \langle\left[\delta f,\delta\Phi\right]\rangle=0\,.$$ (6) The second and third terms of equation (6) vanish because $\langle\delta f\rangle=\langle\delta\Phi\rangle=0$ by definition, so that equation (6) takes the form $$\frac{\partial\langle f\rangle}{\partial t}+\langle\left[\delta f,\delta\Phi% \right]\rangle=0\,,$$ (7) which descibes the long–term evolution of the averaged distribution function in quasi–linear approximation. Equation (7) is also valid, if instead of action and angle variables other variables are used, as long as $J_{1}$, $J_{2}$ are integrals of motion and $w_{1}$, $w_{2}$ their canonical conjugates. The potential perturbation $\delta\Phi$ can be Fourier transformed with respect to the angle variables, $$\delta\Phi=\int dl_{\rm 1}\int dl_{\rm 2}\delta\Phi_{\rm{\bf l}}({\bf J};t)e^{% i\left(l_{\rm 1}w_{\rm 1}+l_{\rm 2}w_{\rm 2}\right)}\,,$$ (8) and similarly $$\delta f=\int dl_{\rm 1}\int dl_{\rm 2}\delta f_{\rm{\bf l}}({\bf J};t)e^{i% \left(l_{\rm 1}w_{\rm 1}+l_{\rm 2}w_{\rm 2}\right)}\,.$$ (9) On the other hand, the lhs of equation (5) represents the total derivative of $\delta f$ along stellar orbits in the axisymmetric part of the gravitational potential. Thus $$\displaystyle\delta f$$ $$\displaystyle=$$ $$\displaystyle-\int^{t}_{t_{\rm 0}}dt^{\prime}\left[\langle f\rangle({\bf J}_{% \rm t^{\prime}};t^{\prime}),\delta\Phi({\bf J}_{\rm t^{\prime}},{\bf w}_{\rm t% ^{\prime}};t^{\prime})\right]$$ (10) $$\displaystyle=$$ $$\displaystyle\int^{t}_{t_{\rm 0}}dt^{\prime}\sum_{\rm n}\frac{\partial\langle f% \rangle}{\partial J_{\rm n}}\Big{|}_{t^{\prime}}\int dl_{\rm 1}\int dl_{\rm 2}% \delta\Phi_{\rm{\bf l}}({\bf J}_{\rm t^{\prime}};t^{\prime})$$ $$\displaystyle\cdot il_{\rm n}e^{i\left(l_{\rm 1}w_{\rm 1,t^{\prime}}+l_{\rm 2}% w_{\rm 2,t^{\prime}}\right)}\,,$$ where the integration is to be taken along ‘unperturbed’ orbits. The indices of the action and angle variables indicate that the variables, which are the independent variables of the distribution function and the gravitational potential, respectively, must be chosen according the ‘unperturbed’ orbit starting at ${\bf J}_{\rm t_{0}}$, ${\bf w}_{\rm t_{0}}$ and terminating at ${\bf J}_{\rm t}$, ${\bf w}_{\rm t}$. In the next section I will apply equation (10) to a succession of uncorrelated swing amplification events of short duration. 
The typical integration interval $t-t_{\rm 0}$ will be then much smaller than the time scale, on which the averaged distribution function $\langle f\rangle$ is evolving and equation (10) can be simplified to $$\displaystyle\delta f$$ $$\displaystyle=$$ $$\displaystyle\sum_{\rm n}\frac{\partial\langle f\rangle}{\partial J_{\rm n}}% \Big{|}_{t}\int^{t}_{t_{\rm 0}}dt^{\prime}\int dl_{\rm 1}\int dl_{\rm 2}\delta% \Phi_{\rm{\bf l}}({\bf J}_{\rm t},t^{\prime})$$ (11) $$\displaystyle\cdot il_{\rm n}e^{i\left(l_{\rm 1}w_{\rm 1,t^{\prime}}+l_{\rm 2}% w_{\rm 2,t^{\prime}}\right)}\,.$$ Comparison of equations (9) and (11) shows that $$\displaystyle\delta f_{\rm{\bf l}}$$ $$\displaystyle=$$ $$\displaystyle\sum_{\rm n}\frac{\partial\langle f\rangle}{\partial J_{\rm n}}il% _{\rm n}\int^{t}_{t_{\rm 0}}dt^{\prime}\delta\Phi_{\rm{\bf l}}({\bf J}_{\rm t}% ,t^{\prime})$$ (12) $$\displaystyle\cdot e^{i\left(l_{\rm 1}(w_{\rm 1,t^{\prime}}-w_{\rm 1,t})+l_{% \rm 2}(w_{\rm 2,t^{\prime}}-w_{\rm 2,t})\right)}\,.$$ Upon inserting expressions (8) and (9) into equation (7) it is straightforward to evaluate the Poisson bracket and carry out the averaging with respect to the angle variables. After some algebra one obtains $$\frac{\partial\langle f\rangle}{\partial t}=-i\int dl_{\rm 1}\int dl_{\rm 2}% \sum_{\rm n}l_{\rm n}\frac{\partial}{\partial J_{\rm n}}\langle\delta f_{\rm{% \bf l}}\delta\Phi^{*}_{\rm{\bf l}}\rangle\,,$$ (13) where use of the fact has been made that $\delta\Phi_{\rm{\bf-l}}=\delta\Phi^{*}_{\rm{\bf l}}$ so that $\delta\Phi$ is real. Since the potential perturbations are supposed to be a succession of short lived, uncorrelated fluctuations, it is customary to include into the averaging process an ensemble average over these fluctuations. Inserting finally equation (12) into (13) leads to a diffusion equation in action integral space, $$\frac{\partial\langle f\rangle}{\partial t}=\frac{1}{2}\sum_{\rm m,n}\frac{% \partial}{\partial J_{\rm m}}D_{\rm mn}\frac{\partial\langle f\rangle}{% \partial J_{\rm n}}\,,$$ (14) with diffusion coefficients $$\displaystyle D_{\rm mn}$$ $$\displaystyle=$$ $$\displaystyle 2\int dl_{\rm 1}\int dl_{\rm 2}l_{\rm m}l_{\rm n}\int^{t}_{t_{% \rm 0}}dt^{\prime}\langle\Phi^{*}_{\rm{\bf l}}(t)\Phi_{\rm{\bf l}}(t^{\prime})\rangle$$ (15) $$\displaystyle\cdot e^{i\left(l_{\rm 1}(w_{\rm 1,t^{\prime}}-w_{\rm 1,t})+l_{% \rm 2}(w_{\rm 2,t^{\prime}}-w_{\rm 2,t})\right)}\,.$$ Equations (14) and (15) are valid for any gravitational perturbations of the stellar disc with moderate amplitudes and short correlation time scales. Very similar relations have been derived by Binney & Lacey (1988) in a different way under more general assumptions. However in Dekker’s (1976) approach the duality of equations (5) and (7) is particularly instructive. 3 Disc heating by swing amplified spiral density waves I apply the formalism of the previous section to calculate the diffusion coefficients for wave – star scattering by shearing, swing amplified spiral density waves. The dynamics of the density waves are modelled by a shearing sheet made of stars. This describes a patch of a thin galactic disc. Its centre orbits the galactic centre at galactocentric radius $r_{\rm 0}$ with an angular velocity $\Omega_{\rm 0}$. Pseudo–cartesian coordinates are defined with respect to the centre of the patch, $$x=r-r_{\rm 0},\,y=r_{\rm 0}(\theta-\Omega_{\rm 0}t)\,,$$ (16) where $r$ and $\theta$ denote galactic polar coordinates, respectively. 
As explained in paper (I) the equations of motion of the stars are derived from the Hamiltonian $$H_{\rm 0}=\frac{1}{2}\dot{r}^{2}+\frac{1}{2}r^{2}_{\rm 0}(\dot{\theta}-\Omega_% {\rm 0})^{2}-2A\Omega_{\rm 0}(r-r_{\rm 0})^{2}\,,$$ (17) or alternatively $$H_{\rm 0}=\kappa J_{\rm 1}+\frac{A}{2B}J^{2}_{\rm 2}-\frac{1}{2}\Omega_{\rm 0}% ^{2}r_{\rm 0}^{2}\,,$$ (18) where $A$ and $B$ denote Oort’s constants. The resulting orbits are simple epicyclic motions, which can be written as $$x=\frac{J_{\rm 2}}{-2B}+\sqrt{\frac{2J_{\rm 1}}{\kappa}}\sin{w_{\rm 1}},\,y=w_% {\rm 2}-\frac{\sqrt{2\kappa J_{\rm 1}}}{2B}\cos{w_{\rm 1}}\,,$$ (19) with $\kappa=\sqrt{-4\Omega_{\rm 0}B}$ the epicyclic frequency. $J_{\rm 1}$ is the radial action integral of an orbit. $J_{\rm 2}$ denotes the integral $J_{2}=\dot{y}+2\Omega_{0}x$ of an epicyclic orbit and is related to the angular momentum of a star as $J_{2}=(r^{2}\dot{\theta}-r_{0}^{2}\Omega_{0})/r_{0}$. As can be seen from equation (19) the guiding centre radius of the orbit is given by $$x_{\rm g}=\frac{J_{2}}{-2B}\,.$$ (20) $w_{\rm 1}=\kappa t$ and $w_{\rm 2}=\frac{A}{B}J_{\rm 2}t$ are variables canonical conjugate to $J_{1}$ and $J_{2}$, respectively. The radial and circumferential velocities are given by $$u=\sqrt{2\kappa J_{\rm 1}}\cos{w_{\rm 1}},\,v=\frac{2B}{\kappa}\sqrt{2\kappa J% _{\rm 1}}\sin{w_{\rm 1}}\,,$$ (21) where $v$ is defined relative to mean shearing velocity $\dot{y}=-2Ax$. The dynamics of a shearing sheet made of stars has been studied extensively by Julian & Toomre (1966) (cf. also Toomre 1981) and is discussed at length in paper (I) using strictly Eulerian coordinates. The principal result is that the wave crests of the density waves, which appear in the disc, swing around following the mean shearing motion of the stars. While the waves swing around their amplitudes are amplified transitorily and then die away. In paper (II) it is shown that also mode – like, quasi–stationary solutions can be constructed for the shearing sheet by introducing an inner reflecting boundary in the disc. Such kind of density waves is of no concern in the present context, however, since they hardly heat the disc at all. The perturbations of the gravitational potential of the shearing sheet are customarily Fourier analyzed as $$\displaystyle\delta\Phi=\int dk_{\rm x}\int dk_{\rm y}\delta\Phi_{\rm{\bf k}}e% ^{i\left[k_{\rm x}x+k_{\rm y}y\right]}$$ $$\displaystyle=\int dk_{\rm x}\int dk_{\rm y}\delta\Phi_{\rm{\bf k}}{\rm exp}i% \Big{[}k_{\rm x}\frac{J_{\rm 2}}{-2B}+k_{\rm x}\sqrt{\frac{2J_{\rm 1}}{\kappa}% }\sin{w_{\rm 1}}$$ $$\displaystyle+k_{\rm y}w_{\rm 2}-k_{\rm y}\frac{\sqrt{2\kappa J_{\rm 1}}}{2B}% \cos{w_{\rm 1}}\Big{]}\,.$$ (22) The functional dependence on the variable $w_{2}$ is of the form as in equation (8) and I use in the following the wave number $l_{2}$ instead of $k_{\rm y}$. The dependence on the angle variable $w_{1}$ can be easily adapted to the form of equation (8) by taking an inverse Fourier transform of equation (22) with respect to $w_{1}$. The resulting Fourier coefficients are inserted then into equation (15). There is, however, a difference of the meaning of the variable $w_{2}$ in this section from the meaning of the corresponding variable in the previous section. $w_{2}$ measures now the drift of the guiding centre of the orbit in the $y$–direction and is thus not an angle variable. 
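As a concrete illustration of the epicyclic orbits (19) and velocities (21), the short Python sketch below (with arbitrary test values of $\Omega_{\rm 0}$, $J_{\rm 1}$ and $J_{\rm 2}$, and the flat-rotation-curve choice $A=-B=\frac{1}{2}\Omega_{\rm 0}$ used later in the text) evaluates one orbit and verifies numerically that $J_{\rm 2}=\dot{y}+2\Omega_{0}x$ is conserved along it:

import numpy as np

Omega0 = 1.0
A, B = 0.5 * Omega0, -0.5 * Omega0          # Oort constants for a flat rotation curve
kappa = np.sqrt(-4.0 * Omega0 * B)          # epicyclic frequency

J1, J2 = 0.3, 0.8                           # radial action and angular-momentum integral
t = np.linspace(0.0, 20.0, 2001)
w1 = kappa * t                              # angle variable conjugate to J1
w2 = (A / B) * J2 * t                       # guiding-centre drift, conjugate to J2

# Orbit, eq. (19); the guiding-centre radius is x_g = J2/(-2B), eq. (20)
x = J2 / (-2.0 * B) + np.sqrt(2.0 * J1 / kappa) * np.sin(w1)
y = w2 - np.sqrt(2.0 * kappa * J1) / (2.0 * B) * np.cos(w1)

# Velocities, eq. (21); v is measured relative to the shear flow ydot = -2*A*x
u = np.sqrt(2.0 * kappa * J1) * np.cos(w1)
v = (2.0 * B / kappa) * np.sqrt(2.0 * kappa * J1) * np.sin(w1)

ydot = v - 2.0 * A * x                              # total circumferential velocity
assert np.allclose(ydot + 2.0 * Omega0 * x, J2)     # J2 = ydot + 2*Omega0*x is conserved
assert np.allclose(np.gradient(y, t), ydot, atol=0.05)   # consistency with eq. (19)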
Averaging the distribution function with respect to $w_{2}$ is impractical and I consider in the following the evolution of the distribution function in ($J_{1},J_{2}$)–space, i.e. the distribution function integrated over $w_{1}$ and $w_{2}$, respectively, which is conceptually slightly different from the average (3). This leads to expressions for the diffusion coefficients of the form $$\displaystyle D_{\rm mn}=2\int^{2\pi}_{0}dw_{1}\int^{2\pi}_{0}dw^{\prime}_{1}% \int^{\infty}_{-\infty}dk_{\rm x}\int^{\infty}_{-\infty}dk^{\prime}_{\rm x}% \int^{\infty}_{-\infty}dl_{\rm 1}$$ $$\displaystyle\int^{\infty}_{-\infty}dl_{\rm 2}\int^{t}_{t_{\rm 0}}dt^{\prime}e% ^{i\big{[}-l_{\rm 1}w_{\rm 1}+k_{\rm x}\sqrt{\frac{2J_{\rm 1}}{\kappa}}\sin{w_% {\rm 1}}-l_{\rm 2}\frac{\sqrt{2\kappa J_{\rm 1}}}{2B}\cos{w_{\rm 1}}\big{]}}$$ $$\displaystyle\cdot l_{\rm m}l_{\rm n}\langle\delta\Phi_{\rm k_{\rm x},l_{\rm 2% }}(t^{\prime})\delta\Phi^{*}_{\rm k^{\prime}_{\rm x},l_{\rm 2}}(t)\rangle$$ (23) $$\displaystyle\cdot e^{-i\big{[}l_{\rm 1}\kappa+l_{\rm 2}\frac{A}{B}J_{\rm 2}% \big{]}(t-t^{\prime})}e^{i\left(k_{\rm x}-k^{\prime}_{\rm x}\right)\frac{J_{% \rm 2}}{-2B}}$$ $$\displaystyle\cdot e^{-i\big{[}-l_{\rm 1}w^{\prime}_{\rm 1}+k^{\prime}_{\rm x}% \sqrt{\frac{2J_{\rm 1}}{\kappa}}\sin{w^{\prime}_{\rm 1}}-l_{\rm 2}\frac{\sqrt{% 2\kappa J_{\rm 1}}}{2B}\cos{w^{\prime}_{\rm 1}}\big{]}}\,.$$ Note that the diffusion tensor is symmetric by construction. In paper (I) it is illustrated how the shearing sheet responds to internal and external perturbations by developing density waves with shearing crests and amplitudes amplified while the waves swing around. Fig. 1 shows the distribution of amplitudes of the potential perturbations as function of wave numbers $k_{\rm x},k_{\rm y}$, respectively, calculated using the formulae of paper (I) (section 11.4) for the case when the swing amplification mechanism is fed by white noise. Toomre (1990) discusses the source of the noise and argues convincingly that swing amplified white noise explains exactly the behaviour of the star discs in his numerical simulations or that by Sellwood & Carlberg (1984). The distribution of amplitudes, which represents the superposition of many short lived shearing density waves, is quasi–stationary and can be modelled for positive wave numbers $k_{\rm y}$ empirically as $$\delta\Phi_{\rm{\bf k}}=\tilde{\Phi}e^{-\frac{(k_{\rm x}-k_{\rm x0})^{2}}{2% \sigma^{2}_{\rm k_{\rm x}}}-\frac{(k_{\rm y}-k_{\rm y0})^{2}}{2\sigma^{2}_{\rm k% _{\rm y}}}}\,,$$ (24) and continued at negative wave numbers $k_{\rm y}$ as $\delta\Phi_{\rm{\bf-k}}=\delta\Phi_{\rm{\bf k}}$. The parameters are estimated as $k_{\rm x0}$ = 1.5 $k_{\rm crit}$, $k_{\rm y0}$ = 0.5 $k_{\rm crit}$, $\sigma_{\rm k_{\rm x}}$ = 0.7 $k_{\rm crit}$, and $\sigma_{\rm k_{\rm y}}$ = 0.1 $k_{\rm crit}$ for the case $A=-B=\frac{1}{2}\Omega_{\rm 0}$. The critical wave number is defined as $k_{\rm crit}=\kappa^{2}/(2\pi G\Sigma_{\rm d})$ with $G$ the constant of gravity and $\Sigma_{\rm d}$ the surface density of the disc, respectively. Using this parametric model the quadratures in equation (23) can be carried out explicitely. Each spike in Fig. 
1 represents the superposition of swing amplified density waves all travelling at a constant wave number $k_{\rm y}$ along the $k_{\rm x}$ abscissae at a speed of $\dot{k}_{\rm x,eff}=2Ak_{\rm y}$ with an amplitude which can be modelled as $$\displaystyle\tilde{\Phi}_{\rm k_{\rm x}^{\rm in},l_{2}}(t^{\rm in})e^{-\frac{(k_{\rm x}^{\rm in}+2Al_{2}(t^{\prime}-t^{\rm in})-k_{\rm x_{0}})^{2}}{2\sigma_{\rm k_{\rm x}}^{2}}-\frac{(l_{2}-k_{\rm y0})^{2}}{2\sigma_{\rm k_{\rm y}}^{2}}}$$ $$\displaystyle\cdot\delta(k_{\rm x}^{\rm in}+2Al_{2}(t^{\prime}-t^{\rm in})-k_{\rm x})\,.$$ (25) In equation (25) $k_{\rm x}^{\rm in}$ denotes the original radial wave numbers of the waves and $t^{\rm in}$ the time when the waves were launched. $\tilde{\Phi}_{\rm k_{\rm x}^{\rm in},l_{2}}(t^{\rm in})$ parameterizes the excitation rate of the waves and the peak amplification factor of the swing amplifier (cf. paper I). Since white noise is a stationary random process, $\tilde{\Phi}_{\rm k_{\rm x}^{\rm in},l_{2}}(t^{\rm in})$ is evenly distributed over wave number space and $t^{\rm in}$. Each of the density waves is only correlated with itself. If the autocorrelation function in equation (23) is considered, this implies restrictions on the wave numbers and the time intervals, $$\displaystyle k_{\rm x}^{\rm in}+2Al_{2}(t^{\prime}-t^{\rm in})-k_{\rm x}=0,\quad{\rm and}$$ $$\displaystyle k_{\rm x}^{\rm in}+2Al_{2}(t-t^{\rm in})-k^{\prime}_{\rm x}=0\quad{\rm so}\,{\rm that}$$ $$\displaystyle k^{\prime}_{\rm x}-k_{\rm x}-2Al_{2}(t-t^{\prime})=0\,.$$ (26) This constraint and equation (25) lead to an autocorrelation function of the form $$\displaystyle\langle\delta\Phi_{\rm k_{\rm x},l_{2}}(t^{\prime})\delta\Phi_{\rm k^{\prime}_{\rm x},l_{2}}(t)\rangle$$ $$\displaystyle={|\Phi_{0}|}^{2}\delta(k^{\prime}_{\rm x}-k_{\rm x}-2Al_{2}(t-t^{\prime}))$$ $$\displaystyle\cdot e^{-\frac{(k^{\prime}_{\rm x}-k_{\rm x_{0}})^{2}}{2\sigma^{2}_{\rm k_{\rm x}}}}e^{-\frac{(k_{\rm x}-k_{\rm x_{0}})^{2}}{2\sigma^{2}_{\rm k_{\rm x}}}}e^{-\frac{(l_{\rm 2}-k_{\rm y_{0}})^{2}}{\sigma^{2}_{\rm k_{\rm y}}}}\,,$$ (27) where $|\Phi_{0}|^{2}$ is a normalization constant.
The time integral of the autocorrelation function according to equation (23) is given by ($l_{2}>0$) $$\displaystyle\int^{t}_{t_{0}}dt^{\prime}\delta(k^{\prime}_{\rm x}-k_{\rm x}-2% Al_{2}(t-t^{\prime}))e^{i[l_{1}\kappa+l_{2}\frac{A}{B}J_{2}](t^{\prime}-t)}$$ $$\displaystyle=\frac{1}{2Al_{2}}e^{-i[l_{1}\kappa+l_{2}\frac{A}{B}J_{2}]\frac{k% ^{\prime}_{\rm x}-k_{\rm x}}{2Al_{2}}}\,.$$ (28) I consider next in equation (23) the integration with respect to wave number $k_{\rm x}$, $$\displaystyle\int_{-\infty}^{\infty}dk_{\rm x}e^{-\frac{(k_{\rm x}-k_{\rm x_{0% }})^{2}}{2\sigma_{\rm k_{\rm x}}^{2}}}e^{ik_{\rm x}\left[\frac{J_{2}}{-2B}+% \sqrt{\frac{2J_{1}}{\kappa}}\sin{w_{1}}\right]}$$ $$\displaystyle\cdot e^{i\left[l_{1}\kappa+l_{2}\frac{A}{B}J_{2}\right]\frac{k_{% \rm x}-k^{\prime}_{\rm x}}{2Al_{2}}}$$ $$\displaystyle=\sqrt{2\pi}\sigma_{\rm k_{\rm x}}e^{-\frac{\sigma_{\rm k_{\rm x}% }^{2}}{2}\left[\sqrt{\frac{2J_{1}}{\kappa}}\sin{w_{1}}+\frac{l_{1}\kappa}{2Al_% {2}}\right]^{2}}$$ $$\displaystyle\cdot e^{ik_{\rm x_{0}}\left[\sqrt{\frac{2J_{1}}{\kappa}}\sin{w_{% 1}}+\frac{l_{1}\kappa}{2Al_{2}}\right]-i\left[l_{1}\kappa+l_{2}\frac{A}{B}J_{2% }\right]\frac{k^{\prime}_{\rm x}}{2Al_{2}}}\,,$$ (29) and similarly $$\displaystyle\int_{-\infty}^{\infty}dk^{\prime}_{\rm x}e^{-\frac{(k^{\prime}_{% \rm x}-k_{\rm x_{0}})^{2}}{2\sigma_{\rm k_{\rm x}}^{2}}}e^{-ik^{\prime}_{\rm x% }\left[\frac{J_{2}}{-2B}+\sqrt{\frac{2J_{1}}{\kappa}}\sin{w^{\prime}_{1}}% \right]}$$ $$\displaystyle\cdot e^{-i\left[l_{1}\kappa+l_{2}\frac{A}{B}J_{2}\right]\frac{k^% {\prime}_{\rm x}}{2Al_{2}}}$$ $$\displaystyle=\sqrt{2\pi}\sigma_{\rm k_{\rm x}}e^{-\frac{\sigma_{\rm k_{\rm x}% }^{2}}{2}\left[\sqrt{\frac{2J_{1}}{\kappa}}\sin{w^{\prime}_{1}}+\frac{l_{1}% \kappa}{2Al_{2}}\right]^{2}}$$ $$\displaystyle\cdot e^{-ik_{\rm x_{0}}\left[\sqrt{\frac{2J_{1}}{\kappa}}\sin{w^% {\prime}_{1}}+\frac{l_{1}\kappa}{2Al_{2}}\right]}\,.$$ (30) In equations (29) and (30) terms of the kind $$\sigma_{\rm k_{\rm x}}^{2}\left(\frac{2J_{\rm 1}}{\kappa}\right)$$ (31) are neglected as quadratically small. This is justified, because the majority of the stars have epicycle sizes smaller than the critical wave length $\lambda_{\rm crit}$, the typical spacing between spiral arms (Julian & Toomre 1966, cf. also paper I). The epicycle size is determined by $\sqrt{2J_{\rm 1}/\kappa}$, whereas $\sigma_{\rm k_{\rm x}}\propto k_{\rm crit}=2\pi/\lambda_{\rm crit}$ (cf. Fig. 1). Similarly terms of the kind $$\sigma_{\rm k_{\rm x}}^{2}\left(\frac{\kappa}{2Al_{2}}\right)^{2}$$ (32) will be neglected, because $$\frac{\sigma_{\rm k_{\rm x}}\kappa}{2Al_{2}}\propto\frac{\sigma_{\rm k_{\rm x}% }}{\dot{k}_{\rm x,eff}}\frac{1}{T_{\rm orb}}\propto\frac{T_{\rm acc}}{T_{\rm orb% }}\ll 1\,,$$ (33) where $T_{\rm acc}$ denotes the duration of disc heating by a single spike of the potential fluctuations, which is only effective close to the peak in Fig. 1, whereas $T_{\rm orb}$ is the orbital period of the stars. In this aspect the disc heating mechanism described here is rather impulsive. Next I consider the integration in equation (23) with respect to $l_{1}$, $$\int_{-\infty}^{\infty}dl_{1}\left\{\begin{array}[]{c}l_{1}^{2}\\ l_{1}\\ 1\end{array}\right\}e^{-il_{1}(w_{1}-w^{\prime}_{1})}\,,$$ (34) where the upper row refers to $D_{11}$, the middle row to $D_{12}=D_{21}$, and the lower row to $D_{22}$, respectively. 
The results are given by delta functions and derivatives thereof, $$2\pi\left\{\begin{array}[]{c}-\delta^{\prime\prime}(w_{1}-w^{\prime}_{1})\\ i\delta^{\prime}(w_{1}-w^{\prime}_{1})\\ \delta(w_{1}-w^{\prime}_{1})\end{array}\right\}\,.$$ (35) The next step is the integration with respect to the angle variable $w_{1}$, which leads for the diffusion coefficient $D_{22}$ to $$\displaystyle\int_{0}^{2\pi}dw_{1}\delta(w_{1}-w^{\prime}_{1})e^{-il_{2}\frac{% \sqrt{2\kappa J_{1}}}{2B}(\cos{w_{1}}-\cos{w^{\prime}_{1}})}$$ $$\displaystyle\cdot e^{ik_{\rm x_{0}}\sqrt{\frac{2J_{1}}{\kappa}}(\sin{w_{1}}-% \sin{w^{\prime}_{1}})}=1\,,$$ (36) for $D_{12}=D_{21}$ to $$\displaystyle i\int_{0}^{2\pi}dw_{1}\delta^{\prime}(w_{1}-w^{\prime}_{1})e^{-% il_{2}\frac{\sqrt{2\kappa J_{1}}}{2B}(\cos{w_{1}}-\cos{w^{\prime}_{1}})}$$ $$\displaystyle\cdot e^{ik_{\rm x_{0}}\sqrt{\frac{2J_{1}}{\kappa}}(\sin{w_{1}}-% \sin{w^{\prime}_{1}})}$$ $$\displaystyle=-l_{2}\frac{\sqrt{2\kappa J_{1}}}{2B}\sin{w^{\prime}_{1}}-k_{\rm x% _{0}}\sqrt{\frac{2J_{1}}{\kappa}}\cos{w^{\prime}_{1}}\,,$$ (37) and for $D_{11}$ to $$\displaystyle-\int_{0}^{2\pi}dw_{1}\delta^{\prime\prime}(w_{1}-w^{\prime}_{1})% e^{-il_{2}\frac{\sqrt{2\kappa J_{1}}}{2B}(\cos{w_{1}}-\cos{w^{\prime}_{1}})}$$ $$\displaystyle\cdot e^{ik_{\rm x_{0}}\sqrt{\frac{2J_{1}}{\kappa}}(\sin{w_{1}}-% \sin{w^{\prime}_{1}})}$$ $$\displaystyle=-\Big{[}il_{2}\frac{\sqrt{2\kappa J_{1}}}{2B}\cos{w^{\prime}_{1}% }-ik_{\rm x_{0}}\sqrt{\frac{2J_{1}}{\kappa}}\sin{w^{\prime}_{1}}$$ $$\displaystyle+\left(il_{2}\frac{\sqrt{2\kappa J_{1}}}{2B}\sin{w^{\prime}_{1}}+% ik_{\rm x_{0}}\sqrt{\frac{2J_{1}}{\kappa}}\cos{w^{\prime}_{1}}\right)^{2}\Big{% ]}\,.$$ (38) The integration over $w^{\prime}_{1}$ gives simply a factor of $2\pi$ for $D_{11}$, whereas the integration over the trigonometric functions in equation (37) leads to the result $$D_{12}=D_{21}=0\,.$$ (39) Integrating equation (38) with respect to $w^{\prime}_{1}$ gives $$\pi\left(l_{2}^{2}\frac{2\kappa J_{1}}{4B^{2}}+k_{\rm x_{0}}^{2}\frac{2J_{1}}{% \kappa}\right)\,.$$ (40) The final step is the integration with respect to $l_{2}$, $$\displaystyle\int_{-\infty}^{\infty}dl_{2}\frac{1}{l_{2}}\left\{\begin{array}[% ]{c}1\\ l_{2}^{2}\end{array}\right\}e^{-\frac{(l_{2}-k_{\rm y0})^{2}}{\sigma_{\rm k_{% \rm y}}^{2}}}$$ $$\displaystyle\cdot\left\{\begin{array}[]{c}\pi\left(l_{2}^{2}\frac{2\kappa J_{% 1}}{4B^{2}}+k_{\rm x_{0}}^{2}\frac{2J_{1}}{\kappa}\right)\\ 1\end{array}\right\}$$ $$\displaystyle=\left\{\begin{array}[]{c}\frac{2\pi J_{1}}{\kappa}k_{\rm y0}% \sqrt{\pi}\sigma_{\rm k_{\rm y}}\left(\frac{\kappa^{2}}{4B^{2}}+\frac{k_{\rm x% _{0}}^{2}}{k_{\rm y0}^{2}}\right)\\ k_{\rm y0}\sqrt{\pi}\sigma_{\rm k_{\rm y}}\end{array}\right\}\,,$$ (41) where the upper rows refer to the diffusion coefficient $D_{11}$ and the lower rows to $D_{22}$, respectively. There is a formal divergence at $l_{2}$ = 0 in the second term of $D_{11}$ on the lhs of equation (41). I have chosen to ignore this and have replaced the integral by a saddle point approximation. The reason is that the model of disc heating used here (cf. equation 25) becomes unphysical for small circumferential wave numbers $l_{2}$, because the density waves approach then the WKB limit and become long lived, so that they do not heat the disc effectively. Due to the symmetry of the distribution of amplitudes (24) with respect to ${\bf k}$ the effect of density waves with negative wave numbers $k_{\rm y}$ can be taken into account by multiplying the diffusion coefficients by a factor of two. 
Assembling all results leads to a diffusion tensor of the form $$D_{\rm mn}=D_{\rm 0}\left(\begin{array}[]{cc}\frac{1}{\kappa}(\frac{\kappa^{2}}{4B^{2}}+\frac{k^{2}_{\rm x_{0}}}{k_{20}^{2}})J_{1}&0\\ 0&1\end{array}\right)\,,$$ (42) with $D_{\rm 0}=8\pi^{\frac{7}{2}}|\Phi_{0}|^{2}\sigma^{2}_{\rm k_{\rm x}}\sigma_{\rm k_{\rm y}}k_{\rm 20}/A$. The diffusion equation takes the form $$\frac{\partial\langle f\rangle}{\partial t}=\frac{1}{2}\frac{\partial}{\partial J_{\rm 1}}\left(\tilde{D}_{\rm 11}J_{\rm 1}\frac{\partial}{\partial J_{\rm 1}}\langle f\rangle\right)+\frac{1}{2}D_{\rm 22}\frac{\partial^{2}\langle f\rangle}{{\partial J_{\rm 2}}^{2}}\,,$$ (43) where the overhead tilde means that the $J_{1}$–dependence of $D_{11}$ has been written separately. For this kind of disc heating by transient spiral density waves no correlation between the diffusion in radial action and angular momentum space is found. The diffusion equation (43) is highly non–linear, because the diffusion coefficients depend on the distribution function $\langle f\rangle$ themselves. In particular, the effectiveness of swing amplification of spiral density waves depends critically on the value of the Toomre stability parameter $Q$ = $\kappa\sigma_{\rm u}/(3.36G\Sigma_{\rm d})$, where $\sigma_{\rm u}$ denotes the radial velocity dispersion of the stars. Thus, when the disc heats up, the amplitudes $\Phi_{0}$ of the density waves and the diffusion coefficients (42) will decrease, and the disc heating rate slows down to zero. Numerical simulations of the dynamical evolution of galactic discs (Sellwood & Carlberg 1984, Fuchs & v. Linden 1998) have shown that this can happen on comparatively short time scales, if the discs are left uncooled. Only if the discs are cooled dynamically by adding stars on low peculiar velocity orbits can the spiral density wave activity be maintained at a constant level despite the rising velocity dispersions. In that case $D_{\rm 0}\approx$ const. and a simple solution of the diffusion equation (43) is found by separation of variables in the form $$\langle f\rangle=\frac{1}{(c_{\rm 1}+\frac{\tilde{D}_{\rm 11}}{2}t)\sqrt{c_{\rm 2}+\frac{D_{\rm 22}}{2}t}}\exp\left[-\frac{J_{\rm 1}}{c_{\rm 1}+\frac{\tilde{D}_{\rm 11}}{2}t}-\frac{J^{2}_{\rm 2}}{c_{\rm 2}+\frac{D_{\rm 22}}{2}t}\right]$$ (44) with arbitrary constants $c_{\rm 1}$ and $c_{\rm 2}$. 4 Discussion and Conclusions The scattering of stars by shearing, short-lived density waves leads to independent diffusion of stars in radial action – angular momentum space. This diffusion process has various implications. The radial action integral is related to the peculiar velocities of the stars as $$J_{\rm 1}=\frac{1}{2\kappa}\left(u^{2}+\frac{\kappa^{2}}{4B^{2}}v^{2}\right)\,.$$ (45) Thus a distribution function of the form $$\exp\left[-J_{\rm 1}/\left(c_{\rm 1}+\frac{\tilde{D}_{\rm 11}}{2}t\right)\right]$$ (46) implies a $\exp[-u^{2}/(2\sigma^{2}_{\rm u}(t))]$ dependence of the velocity distribution with a predicted radial velocity dispersion of $\sigma_{\rm u}(t)$ = $\sqrt{\kappa(c_{\rm 1}+\tilde{D}_{\rm 11}t/2)}$. Such a rise of the velocity dispersion with time fits the actual age–velocity dispersion relation observed in the solar neighbourhood very well (cf. Fuchs et al. 2001 for a recent review of the observational data).
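The step from (46) to the Schwarzschild form of the velocity distribution can be made explicit with a small Monte Carlo sketch (arbitrary illustrative parameters): drawing $J_{\rm 1}$ from an exponential distribution with mean $c_{\rm 1}+\tilde{D}_{\rm 11}t/2$ and a uniform phase $w_{\rm 1}$, and mapping to velocities with equation (21), yields a Gaussian $u$-distribution whose dispersion matches $\sigma_{\rm u}=\sqrt{\kappa(c_{\rm 1}+\tilde{D}_{\rm 11}t/2)}$:

import numpy as np

rng = np.random.default_rng(0)

Omega0 = 1.0
A, B = 0.5 * Omega0, -0.5 * Omega0
kappa = np.sqrt(-4.0 * Omega0 * B)

a = 0.4                                               # stands for c1 + (D11_tilde/2)*t at some epoch
J1 = rng.exponential(scale=a, size=200_000)           # f ~ exp(-J1/a), eq. (46)
w1 = rng.uniform(0.0, 2.0 * np.pi, size=J1.size)      # uniformly distributed phases

u = np.sqrt(2.0 * kappa * J1) * np.cos(w1)                      # eq. (21)
v = (2.0 * B / kappa) * np.sqrt(2.0 * kappa * J1) * np.sin(w1)

print(u.std(), np.sqrt(kappa * a))          # sampled vs predicted sigma_u, agree to <1 per cent
print(v.std() / u.std(), -2.0 * B / kappa)  # epicyclic axis ratio sigma_v/sigma_u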
Unfortunately the diffusion coefficients cannot be estimated quantitatively, because the constant $|\Phi_{0}|^{2}$, which parameterizes the white noise, is not known a priori. But judging from the shape of the heating law (46) wave – star scattering of the kind discussed here might well have played an important role in the Milky Way disc. Whether this is the only disc heating mechanism is still a matter of debate (Fuchs et al. 2001). The diffusion of the guiding centre radii of the stellar orbits can be estimated from the ratio of the diffusion coefficients, $$\frac{D_{\rm 22}}{D_{\rm 11}}=\frac{\kappa}{J_{\rm 1}}\frac{1}{\frac{\kappa^{2% }}{4B^{2}}+\frac{k^{2}_{\rm x_{0}}}{k^{2}_{20}}}\,.$$ (47) As shown above $\kappa D_{\rm 11}t/(2J_{\rm 1})$ is proportional to the square of the radial velocity dispersion, $\sigma^{2}_{\rm u}(t)$. According to epicyclic theory (cf. equation 20) $$\langle x^{2}_{\rm g}\rangle=\frac{1}{4B^{2}}\langle J^{2}_{\rm 2}\rangle\,,$$ (48) so that the dispersion of the guiding centre radii of the stellar orbits is given by $$\displaystyle\sqrt{\langle x^{2}_{\rm g}\rangle}=\frac{1}{-2B}\sqrt{\frac{D_{% \rm 22}}{4}t}=\frac{1}{-2B}\frac{\sigma_{\rm u}}{\sqrt{2\left(\frac{\kappa^{2}% }{4B^{2}}+\frac{k^{2}_{\rm x_{0}}}{k^{2}_{20}}\right)}}\,.$$ (49) Using the parameter values estimated above equation (49) implies $\sqrt{\langle x^{2}_{\rm g}\rangle}=0.2\sigma_{\rm u}/\Omega_{\rm 0}$, if a flat rotation curve is assumed. A star with an age of 5 Gyrs like the Sun has typically in the solar neighbourhood a velocity dispersion of 50 km/s. Thus $\sqrt{\langle x^{2}_{\rm g}\rangle}$ = 400 pc = 0.05 $r_{\rm 0}$ in the solar neighbourhood, if a local angular velocity of $\Omega_{\rm 0}$ = 26 km/s/kpc is adopted. This confirms the conclusion of Binney & Lacey (1988) that the diffusion of guiding centre radii driven by a rapid succession of spiral density waves is rather small. This assumes that disc heating by transient spiral density waves is the only disc heating mechanism. However, if the Sun has indeed drifted from its birth place nearly 2 kpc radially outwards to its present galactocentric radius as suggested by Wielen, Fuchs, & Dettbarn (1996) and Wielen & Wilson (1997), this would mean that there must be other dynamical heating mechanisms of the galactic disc. It was shown by Fuchs, Dettbarn, & Wielen (1994) theoretically and by numerical simulations that Spitzer–Schwarzschild diffusion of stars due to gravitational encounters with massive molecular clouds, for instance, leads to a much more pronounced diffusion of the guiding centre radii of the stellar orbits, even if the mechanism does not heat the disc effectively. Acknowledgments I thank A. Just and R. Wielen for helpful discussions. I am also grateful to the anonymous referee, whose comments lead to an improvement of the paper. References [1] Barbanis B., Woltjer L., 1967, ApJ, 150, 461 [2] Binney J., Lacey C., 1988, MNRAS, 230, 597 [3] Carlberg R.G., Sellwood J.A., 1985, ApJ, 292, 79 [4] Dekker E., 1976, Phys. Reports, 24, 315 [5] Fuchs B., 2001a, A&A, 368, 107 [6] Fuchs B., 2001b, A&A, submitted [7] Fuchs B., Dettbarn C., Wielen R., 1994, in: Ergodic Concepts in Stellar Dynamics, eds. V.G. Gurzadyan & D. Pfenniger, Springer, Berlin, p. 34 [8] Fuchs B., Dettbarn C., Jahreiß, Wielen R., 2001, in: Dynamics of Star Clusters and the Milky Way, eds. S. Deiters, B. Fuchs, A. Just, R. Spurzem, R. Wielen, ASP Conf. Ser. 
228, in press [9] Fuchs B., von Linden S., 1998, MNRAS, 294, 513 [10] Goldreich P., Lynden–Bell D., 1978, ApJ, 222, 850 [11] Hall D.E., Sturrock P.A., 1967, Phys. Fluids, 10, 2620 [12] Jenkins A., Binney J., 1990, MNRAS, 245, 305 [13] Julian W.H., 1967, ApJ, 148, 175 [14] Julian W.H., Toomre A., 1966, ApJ, 146, 810 [15] Lynden–Bell D., Kalnajs A.J., 1972, MNRAS, 157, 1 [16] Sellwood J.A., Carlberg R.G., 1984, ApJ, 282, 61 [17] Toomre A., 1981, in: The Structure and Evolution of Normal Galaxies, eds. S.M. Fall & D. Lynden–Bell, Cambridge Univ. Press, p.111 [18] Toomre A., 1990, in: Dynamics and Interactions of Galaxies, ed. R. Wielen, Springer, Berlin, p. 292 [19] Wielen R., Fuchs B., Dettbarn C., 1996, A&A 314, 438 [20] Wielen R., Wilson T.L., 1997, A&A 326, 139
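The solar-neighbourhood numbers quoted after equation (49) follow from simple arithmetic; the lines below are only a convenience check, using the flat-rotation-curve values $\kappa^{2}/4B^{2}=2$ and $k_{\rm x0}/k_{\rm y0}=3$ from the empirical model (24), and $r_{\rm 0}\approx 8$ kpc as implied by the quoted 0.05 $r_{\rm 0}$:

import numpy as np

Omega0 = 26.0                        # local angular velocity, km/s/kpc
sigma_u = 50.0                       # radial velocity dispersion of a 5 Gyr old star, km/s
kx0_over_ky0 = 1.5 / 0.5             # from the amplitude model, eq. (24)
kappa2_over_4B2 = 2.0                # flat rotation curve, A = -B = Omega0/2

prefac = 1.0 / np.sqrt(2.0 * (kappa2_over_4B2 + kx0_over_ky0 ** 2))
x_g_rms = prefac * sigma_u / Omega0  # eq. (49); here 1/(-2B) = 1/Omega0, result in kpc
print(f"prefactor = {prefac:.2f}")                          # ~0.21, i.e. the quoted 0.2
print(f"rms guiding-centre excursion = {x_g_rms:.2f} kpc")  # ~0.41 kpc ~ 400 pc
print(f"in units of r_0 = 8 kpc: {x_g_rms / 8.0:.2f}")      # ~0.05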
A Scheme for Molecular Computation of Maximum Likelihood Estimators for Log-Linear Models Manoj Gopalkrishnan manoj.gopalkrishnan@gmail.com School of Technology and Computer Science Tata Institute of Fundamental Research, Mumbai, India. Abstract We propose a scheme for computing Maximum Likelihood Estimators for Log-Linear models using reaction networks, and prove its correctness. Our scheme exploits the toric structure of equilibrium points of reaction networks. This allows an efficient encoding of the problem, and reveals how reaction networks are naturally suited to statistical inference tasks. Our scheme is relevant to molecular programming, an emerging discipline that views molecular interactions as computational primitives for the synthesis of sophisticated behaviors. In addition, such a scheme may provide a template to understand how biochemical signaling pathways integrate extensive information about their environment and history. 1 Introduction The sophisticated behavior of cells emerges from the computations being performed by the underlying biochemical reaction networks. These biochemical pathways have been studied in a “top-down” manner, by looking for recurring motifs, and signs of modularity [17]. There is also an opportunity to study these pathways in a “bottom-up” manner by proposing primitive building blocks which can be composed to create interesting and technologically-valuable behavior. This “bottom-up” approach connects with work in the Molecular Computation community whose goal is to generate sophisticated behavior using DNA hybridization reactions [22, 23, 18, 29, 25, 5, 6, 21] and other Artificial Chemistry approaches [4, 9]. We propose a new building block for molecular computation. We show that the mathematical structure of reaction networks is particularly well adapted to compute Maximum Likelihood Estimators for log-linear models, allowing a pithy encoding of such computations by reactions. According to [11]: Log-linear models are arguably the most popular and important statistical models for the analysis of categorical data; see, for example, Bishop, Fienberg and Holland (1975) [3], Christensen (1997) [7]. These powerful models, which include as special cases graphical models [see, e.g., Lauritzen (1996) [15]] as well as many logit models [see, e.g., Agresti (2002) [1], Bishop, Fienberg and Holland (1975) [3]], have applications in many scientific areas, ranging from social and biological sciences, to privacy and disclosure limitation problems, medicine, data-mining, language processing and genetics. Their popularity has greatly increased in the last decades… Receptors that sit on cell walls collect a large amount of information about the cellular environment. Statistical processing and integration of this spatially and temporally extensive information can help the cell correctly estimate the overall state of its environment, and respond in a manner that maximizes fitness. Such statistical processing would have to be carried out in the biochemical reaction pathways. We submit that schemes for statistical processing by reaction networks are of biological significance, and are deserving of as thorough and extensive a study as schemes for statistical processing by neural networks.  The problem: We illustrate the main ideas of our scheme with an example. Following [20], consider the log-linear model (also known as toric model) described by the design matrix $A=\tiny\left(\begin{array}[]{ccc}2&1&0\\ 0&1&2\end{array}\right)$. 
This means that we are observing an event with three possible mutually exclusive outcomes, call them $X_{1},X_{2}$, and $X_{3}$, which represent respectively the columns of $A$. The rows of $A$ represent “hidden variables” $\theta_{1}$ and $\theta_{2}$ respectively which parametrize the statistics of the outcomes in the following way specified by the columns of $A$: $$\displaystyle P[X_{1}\mid\theta_{1},\theta_{2}]$$ $$\displaystyle\propto\theta_{1}^{2}$$ $$\displaystyle P[X_{2}\mid\theta_{1},\theta_{2}]$$ $$\displaystyle\propto\theta_{1}\theta_{2}$$ $$\displaystyle P[X_{3}\mid\theta_{1},\theta_{2}]$$ $$\displaystyle\propto\theta_{2}^{2}$$ where the constant of proportionality normalizes the probabilities so they sum to $1$. 111It is more conventional in statistics and statistical mechanics literature to write $\theta_{1}=\mathrm{e}^{-E_{1}}$ and $\theta_{2}=\mathrm{e}^{-E_{2}}$ in terms of “energies” $E_{1},E_{2}$ so that $P[X_{2}\mid E_{1},E_{2}]\propto\mathrm{e}^{-E_{1}-E_{2}}$ for example. Suppose several independent trials are carried out, and the outcome $X_{1}$ is observed $x_{1}$ fraction of the time, the outcome $X_{2}$ is observed $x_{2}$ fraction of the time, and the outcome $X_{3}$ is observed $x_{3}=1-x_{1}-x_{2}$ fraction of the time. We wish to find the maximum likelihood estimator $(\hat{\theta}_{1},\hat{\theta}_{2})$ of the parameter $(\theta_{1},\theta_{2})\in\mathbb{R}^{2}_{>0}$, i.e., that value of $\theta$ which maximizes the likelihood of the observed data. Our contribution: We describe a reaction network that solves this problem for us. Definition 4.1 will extend our construction to every matrix $A$ over the integers with all column sums equal. • In Theorem 4.3, we show how to obtain from the matrix $A$ a reaction network that computes the maximum likelihood distribution. Specialized to our example, note that the kernel of the matrix $A$ is spanned by the vector $(1,-2,1)^{T}$. We encode this by the reversible reaction $X_{1}+X_{3}\rightleftharpoons 2X_{2}$. If this reversible reaction is started at initial concentrations $X_{1}(0)=x_{1},X_{2}(0)=x_{2},X_{3}(0)=x_{3}$, and the dynamics proceeds according to the law of mass action with all specific rates set to $1$: $$\displaystyle\dot{X}_{1}(t)=\dot{X}_{3}(t)=-X_{1}(t)X_{3}(t)+X_{2}^{2}(t),$$ $$\displaystyle\dot{X}_{2}(t)=-2X_{2}^{2}(t)+2X_{1}(t)X_{3}(t)$$ then the reaction reaches equilibrium $(\hat{x_{1}},\hat{x_{2}},\hat{x_{3}})$ where $\hat{x}_{1}+\hat{x}_{2}+\hat{x}_{3}=1$ and $\hat{x}_{1}\propto\hat{\theta}_{1}^{2}$, $\hat{x}_{2}\propto\hat{\theta}_{1}\hat{\theta}_{2}$, and $\hat{x}_{3}\propto\hat{\theta}_{2}^{2}$, so that $(\hat{x}_{1},\hat{x}_{2},\hat{x}_{3})$ represents the probability distribution over the outcomes $X_{1},X_{2},X_{3}$ at the maximum likelihood $\hat{\theta_{1}},\hat{\theta_{2}}$. This part of our scheme involves only reversible reactions, and requires no catalysis (see [12, Theorem 5.2] and Lemma 4.2), which may make it particularly easy and efficient to implement. • In Theorem 4.4, we show how to obtain from the matrix $A$ a reaction network that computes the maximum likelihood estimator. 
Specialized to our example, we obtain the reaction network with $5$ species $X_{1},X_{2},X_{3},\theta_{1},\theta_{2}$ and the $7$ reactions: $$\displaystyle X_{1}+X_{3}\rightleftharpoons 2X_{2},$$ $$\displaystyle 2\theta_{1}\to 0,$$ $$\displaystyle X_{1}\to X_{1}+2\theta_{1},$$ $$\displaystyle\theta_{1}+\theta_{2}\to 0,$$ $$\displaystyle X_{2}\to X_{2}+\theta_{1}+\theta_{2},$$ $$\displaystyle 2\theta_{2}\to 0,$$ $$\displaystyle X_{3}\to X_{3}+2\theta_{2}.$$ The number of species equals the number of rows plus the number of columns of $A$. The reversible reactions are determined upto a choice of basis for the kernel of $A$. Each column of $A$ determines a pair of irreversible reactions. Theorem 4.4 implies that if this reaction network is started at initial concentrations $X_{1}(0)=x_{1},X_{2}(0)=x_{2},X_{3}(0)=x_{3}$ and arbitrary concentrations of $\theta_{1}(0)$ and $\theta_{2}(0)$, and the dynamics proceeds according to the law of mass action with all specific rates set to $1$: $$\displaystyle\dot{X}_{1}(t)=\dot{X}_{3}(t)=-X_{1}(t)X_{3}(t)+X_{2}^{2}(t),$$ $$\displaystyle\dot{\theta}_{1}(t)=-2\theta_{1}^{2}(t)+2X_{1}(t)-\theta_{1}(t)% \theta_{2}(t)+X_{2}(t),$$ $$\displaystyle\dot{X}_{2}(t)=-2X_{2}^{2}(t)+2X_{1}(t)X_{3}(t),$$ $$\displaystyle\dot{\theta}_{2}(t)=-2\theta_{2}^{2}(t)+2X_{3}(t)-\theta_{1}% \theta_{2}(t)+X_{2}(t),$$ then the reaction reaches equilibrium $(\hat{x_{1}},\hat{x_{2}},\hat{x_{3}},\hat{\theta}_{1},\hat{\theta}_{2})$ where $(\hat{\theta}_{1},\hat{\theta}_{2})$ is the maximum likelihood estimator for the data frequency vector $(x_{1},x_{2},x_{3})$ and $(\hat{x}_{1},\hat{x}_{2},\hat{x}_{3})$ represents the probability distribution over the outcomes $X_{1},X_{2},X_{3}$ at the maximum likelihood. • A number of schemes have been proposed for translating reaction networks into DNA strand displacement reactions [25, 21, 5, 6]. These schemes can be adapted to our setting as well, allowing molecular implementation of our MLE-solving reaction networks with DNA molecules. 2 Maximum Likelihood Estimation in toric models The definitions and results in this section mostly follow [20]. Because we require a slightly stronger statement, and Theorem 2.3 allows a short, easy, and insightful proof, we give the proof here for completeness. Definition 2.1 (Toric Model). Let $m,n$ be positive integers. The probability simplex and its relative interior are: $$\Delta^{n}:=\{(x_{1},x_{2},\dots,x_{n})\in\mathbb{R}^{n}_{\geq 0}\mid x_{1}+x_% {2}+\dots+x_{n}=1\}$$ $$\operatorname{ri}(\Delta^{n}):=\{(x_{1},x_{2},\dots,x_{n})\in\mathbb{R}^{n}_{>% 0}\mid x_{1}+x_{2}+\dots+x_{n}=1\}.$$ An $m\times n$ matrix $A$ of integer entries is a design matrix iff all its column sums are equal. Let $a_{j}:=(a_{1j},a_{2j},\dots,a_{mj})^{T}$ be the $j$’th column of $A$. Define the parameter space $$\Theta:=\{\theta\in\mathbb{R}^{m}_{>0}\mid\theta^{a_{1}}+\theta^{a_{2}}+\dots+% \theta^{a_{n}}=1\}\text{ where }\theta^{a_{j}}:=\theta_{1}^{a_{1j}}\theta_{2}^% {a_{2j}}\dots\theta_{m}^{a_{mj}}.$$ The toric model of $A$ is the parametric probability model $$p_{A}=(p_{1},p_{2},\dots,p_{n}):\Theta\to\Delta^{n}\text{ given by }p_{j}(% \theta)=\theta^{a_{j}}\text{ for }j=1\text{ to }n.$$ Note that here $p_{j}(\theta)$ specifies $\operatorname{Pr}[j\mid\theta]$, the conditional probability of obtaining outcome $j$ given that the true state of the world is described by $\theta$. 
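Before proceeding, the running example from the Introduction can be checked numerically. The sketch below (a convenience illustration, not taken from the paper) integrates the mass-action equations of the seven-reaction network given above with all specific rates set to $1$, using SciPy and an arbitrary example frequency vector; the $X$-coordinates relax to the maximum likelihood distribution (they preserve $A\hat{x}=Ax(0)$ and satisfy the toric condition $\hat{x}_{1}\hat{x}_{3}=\hat{x}_{2}^{2}$) and the $\theta$-coordinates relax to the maximum likelihood estimator:

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2, 1, 0],
              [0, 1, 2]])                    # design matrix of the running example
x_data = np.array([0.55, 0.15, 0.30])        # example outcome frequencies (sum to 1)

def rhs(t, s):
    X1, X2, X3, th1, th2 = s
    flux = X1 * X3 - X2 ** 2                 # net flux of X1 + X3 -> 2 X2
    return [
        -flux,                                            # X1
        2.0 * flux,                                       # X2
        -flux,                                            # X3
        -2.0 * th1 ** 2 + 2.0 * X1 - th1 * th2 + X2,      # theta1
        -2.0 * th2 ** 2 + 2.0 * X3 - th1 * th2 + X2,      # theta2
    ]

s0 = np.concatenate([x_data, [1.0, 1.0]])    # arbitrary positive initial theta
sol = solve_ivp(rhs, (0.0, 200.0), s0, rtol=1e-10, atol=1e-12)
X1, X2, X3, th1, th2 = sol.y[:, -1]

assert np.allclose(A @ [X1, X2, X3], A @ x_data)                  # sufficient statistics preserved
assert np.isclose(X1 * X3, X2 ** 2)                               # Birch / toric condition
assert np.allclose([X1, X2, X3], [th1 ** 2, th1 * th2, th2 ** 2], atol=1e-6)
print("MLE:", th1, th2)                      # ~ (0.68, 0.47) for this example data

The first three equations form an autonomous subsystem (the reversible network of Theorem 4.3), so dropping the $\theta$ species recovers the maximum-likelihood-distribution computation on its own.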
After performing several independent identical trials, we obtain a data vector $u\in\mathbb{Z}_{\geq 0}^{n}$ which records how many times each outcome occurred. The norm $|u|_{1}:=u_{1}+u_{2}+\dots+u_{n}$ denotes the total number of trials performed. Define the likelihood function $f_{u}(\theta):=\operatorname{Pr}[u\mid\theta]$. The Maximum Likelihood problem is the problem of finding that value of $\theta$ which maximizes $f_{u}(\theta)$. Then $$\displaystyle\hat{\theta}(u):=\arg\sup_{\theta\in\Theta}f_{u}(\theta)$$ (1) is a maximum likelihood estimator or MLE for the data vector $u$. We will call the point $\hat{p}(u):=p_{A}(\hat{\theta}(u))$ a maximum likelihood distribution. Definition 2.2. Let $A$ be an $m\times n$ design matrix, and $u$ a data vector. Then the sufficient polytope is $P_{A}(u):=\{p\in\operatorname{ri}(\Delta^{n})\mid Ap=A\frac{u}{|u|_{1}}\}$. The following theorem is a version of Birch’s theorem from Algebraic Statistics. Theorem 2.3. Fix a design matrix $A$ of size $m\times n$. 1. If $u,v\in\mathbb{Z}^{n}_{\geq 0}$ are nonzero data vectors such that $Au/|u|_{1}=Av/|v|_{1}$ then they have the same maximum likelihood estimator: $\hat{\theta}(u)=\hat{\theta}(v)$. 2. Further if $P_{A}(u)$ is nonempty then (a) There is a unique distribution $\tilde{p}\in P_{A}(u)$ which maximizes Shannon entropy $H(p)=-\sum_{i=1}^{n}p_{i}\log p_{i}$ viewed as a function from $\overline{P_{A}(u)}$ to $\mathbb{R}$ with $0\log 0$ defined as $0$. (b) $\{\tilde{p}\}=P_{A}(u)\cap p_{A}(\Theta)$. (c) $\tilde{p}=\hat{p}(u)$, the Maximum Likelihood Distribution for the data vector $u$. Proof. 1. Fix a data vector $u$. Note that $f_{u}(\theta)=p_{1}(\theta)^{u_{1}}p_{2}(\theta)^{u_{2}}\dots p_{n}(\theta)^{u_{n}}=\theta^{Au}$. Therefore the maximum likelihood estimator $$\hat{\theta}(u)=\arg\sup_{\theta\in\Theta}\theta^{Au}=\arg\sup_{\theta\in\Theta}(\theta^{Au})^{1/|u|_{1}}=\arg\sup_{\theta\in\Theta}\theta^{Au/|u|_{1}}$$ where the second equality is true because the function $x\in\mathbb{R}_{\geq 0}\mapsto x^{c}\in\mathbb{R}_{\geq 0}$ is monotonically increasing whenever $c>0$. It follows that if $v\in\mathbb{Z}^{n}_{\geq 0}$ is a data vector such that $Au/|u|_{1}=Av/|v|_{1}$ then $\hat{\theta}(u)=\hat{\theta}(v)$.  2.(a) Suppose $P_{A}(u)$ is nonempty. Note that $p\in\partial\overline{P_{A}(u)}$ cannot be a local maximum of the restriction $H|_{\overline{P_{A}(u)}}$ of $H$ to the polytope $\overline{P_{A}(u)}$ because for $q\in P_{A}(u)$: $$\lim_{\lambda\to 0}\frac{d}{d\lambda}H((1-\lambda)p+\lambda q)=+\infty.$$ Since $H$ is a continuous function and the closure $\overline{P_{A}(u)}$ is a compact set, $H$ must attain its maximum value in $P_{A}(u)$. Further $H$ is a strictly concave function since its Hessian is diagonal with entries $-1/p_{i}$ and hence negative definite. It follows that $H|_{\overline{P_{A}(u)}}$ is also strictly concave, and has a unique local maximum at $\tilde{p}\in P_{A}(u)$, which is also the global maximum.  (b) We claim that $q\in P_{A}(u)\cap p_{A}(\Theta)$ iff $\nabla H(q)=(-1-\log q_{1},-1-\log q_{2},\dots,-1-\log q_{n})$ is perpendicular to $P_{A}(u)$. Since all column sums are equal, this is equivalent to requiring that $\log q$ be in the span of the rows of $A$, which is true iff $q\in p_{A}(\Theta)$. It follows that $P_{A}(u)\cap p_{A}(\Theta)=\{\tilde{p}\}$.  
(c) To compute the Maximum Likelihood Distribution $\hat{p}(u)$, we proceed as follows: $$\hat{p}(u)=p_{A}(\hat{\theta}(u))=p_{A}(\arg\sup_{\theta\in\Theta}\theta^{Au})=p_{A}(\arg\sup_{\theta\in\Theta}\theta^{Au/|u|_{1}})=p_{A}(\arg\sup_{\theta\in\Theta}\theta^{A\tilde{p}})=\arg\sup_{p\in p_{A}(\Theta)}p^{\tilde{p}}=\arg\sup_{p\in p_{A}(\Theta)}\sum_{i=1}^{n}\tilde{p}_{i}\log p_{i}=\tilde{p}$$ where the fourth equality uses $A\tilde{p}=Au/|u|_{1}$ and the last equality follows because the function $\sum_{i=1}^{n}\tilde{p}_{i}\log p_{i}$, viewed as a function of $p$, attains its maximum over all of $\Delta^{n}$, and hence over $p_{A}(\Theta)$, at $p:=\tilde{p}$. ∎ This theorem already exposes the core of our idea. We will design reaction systems that maximize entropy subject to the “correct” constraints capturing the polytope $P_{A}(u)$. Then the equilibrium point of our dynamics will correspond to the maximum likelihood distribution. It will remain to establish that trajectories actually converge to these equilibrium points. To show this, we will need some results from [13]. 3 Reaction Networks In this section, we present the definitions and results for reaction networks which we will need for our main results. Theorem 3.9 is a new result. It extends a previously known global convergence result to the case of perturbations. According to [19], “In building a design theory for chemistry, chemical reaction networks are usually the most natural intermediate representation – the middle of the ‘hourglass’ [10]. Many different high level languages and formalisms have been and can likely be compiled to chemical reactions, and chemical reactions themselves (as an abstract specification) can be implemented with a variety of low level molecular mechanisms.” When $a,b$ are vectors in $\mathbb{R}^{S}$ then the notation $a^{b}$ will be shorthand for the monomial $\prod_{i\in S}a_{i}^{b_{i}}$. We introduce some standard definitions. Definition 3.1 (Reaction Network). Fix a finite set $S$ of species. 1. A reaction over $S$ is a pair $(y,y^{\prime})$ such that $y,y^{\prime}\in\mathbb{Z}_{\geq 0}^{S}$ (written $y\rightarrow y^{\prime}$, with reactant $y$ and product $y^{\prime}$). 2. A reaction network consists of a finite set $S$ of species, and a finite set $\mathcal{R}$ of reactions. 3. A reaction network is reversible iff for every reaction $y\to y^{\prime}\in\mathcal{R}$, the reaction $y^{\prime}\to y\in\mathcal{R}$. 4. A reaction network is weakly reversible iff for every reaction $y\to y^{\prime}\in\mathcal{R}$ there exists a positive integer $n\in\mathbb{Z}_{>0}$ and $n$ reactions $y_{1}\to y_{2},y_{2}\to y_{3},\dots,y_{n-1}\to y_{n}\in\mathcal{R}$ with $y_{1}=y^{\prime}$ and $y_{n}=y$. 5. The stoichiometric subspace $H\subseteq\mathbb{R}^{S}$ is the subspace spanned by $\{y^{\prime}-y\mid y\to y^{\prime}\in\mathcal{R}\}$. 6. A siphon is a set $T\subseteq S$ of species such that for all $y\to y^{\prime}\in\mathcal{R}$, if there exists $i\in T$ such that $y^{\prime}_{i}>0$ then there exists $j\in T$ such that $y_{j}>0$. 7. A siphon $T\subseteq S$ is critical iff $v\in H^{\perp}\cap\mathbb{R}^{S}_{\geq 0}$ with $v_{i}=0$ for all $i\notin T$ implies $v=0$. Definition 3.2. Fix a weakly-reversible reaction network $(S,\mathcal{R})$. The associated ideal $I_{(S,\mathcal{R})}\subseteq\mathbb{C}[x]$, where $x=(x_{i})_{i\in S}$, is the ideal generated by the binomials $\{x^{y}-x^{y^{\prime}}\mid y\to y^{\prime}\in\mathcal{R}\}$.
A reaction network is prime iff its associated ideal is a prime ideal. The following theorem follows from [13, Theorem 4.1, Theorem 5.2]. Theorem 3.3. A weakly-reversible prime reaction network $(S,\mathcal{R})$ has no critical siphons. We now recall the mass-action equations, which are widely employed for modeling cellular processes in biology [27, 24, 26, 28]. Definition 3.4 (Mass Action System). A reaction system consists of a reaction network $(S,\mathcal{R})$ and a rate function $k:\mathcal{R}\to\mathbb{R}_{>0}$. The mass-action equations for a reaction system are the system of ordinary differential equations in concentration variables $\{x_{i}(t)\mid i\in S\}$: $$\dot{x}(t)=\sum_{y\to y^{\prime}\in\mathcal{R}}k_{y\to y^{\prime}}\,x(t)^{y}\,(y^{\prime}-y)$$ (2) where $x(t)$ represents the vector $(x_{i}(t))_{i\in S}$ of concentrations at time $t$. Note that $\dot{x}(t)\in H$, so affine translates of $H$ are invariant under the dynamics of Equation 2. We recall the well-known notions of detailed balanced and complex balanced reaction systems. Definition 3.5. A reaction system $(S,\mathcal{R},k)$ is 1. Detailed balanced iff it is reversible and there exists a point $\alpha\in\mathbb{R}^{S}_{>0}$ such that for every $y\to y^{\prime}\in\mathcal{R}$: $$k_{y\to y^{\prime}}\,\alpha^{y}=k_{y^{\prime}\to y}\,\alpha^{y^{\prime}}.$$ A point $\alpha\in\mathbb{R}^{S}_{>0}$ that satisfies the above condition is called a point of detailed balance. 2. Complex balanced iff there exists a point $\alpha\in\mathbb{R}^{S}_{>0}$ such that for every $y\in\mathbb{Z}^{S}_{\geq 0}$: $$\sum_{y\to y^{\prime}\in\mathcal{R}}k_{y\to y^{\prime}}\,\alpha^{y}=\sum_{z\to y\in\mathcal{R}}k_{z\to y}\,\alpha^{z}.$$ A point $\alpha\in\mathbb{R}^{S}_{>0}$ that satisfies the above condition is called a point of complex balance. The following observations are well-known and easy to verify. $\bullet$ A complex balanced reaction system is always weakly-reversible. $\bullet$ If the network is reversible and all rates $k_{y\to y^{\prime}}=1$ then the system is detailed balanced, and hence complex balanced, with balance point $(1,1,\dots,1)\in\mathbb{R}^{S}$. $\bullet$ Every detailed balance point is also a complex balance point, but there are complex balanced reversible networks that are not detailed balanced. It is straightforward to check that every point of complex balance (respectively, detailed balance) is a fixed point for Equation 2. The next theorem, which follows from [2, Theorem 2] and [14], states that a converse also exists: if a reaction system is complex balanced (respectively, detailed balanced) then every fixed point is a point of complex balance (detailed balance). Further, there is a unique fixed point in each affine translate of $H$, and if there are no critical siphons then the basin of attraction for this fixed point is as large as possible, namely the intersection of the affine translate of $H$ with the nonnegative orthant. Theorem 3.6 (Global Attractor Theorem for Complex Balanced Reaction Systems with no critical siphons). Let $(S,\mathcal{R},k)$ be a weakly-reversible complex balanced reaction system with no critical siphons and point of complex balance $\alpha$. Fix a point $u\in\mathbb{R}^{S}_{>0}$. Then there exists a point of complex balance $\beta$ in $(u+H)\cap\mathbb{R}^{S}_{>0}$ such that for every trajectory $x(t)$ with initial conditions $x(0)\in(u+H)\cap\mathbb{R}^{S}_{\geq 0}$, the limit $\lim_{t\to\infty}x(t)$ exists and equals $\beta$.
Further, the function $g(x):=\sum_{i\in S}\left(x_{i}\log x_{i}-x_{i}-x_{i}\log\alpha_{i}\right)$ is strictly decreasing along non-stationary trajectories and attains its unique minimum value in $(u+H)\cap\mathbb{R}^{S}_{\geq 0}$ at $\beta$. It is not completely trivial to show, but nevertheless true, that this theorem holds with “weakly reversible” replaced by “reversible” and “complex balance” replaced by “detailed balance.” What is to be shown is that the point of complex balance obtained in $(u+H)\cap\mathbb{R}^{S}_{\geq 0}$ by minimizing $g(x)$ is actually a point of detailed balance, and this follows from an examination of the form of the derivative $\frac{d}{dt}g(x(t))$ along trajectories $x(t)$ of Equation 2. We will also need a perturbative version of this theorem. The next lemma shows that if the rates are perturbed slightly, the strict Lyapunov function $g(x)$ from Theorem 3.6 continues to decrease along non-stationary trajectories outside a small neighborhood of the complex balance point. Lemma 3.7. Let $(S,\mathcal{R},k)$ be a weakly-reversible complex balanced reaction system with no critical siphons and point of complex balance $\alpha$. For every sufficiently small $\epsilon>0$ there exists $\delta>0$ such that for all $x^{\prime}$ outside the $\epsilon$-neighborhood of $\alpha$ in $(\alpha+H)\cap\mathbb{R}^{S}_{\geq 0}$, the derivative $\frac{d}{dt}g(x(t))|_{t=0}<-\delta$, where $x(t)$ is a solution to the mass-action equations 2 with $x(0)=x^{\prime}$. Proof. Let $B_{\epsilon}$ be the open $\epsilon$-ball around $\alpha$ in $(\alpha+H)\cap\mathbb{R}^{S}_{\geq 0}$, with $\epsilon$ small enough so that $B_{\epsilon}$ does not meet the boundary $\partial\mathbb{R}^{S}_{\geq 0}$. Consider the closed set $K:=(\alpha+H)\cap\mathbb{R}^{S}_{\geq 0}\setminus B_{\epsilon}$. Define the orbital derivative of $g$ at $x^{\prime}$ as $\mathcal{O}_{k}g(x^{\prime}):=\frac{d}{dt}g(x(t))|_{t=0}$, where $x(t)$ is a solution to the mass-action equations 2 with $x(0)=x^{\prime}$. Define $\delta:=-\inf_{x^{\prime}\in K}\mathcal{O}_{k}g(x^{\prime})$. If $\delta\leq 0$ then, since $K$ is a closed set and $\mathcal{O}_{k}g$ is a continuous function, there exists a point $x^{\prime}\in K$ such that $\mathcal{O}_{k}g(x^{\prime})\geq 0$, which contradicts Theorem 3.6. ∎ We formalize the notion of perturbation using differential inclusions. Recall that differential inclusions are a nondeterministic way of modeling uncertainty in dynamics by generalizing the notion of a vector field. A differential inclusion maps every point to a subset of the tangent space at that point. Definition 3.8. Let $(S,\mathcal{R},k)$ be a reaction system and let $\delta>0$. The $\delta$-perturbation of $(S,\mathcal{R},k)$ is the differential inclusion $V:\mathbb{R}^{S}_{\geq 0}\to 2^{\mathbb{R}^{S}}$ that at point $x\in\mathbb{R}^{S}_{\geq 0}$ takes the value $$V(x):=\left\{\sum_{y\to y^{\prime}\in\mathcal{R}}k^{\prime}_{y\to y^{\prime}}x^{y}(y^{\prime}-y)\,\,\,\middle|\,\,\,k^{\prime}_{y\to y^{\prime}}\in(k_{y\to y^{\prime}}-\delta,k_{y\to y^{\prime}}+\delta)\text{ for all }y\to y^{\prime}\in\mathcal{R}\right\}.$$ A trajectory of $V$ is a tuple $(I,x)$ where $I\subseteq\mathbb{R}$ is an interval and $x:I\to\mathbb{R}^{S}_{\geq 0}$ is a differentiable function with $\dot{x}(t)\in V(x(t))$ for all $t\in I$. Theorem 3.9 (Perturbed Global Attractor Theorem for Complex Balanced Reaction Systems with no critical siphons). Let $(S,\mathcal{R},k)$ be a weakly-reversible complex balanced reaction system with no critical siphons.
Fix a point $u\in\mathbb{R}^{S}_{>0}$. Then there exists a point of complex balance $\beta$ in $(u+H)\cap\mathbb{R}^{S}_{>0}$ such that: 1. For every sufficiently small $\varepsilon>0$, there exists $\delta>0$ such that every trajectory of the form $(\mathbb{R}_{\geq 0},x)$ of the $\delta$-perturbation of $(S,\mathcal{R},k)$ with initial conditions $x(0)\in(u+H)\cap\mathbb{R}^{S}_{\geq 0}$ eventually enters an $\varepsilon$-neighborhood of $\beta$ and never leaves. 2. Consider a sequence $\delta_{1}>\delta_{2}>\dots>0$ and a sequence $0<t_{1}<t_{2}<\dots$ such that $\lim_{i\to\infty}\delta_{i}=0$ and $\lim_{i\to\infty}t_{i}=+\infty$, and a trajectory $(\mathbb{R}_{\geq 0},x)$ with $x(0)\in(u+H)\cap\mathbb{R}^{S}_{\geq 0}$ such that $((t_{i},\infty),x)$ is a trajectory of the $\delta_{i}$-perturbation of $(S,\mathcal{R},k)$. Then the limit $\displaystyle\lim_{t\to\infty}x(t)=\beta$. Proof sketch. 1. Fix $\varepsilon>0$ such that the $\varepsilon$-ball $B_{\varepsilon}$ around $\beta$ does not meet the boundary $\partial\mathbb{R}^{S}_{\geq 0}$. By Lemma 3.7, there exists $\delta_{\varepsilon}>0$ such that $\mathcal{O}_{k}g<-\delta_{\varepsilon}$ outside $B_{\varepsilon}$. Since $\mathcal{O}_{k}g$ is a continuous function of the specific rates $k$, a sufficiently small perturbation $\delta>0$ in the rates will not change the sign of $\mathcal{O}_{k}g$. Hence, outside $B_{\varepsilon}$, the function $g$ is strictly decreasing along trajectories of the $\delta$-perturbation. It follows that eventually every trajectory must enter $B_{\varepsilon}$. 2. Fix a sequence $\varepsilon_{1}>\varepsilon_{2}>\dots>0$ with $\varepsilon_{1}$ small enough so that the $\varepsilon_{1}$-ball around $\beta$ does not meet the boundary $\partial\mathbb{R}^{S}_{\geq 0}$ and $\lim_{i\to\infty}\varepsilon_{i}=0$. For each $\varepsilon_{i}$, there exists $j$ such that $\delta_{j}$ is small enough as per part (1) of the theorem. So every trajectory will eventually enter the $\varepsilon_{i}$-neighborhood of $\beta$, and never leave. Since this is true for every $i$ and $\lim_{i\to\infty}\varepsilon_{i}=0$, the result follows. ∎ 4 Main Result Definition 4.1. Fix a design matrix $A=(a_{ij})_{m\times n}$ and a basis $B$ for the free abelian group $\mathbb{Z}^{n}\cap\ker A$. 1. The reaction network $\mathcal{R}_{MLD}(A,B)$ consists of $n$ species $X_{1},X_{2},\dots,X_{n}$ and, for each $b\in B$, the reversible reaction: $$\sum_{j:b_{j}>0}b_{j}X_{j}\rightleftharpoons\sum_{j:b_{j}<0}(-b_{j})X_{j}$$ 2. The reaction system $\mathcal{S}_{MLD}(A,B)$ consists of the reaction network $\mathcal{R}_{MLD}(A,B)$ with an assignment of rate $1$ to each reaction. 3. The reaction network $\mathcal{R}_{MLE}(A,B)$ consists of $m+n$ species $\theta_{1},\theta_{2},\dots,\theta_{m},X_{1},X_{2},\dots,X_{n}$, and in addition to the reactions in $\mathcal{R}_{MLD}(A,B)$, the following reactions: • For each column $j$ of $A$, a reaction $\sum_{i=1}^{m}a_{ij}\theta_{i}\to 0$. • For each column $j$ of $A$, a reaction $X_{j}\to X_{j}+\sum_{i=1}^{m}a_{ij}\theta_{i}$. 4. The reaction system $\mathcal{S}_{MLE}(A,B)$ consists of the reaction network $\mathcal{R}_{MLE}(A,B)$ with an assignment of rate $1$ to each reaction. Lemma 4.2. Fix a design matrix $A=(a_{ij})_{m\times n}$ and a basis $B$ for the free abelian group $\mathbb{Z}^{n}\cap\ker A$. Then the reaction network $\mathcal{R}_{MLD}(A,B)$ is prime and $\mathcal{S}_{MLD}(A,B)$ is detailed balanced. Consequently, the reaction system $\mathcal{S}_{MLD}(A,B)$ is globally asymptotically stable. Proof.
$\mathcal{R}_{MLD}(A,B)$ is prime by [16, Corollary 2.15]. The idea is to look at the toric model $p_{A}$ as a ring homomorphism $\mathbb{C}[x_{1},x_{2},\dots,x_{n}]\to\mathbb{C}[\mathbb{N}A]$ with $x_{j}\mapsto\theta^{a_{j}}$. (Here $\mathbb{N}A$ is the affine semigroup generated by the columns of $A$.) The kernel of this ring homomorphism is the associated ideal of $\mathcal{R}_{MLD}(A,B)$ by [16, Proposition 2.14], and the codomain is an integral domain, so the kernel must be prime. To verify that $\mathcal{S}_{MLD}(A,B)$ is detailed balanced, note that the point $(1,1,\dots,1)\in\mathbb{R}^{n}$ is a point of detailed balance since all rates are $1$. Global asymptotic stability now follows from Theorem 3.3 and Theorem 3.6. ∎ Theorem 4.3 (The reaction system $\mathcal{S}_{MLD}(A,B)$ computes the Maximum Likelihood Distribution). Fix a design matrix $A=(a_{ij})_{m\times n}$, a basis $B$ for the free group $\mathbb{Z}^{n}\cap\ker A$, and a nonzero data vector $u\in\mathbb{Z}^{n}_{\geq 0}$. Let $x(t)=(x_{1}(t),x_{2}(t),\dots,x_{n}(t))$ be a solution to the mass-action differential equations for the reaction system $\mathcal{S}_{MLD}(A,B)$ with initial conditions $x(0)=u/|u|_{1}$. Then $x(\infty):=\displaystyle\lim_{t\to\infty}x(t)$ exists and equals the maximum likelihood distribution $\hat{p}(u)$. Proof. For the system $\mathcal{S}_{MLD}(A,B)$, note that $(x(0)+H)\cap\mathbb{R}^{n}_{>0}=P_{A}(u/|u|_{1})$. By Theorem 3.6, $x(\infty)$ exists, and the function $\sum_{i=1}^{n}x_{i}\log x_{i}-x_{i}-x_{i}\log 1$ attains its unique minimum in $P_{A}(u/|u|_{1})$ at $x(\infty)$. Since the system is mass-conserving, $\sum_{i=1}^{n}x_{i}$ is constant on $P_{A}(u/|u|_{1})$, so this is equivalent to the fact that Shannon entropy $H(x)=-\sum_{i=1}^{n}x_{i}\log x_{i}$ is increasing, and attains its unique maximum value in $P_{A}(u/|u|_{1})$ at $x(\infty)$. By Theorem 2.3, the point $x(\infty)$ must be the maximum likelihood distribution $\hat{p}(u)$. ∎ Theorem 4.4 (The reaction system $\mathcal{S}_{MLE}(A,B)$ computes the Maximum Likelihood Estimator). Fix a design matrix $A=(a_{ij})_{m\times n}$, a basis $B$ for the free group $\mathbb{Z}^{n}\cap\ker A$, and a nonzero data vector $u\in\mathbb{Z}^{n}_{\geq 0}$. Let $x(t)=(x_{1}(t),x_{2}(t),\dots,x_{n}(t),\theta_{1}(t),\theta_{2}(t),\dots,% \theta_{m}(t))$ be a solution to the mass-action differential equations for the reaction system $\mathcal{S}_{MLE}(A,B)$ with initial conditions $x(0)=u/|u|_{1}$ and $\theta(0)=0$. Then $x(\infty):=\lim_{t\to\infty}x(t)$ exists and equals the maximum likelihood distribution $\hat{p}(u)$, and $\theta(\infty):=\lim_{t\to\infty}\theta(t)$ exists and equals the maximum likelihood estimator $\hat{\theta}(u)$. Proof sketch. Fix $u$ and let $\hat{p}=\hat{p}(u)$ and $\hat{\theta}=\hat{\theta}(u)$. Note that for the species $X_{1},X_{2},\dots,X_{n}$, the differential equations for $\mathcal{S}_{MLE}(A,B)$ and $\mathcal{S}_{MLD}(A,B)$ are identical, since these species appear purely catalytically in the reactions that belong to $\mathcal{R}_{MLE}(A,B)\setminus\mathcal{R}_{MLD}(A,B)$. Hence $x(\infty)=\hat{p}(u)$ follows from Theorem 4.3. To see that $\theta(\infty)=\hat{\theta}$, let us first allow the $X$ species to reach equilibrium, then treat the $\theta$ system with replacing the $X$ species by rate constants representing their values at equilibrium. 
The system $\Theta_{MLE}(A,B,x(\infty))$ obtained in this way, involving only the $\theta$ species, is a reaction system with the reactions • For each column $j$ of $A$, a reaction $\sum_{i=1}^{m}a_{ij}\theta_{i}\to 0$ of rate $1$ • For each column $j$ of $A$, a reaction $0\to\sum_{i=1}^{m}a_{ij}\theta_{i}$ of rate $x_{j}(\infty)$. This is a reversible reaction system, and the maximum likelihood estimators $\hat{\theta}$ are precisely the points of detailed balance for this system. In addition, this system has no siphons since if species $\theta_{i}$ is absent and $a_{ij}>0$ for some $j$, then $\theta_{i}$ will immediately be produced by the reaction $0\to\sum_{i^{\prime}=1}^{m}a_{i^{\prime}j}\theta_{i^{\prime}}$. (We are assuming $A$ has no $0$ row. If $A$ has a $0$ row, we can ignore it anyway.) It follows from Theorem 3.6 that this system is globally asymptotically stable, and every trajectory approaches a maximum likelihood estimator $\hat{\theta}$. Our actual system may be viewed as a perturbation of the system $\Theta_{MLE}(A,B,x(\infty))$. Consider any trajectory $(x(t),\theta(t))$ of $\mathcal{S}_{MLE}(A,B)$ starting at $(u/|u|_{1},0)$. We are going to consider the projected trajectory $(\mathbb{R}_{\geq 0},\theta)$. We now show that it is possible to choose appropriate $t_{i}$ and $\delta_{i}$ so that $((t_{i},\infty),\theta(t))$ is a trajectory of a $\delta_{i}$-perturbation of $\Theta_{MLE}(A,B,x(\infty))$, for $i=1,2,\dots$. Wait for a sufficiently large time $t_{1}$ until $x(t)$ is in a sufficiently small $\delta_{1}$-neighborhood of $x(\infty)$ which it will never leave. After this time, we obtain a differential inclusion in the $\theta$ species given by the mass-action equations 2 for the reactions • For each column $j$ of $A$, a reaction $\sum_{i=1}^{m}a_{ij}\theta_{i}\to 0$ of rate $1$ • For each column $j$ of $A$, a reaction $0\to\sum_{i=1}^{m}a_{ij}\theta_{i}$ with time-varying rate lying in the interval $(x_{j}(\infty)-\delta_{1},x_{j}(\infty)+\delta_{1})$. Continuing in this way, we choose a decreasing sequence $\delta_{1}>\delta_{2}>\dots>0$ with $\lim_{i\to\infty}\delta_{i}=0$, and corresponding times $t_{1}<t_{2}<t_{3}<\dots$ with $\lim_{i\to\infty}t_{i}=\infty$, such that after time $t_{i}$, $x(t)$ is in a $\delta_{i}$-neighborhood of $x(\infty)$ which it will never leave. Then $((t_{i},\infty),\theta(t))$ is a trajectory of the $\delta_{i}$-perturbation of $\Theta_{MLE}(A,B,x(\infty))$. Hence $\theta(t)$ satisfies the conditions of Theorem 3.9, and therefore $\lim_{t\to\infty}\theta(t)=\hat{\theta}$. ∎ 5 Related work The mathematical similarities of both log-linear statistics and reaction networks to toric geometry have been pointed out before [8, 16]. Craciun et al. [8] refer to the steady states of complex-balanced reaction networks as Birch points “to highlight the parallels” with algebraic statistics. This paper builds on these observations, and serves to flesh out this mathematical parallel into a scheme for molecular computation. Various building blocks for molecular computation that assume mass-action kinetics have been proposed before. We briefly review some of these proposals. In [18], Napp and Adams model molecular computation with mass-action kinetics, as we do here. They propose a molecular scheme to implement message passing schemes in probabilistic graphical models. The goal of their scheme is to convert a factor graph into a reaction network that encodes the single-variable marginals of the joint distribution as steady state concentrations.
In comparison, the goal of our scheme is to convert a log-linear model into a reaction network, and observations into initial concentrations, so that the steady state encodes the maximum likelihood estimator. In addition, we have shown global convergence to this steady state. Our scheme is able to fully exploit the toric structure of reaction networks. It may be possible to extend our scheme to exploit an independence structure specified by a probabilistic graphical model. Qian and Winfree [22, 23] have proposed a DNA gate motif that can be composed to build large circuits, and have experimentally demonstrated molecular computation of a Boolean circuit with around 30 gates. In comparison, our scheme natively employs a continuous-time dynamical system to do the computation, without a Boolean abstraction. Taking a control theory point of view, Oishi and Klavins [19] have proposed a scheme for implementing linear input/output systems with reaction networks. Note that for a given matrix $A$, the set of maximum likelihood distributions is usually not linear, but log-linear. Daniel et al.[9] have demonstrated an in vivo implementation of feedback loops, exploiting analogies with electronic circuits. It is possible that the success of their schemes can also be explained by the toric nature of mass-action kinetics. Buisman et al. [4] have proposed a reaction network scheme for computation of algebraic functions. The part of our scheme which reads out the maximum likelihood estimator from the maximum likelihood distribution bears some similarity to their work. Acknowledgements: I thank Nick S. Jones, Anne Shiu, and Abhishek Behera for useful discussions. References [1] Alan Agresti. Categorical data analysis. John Wiley & Sons, 2013. [2] David Angeli, Patrick De Leenheer, and Eduardo D. Sontag. A Petri net approach to the study of persistence in chemical reaction networks. Math. Biosci., 210(2):598–618, 2007. [3] YMM Bishop, Stephen Feinberg, and Paul Holland. Discrete multivariant analysis. 1975. [4] HJ Buisman, Huub MM ten Eikelder, Peter AJ Hilbers, and Anthony ML Liekens. Computing algebraic functions with biochemical reaction networks. Artificial life, 15(1):5–19, 2009. [5] Luca Cardelli. Strand Algebras for DNA Computing. Natural Computing, 10:407–428, 2011. [6] Luca Cardelli. Two-domain dna strand displacement. Mathematical Structures in Computer Science, 23(02):247–271, 2013. [7] Ronald Christensen and R Christensen. Log-linear models and logistic regression, volume 168. Springer New York, 1997. [8] Gheorghe Craciun, Alicia Dickenstein, Anne Shiu, and Bernd Sturmfels. Toric dynamical systems. Journal of Symbolic Computation, 44(11):1551–1565, 2009. In Memoriam Karin Gatermann. [9] Ramiz Daniel, Jacob R Rubens, Rahul Sarpeshkar, and Timothy K Lu. Synthetic analog computation in living cells. Nature, 497(7451):619–623, 2013. [10] John Doyle and Marie Csete. Rules of engagement. Nature, 446(7138):860–860, 2007. [11] Stephen E Fienberg, Alessandro Rinaldo, et al. Maximum likelihood estimation in log-linear models. The Annals of Statistics, 40(2):996–1023, 2012. [12] Manoj Gopalkrishnan. Catalysis in reaction networks. Bulletin of Mathematical Biology, 73:2962–2982, 2011. 10.1007/s11538-011-9655-3. [13] Manoj Gopalkrishnan. Catalysis in Reaction Networks. Bulletin of Mathematical Biology, 73(12):2962–2982, 2011. [14] Friedrich J. M. Horn. The dynamics of open reaction systems. In Mathematical aspects of chemical and biochemical problems and quantum chemistry, volume VIII of Proc. 
SIAM-AMS Sympos. Appl. Math., New York, 1974. [15] Steffen L Lauritzen. Graphical models. Oxford University Press, 1996. [16] Ezra Miller. Theory and applications of lattice point methods for binomial ideals. In Combinatorial Aspects of Commutative Algebra and Algebraic Geometry, pages 99–154. Springer, 2011. [17] Ron Milo, Shai Shen-Orr, Shalev Itzkovitz, Nadav Kashtan, Dmitri Chklovskii, and Uri Alon. Network motifs: simple building blocks of complex networks. Science, 298(5594):824–827, 2002. [18] Nils E Napp and Ryan P Adams. Message passing inference with chemical reaction networks. In Advances in Neural Information Processing Systems, pages 2247–2255, 2013. [19] Kevin Oishi and Eric Klavins. Biomolecular implementation of linear I/O systems. Systems Biology, IET, 5(4):252–260, 2011. [20] L. Pachter and B. Sturmfels. Algebraic Statistics for Computational Biology. Number v. 13 in Algebraic Statistics for Computational Biology. Cambridge University Press, 2005. [21] Lulu Qian, David Soloveichik, and Erik Winfree. Efficient turing-universal computation with dna polymers. In DNA computing and molecular programming, pages 123–140. Springer, 2011. [22] Lulu Qian and Erik Winfree. A simple DNA gate motif for synthesizing large-scale circuits. J. R. Soc. Interface, 2011. [23] Lulu Qian and Erik Winfree. Scaling up digital circuit computation with dna strand displacement cascades. Science, 332(6034):1196–1201, 2011. [24] Guy Shinar and Martin Feinberg. Structural sources of robustness in biochemical reaction networks. Science, 327(5971):1389–1391, 2010. [25] David Soloveichik, Georg Seelig, and Erik Winfree. Dna as a universal substrate for chemical kinetics. Proceedings of the National Academy of Sciences, 107(12):5393–5398, 2010. [26] Eduardo D. Sontag. Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T-cell receptor signal transduction. IEEE Trans. Autom. Control, 46:1028–1047, 2001. [27] Matthew Thomson and Jeremy Gunawardena. Unlimited multistability in multisite phosphorylation systems. Nature, 460(7252):274–277, 2009. [28] John J Tyson, Katherine C Chen, and Bela Novak. Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Current opinion in cell biology, 15(2):221–231, 2003. [29] Boyan Yordanov, Jongmin Kim, Rasmus L Petersen, Angelina Shudy, Vishwesh V Kulkarni, and Andrew Phillips. Computational design of nucleic acid feedback control circuits. ACS synthetic biology, 3(8):600–616, 2014.
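To make the construction in Definition 4.1 and Theorem 4.3 concrete, here is a minimal numerical sketch (an illustration only, not part of the scheme's exposition): it assumes the 2x2 independence model as the log-linear model, hypothetical count data, and SciPy's solve_ivp as the integrator, builds $\mathcal{S}_{MLD}(A,B)$, integrates its mass-action equations from $x(0)=u/|u|_{1}$, and compares the limit with the closed-form maximum likelihood distribution.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical example: the 2x2 independence model with states ordered
# (1,1), (1,2), (2,1), (2,2).  Rows of A index the parameters theta
# (two row factors, two column factors); every column sum equals 2.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]])

# A lattice basis of Z^4 intersected with ker A; one vector suffices here.
B = [np.array([1, -1, -1, 1])]   # encodes the reaction X1 + X4 <-> X2 + X3

# Hypothetical observed counts u and the normalized initial condition.
u = np.array([30.0, 10.0, 20.0, 40.0])
x0 = u / u.sum()

def mass_action(t, x):
    # dx/dt for S_MLD(A,B): for each basis vector b, the reversible reaction
    # sum_{b_j>0} b_j X_j <-> sum_{b_j<0} (-b_j) X_j with unit rates.
    dx = np.zeros_like(x)
    for b in B:
        y_fwd = np.clip(b, 0, None)    # reactant complex of the forward reaction
        y_bwd = np.clip(-b, 0, None)   # product complex of the forward reaction
        net_flux = np.prod(x ** y_fwd) - np.prod(x ** y_bwd)
        dx += net_flux * (y_bwd - y_fwd)
    return dx

sol = solve_ivp(mass_action, (0.0, 50.0), x0, rtol=1e-10, atol=1e-12)
p_hat = sol.y[:, -1]

# For the independence model the MLE has the closed form
# p_ij = (row_i total)(column_j total) / N^2, which x(t) should approach.
N = u.sum()
closed_form = np.array([(u[0]+u[1])*(u[0]+u[2]), (u[0]+u[1])*(u[1]+u[3]),
                        (u[2]+u[3])*(u[0]+u[2]), (u[2]+u[3])*(u[1]+u[3])]) / N**2
print(p_hat)        # approximately [0.2, 0.2, 0.3, 0.3]
print(closed_form)  # exactly      [0.2, 0.2, 0.3, 0.3]

Reading off $\hat{\theta}$ as in Theorem 4.4 would additionally require the $\theta$ species and the two extra reaction families of $\mathcal{R}_{MLE}(A,B)$.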
Interacting Monomer-Dimer Model with Infinitely Many Absorbing States WonMuk Hwang Department of Physics, Boston University, Boston, MA 02215. Hyunggyu Park Department of Physics, Inha University, Inchon 402-751, Korea (November 25, 2020) Abstract We study a modified version of the interacting monomer-dimer (IMD) model that has infinitely many absorbing (IMA) states. Unlike all other previously studied models with IMA states, the absorbing states can be divided into two equivalent groups which are dynamically separated infinitely far apart. Monte Carlo simulations show that this model belongs to the directed Ising universality class like the ordinary IMD model with two equivalent absorbing states. This model is the first model with IMA states which does not belong to the directed percolation (DP) universality class. The DP universality class can be restored in two ways, i.e., by connecting the two equivalent groups dynamically or by introducing a symmetry-breaking field between the two groups. PACS numbers: 64.60.-i, 64.60.Ht, 02.50.-r, 05.70.Fh A wide variety of nonequilibrium systems with a single trapped (absorbing) state display a continuous phase transition from an active phase into an absorbing phase, which belongs to the directed percolation (DP) universality class [1, 2, 3, 4]. Recently, systems with multiple absorbing states have been investigated extensively. The interacting monomer-dimer (IMD) model introduced by one of us [5] is one of many models that have two equivalent absorbing states [6, 7, 8, 9]. These models belong to a different universality class from DP. By analogy with the equilibrium Ising model, which has two equivalent ground states, this new class is called the directed Ising (DI) universality class [10]. When the (Ising) symmetry between the absorbing states is broken, in the sense that one of the absorbing states is probabilistically preferable, the system goes back to the DP class [11]. Hence, the symmetry between the absorbing states is the key factor in determining the universality class of models with several absorbing states. Unfortunately, no models with higher symmetries than the Ising symmetry (like the three-state Potts symmetry) have yet been found to have a stable absorbing phase. In contrast, systems with infinitely many absorbing (IMA) states are far less understood. All IMA systems studied so far belong to the DP universality class [12, 13]. The number of absorbing states of these IMA systems grows exponentially with system size but there is no clear-cut symmetry among absorbing states. Recently, it was argued that the IMA models should belong to the DP class unless they possess an extra symmetry among absorbing states [14, 15]. However, no IMA model with an additional symmetry has been studied to date and the role of the symmetry in the IMA systems is still unclear. In this Letter, we introduce an IMA model with the Ising symmetry between two groups of absorbing states. These two groups of absorbing states are equivalent and dynamically separated infinitely far apart. In other words, an absorbing state in one group cannot be reached from any absorbing state in the other group by a finite number of successive local changes. There is no infinite dynamic barrier among absorbing states inside each group. This dynamic barrier is similar to the free energy barrier between ground states of equilibrium systems that exhibit spontaneous symmetry breaking in the ordered phase.
Our numerical simulations show that this model belongs to the DI universality class. Furthermore, we find that this model crosses over to the DP class by allowing the two absorbing groups to be connected dynamically and/or by introducing a symmetry-breaking field to make one absorbing group probabilistically preferable to the other. Our model is a modified version of the ordinary IMD model that we call the IMA-IMD model. Dynamic rules of the IMA-IMD model are almost the same as those of the IMD model with infinitely strong repulsion between the same species in one dimension [5]. A monomer ($A$) cannot adsorb at a nearest-neighbor site of an already-occupied monomer (restricted vacancy) but adsorbs at a free vacant site with no adjacent monomer-occupied sites. Similarly, a dimer ($B_{2}$) cannot adsorb at a pair of restricted vacancies ($B$ in nearest-neighbor sites) but adsorbs at a pair of free vacancies. There are no nearest-neighbor restrictions in adsorbing particles of different species. Only the adsorption-limited reactions are considered. Adsorbed dimers dissociate, and nearest-neighbor pairs of adsorbed $A$ and $B$ particles react, form the $AB$ product, and desorb from the catalytic surface immediately. The difference between the IMA-IMD model and the IMD model comes in when there is an $A$ adsorption attempt at a vacant site between an adsorbed $A$ and an adsorbed $B$. In the IMD model, we allow the $A$ to adsorb and react with the neighboring $B$, so there are two equivalent absorbing states consisting of only monomers at alternating sites, i.e., $(A0A0\cdots)$ and $(0A0A\cdots)$ where “$0$” represents a vacancy. In the IMA-IMD model, this process is disallowed. Then, any configuration can be an absorbing state if there is no nearest-neighbor pair of vacancies and no single vacancy between two $B$ particles, e.g., ($\cdots B0A0BB0A0A\cdots$). To impose the Ising symmetry between the absorbing states, we introduce the probability $s$ of spontaneous desorption of a nearest-neighbor pair of adsorbed $B$ particles. At finite $s$, an absorbing configuration cannot contain such a $BB$ pair. Hence only those configurations that have particles at alternating sites and no two $B$’s at consecutive alternating sites become absorbing states, e.g., ($A0A0B0A0\cdots$) and ($0A0A0B0A\cdots$). The absorbing states are divided into two groups with particles occupied at odd- and even-numbered sites (O group and E group). The number of absorbing states in each group grows exponentially with system size and there is a one-to-one mapping between the absorbing states in the two groups. It is clear that one cannot reach an absorbing state in one group from an absorbing state in the other group by a finite number of successive local changes. Any interface (active region) between two absorbing states in the different groups never disappears by itself in a finite amount of time, so there is an infinite dynamic barrier between the two groups. These interfaces annihilate pairwise only. The order parameter characterizing the absorbing phase transition is the density of active sites or kinks (domain walls). In the IMD model, the dimer density served well as the order parameter, but it cannot in this model. We use the kink density as the order parameter. Kinks are defined such that all absorbing configurations have no kinks but any local change of the absorbing configurations should produce kinks. In this model, one must examine at least three adjacent sites to check for the existence of kinks.
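A minimal sketch of such a three-site test (an illustration, not the authors' code; site states are encoded here as the characters "0", "A", and "B", the kink assignment is the one enumerated in the next paragraph, and periodic boundaries are assumed):

KINK_TRIPLES = {"000", "00A", "A00", "B00", "00B", "B0B", "BB0", "0BB"}

def has_kink(left, center, right):
    # True if the three adjacent sites form one of the eight kink configurations.
    return (left + center + right) in KINK_TRIPLES

def kink_density(lattice):
    # Kink density of a 1D lattice given as a list of "0"/"A"/"B" characters,
    # counting every triple of adjacent sites with periodic boundary conditions.
    L = len(lattice)
    kinks = sum(has_kink(lattice[i], lattice[(i + 1) % L], lattice[(i + 2) % L])
                for i in range(L))
    return kinks / L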
There are 13 possible configurations for three adjacent sites. We assign a kink to eight different configurations; $000$, $00A$, $A00$, $B00$, $00B$, $B0B$, $BB0$, and $0BB$. Five others, $A0A$, $A0B$, $B0A$, $0A0$, and $0B0$, do not have a kink. In this kink representation, there is no mod(2) conservation of the total number of kinks. Three independent critical exponents characterize the critical behavior near the absorbing transition: the order parameter exponent $\beta$, correlation length exponent $\nu_{\bot}$, and relaxation time exponent $\nu_{\|}$ [2]. Elementary scaling theory combined with the finite size scaling theory [16] predicts that the kink density $\rho(p_{c},L)$ at criticality in the (quasi)steady state scales with system size $L$ as $$\rho(p_{c},L)\sim L^{-\beta/\nu_{\perp}}.$$ (1) One can also expect the short time behavior of the kink density as $\rho(p_{c},t)\sim t^{-\beta/\nu_{\|}}$ and the characteristic time scales with system size as $\tau(p_{c},L)\sim L^{\nu_{\|}/\nu_{\perp}}$. In Monte Carlo simulations, a monomer is attempted to adsorb at a randomly chosen site with probability $(1-s)p$ and a dimer with probability $(1-s)(1-p)$. With probability $s$, a randomly chosen nearest neighbor pair of adsorbed $B$’s (if there is any) is desorbed from the lattice. We choose the dimer desorption probability $s=0.5$ and run stationary Monte Carlo simulations starting with an empty lattice with size $L=2^{5}$ up to $2^{11}$. The system reaches a quasisteady state first and stays for a reasonably long time before finally entering into an absorbing state. We measure the kink density in the quasisteady state and average over many survived samples. The number of samples varies from $2\times 10^{5}$ for $L=2^{5}$ to $2\times 10^{3}$ for $L=2^{11}$. The number of time steps ranges from $10^{3}$ to $2\times 10^{5}$. From Eq. (1), we expect the ratio of the critical kink densities for two successive system sizes $\rho(L/2)/\rho(L)=2^{\beta/\nu_{\perp}}$, ignoring corrections to scaling. This ratio converges to unity in the active phase ($p<p_{c}$) and to 2 in the absorbing phase ($p>p_{c}$) in the limit $L\rightarrow\infty$. We plot the logarithm of this ratio divided by $\ln 2$ as a function of $p$ for $L=2^{6},\cdots,2^{11}$ in Fig. 1. The crossing points between lines for two successive sizes converge slowly due to strong corrections to scaling. In the limit $L\rightarrow\infty$, we estimate the crossing points converge to the point at $p_{c}=0.425(4)$ and $\beta/\nu_{\perp}=0.49(3)$. The value of $\beta/\nu_{\perp}$ agrees well with the standard DI value 0.50. In Fig. 2, we show the time dependence of the critical kink densities $\rho(p_{c},t)$ for various system sizes with $p_{c}=0.425$. From the slope of $\rho(p_{c},t)$ we estimate $\beta/\nu_{\|}=0.275(5)$. Insets show the size dependence of the relaxation time $\tau(p_{c},L)$ and the steady-state kink density $\rho(p_{c},L)$ at criticality. We estimate $\nu_{\|}/\nu_{\bot}=1.74(4)$ and $\beta/\nu_{\bot}=0.494(6)$, respectively. All of these results are in excellent agreement with the DI values. We run dynamic Monte Carlo simulations with various initial configurations and get a more precise estimate of the critical probability $p_{c}=0.425(1)$. Our estimates for the dynamic scaling exponents are $\delta+\eta=0.28(1)$ and $z=1.14(1)$ [17], where $\delta+\eta$ characterizes the growth of the number of kinks averaged over survived samples and $z$ the spreading of the active region [2]. 
These values are also in excellent agreement with the DI values. To check the importance of the Ising symmetry among the absorbing states, we introduce a symmetry breaking field such that the monomer adsorption attempt at an even-numbered site is rejected with probability $h$ [11]. For finite $h$, the O group of absorbing states is probabilistically preferable to the E group. We set $h=0.1$ and run stationary Monte Carlo simulations for lattice sizes $L=2^{5}$ up to $2^{9}$. In Fig. 3, we plot $\ln[\rho(L/2)/\rho(L)]/\ln 2$ versus $p$, from which we estimate $p_{c}=0.304(2)$ and $\beta/\nu_{\perp}=0.24(1)$. The value of $\beta/\nu_{\perp}$ is clearly different from the DI value but agrees well with the standard DP value 0.2524(5). More detailed study including dynamic Monte Carlo simulations confirms that the systems with finite $h$ belong to the DP universality class [17]. Similar to the case of the ordinary IMD model, the symmetry-breaking field makes the system behave like having only one (preferred) group of absorbing states [10]. Evolutions of the critical interfaces (active region) $(a)$ for the symmetric case ($h=0$) and $(b)$ for the asymmetric case ($h=0.1$) are shown in Fig. 4. In the symmetric case, the interfaces between the O and E group of absorbing states diffuse until they meet and form a loop to disappear, which is the essential characteristic of the DI universality class. In the asymmetric case, the absorbing region of the unpreferred (E) group quickly vanishes and the interfaces between the different groups become irrelvant. The interfaces inside the preferred (O) group, which can disappear by themselves without forming loops, become dominant and force the system into the DP universality class. When the desorption process of a nearest neighbor $BB$ pair is forbidden $(s=0)$, the system can find many more absorbing states with $BB$ pairs, e.g., ($\cdots B0A0BB0A0A\cdots$), in addition to the two groups of the absorbing states for $s\neq 0$. These new extra absorbing states are generically mixtures of the O and E group of the absorbing states. The O and E groups are now connected dynamically via new mixture-type absorbing states. Consider a configuration with an interface between two absorbing states in the different groups, ($\cdots B0A0\underline{00}0A0A\cdots$), where the interface is placed in two central vacancies $\underline{00}$. With nonzero $s$, this configuration never evolves into an absorbing state. However, in the case of $s=0$, it can evolve into a mixture-type absorbing state by adsorbing a dimer $BB$ in the center. Actually, any interface can disappear by itself in a finite amount of time, so there is no infinite dynamic barrier between absorbing states. Therefore the evolution of the interfaces resembles the asymmetric case in Fig. 4. Absorbing states for $s=0$ no longer possess the clear-cut global symmetry which drives the system into the DI class. So we expect the system falls into the DP class like the other IMA models without an extra symmetry. We run dynamic Monte Carlo simulations starting with a lattice occupied by monomers at alternating sites except at the central vacant site, ($\cdots A0A0\underline{0}0A0A\cdots$), where $\underline{0}$ represents a defect. In Fig. 5, we plot three effective exponents against time; $\delta(t)$, $\eta(t)$, and $z(t)$ [2]. Off criticality, these plots show some curvatures. The values of the dynamic scaling exponents can be extracted by taking the asymptotic values of the effective exponents at criticality. 
From Fig. 5, we estimate $p_{c}=0.105(1)$, $\delta=0.02(1)$, $\eta=0.48(5)$, and $z=1.33(5)$. The values of $\delta+\eta$ and $z$ are in good agreement with the DP values [18]. Introduction of the symmetry breaking field $h$ only changes the location of $p_{c}$. Stationary Monte Carlo simulations also confirm our results [17]. In summary, we found the first IMA model that does not belong to the DP class, but to the DI class. This can be achieved by imposing a global Ising symmetry on the absorbing states, i.e., by forming two equivalent groups of IMA states that are dynamically separated infinitely far apart. When the symmetry between these groups is broken, one group of absorbing states becomes completely obsolete and the evolution morphology changes from a loop-like to a tree-like structure, which places the system in the conventional DP class. We also found that the system goes back to the DP class if the mixture-type absorbing states between the two groups are added. These extra absorbing states connect the two separated groups dynamically and make the loop-forming process of the interfaces irrelevant. The absorbing states in all other previously studied IMA models are dynamically connected in the sense mentioned above. This may explain why those models belong to the DP class. HP wishes to thank M. den Nijs for his hospitality during his stay at the University of Washington where this work was completed. This work was supported by a Research Fund provided by the Korea Research Foundation, Support for Faculty Research Abroad (1997). References [1] for a review, see J. Marro and R. Dickman, Nonequilibrium phase transitions in lattice models (Cambridge University, Cambridge, 1996). [2] P. Grassberger and A. de La Torre, Ann. Phys. (NY) 122, 373 (1979). [3] J. L. Cardy and R. L. Sugar, J. Phys. A 13, L423 (1980); H. K. Janssen, Z. Phys. B 42, 151 (1981); P. Grassberger, Z. Phys. B 47, 365 (1982). [4] G. Grinstein, Z. -W. Lai, and D. A. Browne, Phys. Rev. A 40, 4820 (1989). [5] M. H. Kim and H. Park, Phys. Rev. Lett. 73, 2579 (1994); H. Park, M. H. Kim and H. Park, Phys. Rev. E 52, 5664 (1995). [6] P. Grassberger, F. Krause, and T. von der Twer, J. Phys. A 17, L105 (1984); P. Grassberger, J. Phys. A 22, L1103 (1989). [7] N. Menyhárd, J. Phys. A 27, 6139 (1994); N. Menyhárd and G. Ódor, J. Phys. A 28, 4505 (1995). [8] K. E. Bassler and D. A. Browne, Phys. Rev. Lett. 77, 4094 (1996); Phys. Rev. E 55, 5225 (1997); K. S. Brown, K. E. Bassler, and D. A. Browne, Phys. Rev. E 56, 3953 (1997). [9] H. Hinrichsen, Phys. Rev. E 55, 219 (1997). [10] W. Hwang, S. Kwon, H. Park, and H. Park, Phys. Rev. E 57, 6438 (1998). [11] H. Park and H. Park, Physica A 221, 97 (1995). [12] I. Jensen, Phys. Rev. Lett. 70, 1465 (1993); J. Phys. A 27, L61 (1994); I. Jensen and R. Dickman, Phys. Rev. E 48, 1710 (1993). [13] J. F. F. Mendes, R. Dickman, M. Henkel, and M. C. Marques, J. Phys. A 27, 3019 (1994). [14] P. Grassberger, J. Stat. Phys. 79, 13 (1995). [15] M. A. Muñoz, G. Grinstein, R. Dickman, and R. Livi, Phys. Rev. Lett. 76, 451 (1996). [16] T. Aukrust, D. A. Browne, and I. Webman, Phys. Rev. A 41, 5294 (1990). [17] Detailed numerical results will be published elsewhere. [18] The values of the exponents, $\delta$ and $\eta$, depend on initial configurations, but their sum is universal [13].
The Flux Ratio Method for Determining the Dust Attenuation of Starburst Galaxies Karl D. Gordon (Steward Observatory, University of Arizona, Tucson, AZ 85721; kgordon@as.arizona.edu), Geoffrey C. Clayton (Department of Physics & Astronomy, Louisiana State University, Baton Rouge, LA 70803; gclayton@fenway.phys.lsu.edu), Adolf N. Witt (Ritter Astrophysical Research Center, The University of Toledo, Toledo, OH 43606; awitt@dusty.astro.utoledo.edu), and K. A. Misselt (Department of Physics & Astronomy, Louisiana State University, Baton Rouge, LA 70803; misselt@fenway.phys.lsu.edu) Abstract The presence of dust in starburst galaxies complicates the study of their stellar populations as the dust’s effects are similar to those associated with changes in the galaxies’ stellar age and metallicity. This degeneracy can be overcome for starburst galaxies if UV/optical/near-infrared observations are combined with far-infrared observations. We present the calibration of the flux ratio method for calculating the dust attenuation at a particular wavelength, $Att(\lambda)$, based on the measurement of the $F(IR)/F(\lambda)$ flux ratio. Our calibration is based on spectral energy distributions (SEDs) from the PEGASE stellar evolutionary synthesis model and the effects of dust (absorption and scattering) as calculated from our Monte Carlo radiative transfer model. We tested the attenuations predicted from this method for the Balmer emission lines of a sample of starburst galaxies against those calculated using radio observations and found good agreement. The UV attenuation curves for a handful of starburst galaxies were calculated using the flux ratio method, and they compare favorably with past work. The relationship between $Att(\lambda)$ and $F(IR)/F(\lambda)$ is almost completely independent of the assumed dust properties (grain type, distribution, and clumpiness). For the UV, the relationship is also independent of the assumed stellar properties (age, metallicity, etc.) except for the case of very old burst populations. However, at longer wavelengths, the relationship is dependent on the assumed stellar properties. galaxies: ISM – galaxies: starburst 1 Introduction To study galaxies, it is crucial to be able to separate the effects of the dust intrinsic to the galaxy from those associated with the galaxy’s stellar age and metallicity. Currently, the accuracy of separating the stars and dust in galaxies is fairly poor and the study of galaxies has suffered as a result. This is in contrast with studies of individual stars and their associated sightlines in the Milky Way and nearby galaxies, for which the standard pair method (Massa, Savage, & Fitzpatrick, 1983) works quite well at determining the effects of dust on the star’s spectral energy distribution (SED). The standard pair method is based on comparing a reddened star’s SED with the SED of an unreddened star with the same spectral type. Application of the standard pair method to galaxies is not possible as each galaxy is the result of a unique evolutionary history and, thus, each has a unique mix of stellar populations and star/gas/dust geometry. Nevertheless, it would be very advantageous to find a method which would allow one to determine the dust attenuation of an individual galaxy. Such a method would greatly improve the accuracy of different star formation rate measurements. For example, two widely used star formation rate measurements are based on UV and H$\alpha$ luminosities.
Both are affected by dust and this limits their accuracy (Kennicutt, 1998; Schaerer, 1999). The importance of correcting for the effects of dust in galaxies has gained attention through recent investigations into the redshift dependence of the global star formation rate (Madau, Pozzetti, & Dickinson, 1998; Steidel et al., 1999). The uncertainty in the correction for dust currently dominates the uncertainty in the inferred star formation rate in galaxies (Pettini et al., 1998; Meurer, Heckman, & Calzetti, 1999) and conclusions about the evolution of galaxies (Calzetti & Heckman, 1999). Initially, the effects of dust in galaxies were removed using a screen geometry. This assumption has been shown to be a dangerous oversimplification as the dust in galaxies is mixed with the stars. Radiative transfer studies have shown that mixing the emitting sources and dust and having a clumpy dust distribution produces highly unscreen-like effects (Witt, Thronson, & Capuano, 1992; Witt & Gordon, 1996; Gordon, Calzetti, & Witt, 1997; Ferrara et al., 1999; Takagi, Arimoto, & Vansevičius, 1999; Witt & Gordon, 1999). For example, the traditional reddening arrows in color-color plots turn into complex, non-linear reddening trajectories. In general, the attenuation curve of a galaxy is not directly proportional to the dust extinction curve and its shape changes as a function of dust column (e.g., Figs. 6 & 7 of Witt & Gordon (1999)). While the various radiative transfer studies have made it abundantly clear that correcting for the effects of dust in galaxies is hard, none have come up with a method that is not highly dependent on the assumed dust grain characteristics, star/gas/dust geometry, and clumpiness of the dust distribution. This has led to a search for empirical methods. For galaxies with hydrogen emission lines, it is possible to determine the slope and, with radio observations, the strength of the galaxies’ attenuation curves at the emission line wavelengths (Calzetti, Kinney, & Storchi-Bergmann, 1994; Smith et al., 1995). Unfortunately, this method is limited to the select few wavelengths associated with hydrogen emission lines. In the pioneering study of the IUE sample of starburst galaxies (Kinney et al., 1993), Calzetti, Kinney, & Storchi-Bergmann (1994) used a variant of the standard reddened star/unreddened star method to compute the average attenuation curve for these galaxies. This work binned the sample using $E(B-V)$ values derived from the H$\alpha$ and H$\beta$ emission lines and assigned the lowest $E(B-V)$ bin the status of unreddened. While this work was a significant advance in the study of dust in galaxies, it is only applicable to statistical studies of similar samples of starburst galaxies, not individual galaxies (Sawicki & Yee, 1998). More recently, Meurer, Heckman, & Calzetti (1999) derived a relationship between the slope of the UV spectrum of a starburst galaxy and the attenuation suffered at 1600 Å, $Att(1600)$, using the properties of the IUE sample. This slope is parameterized by $\beta$ where the UV spectrum is fit to a power law ($F(\lambda)\propto\lambda^{-\beta}$) in the wavelength range between 1200 and 2600 Å (Calzetti, Kinney, & Storchi-Bergmann, 1994). The purpose of Meurer, Heckman, & Calzetti (1999) was to calculate the attenuation suffered by high redshift starburst galaxies using only their UV observations. 
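As an aside, the slope $\beta$ just described can be estimated from an observed UV spectrum by a least-squares fit in log-log space; a minimal sketch (illustrative only; the published calibrations fit selected continuum windows rather than every pixel between 1200 and 2600 Å, and the wavelength grid here is an assumption):

import numpy as np

def uv_slope_beta(wave_angstrom, flux):
    # Estimate beta from F(lambda) ~ lambda**(-beta) by fitting a straight line
    # to log10(flux) versus log10(wavelength) over 1200-2600 Angstroms.
    wave = np.asarray(wave_angstrom, dtype=float)
    flux = np.asarray(flux, dtype=float)
    sel = (wave >= 1200.0) & (wave <= 2600.0) & (flux > 0)
    slope, _ = np.polyfit(np.log10(wave[sel]), np.log10(flux[sel]), 1)
    return -slope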
From our radiative transfer work, we have found that this relationship is strongly dependent on the star/gas/dust geometry, dust grain properties, and dust clumpiness (Witt & Gordon, 1996, 1999) as suspected by Meurer, Heckman, & Calzetti (1999). Fig. 11 of Witt & Gordon (1999) shows the dependence of $Att(1600)$ on $\Delta\beta$ ($=\beta-2.5$) for various geometries, dust clumpinesses, and dust types. Meurer, Heckman, & Calzetti (1999) used the observed relationship between $F(IR)/F(1600)$ and $\beta$ for starburst galaxies, combined with a semi-empirical calibration between $F(IR)/F(1600)$ and $Att(1600)$ to determine the relationship between $Att(1600)$ and $\beta$. The correlation between $F(IR)/F(UV)$ and $\beta$ was first introduced by Meurer et al. (1997) where $F(UV)=F(2200)$. Witt & Gordon (1999) discovered that the relationship between $F(IR)/F(1600)$ and $Att(1600)$ was almost completely independent of the star/gas/dust geometry, dust grain properties, and dust clumpiness (see Fig. 12b of Witt & Gordon (1999)). This implies that $F(IR)/F(1600)$ is a much better indicator of $Att(1600)$ than $\beta$. This opened the possibility that the $F(IR)/F(\lambda)$ might be a good measure of $Att(\lambda)$ and was the motivation for this paper. Qualitatively, there is good reason to think that a measure based on the flux at a wavelength $\lambda$ and the total flux absorbed and re-emitted by dust, $F(IR)$, should be a measure of $Att(\lambda)$. This is basically a statement of conservation of energy. Evidence that $F(IR)/F(UV)$ is a rough indicator of $Att(UV)$ in disk galaxies is given by Wang & Heckman (1996). The details of the relationship between $F(IR)/F(\lambda)$ and $Att(\lambda)$ will be dependent on the stellar, gas, and dust properties of a galaxy. Thus, a calibration of the relationship is necessary. In §2, we calibrate the relationship between $F(IR)/F(\lambda)$ and $Att(\lambda)$ for UV, optical, and near-IR wavelengths using a stellar evolutionary synthesis model combined with our dust radiative transfer model. This allowed us to investigate the dependence of the relationship on stellar parameters (age, star formation type, and metallicity) and dust parameters (geometry, local dust distribution, dust type, and the fraction of Lyman continuum photons absorbed by dust). We show a comparison of $Att(H\alpha)$, $Att(H\beta)$, and $Att(H\gamma)$ values determined with this flux ratio method and the radio method (Condon, 1992) for 10 starburst galaxies in §3. In §4, we apply the flux ratio method to construct the UV attenuation curves for 8 starburst galaxies. The implications this work are discussed in §5. 2 The Flux Ratio Method 2.1 $F(IR)/F(\lambda)$ Flux Ratio In a galaxy, almost all of the photons absorbed by dust are emitted by stars and gas in the UV, optical, and near-IR. This energy heats the dust which then re-emits in the mid- and far-infrared (small and large dust grains). 
Thus, the ratio of the total infrared flux to the flux at a particular wavelength is $$\frac{F(IR)}{F(\lambda)}=\frac{a_{d}F(LyC)+(1-a_{d})F(Ly\alpha)+\int_{912~\AA}^{\infty}f(\lambda^{\prime},0)\left(1-C(\lambda^{\prime})\right)d\lambda^{\prime}}{\lambda f(\lambda,0)C(\lambda)}$$ (1) where $F(IR)$ is the total IR flux in ergs cm${}^{-2}$ s${}^{-1}$, $F(LyC)$ is the total unattenuated stellar flux below 912 Å in ergs cm${}^{-2}$ s${}^{-1}$, $a_{d}$ is the fraction of $F(LyC)$ absorbed by dust internal to the H ii regions (Petrosian, Silk, & Field, 1972; Mathis, 1986), $F(Ly\alpha)$ is the $Ly\alpha$ emission line flux, $f(\lambda,0)$ is the unattenuated stellar/nebular flux in ergs cm${}^{-2}$ s${}^{-1}$ Å${}^{-1}$, $C(\lambda)=10^{-0.4Att(\lambda)}$, and $Att(\lambda)$ is the attenuation at $\lambda$ in magnitudes. For emission lines, the denominator of eq. 1 becomes $(1-a_{d})F(\lambda,0)C(\lambda)$ where $F(\lambda,0)$ is the intrinsic integrated flux of the emission line. The $Ly\alpha$ line is resonantly scattered and, thus, is completely absorbed by the dust internal to the H ii regions. Eq. 1 is similar to eq. 3 of Meurer, Heckman, & Calzetti (1999), but includes an additional term to account for the Lyman continuum photons absorbed by dust. 2.2 Relationship between $F(IR)/F(\lambda)$ and $Att(\lambda)$ We can calculate the relationship between $F(IR)/F(\lambda)$ and $Att(\lambda)$ by using a stellar evolutionary synthesis (SES) model and a dust radiative transfer model. We use the PEGASE SES model (Fioc & Rocca-Volmerange, 1997, 1999) which gives the SEDs of stellar populations with a range of ages, types of star formation (burst/constant), and metallicities. One strength of the PEGASE model is that it computes the continuum and emission lines expected from gas emission as well as the stellar emission. We used a Salpeter IMF for the PEGASE calculations. The SES model SEDs give $F(LyC)$, $f(\lambda,0)$, and emission line $F(\lambda,0)$ values. The effects of dust were calculated using the DIRTY radiative transfer model (Witt & Gordon, 1999). The DIRTY model gives the attenuation curves, $Att(\lambda)$, for a range of spherical star/gas/dust global geometries (shell, dusty, or cloudy), local dust distributions (homogeneous or clumpy), Milky Way (Cardelli, Clayton, & Mathis, 1989) or Small Magellanic Cloud (Gordon & Clayton, 1998) dust grain characteristics, and dust columns ($\tau_{V}=0.25-50$). The cloudy geometry has dust extending to 0.69 of the system radius and stars extending to the model radius. The dusty geometry has both dust and stars extending to the model radius. This geometry represents a uniform mixture of stars and dust. The shell geometry has stars extending to 0.3 of the model radius and dust extending from 0.3 to 1 of the model radius. These three star/gas/dust geometries are shown pictorially in Figure 1 of Witt & Gordon (1999). Additional details of the DIRTY model calculations can be found in Witt & Gordon (1999). In Figure 1, we plot the relationship between $F(IR)/F(\lambda)$ and $Att(\lambda)$ for the Meurer, Heckman, & Calzetti (1999) 1600 Å, HST/WFPC2 F218W, V, and K bands assuming a constant star formation, 10 Myr old, solar metallicity SED, $a_{d}=0.25$, and the full range of dust parameters (see above). The most surprising result is that this relationship is not sensitive to the type of dust (MW/SMC) or the local dust distribution (homogeneous/clumpy). This is true not just for the four bands plotted in Fig.
1, but for all the ultraviolet, optical, and near-infrared. Less surprising is that this relationship in the V and K bands is sensitive to the presence of stars outside the dust. The dusty and shell geometries follow similar curves while the cloudy geometry follows a different curve. For the cloudy geometry, as the attenuation is increased the dominance of the band flux from the stars outside the dust increases to the point where the band flux no longer depends on the attenuation (i.e. the flux from the stars attenuated by dust is much less than the flux from the unattenuated stars). This is not the case for the dusty and shell geometries where band flux continues to decrease with increasing attenuation since all the stars are inside the dust and attenuated to some degree. The dependence of the $F(IR)/F(\lambda)$ versus $Att(\lambda)$ relationship can be sensitive to the shape of the intrinsic SED. Example SEDs for solar metallicity stellar populations are given in Fig. 3. The dependence of $F(IR)/F(\lambda)$ on $Att(\lambda)$ is illustrated in Figure 2 which shows the dependence of the flux ratio relationship for the 1600 and V bands on age, metallicity, star formation rate, and $a_{d}$ value. In the 1600 band, the relationship is quite similar for most choices of the above parameters except for old burst stellar populations (Fig. 2c). Our calibration of $F(IR)/F(1600)$ versus $Att(1600)$ is indistinguishable from that presented in Meurer, Heckman, & Calzetti (1999) after correcting for $\sim$30% difference between $F_{FIR}$ (Helou et al., 1988) and $F(IR)$ as $F_{FIR}$ does not include the hotter dust detected in the mid-IR. In the V band, the relationship is dependent, in decreasing order of dependence, on age, burst versus constant star formation, metallicity, and value of $a_{d}$. The qualitative dependence of other UV bands ($\lambda<3000$ Å) is similar to that seen for the 1600 Å band. The behavior of optical and near-infrared bands is similar to that of the V band with the increasing dependence on the above parameters as $\lambda$ increases. The behavior of emission lines is similar to that seen for the V band, but has notable differences. Figure 4 gives the relationship between $F(IR)/F(H\alpha)$ and $Att(H\alpha)$ for the same parameters plotted in Fig. 2. One obvious difference between the V band and H$\alpha$ emission line is that the behavior with age is reversed. In particular, the H$\alpha$ emission line is very sensitive to the value of $a_{d}$ since the strength of H$\alpha$ is directly proportional to $(1-a_{d})$. The behavior of the flux ratio versus $Att(\lambda)$ relationship can be qualitatively explained fairly easily. The general shape of the curves (see Fig. 1) is seen to be non-linear versus $F(IR)/F(\lambda)+1$ below $Att(\lambda)\sim 1.5$ and nearly linear versus $\log[F(IR)/F(\lambda)+1]$ above $Att(\lambda)\sim 1.5$. The non-linearity of the curve is due to changing relationship between the effective wavelength of $F(IR)$ energy absorption and that of $Att(\lambda)$. The linear portion of the curve is in the realm where $F(IR)$ is changing slowly (most of the galaxy’s luminosity is now being emitted in the IR), but $F(\lambda)$ continues to decrease due to the steady increase of $Att(\lambda)$. Thus, above $Att(\lambda)\sim 1.5$ the curves for all wavelengths have the same slope but different offsets reflecting the contributions different wavelengths make to $F(IR)$. 
Below $Att(\lambda)\sim 1.5$, the effective wavelength of the 1600 and F218W bands is similar to that of the $F(IR)$ energy absorption, resulting in a nearly linear relationship. This is not the case for the V and K bands where their effective wavelengths are much larger than that of the $F(IR)$ energy absorption and, therefore, the V and K bands have non-linear relationships below $Att(\lambda)\sim 1.5$. The different behaviors of the star/gas/dust geometry relationships (see Fig. 1) are due to the presence of stars outside the dust in the cloudy geometry and the lack of external stars in the shell and dusty geometries. The behavior of the relationship for different stellar populations (Fig. 2 & 4) can be easily explained using the same arguments used above. The invariance of the relationship for the 1600 band is a reflection of the dominance of the UV in the $F(IR)$ energy absorption. The only time when the 1600 relationship is not invariant is for old burst stellar populations where the lack of significant UV flux means that the optical dominates the $F(IR)$ energy absorption (Fig. 3b). This is confirmed by the linear behavior over the entire $Att(\lambda)$ range of the V band curves for old stellar populations (Fig. 2d). The separation of the curves in the V band is the result of the different contributions the V band flux makes to the $F(IR)$ absorbed energy for different stellar populations. The older the stellar population, the more the optical contributes to the $F(IR)$ and, thus, the more linear the V band relationship is below $Att(V)\sim 1.5$. 2.3 Fits to the Relationships In order to use this method, we have fit the relationship between $F(IR)/F(\lambda)$ and $Att(\lambda)$ for combinations of stellar age, metallicity, burst or constant star formation, and values of $a_{d}$. We chose to fit the combination of the dusty/shell geometry curves. However, this does not limit the use of our fits in the UV since the cloudy geometry curves follow the dusty/shell geometry curves. This does limit the use of our fits for wavelengths longer than $\sim$3500 Å to cases where the dominant stellar sources are embedded in the dust such as starburst galaxies. The curvature of the relationship at $Att(\lambda)\sim 1$ required us to use a combination of a 3rd order polynomial for $Att(\lambda)<1.75$ and a 2nd order polynomial for $Att(\lambda)>1$. As a result the fit is: $$Att(\lambda)=\left\{\begin{array}{ll}A(x)&x<x_{1}\\ w(x)A(x)+(1-w(x))B(x)&x_{1}<x<x_{2}\\ B(x)&x>x_{2}\\ \end{array}\right.$$ (2) where $$\begin{aligned} x&=F(IR)/F(\lambda),\\ x_{1}&=x[Att(\lambda)=1],\\ x_{2}&=x[Att(\lambda)=1.75],\\ A(x)&=a_{1}+b_{1}x+c_{1}x^{2}+d_{1}x^{3},\\ B(x)&=a_{2}+b_{2}(\log x)+c_{2}(\log x)^{2},\mbox{ and}\\ w(x)&=(x_{2}-x)/(x_{2}-x_{1}).\end{aligned}$$ For each curve fit with equation 2, 9 numbers result: 4 coefficients for $A(x)$, 3 coefficients for $B(x)$, and the $F(IR)/F(\lambda)$ values where $Att(\lambda)=1$ and $1.75$ ($x_{1}$ and $x_{2}$). Computing the $Att(\lambda)$ value corresponding to a particular value of $F(IR)/F(\lambda)$ then involves specifying the stellar age, metallicity, star formation type, and value of $a_{d}$ which specify the appropriate fit coefficients to use.
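As a concrete illustration of how eq. 2 is evaluated in practice, the following sketch computes $Att(\lambda)$ for a single band from a measured $F(IR)/F(\lambda)$ ratio. The coefficients and break points used here are invented placeholders (the actual fit parameters depend on the assumed stellar age, metallicity, star formation type, and $a_{d}$, and are available from the lead author, as noted below); the base of the logarithm in $B(x)$ is assumed here to be 10.

```python
import numpy as np

def attenuation_from_flux_ratio(x, a1, b1, c1, d1, a2, b2, c2, x1, x2):
    """Evaluate Att(lambda) from x = F(IR)/F(lambda) using the piecewise fit of eq. 2.

    A(x) is the cubic used for x < x1 (Att < 1), B(x) the quadratic in log10(x)
    used for x > x2 (Att > 1.75); the two are linearly blended for x1 < x < x2.
    """
    A = a1 + b1 * x + c1 * x**2 + d1 * x**3
    B = a2 + b2 * np.log10(x) + c2 * np.log10(x)**2
    if x < x1:
        return A
    if x > x2:
        return B
    w = (x2 - x) / (x2 - x1)
    return w * A + (1.0 - w) * B

# Hypothetical coefficients and break points, for illustration only.
coeffs = dict(a1=0.1, b1=0.6, c1=-0.05, d1=0.002,
              a2=0.9, b2=2.0, c2=0.3, x1=2.0, x2=5.0)
print(attenuation_from_flux_ratio(3.2, **coeffs))
```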
The parameters of these fits are available from the lead author as well as an IDL function which implements the calibration. 3 Comparison with Radio Method While the flux ratio method is relatively simple and makes sense qualitatively, to be truly convincing, we need an independent method for determining the attenuation for comparison. Fortunately, radio observations combined with measured hydrogen emission line fluxes allows just such a test. The radio method (Condon, 1992) is based on the measurement of the free-free radio flux from H ii regions and the assumption of Case B recombination (Osterbrock, 1989). From the thermal flux, the number of Lyman continuum photons absorbed by the gas can be calculated and, thus, the intrinsic fluxes of the hydrogen emission lines. Comparison of the intrinsic and observed line fluxes gives the attenuation at the emission line wavelength. The major source of uncertainty in the radio method is that radio observations contain both thermal (free-free) and nonthermal (synchrotron) components. For example, approximately a quarter of the flux measured at 4.85 GHz has a thermal origin. The decomposition of the measured radio flux into thermal and nonthermal components imparts a factor of two uncertainty in the resulting thermal flux (Condon, 1992). Unfortunately, determining attenuations using the flux ratio method is the most uncertain for hydrogen emission line fluxes. This is due to the lack of knowledge of the value of $a_{d}$, the fraction of Lyman continuum photons absorbed by dust (Fig. 4). We can take guidance from the work done by DeGioia-Eastwood (1992) on six Large Magellanic Cloud H ii regions. She found that $a_{d}$ ranges from 0.21 – 0.55 using the approximation of Petrosian, Silk, & Field (1972). We will use this range of $a_{d}$ values in the calculations below. To do this comparison, we need galaxies which have hydrogen emission line fluxes, infrared, and radio observations. In the IUE sample of starburst galaxies (Kinney et al., 1993), there are 10 galaxies with Balmer emission line (Storchi-Bergmann, Calzetti, & Kinney, 1994; Mcquade, Calzetti, & Kinney, 1995), IRAS (Calzetti et al., 1995), and 4.85 GHz observations (Gregory & Condon, 1991; Wright et al., 1994, 1995, 1996). The 10 galaxies are NGC 1313, 1569, 1614, 3256, 4194, 5236, 5253, 6052, 7552, & 7714. The emission lines were measured in a $10\arcsec\times 20\arcsec$ aperture which was usually large enough to include the entire starburst region but not the entire galaxy. While the IRAS and 4.85 GHz observations usually encompass the entire galaxy, the majority of the IRAS and radio flux emerges from the starburst region which should minimize the importance of the aperture mismatch (Calzetti et al., 1995). Figure 5 shows the comparison between the attenuations suffered by the H$\alpha$, H$\beta$, and H$\gamma$ emission lines in the 10 galaxies as calculated from the flux ratio method and the radio method. While the measurements of each galaxy’s three Balmer emission lines are related (through Case B recombination theory), plotting all three reduces the observational uncertainty due to the emission line flux measurements and increases the range of attenuations tested. For the radio method, we calculated the intrinsic emission line strengths using eqs. 3 & 5 of Condon (1992) assuming a $T_{e}=10^{4}~{}K$ and Table 4.2 of Osterbrock (1989). The attenuations were then easily calculated from the intrinsic and observed emission line fluxes. 
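The final step of the radio method (converting intrinsic and observed line fluxes into attenuations) amounts to a simple magnitude calculation, sketched below. The intrinsic fluxes are assumed to have already been derived from the thermal radio flux via eqs. 3 & 5 of Condon (1992), which are not reproduced here, and the flux values are invented for illustration.

```python
import numpy as np

def line_attenuation(f_observed, f_intrinsic):
    """Attenuation in magnitudes from observed and intrinsic (radio-derived) line fluxes."""
    return 2.5 * np.log10(f_intrinsic / f_observed)

# Hypothetical Balmer-line fluxes (erg cm^-2 s^-1) for one galaxy.
observed  = {"Halpha": 3.0e-12, "Hbeta": 6.0e-13, "Hgamma": 2.3e-13}
intrinsic = {"Halpha": 9.0e-12, "Hbeta": 3.1e-12, "Hgamma": 1.5e-12}

for line in observed:
    print(f"{line}: Att = {line_attenuation(observed[line], intrinsic[line]):.2f} mag")
```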
For the flux ratio method, the $F(IR)$ flux was computed by integrating each galaxy’s 8 to 1000 $\micron$ SED after extrapolating the IRAS fluxes to longer wavelengths using a modified black body (dust emissivity $\propto\lambda^{-1}$). The temperature and flux level of the modified black body were determined from the IRAS 60 and 100 $\micron$ fluxes. ISO observations of starburst galaxies support the use of a single temperature for the large dust grain emission (Krügel et al., 1998). The 10 galaxies’ $F(IR)$ fluxes were 1.6 to 2.5 times larger than their FIR fluxes (as defined by Helou et al. (1988)) due to our inclusion of the mid-infrared emission from hot, small dust grains. We assumed the 10 galaxies were undergoing constant star formation and used their measured metallicities (Calzetti et al., 1995) for the calculation of their attenuations from their measured infrared to emission line flux ratios. The error bars in Fig. 5 for the flux ratio method reflect the range of attenuations possible, assuming the galaxy age is between 1 Myr and 10 Gyr and $a_{d}$ values between 0.21 and 0.55. The attenuations calculated for the two methods agree well within their associated uncertainties. This gives confidence that the flux ratio method for calculating attenuations is valid. Of course, this conclusion would be strengthened with a larger sample of galaxies and observations with similar apertures at optical, infrared, and radio wavelengths. Such infrared observations will become possible with the launch of SIRTF. 4 Application to Individual Galaxies The application of the flux ratio method to determining the UV attenuations of individual galaxies is straightforward. Due to the insensitivity in the UV of this method to the star, gas, or dust parameters (Figs. 1a,b & 2a,c,e), the observed $F(IR)/F(UV)$ is directly related to $Att(UV)$. This is not the case for optical and near-IR wavelengths where this method is sensitive to the intrinsic SED shape (Figs. 2b,d,f) and, to a lesser extent, the geometry of the star, gas and dust (Fig. 1c,d). In order to construct the full UV through near-IR attenuation curve for a galaxy, an iterative procedure must be followed. The steps of the iterative procedure are: 1. Assume an intrinsic SED shape (stellar age, metallicity, star formation type, and $a_{d}$ value), 2. Construct a candidate attenuation curve using the observed UV-NIR $F(IR)/F(\lambda)$ and our calibration of $Att(\lambda)$ versus $F(IR)/F(\lambda)$, 3. Deredden the observed UV-NIR SED with the candidate attenuation curve, 4. Compare the dereddened SED (step 3) with the assumed SED (step 1), 5. Repeat steps 1-4 to find the attenuation curve which produces the best match between the dereddened SED and the assumed SED. We attempted to apply this iterative method to the 10 starburst galaxies listed in the previous section as they have UV, optical, near-infrared, infrared, and radio observations. We were unable to find fits which would simultaneously fit the UV/optical/NIR continuum and the H$\alpha$ emission attenuations derived from the radio observations. To do the fitting we used the measured metallicities of the galaxies and allowed the galaxy’s age and type of star formation as well as the value of $a_{d}$ to vary. The fact that we could not find fits to any of the 10 galaxies is an indication that at least two stellar populations are contributing to the observed SED.
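As an aside, the $F(IR)$ integration described at the beginning of this section can be sketched as follows. The snippet fits a modified black body (emissivity $\propto\lambda^{-1}$) to the IRAS 60 and 100 $\micron$ flux densities and integrates it over 8-1000 $\micron$; the input fluxes are invented, and the full calculation in the text also folds in the measured 12 and 25 $\micron$ IRAS points rather than relying on the long-wavelength extrapolation alone.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

h, k, c = 6.626e-34, 1.381e-23, 2.998e8        # SI constants

def modified_bb(nu, T):
    """Planck function times nu (dust emissivity ~ lambda^-1, i.e. ~ nu^+1)."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T)) * nu

def fit_dust_temperature(f60, f100):
    """Solve for T from the IRAS 60/100 micron flux-density ratio (f60, f100 in Jy)."""
    nu60, nu100 = c / 60e-6, c / 100e-6
    g = lambda T: modified_bb(nu60, T) / modified_bb(nu100, T) - f60 / f100
    return brentq(g, 10.0, 200.0)              # bracket the root between 10 K and 200 K

def far_ir_flux(f60, f100):
    """Integrate the fitted modified black body over 8-1000 micron.
    Returns (T_dust in K, flux in erg cm^-2 s^-1); f60, f100 are given in Jy."""
    T = fit_dust_temperature(f60, f100)
    scale = f100 * 1e-26 / modified_bb(c / 100e-6, T)        # Jy -> W m^-2 Hz^-1
    flux_wm2, _ = quad(lambda nu: scale * modified_bb(nu, T),
                       c / 1000e-6, c / 8e-6, limit=200)
    return T, flux_wm2 * 1e3                                 # W m^-2 -> erg cm^-2 s^-1

T_dust, f_ir = far_ir_flux(f60=50.0, f100=80.0)              # made-up IRAS fluxes
print(f"T_dust = {T_dust:.1f} K, F(IR) = {f_ir:.2e} erg cm^-2 s^-1")
```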
But the correlation between the radio and flux ratio H$\alpha$ attenuations is strong evidence that only one of these stellar populations is ionizing the gas and is the main source for the dust heating (see Fig. 5). This stellar population is likely the starburst and the other stellar population is likely that of the underlying galaxy. The existence of two stellar populations, each with different stellar parameters and attenuation curves, complicates the fitting to the point where the number of free parameters can exceed the number of observed data points. This illustrates one of the main difficulties of applying the flux ratio method. The calibration of the flux ratio method is based on the assumption that there is a single stellar population responsible for the UV-NIR continuum and IR dust emission. When a second stellar population contributes to the continuum or IR dust emission, applying the flux ratio method will become more difficult. While we cannot determine the UV-NIR attenuation curves for the 10 starburst galaxies, we can determine their UV attenuation curves for the following reason. The identification of the stellar population heating the dust as the same population ionizing the gas leads us to conclude that the same population also emits the majority of the galaxies’ UV continua since UV photons are the main source of dust heating for starburst galaxies (see §3). Figure 6 gives the attenuation curves for 8 of the 10 starburst galaxies used in the previous section. The other 2 galaxies were excluded as they did not have near UV data. All of the curves lack a substantial 2175 Å bump in agreement with previous work (Calzetti, Kinney, & Storchi-Bergmann, 1994; Gordon, Calzetti, & Witt, 1997). There is also a trend towards steeper attenuation curves as $Att(2850)$ decreases, which is the behavior predicted by Witt & Gordon (1999). The curves are similar to the “Calzetti attenuation curve” (Calzetti, Kinney, & Storchi-Bergmann, 1994; Calzetti, 1997) derived for the IUE sample of galaxies. As the Calzetti attenuation curve is an average, the scatter of our individual curves is likely to be real. While the 8 galaxies in Fig. 6 were included in the Calzetti (1997) work, the method used to derive the Calzetti attenuation curve was quite different from the flux ratio method. This is further evidence that the flux ratio method can determine the attenuation curves of starburst galaxies. 5 Discussion We have presented a method which uses the $F(IR)/F(\lambda)$ flux ratio to determine $Att(\lambda)$ for individual starburst galaxies. The major strengths of this method are that it is almost completely independent of the type of dust (MW/SMC) or the local distribution of dust (homogeneous/clumpy), and only weakly dependent on the global distribution of stars and dust (presence/lack of stars outside dust). In the ultraviolet, this method is independent of the intrinsic stellar SED except for the case of very old burst populations. In the optical/near-IR, this method is dependent on the intrinsic stellar SED shape. The flux ratio method is not based on the properties of the nebular emission (as is the radio method), but on the properties of the stellar continuum and IR dust emission. As a result, it is applicable to any wavelength from the UV to near-IR and not just wavelengths with hydrogen emission lines.
A major limitation of the flux ratio method is that the majority of the observed UV through far-infrared flux must originate from a single stellar population (either burst or constant star formation). An example of a case where the flux ratio method would not be applicable would be a heavily embedded starburst in a galaxy with a second older, less embedded stellar population. At UV and IR wavelengths the starburst would dominate, but at optical and near-IR wavelengths the older population would dominate. Another possible limitation is that the measured infrared flux is assumed to be a direct measure of the flux absorbed by the dust. If the infrared radiation is not emitted symmetrically (e.g., for non-symmetrically distributed dust which is optically thick in the infrared), then the measured infrared flux will not be a direct measure of the flux absorbed by the dust. The assumption that the infrared flux is a direct measure of the flux absorbed by the dust is crucial to the accuracy of the flux ratio method. It is possible to account for these weaknesses by increasing the complexity of the modeling by adding additional stellar populations and/or complex dust geometries. Such increases in the complexity of the modeling will necessarily require more detailed spectral and spatial observations as the number of model parameters increases. For any starburst galaxy with UV and IR observations, the UV attenuation curve can be calculated using the flux ratio method. Starburst galaxies are likely to be the best case for applying the flux ratio method as the intensity of the starburst greatly increases the probability that the UV and IR flux originate from only the starburst population. If the parameters (age, metallicity, etc.) of the intrinsic SED shape can be determined and the contamination from the underlying stellar population removed, then the attenuation of the starburst galaxy can be determined not only for the UV, but also for the optical and near-IR. Thus, the flux ratio method seems very promising for determining the dust attenuations of individual galaxies. The easiest way to ensure the basic assumptions of our calibration of the flux ratio method are met is to take high spatial resolution observations of starburst regions in nearby galaxies or integrated galaxy observations of intense starburst galaxies at any distance. This would ensure that the UV through far-infrared flux originates from the starburst and not the host galaxy. Examples of these observations would be super star clusters in nearby galaxies (Calzetti et al., 1997) and observations of high-z starburst galaxies which have been shown to be similar to local starbursts except more intense (Heckman et al., 1998). Currently, both types of UV, optical, and near-IR observations can and have been done, but the far-infrared observations needed await SIRTF. SIRTF will have the spatial resolution and sensitivity to do both types of observations. The ability to determine the UV dust attenuation curve for individual starburst galaxies will facilitate the study of dust in different star formation environments. The traditional explanation for the differences seen in the dust extinction between the Milky Way, LMC, and SMC has been that the different metallicities of the three galaxies lead to different dust grains.
Work on starburst galaxies with metallicities between 0.1 and 2 times solar, which found that most of these galaxies possess dust lacking a 2175 Å bump (Calzetti, Kinney, & Storchi-Bergmann, 1994; Gordon, Calzetti, & Witt, 1997), seriously called this explanation into question. Subsequent work on the extinction curves in both the SMC (Gordon & Clayton, 1998) and LMC (Misselt, Clayton, & Gordon, 1999) found that the extinction curves toward star forming regions in both galaxies were systematically different from those toward more quiescent regions. These results imply that dust near sites of active star formation is different due to processing (Gordon, Calzetti, & Witt, 1997) of existing dust or formation of new dust (Dwek, 1998). The processing interpretation is supported by recent work in the Milky Way along low density sightlines toward the Galactic Center (Clayton, Gordon, & Wolff, 1999). This work found that sightlines which show evidence of processing (probed by N(Ca II)/N(Na I)) have weaker 2175 Å bumps and stronger far-UV extinctions than most other Milky Way sightlines (Cardelli, Clayton, & Mathis, 1989). The actual processing mechanism is not simple as the dust towards the most intense star formation in the LMC (30 Dor) has a weak 2175 Å bump, but the dust towards the most intense star formation in the SMC, which has only 10% the strength of 30 Dor, has no 2175 Å bump. In order to completely characterize the dust near starbursts, attenuation curves for a large sample of starburst galaxies with a range of metallicity, dust content, and starburst strength are needed. In conjunction with investigating the impact environment has on dust properties, the ability to determine individual starburst galaxy attenuation curves will simplify the study of the starburst phenomenon. By being able to remove the effects of dust accurately, the age and strength of starburst galaxies and regions in galaxies can be determined with confidence. In the realm of high redshift starburst galaxies ($z>2.5$), the ability to determine the dust attenuation of individual galaxies will arrive with the advent of deep SIRTF/MIPS imaging of fields with existing rest-frame UV imaging (e.g., Hubble Deep Fields). The currently large uncertainty on the global star formation history of the universe due to the effects of dust on starburst galaxies will be greatly reduced (Madau, Pozzetti, & Dickinson, 1998; Pettini et al., 1998; Steidel et al., 1999). This work benefited from discussions with Daniela Calzetti and Gerhardt Meurer. Support for this work was provided by NASA through LTSAP grant NAG5-7933 and archival grant AR-08002.01-96A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. References Calzetti (1997) Calzetti, D. 1997, AJ, 113, 162 Calzetti et al.  (1995) Calzetti, D., Bohlin, R. C., Kinney, A. L., Storchi-Bergmann, T., & Heckman, T. M. 1995, ApJ, 443, 136 Calzetti & Heckman (1999) Calzetti, D. & Heckman, T. M. 1999, ApJ, 519, 27 Calzetti, Kinney, & Storchi-Bergmann (1994) Calzetti, D., Kinney, A. L., & Storchi-Bergmann, T. 1994, ApJ, 429, 582 Calzetti et al.  (1997) Calzetti, D., Meurer, G. R., Bohlin, R. C., Garnett, D. R., Kinney, A. L., Leitherer, C., & Storchi-Bergmann, T. 1997, AJ, 114, 1834 Cardelli, Clayton, & Mathis (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245 Clayton, Gordon, & Wolff (1999) Clayton, G. C., Gordon, K. D., & Wolff, M. J. 1999, ApJ, submitted Condon (1992) Condon, J. J.
1992, ARA&A, 30, 575 DeGioia-Eastwood (1992) DeGioia-Eastwood, K. 1992, ApJ, 397, 542 Dwek (1998) Dwek, E. 1998, ApJ, 501, 643 Ferrara et al.  (1999) Ferrara, A., Bianchi, S., Cimatti, A., & Giovanardi, C. 1999, ApJS, in press (astro-ph/9903078) Fioc & Rocca-Volmerange (1997) Fioc, M. & Rocca-Volmerange, B. 1997, A&A, 326, 950 Fioc & Rocca-Volmerange (1999) Fioc, M. & Rocca-Volmerange, B. 1999, in preparation Gordon, Calzetti, & Witt (1997) Gordon, K. D., Calzetti, D., & Witt, A. N. 1997,ApJ, 487, 625 Gordon & Clayton (1998) Gordon, K. D. & Clayton, G. C. 1998, ApJ, 500, 816 Gregory & Condon (1991) Gregory, P. C. & Condon, J. J. 1991, ApJS, 75, 1011 Heckman et al.  (1998) Heckman, T. M., Robert, C., Leitherer, C., Garnett, D. R., van der Rydt, F. 1998, ApJ, 503, 646 Helou et al.  (1988) Helou, G., Khan, I. R., Malek, L., & Boehmer, L. 1988, ApJS, 68, 151 Kennicutt (1998) Kennicutt, R. C. 1998, ARA&A, 36, 189 Kinney et al.  (1993) Kinney, A. L., Bohlin, R. C., Calzetti, D., Panagia, N., & Wyse, R. F. G. 1993, ApJS, 86, 5 Krügel et al.  (1998) Krügel, E., Siebenmorgen, R., Zota, V., & Chini, R. 1998, A&A, 331, L9 Madau, Pozzetti, & Dickinson (1998) Madau, P., Pozzetti, L., & Dickinson, M. 1998, ApJ, 498, 106 Massa, Savage, & Fitzpatrick (1983) Massa, D., Savage, B. D., & Fitzpatrick, E. L. 1983, ApJ, 266, 662 Mathis (1986) Mathis, J. S. 1986, PASP, 98, 995 Mcquade, Calzetti, & Kinney (1995) Mcquade, K., Calzetti, D., & Kinney, A. L. 1995, ApJS, 97, 331 Meurer, Heckman, & Calzetti (1999) Meurer, G. R., Heckman, T. M., & Calzetti, D. 1999, ApJ, 512, 64 Meurer et al.  (1997) Meurer, G. R., Heckman, T. M., Lehnert, M. D., Leitherer, C., & Lowenthal, J. 1997, AJ, 114, 54 Misselt, Clayton, & Gordon (1999) Misselt, K. A., Clayton, G. C., & Gordon, K. D. 1998, ApJ, 515, 128 Osterbrock (1989) Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Mill Valley, CA: University Science Books) Petrosian, Silk, & Field (1972) Petrosian, V., Silk, J., & Field, G. B. 1972, ApJ, 177, L69 Pettini et al.  (1998) Pettini, M., Kellogg, M., Steidel, C. C., Dickinson, M., Adelberger, K. L., & Giavalisco, M. 1998, ApJ, 508, 539 Sawicki & Yee (1998) Sawicki, M. & Yee, H. K. C. 1998, AJ, 115, 1329 Schaerer (1999) Schaerer, D. 1999, in Building the Galaxies: from the Primordial Universe to the Present, eds. F. Hammer et al. (Gif-sur-Yvette: Editions Frontiéres), in press Smith et al.  (1995) Smith, D. A., Herter, T., Haynes, M. P., Beichman, C. A., Gautheir, & T. N., III 1995, ApJ, 439, 623 Steidel et al.  (1999) Steidel, C., Adelberger, K. L., Giavalisco, M., Dickinson, M., & Pettini, M. 1999, ApJ, 519, 1 Storchi-Bergmann, Calzetti, & Kinney (1994) Storchi-Bergmann, T., Calzetti, D., & Kinney, A. L. 1994, ApJ, 429, 572 Takagi, Arimoto, & Vansevičius (1999) Takagi, T., Arimoto, N., & Vansevičius, V. 1999, ApJ, in press (astro-ph/9902219) Wang & Heckman (1996) Wang, B. & Heckman, T. M. 1996, ApJ, 457, 645 Witt & Gordon (1996) Witt, A. N. & Gordon, K. D. 1996, ApJ, 463, 681 Witt & Gordon (1999) Witt, A. N. & Gordon, K. D. 1999, ApJ, in press Witt, Thronson, & Capuano (1992) Witt, A. N., Thronson, H. A., & Capuano, J. M. 1992, ApJ, 393, 611 Wright et al.  (1994) Wright, A. E., Griffith, M. R., Burke, B. F., & Ekers, R. D. 1994, ApJS, 91, 111 Wright et al.  (1995) Wright, A. E., Griffith, M. R., Burke, B. F., & Ekers, R. D. 1995, ApJS, 97, 347 Wright et al.  (1996) Wright, A. E., Griffith, M. R., Hunt, A. J., Troup, E., Burke, B. F., & Ekers, R. D. 1996, ApJS, 103, 145
Macroeconomic Dynamics of Assets, Leverage and Trust Jeroen Rozendaal${}^{1}$, Yannick Malevergne${}^{1,2}$, Didier Sornette${}^{1}$ Abstract A macroeconomic model based on the economic variables (i) assets, (ii) leverage (defined as debt over asset) and (iii) trust (defined as the maximum sustainable leverage) is proposed to investigate the role of credit in the dynamics of economic growth, and how credit may be associated with both economic performance and confidence. Our first notable finding is the mechanism of reward/penalty associated with patience, as quantified by the return on assets. In regular economies where the EBITA/Assets ratio is larger than the cost of debt, starting with a trust higher than leverage results in the highest long-term return on assets (which can be seen as a proxy for economic growth). Therefore, patient economies that first build trust and then increase leverage are positively rewarded. Our second main finding concerns a recommendation for the reaction of a central bank to an external shock that negatively affects economic growth. We find that late policy intervention in the model economy results in the highest long-term return on assets and largest asset value, but this comes at the cost of suffering longer from the crisis until the intervention occurs. The phenomenon that late intervention is most effective to attain a high long-term return on assets can be ascribed to the fact that postponing intervention allows trust to increase first, and it is most effective to intervene when trust is high. These results derive from two fundamental assumptions underlying our model: (a) trust tends to increase when it is above leverage; (b) economic agents learn optimally to adjust debt for a given level of trust and amount of assets. Using a Markov Switching Model for the EBITA/Assets ratio, we have successfully calibrated our model to the empirical data of the return on equity of the EURO STOXX 50 for the time period 2000-2013. We find that the dynamics of leverage and trust can be highly non-monotonic, with curved trajectories, as a result of the nonlinear coupling between the variables. This has an important implication for policy makers, suggesting that simple linear forecasting can be deceiving in some regimes and may lead to inappropriate policy decisions. ${}^{1}$ ETH Zurich, Department of Management, Technology and Economics, Scheuchzerstrasse 7, CH-8092 Zurich, Switzerland ${}^{2}$ Université de Lyon, Coactis E.A. 4161, France (Version January 11, 2021) Keywords: Macroeconomics, Complex Systems, Assets, Leverage, Trust, regime shifting, crisis 1 Introduction The credit crisis and panic that erupted in 2008 in the US and spilled over to the rest of the world have made clear again that financial prices and economic value are based fundamentally on trust; not on fancy mathematical formulas, not on subtle self-consistent efficient economic equilibrium; but on trust in the future, trust in economic growth, trust in the ability of debtors to face their liabilities, trust in financial institutions to play their role as multipliers of economic growth, trust that our money in a bank account can be redeemed at any time we choose. Usually, we take these facts for granted. When depositors happen to doubt banks, this leads to devastating bank runs. When banks start to doubt other banks, this leads to a freeze of the inter-banking loan markets and an effective run on collateralised assets.
Then, the implicit processes of a working economy –all we take for granted– start to dysfunction and spiral into a global collapse, as almost happened with the Lehman Brothers bankruptcy. The standard discourse by observers and pundits is to attribute the 2008 crisis to the mortgage-backed securities linked to the bursting of the house price bubble, the irresponsible lending, overly complex financial instruments and conflicts of interest leading to asymmetric information translated into market illiquidity, and the spreading of risks via packaging and selling of imagined valuations to unsuspecting investors. What is missing in this discussion is to endogenise trust in the dynamics of the credit system. As a contribution to fill this gap, we construct a simple macroeconomic model, based on basic accounting rules combined with reasonable economic assumptions, which puts trust as the central dynamical variable. Focusing on credit creation and its dynamics, we naturally use asset amount and leverage (here defined as the ratio of debt to assets) as the two other variables. To operationalise the model, we define trust as the maximum sustainable leverage. This transforms an a priori ill-defined qualitative concept, capturing the degree of belief in the reliability, truth, or ability of something, into a quantitative variable that can be worked with. Our motivation to focus on the three variables (i) assets, (ii) leverage and (iii) trust is that these are three key economic variables for studying different economic regimes, since they relate to credit, which has a central role in economics as a driver of economic cycles. For example, credit is pivotal to understand the financial crisis of 2008. Ahead of the 2008 crisis, credit was easily available: a high percentage of asset value could be used as collateral to obtain a loan. The high availability of credit caused debt levels and leverage to increase rapidly. At a certain moment, this changed drastically and credit crunched: the financial crisis of 2008 was a fact. To exemplify the importance of credit today: the current global public debt is over 57 trillion U.S. dollars (“The global debt clock”, n.d.). For comparison, the Gross World Product (GWP) is about 78 trillion U.S. dollars (“Gross domestic product 2014”, 2015). For the United States, the public debt as a percentage of GDP is over 90% at the time of writing. The central role of credit (debt) in an economy suggests focusing on modelling the interplay between assets (assets serve as collateral to obtain a loan), leverage (the ratio of debt to asset value) and trust (the maximum sustainable leverage). The link between credit and the afore-mentioned variables has been qualitatively discussed by Von der Becke and Sornette (2014), who provided a qualitative framework in which credit creation is argued to depend on the amount of collateral assets accepted, the level of leverage and the level of trust and confidence in future cash flows. In their paper, a qualitative theory was proposed to understand credit creation and the perspectives of different schools of thought were integrated (Austrian, Mainstream and Post Keynesian). Besides the paper of Von der Becke and Sornette (2014), there are numerous other papers in which assets, leverage and trust are (mostly separately) studied. Geanakoplos (2010) presents a model in which the interplay of leverage and assets is captured.
In this model, a set-up is used in which houses are used as collateral for long-term loans, and loans are used as collateral for repos (an abbreviation for repurchase agreements, i.e. short-term collateralized loans). Based on this framework, Geanakoplos (2010) advocates that leverage rates should be the primary target (instead of interest rates) of central banks in times of crisis. Asset prices are a central theme in, for example, the research of Bernanke et al. (2001). Their paper studies whether central banks should respond to movements in asset prices. Borio and Lowe (2002) linked price changes of assets to financial instability: sustained rapid credit growth combined with large increases in asset prices tends to increase the probability of financial instability. Based on historical data, the research showed that significant changes in asset prices are linked to increased financial instability. It is argued that monetary policy should respond to changing asset prices with the goal of preserving financial stability. Lang et al. (1996) present a study in which leverage is the main variable studied; it is demonstrated that an inverse relation exists between leverage and the future growth of firms with a low q-ratio (Tobin’s q-ratio is defined as the total market value of the firm divided by the total asset value). Leverage as an indicator for growth opportunities is also a theme in a study of Gilchrist (2003), who hypothesizes that high leverage economies are particularly vulnerable to slowdowns in the world economy. Putnam et al. (1994) initiated the study of trust (“social capital”) and its relation to the well-functioning of a society by providing case studies of the functioning of regional governments in Italy. Knack and Keefer (1997) have investigated trust and its impact on the economy. Survey data was used to capture trust and it was found that “social capital” (indicated by trust) matters for economic performance. Dincer and Uslaner (2010) confirm this finding based on data from U.S. states in which a positive relation between trust and economic growth was found. In a study by Bjørnskov (2012), it was investigated through which mechanisms trust affects economic growth, and trust is proposed to be the fundamental driver of economic development. There is no full consensus on the relation between trust and economic growth. Beugelsdijk et al. (2004) could not verify the link between trust and economic growth and they concluded that there may not always be an economic pay-off of trust. Hence, the evidence on the relation between trust and economic performance is mixed, and one should keep in mind that the definition of trust may differ in various papers. In order to position our assets, leverage and trust model, it is useful to provide a short overview of influential macroeconomic models in the academic and central banking literature. The New-Keynesian dynamic stochastic general equilibrium (DSGE) model is one of the most influential models in the academic and central banking community (see e.g. Sbordone et al. (2010) for an overview of DSGE models and see Isohätälä et al. (2015) for an overview of DSGE models and beyond). The model is built around three coupled equations, which derive from micro-foundations: supply, demand and monetary policy equations. Furthermore, equations describing expectations can be explicitly taken into account in the DSGE framework.
A strong element of the model is that it can capture the response of output and inflation to demand, supply and policy shocks. DSGE models, however, failed to explain the significant rapid changes (declines) in asset prices, output and investment that occurred during the 2008 crisis and thereafter (Isohätälä et al., 2015; Reinhart and Rogoff, 2009). As a response to this inability to model the 2008 crisis, a strand of the recent literature focussed on introducing dynamics to account for the possible occurrence of extreme events that abruptly and significantly influence economic quantities such as asset prices, output or investment. Examples of influential recent papers that focus on this are those by He and Krishnamurthy (2011) and Brunnermeier and Sannikov (2012). The new models employ continuous-time modelling approaches to macroeconomic problems and assume market incompleteness (i.e. not all risks can be hedged). This approach is mathematically convenient as it allows one to characterise the possible economic states by (partial) differential equations. A characteristic of the new class of models is that, in the limit of small (zero) disturbances/volatility, the non-linear dynamics of the model reduces to a linearised DSGE model. Variants of the afore-mentioned models, as well as the model constructed in this research, can eventually be used to assess policy measures of central banks. Bacchetta et al. (2015) focus on the question of whether monetary policy can help to avert self-fulfilling debt crises and at what cost (particularly in terms of inflation). Their paper builds on earlier sovereign debt crisis models and extends these with the goal of quantifying the afore-mentioned cost. A model on which it builds is that of Lorenzoni and Werning (2013), who introduced a real sovereign debt crisis model (without a monetary authority), in which a government defaults at a predetermined time $T$ if the present value of debt exceeds the present value of future primary surpluses (i.e. income from taxes minus government spending, excluding interest payments). Bacchetta et al. (2015) extend the model to a monetary economy, which provides a convenient framework to study conventional monetary policies (in particular the effects of inflation and interest rates). Furthermore, it allows one to study additional (non-conventional) monetary tools, such as the purchase of government bonds, by considering the budget constraint of a central bank. Bacchetta et al. (2015) assess the conventional monetary policies (operating through interest rates) using a New Keynesian DSGE model based on Galí (2009) with extensions by Woodford (2003) to introduce a delay in the impact of monetary policy shocks. Bacchetta et al. (2015) conclude that the conventional tools of the central bank, aimed at averting a debt crisis, lead to high inflation for a sustained period of time. Unconventional monetary policies (e.g. quantitative easing: the purchase of government securities or other securities from the market) are suggested to be effective only when an economy is at the zero lower bound (ZLB), i.e. when the short-term interest rate is at 0. The organisation of the present article is as follows. Section 2 gives the formal definition of our three fundamental variables as well as a number of derived quantities. It then states the three governing ordinary differential equations, whose derivation is presented in Section 2.2, and gives some closed-form solutions. The fixed point stability analysis is also presented.
Section 3 presents a survey of the main properties of the model, by showing phase portraits of the trajectories in the leverage-trust space and by quantifying the associated return on asset. Section 4 uses the model to investigate what happens under regime shifts, either due to some exogenous adverse shock leading to a sustained crisis (negative growth rate) and/or as a result of policy intervention in the form of decreased target interest rates and increased return on assets. Section 5 presents the calibration of the model, extended using a Markov Switching framework for an exogenous model parameter, to the return on equity of the EURO STOXX 50 for the time period 2000-2013. Section 6 concludes. The derivations and proofs of the main results are presented in Appendices A and B. 2 Formalisation of the joint dynamics of assets, leverage and trust 2.1 Basic definitions: assets, leverage, trust The following definitions clarify the meaning of our three key economic variables and introduce their growth rates. Definition 1. The total asset value at time $t$ is denoted by $A(t)$. Asset value can always be written as the sum of debt and equity: $$\displaystyle A(t):=D(t)+E(t),$$ (1) where $D(t)$ is the debt at time $t$, and $E(t)$ is the equity at time $t$. Definition 2. The leverage at time $t$, denoted $L(t)$, is defined here as: $$L(t):=D(t)/A(t).$$ (2) Banks usually operate with leverage ratios close to 1, while “normal firms” usually operate with lower leverage ratios. The leverage is an interesting quantity to examine since it quantifies the indebtedness of an economy and is linked to risk and volatility. As a side note, the standard way of defining leverage is $L_{s}:=D/E$. For this study, the definition given by Eq.(2) is more convenient to link to the trust variable to be defined formally in definition 3. $L$ in Eq.(2) is related to $L_{s}$ through $L=\frac{L_{s}}{1+L_{s}}$. While $L_{s}$ has no limit in principle, $L$ has an upper limit of one, corresponding to the limit of zero equity ($E=0$). Definition 3. The trust $T(t)$ is defined as the fraction of the total assets that qualifies as collateral for taking on debt. The trust can hence be viewed to represent a borrower’s creditworthiness. It is assumed that under normal circumstances the level of trust $T(t)$ satisfies the following inequality: $$D(t)\leq T(t)\cdot A(t).$$ (3) This inequality expresses that in general the debt $D$ should not exceed the total value of acceptable collateral. This holds simply by definition of what is meant by “acceptable collateral” and derives from the nature of lending where the lender hedges his risks by ensuring that the borrower has assets at least as large as the borrowed amount. We should note that periods of market exuberance have been characterised by a failure of this inequality, as for instance for subprime “NINJA loans” (loans extended to people with “No Income, No Job, (and) no Assets). Definition 4. In this study, the return on assets (ROA), defined by $$r_{A}(t):=\frac{1}{A}\frac{\mathrm{d}A}{\mathrm{d}t}~{},$$ (4) is viewed as a proxy for economic growth. We will also use the growth rate $r_{D}(t):=\frac{1}{D}\frac{\mathrm{d}D}{\mathrm{d}t}$ of debt, the return $r_{E}(t):=\frac{1}{E}\frac{\mathrm{d}E}{\mathrm{d}t}$ on equity (ROE) and the growth rate $r_{L}(t):=\frac{1}{L}\frac{\mathrm{d}L}{\mathrm{d}t}$ of leverage. 
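As a quick numerical check of Definitions 1 and 2 and of the stated link between the two leverage conventions, the following lines may help; the balance-sheet numbers are invented and purely illustrative.

```python
def leverage(debt, assets):
    """L = D / A, as in Eq.(2)."""
    return debt / assets

def standard_to_ratio(L_s):
    """Convert the standard leverage L_s = D/E into L = D/A via L = L_s / (1 + L_s)."""
    return L_s / (1.0 + L_s)

debt, equity = 80.0, 20.0          # hypothetical balance sheet
assets = debt + equity             # A = D + E (Definition 1)
L = leverage(debt, assets)         # 0.8
L_s = debt / equity                # 4.0
assert abs(standard_to_ratio(L_s) - L) < 1e-12
print(f"L = {L:.2f}, L_s = {L_s:.2f}")
```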
From differentiation of $\ln(D)=\ln(L)+\ln(A)$ (Eq.(2)) with respect to $t$, the growth rate $r_{L}(t)$ of leverage, the growth rate $r_{D}(t)$ of debt and the return $r_{A}(t)$ on assets are linked by the simple relation $$r_{D}(t)=r_{L}(t)+r_{A}(t).$$ (5) 2.2 System equations for the joint dynamics of assets, leverage and trust We now derive the coupled dynamics of assets, leverage and trust. 2.2.1 Auxiliary definitions Before presenting the fundamental equations for assets, debt and trust, it is necessary to introduce additional definitions. Definition 5. Depreciations $\mathcal{D}_{\mathrm{d}t}(t)$ (over a period $\mathrm{d}t$) are given by: $$\mathcal{D}_{\mathrm{d}t}(t)=\delta_{\mathrm{d}t}A(t),$$ (6) where $\delta_{\mathrm{d}t}$ is a depreciation factor, indicating what fraction of asset value is “lost” over a time period of $\mathrm{d}t$. Definition 6. Amortization $\mathcal{A}_{\mathrm{d}t}(t)$ (over a period $\mathrm{d}t$) is given by: $$\mathcal{A}_{\mathrm{d}t}(t)=D(t-\mathrm{d}t)-D(t),$$ (7) where $\mathcal{A}_{\mathrm{d}t}(t)>0$ if the debt decreases (the word derives from “amortisen”, which means “to kill”: decrease debt in this case). Note that $\mathcal{A}_{\mathrm{d}t}(t)<0$ if more debt is taken on. Definition 7. The definition of net income (NI), also referred to as net earnings, $\mathcal{E}_{\mathrm{d}t}(t)$ follows from accounting: $$\mathcal{E}_{\mathrm{d}t}(t)=\underbrace{\underbrace{\kappa_{\mathrm{d}t}A(t)}_{\mathrm{EBITDA}}-\underbrace{\delta_{\mathrm{d}t}A(t)}_{\mathrm{Depreciations}}-\underbrace{[D(t-\mathrm{d}t)-D(t)]}_{\mathrm{Amortization}}}_{\mathrm{EBIT}}-\underbrace{r_{\mathrm{d}t}D(t)}_{\mathrm{Interest\;payments}},$$ (8) in which $\kappa_{\mathrm{d}t}$ is defined as the EBITDA to assets ratio. Furthermore, $r_{\mathrm{d}t}$ is the interest rate paid on debt. The subscript $\mathrm{d}t$ refers to a period $\mathrm{d}t$ (so e.g. $\mathcal{E}_{\mathrm{d}t}(t)$ denotes the net earnings generated over a time period $\mathrm{d}t$). Note that taxes are neglected in the model (if taxes were included, they would also be subtracted from EBIT in Eq.(8), such that NI = EBIT - Interest - Taxes). Furthermore, the net earnings can be classified by how they are allocated: they are either paid out as dividends to the shareholders, or reinvested into the company and/or maintained as cash. $$\mathcal{E}_{\mathrm{d}t}(t)=\underbrace{p^{\mathrm{out}}\mathcal{E}_{\mathrm{d}t}(t)}_{\mathrm{Dividends}}+\underbrace{p^{\mathrm{back}}\mathcal{E}_{\mathrm{d}t}(t)}_{\mathrm{Retained\;Earnings}},$$ (9) where $p^{\mathrm{out}}$ is the payout ratio and $p^{\mathrm{back}}$ is the plow-back ratio. Note that $p^{\mathrm{out}}+p^{\mathrm{back}}=1$. In the next sections, it is assumed that $p^{\mathrm{back}}=1$, and hence the net earnings are equal to the retained earnings. This assumption can be motivated by invoking the Modigliani–Miller theorem (Miller and Modigliani, 1961): a firm’s dividend policy is irrelevant for the valuation of shares, under idealized conditions which include the absence of taxation, transaction costs, asymmetric information and market imperfections.
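A small numerical example of the accounting identity in Eq.(8), with made-up figures and the plow-back ratio of 1 assumed in the text, is given below.

```python
def net_income(A_t, D_prev, D_t, kappa, delta, r):
    """Net income over one period dt, following Eq.(8): NI = EBIT - interest payments."""
    ebitda = kappa * A_t
    depreciation = delta * A_t
    amortization = D_prev - D_t            # > 0 if debt is paid down
    ebit = ebitda - depreciation - amortization
    interest = r * D_t
    return ebit - interest

# Hypothetical figures for one period (arbitrary monetary units).
ni = net_income(A_t=100.0, D_prev=40.0, D_t=42.0, kappa=0.12, delta=0.03, r=0.05)
print(f"Net income = {ni:.2f}")            # with p_back = 1, this is also the retained earnings
```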
2.2.2 Assets From basic accounting under the assumption of a plow-back ratio of 1, it follows that: $$\mathrm{Asset\;value}(t)=\mathrm{Asset\;value}(t-\mathrm{d}t)+\mathrm{Net\;Income}(t).$$ (10) Using the notation as introduced in section 2.2.1: $$A(t)=A(t-\mathrm{d}t)+\mathcal{E}_{\mathrm{d}t}(t),$$ (11) which, using Eq.(8), becomes $$A(t)=A(t-\mathrm{d}t)+\underbrace{(\kappa_{\mathrm{d}t}-\delta_{\mathrm{d}t})A(t)}_{\mathrm{EBITA}}-\underbrace{[D(t-\mathrm{d}t)-D(t)]}_{\mathrm{Amortization}}-\underbrace{r_{\mathrm{d}t}D(t)}_{\mathrm{Interest\;payments}}.$$ (12) By defining $g_{\mathrm{d}t}$ to be the EBITA/Assets ratio, i.e. $g_{\mathrm{d}t}:=\kappa_{\mathrm{d}t}-\delta_{\mathrm{d}t}$, Eq.(12) can be rewritten as follows: $$A(t)-A(t-\mathrm{d}t)=g_{\mathrm{d}t}A(t)-r_{\mathrm{d}t}D(t)+[D(t)-D(t-\mathrm{d}t)].$$ (13) By dividing Eq.(13) by $\mathrm{d}t$ and taking the limit $\mathrm{d}t\to 0$, one obtains: $$\frac{\mathrm{d}A}{\mathrm{d}t}=gA(t)-rD(t)+\frac{\mathrm{d}D}{\mathrm{d}t},$$ (14) where $\displaystyle g:=\lim_{\mathrm{d}t\to 0}\frac{g_{\mathrm{d}t}}{\mathrm{d}t}$ and $\displaystyle r:=\lim_{\mathrm{d}t\to 0}\frac{r_{\mathrm{d}t}}{\mathrm{d}t}$. Eq.(14) is the governing equation for assets. 2.2.3 Trust To construct a fundamental equation for trust, we first assume that in the absence of externalities (that might cause abrupt changes in the trust) the trust increases to its maximum value $T=1$, where all assets qualify as collateral. A tendency for trust to increase is thus assumed. This assumption can be supported by both a psychological and a rational phenomenon. The psychological phenomenon is the prosociality of humans: people have a tendency to help, benefit and trust others and/or society (Dovidio et al., 2006). The rational phenomenon is the aim to use all assets and let the economy function at its “full potential”. In an economy that functions to its full potential, the trust is at its maximum. This is favourable because the higher the trust, the more credit there can be to fund profitable projects (if trust is 1, 100% of the asset value is accepted as collateral to obtain a loan). A convenient and common function to capture the growth of the trust to its maximum 1 is the logistic function (“S-curve”). It was first introduced by Pierre Verhulst in the nineteenth century to describe population growth, but it has a wide range of applications in many fields. For example, it has applications in statistical physics (Fermi-Dirac statistics), in demography, and in modelling the dynamics of the market penetration of a product or service. A standard logistic equation for trust would thus be: $$\frac{\mathrm{d}T}{\mathrm{d}t}=kT(1-T),$$ (15) where $1/k>0$ denotes the inertia (resistance) of the trust to change. Eq.(15) captures the standard features of a logistic equation. For low $T$, Eq.(15) simplifies to $\displaystyle\frac{\mathrm{d}T}{\mathrm{d}t}\approx kT$, which expresses that the growth rate of trust is proportional to the existing trust. This type of accumulation of trust is named the Matthew effect (Merton, 1968). In this context it follows that the higher the level of existing trust, the faster trust grows (“cumulative advantage”).
For $T$ close to 1 (large $T$), Eq.(15) simplifies to $\displaystyle\frac{\mathrm{d}T}{\mathrm{d}t}\approx k(1-T)$, which shows that the rate of change of trust ($\frac{\mathrm{d}T}{\mathrm{d}t}$) decreases when $T$ approaches 1. Note that when $T\to 1$ or $T\to 0$, a steady state is reached ($\frac{\mathrm{d}T}{\mathrm{d}t}\to 0$ in Eq.(15)). It is desired that the growth of trust is positive when trust exceeds leverage, while it should be negative when leverage exceeds trust. In this way, a convergence to the “natural optimum” $T=L$ is imposed. By multiplying the right-hand side of Eq.(15) by $(T-L)$, this feature is captured. The following “extended” logistic equation is hence postulated for the trust: $$\frac{\mathrm{d}T}{\mathrm{d}t}=k(T-L)T(1-T).$$ (16) Another way to justify the term $(T-L)$ is to argue that the growth rate $k$ of trust should be modulated by $(T-L)$, being positive if $T>L$ and negative if $T<L$. A first order Taylor expansion then yields the form leading to expression (16). The “extended” logistic equation (Eq.(16)) is more complex than the standard logistic equation (Eq.(15)) because a constant trust (steady state) can be reached when either $L\to T$, $T\to 1$, or $T\to 0$, whichever occurs first. Given that we start from an initial condition $T(0)>L(0)$, this means that if $L$ “catches up” with $T$ before $T\to 1$, a stationary condition is reached where $T_{\text{stationary}}=L_{\text{stationary}}$. 2.2.4 Debt Based on the assumption of a tendency for economies to increase debt levels, as argued by e.g. Dalio (2015) and empirically supported by Graeber (2011), a model for the debt dynamics is proposed in this subsection. We assume a tendency for economies to reach their maximal debt, while taking into account Eq.(3). This implies, in mathematical terms, that $D=TA$ should be a natural fixed point. Next, a non-instantaneous convergence of $D\to TA$ is assumed: there is inertia for $D$ to reach $TA$ because it takes time to find investment opportunities, and it only makes sense to increase debt when such opportunities exist. This can be supported by, for example, Taylor (2002) and DeAngelo et al. (2011). Taylor (2002) documents historical episodes in which capital mobility (the ability to move funds internationally) was low, and DeAngelo et al. (2011) discuss leverage ratios with slow average speeds of adjustment to their target level. The governing equation for the debt is postulated to be: $$\frac{\mathrm{d}(D-TA)}{\mathrm{d}t}=-a(D-TA),$$ (17) or, equivalently, $$\frac{\mathrm{d}D}{\mathrm{d}t}=a(TA-D)+\frac{\mathrm{d}(TA)}{\mathrm{d}t},$$ (18) where $1/a>0$ captures the inertia of debt convergence to its optimal value. Note that Eq.(18) is valid for $D<TA$ and $D>TA$: in both cases there is a tendency for $D\to TA$. As previously mentioned, Eq.(18) is constructed such that $D$ tends to converge to $TA$. This convergence of $D$ to $TA$ would already be ensured without the term $\frac{\mathrm{d}(TA)}{\mathrm{d}t}$ in Eq.(18); however, this term can boost or slow down the change in $D$, depending on the rate of change (trend) in $TA$. The proposed debt equation (Eq.(18)) assumes that this trend of $TA$ is known and incorporates this. We assume that economic agents learn optimally to adjust debt for a given level of trust and amount of assets.
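Before non-dimensionalising the system in the next subsection, it may be useful to see how Eqs.(14), (16) and (18) can be integrated numerically. Because $\mathrm{d}A/\mathrm{d}t$ and $\mathrm{d}D/\mathrm{d}t$ appear in each other's equations, the sketch below first computes $\mathrm{d}T/\mathrm{d}t$ and then solves the remaining linear relations for $\mathrm{d}A/\mathrm{d}t$ explicitly. The parameter values and initial conditions are illustrative only, not the calibrated values of this paper.

```python
import numpy as np

def simulate(A0, D0, T0, g, r, a, k, dt=1e-3, n_steps=20000):
    """Explicit Euler integration of the coupled assets (14), trust (16) and debt (18) equations."""
    A, D, T = A0, D0, T0
    path = []
    for _ in range(n_steps):
        L = D / A
        dT = k * (T - L) * T * (1.0 - T)                       # Eq.(16)
        # Substituting Eq.(18) into Eq.(14) and solving for dA/dt gives
        # dA/dt = [g A - r D + a (T A - D) + A dT/dt] / (1 - T).
        dA = (g * A - r * D + a * (T * A - D) + A * dT) / (1.0 - T)
        dD = a * (T * A - D) + A * dT + T * dA                 # Eq.(18) with d(TA)/dt expanded
        A, D, T = A + dA * dt, D + dD * dt, T + dT * dt
        path.append((A, D / A, T))
    return np.array(path)

# Illustrative parameters: g > r (a "regular" economy), trust initially above leverage.
traj = simulate(A0=1.0, D0=0.2, T0=0.5, g=0.03, r=0.02, a=0.5, k=1.0)
print("final (A, L, T):", traj[-1])
```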
2.2.5 Final equations for assets, leverage, trust In this section, we outline how the final differential equations for assets, leverage and trust can be obtained. First of all, Eq.(14), (16), (18) are divided by $k$, thereby making the equations non-dimensional. We introduce the non-dimensional time $\tau$, which we define as follows: $$\tau:=kt.$$ (19) We denote non-dimensional model parameters ($a,g,r$) with a tilde, for example: $\tilde{a}:=a/k$. By substitution of Eq.(18), and then Eq.(16), in Eq.(14) we obtain the final assets equation: $$\frac{\mathrm{d}A}{\mathrm{d}\tau}=\left(\frac{\tilde{g}-\tilde{r}L+\tilde{a}(T-L)}{1-T}+(T-L)T\right)A,$$ (20) The final leverage equation can be obtained by using the definition of leverage (Eq.(2)) and the trust equation (Eq.(16)): $$\frac{\mathrm{d}L}{\mathrm{d}\tau}=(T-L)\left(\frac{\tilde{g}-\tilde{r}L+\tilde{a}(1-L)}{1-T}+(1-L)T\right),$$ (21) which can also be written as follows: $$\frac{\mathrm{d}L}{\mathrm{d}\tau}=(T-L)\left(\beta\frac{L_{0}-L}{1-T}+T(1-L)\right),$$ (22) where $\beta$ and $L_{0}$ are defined as follows: $\displaystyle\beta:=\frac{a+r}{k}=\tilde{a}+\tilde{r},\;L_{0}:=\frac{g+a}{r+a}$ ($L_{0}$ appears naturally when substituting Eq.(18) into Eq.(14) for $T=1$, and then solving for $L$). Eq.(22) is a convenient expression for the study of fixed points. Lastly, the final trust equation is simply the non-dimensional version of Eq.(16): $$\frac{\mathrm{d}T}{\mathrm{d}\tau}=T(T-L)(1-T).$$ (23) Summing up, the assets, leverage and trust variables are governed by the following system of three coupled ordinary differential equations: $$\frac{\mathrm{d}A}{\mathrm{d}\tau}=\left(\frac{\tilde{g}-\tilde{r}L+\tilde{a}(T-L)}{1-T}+(T-L)T\right)A,$$ (24) $$\frac{\mathrm{d}L}{\mathrm{d}\tau}=(T-L)\left(\frac{\tilde{g}-\tilde{r}L+\tilde{a}(1-L)}{1-T}+(1-L)T\right),$$ (25) $$\frac{\mathrm{d}T}{\mathrm{d}\tau}=T(T-L)(1-T),$$ (26) where $\tau$ represents a non-dimensional time expressed in units of the characteristic time scale $1/k$ defined in the trust dynamics (15). The tildes on the model parameters ($\tilde{a},\tilde{g},\tilde{r}$) indicate that the parameters $a,g$ and $r$ introduced above have also been normalised by $k$. The meanings of $\tilde{a},\tilde{g}$ and $\tilde{r}$ are the following: (i) $1/\tilde{a}$ is the characteristic time scale for the debt to reach its optimal value ($D\to TA$); (ii) $\tilde{g}$ is the EBITA/Assets ratio; and (iii) $\tilde{r}$ is the interest rate paid on debt. From the above assets, leverage, trust equations, it can be observed that the leverage and trust equations constitute a sub-system (independent of $A$). 2.3 Closed-form solutions of the dynamics The ROA can be obtained from Eq.(24) and the ROE can be obtained from Definition 1 and the fundamental assets equation as presented in section 2.2.2 (i.e. Eq.(14)). Result 1. The ROA and ROE as a function of trust and leverage, and dependent on the model parameters $\tilde{a},\tilde{g},\tilde{r}$, are given by: $$\tilde{r}_{A}=\tilde{g}\frac{1}{1-T}-\tilde{r}\frac{L}{1-T}+\tilde{a}\frac{T-L}{1-T}+(T-L)T,$$ (27) $$\tilde{r}_{E}=\tilde{g}+\frac{L}{1-L}(\tilde{g}-\tilde{r}).$$ (28) From Eq.(27), the different contributions to the ROA can be seen. The first term on the r.h.s. is the EBITA/Assets ratio leveraged by trust (i.e. $\propto\frac{1}{1-T}$). The second term is the cost of debt leveraged by trust. The third term, which is also leveraged by trust, is positive (resp.
negative) for $T>L$ (resp. $T<L$). It can be interpreted as a reward (resp. penalty) from being patient (resp. impatient) by first building trust and then increasing leverage (resp. by increasing leverage before establishing trust). The fourth term is the transient economic growth (resp. contraction) resulting from the catching up of an economy that grows its leverage towards its optimal value from below (resp. above). In Eq.(28) for the ROE, we see that the term $(\tilde{g}-\tilde{r})$ is leveraged by how close leverage is to its maximum value $1$. The leverage/trust trajectories can be determined analytically based on Eq.(25) and (26). The results derived in Appendix A can be summarised as follows. Result 2. The leverage/trust trajectories are given by: $$\displaystyle L(T)$$ $$\displaystyle=1-K\frac{(1-T)^{1+\beta}}{T^{\beta}}\mathrm{e}^{-\frac{\beta}{1-% T}}+(L_{0}-1)\bigg{\{}\left[\frac{\beta}{1+\beta}+\frac{1-T}{1+\beta}\right]$$ $$\displaystyle\;\;\;+\frac{\beta}{1+\beta}\frac{T^{2}}{(1-T)}\int_{0}^{1}{(1-y)% ^{\beta+1}}{\mathrm{e}^{-\frac{\beta T}{1-T}y}\mathrm{d}y}\bigg{\}},$$ (29) where $K$ is an integration constant, $\beta:=\tilde{r}+\tilde{a}$, and $$L_{0}:=\frac{\tilde{g}+\tilde{a}}{\tilde{r}+\tilde{a}}~{}.$$ (30) Eq.(29) is a closed-form analytical solution. It is also useful to use for instance an Euler discretisation scheme to study numerically the different regimes described by Eq.(25) and (26). 2.4 Analysis of the leverage and trust subsystem (fixed points and stability) The following theorems 1-3 present important properties of the system of equations as presented in section 2.2.5. Their proof is provided in Appendix B. Theorem 1. The fixed points of the leverage/trust subsystem are the points $(T,L)=(0,L_{0})$ and $(T,L)=(1,L_{0})$, where $L_{0}$ is defined by expression (30). Furthermore, the axis $T=L$ is a fixed axis. Theorem 2. Let $(T^{*},L^{*})$ be one of the fixed point of the subsystem of equations (25) and (26) for trust and leverage. Then, these corresponding equations can be linearised close to $(T^{*},L^{*})$ using a Taylor expansion around the fixed point to yield $$\displaystyle\left[\begin{matrix}\frac{\mathrm{d}T}{\mathrm{d}\tau}\\ \frac{\mathrm{d}L}{\mathrm{d}\tau}\end{matrix}\right]={\left[\begin{matrix}(2T% ^{*}-L^{*})(1-T^{*})-T^{*}(T^{*}-L^{*})&-T^{*}(1-T^{*})\\ (1-L^{*})\left[\beta\frac{L_{0}-L}{(1-T^{*})^{2}}+2T^{*}-L^{*}\right]&-\beta% \frac{L_{0}+T^{*}-2L^{*}}{1-T^{*}}-T^{*}(1+T^{*}-2L^{*})\end{matrix}\right]}% \left[\begin{matrix}T-T^{*}\\ L-L^{*}\end{matrix}\right],$$ (31) The 2 by 2 matrix is the Jacobian whose eigenvalues can be studied for the different fixed points. The eigenvalues of the Jacobian evaluated for a specific fixed point indicate whether that fixed point is attractive or repulsive. We find that the point $(T,L)=(1,L_{0})$ is attractive while the point $(T,L)=(0,L_{0})$ is repulsive. The axis $T=L$ is (partly) attractive or repulsive depending on the model parameters $\tilde{a},\tilde{g},\tilde{r}$. For $L<L_{0}$, the axis $T=L$ is attractive, while it is repulsive for $L>L_{0}$. This implies that, if $g>r$ (then $L_{0}>1$), the axis $T=L$ will be entirely attractive for $L\in[0,1]$. Theorem 3. The corresponding ROA at the fixed points $(T,L)=(1,L_{0})$ and $(T,L)=(0,L_{0})$ is $-\tilde{a}$. 
The ROA and ROE are equal on the fixed axis $T=L$ and are given by: $$\displaystyle\tilde{r}_{A}|_{T=L}=\tilde{g}\frac{1}{1-L}-\tilde{r}\frac{L}{1-L}.$$ (32) 3 Phase portraits of leverage/trust trajectories and associated return on assets 3.1 Leverage/trust trajectories and return on assets: three cases Fig.1-3 show the dynamics of trust, leverage and return on assets (taken as a proxy for economic growth), together with their basins of attraction/repulsion, for three regimes: $\tilde{g}>\tilde{r}$, $\tilde{g}<\tilde{r}$ with $\tilde{g}<0$, and $\tilde{g}=\tilde{r}$. In each figure, panel (a) shows the non-dimensional return on assets (Eq.(27)) in color code, corresponding to the position $(L,T)$ in the diagram. Moreover, a number of trajectories (representing the vector field) of leverage and trust given by Eq.(29) are shown. Green boldface dots and green lines indicate attractive fixed points and lines, while the red boldface dots and red lines indicate the unstable points and lines. In each figure, panel (b) maps the different “basins of attraction” of the corresponding attractive fixed points and lines. The various colours indicate the following: $\bullet$ Light green area: domain of attraction towards the attractive axis $T=L$. $\bullet$ Light red area: domain of attraction towards the fixed point $(T,L)=(1,L_{0})$. $\bullet$ Light blue area: the leverage increases (it may locally decrease) so that the trajectories eventually leave the domain $L\in[0,1]$. Mathematically, we have $\tilde{r}_{L}>0$ (locally $\tilde{r}_{L}\leq 0$ may occur). The insets in panels (b) of Fig.1-3 show the return on assets on the axis $T=L$. The return on assets increases, decreases or stays constant when $L$ increases, depending on which case is studied ($\tilde{g}>\tilde{r}$, $\tilde{g}<\tilde{r}$, or $\tilde{g}=\tilde{r}$). Note that the return on assets is equal to the return on equity on the axis $T=L$ (Theorem 3). 3.2 The mechanism of reward and penalty in terms of long-term return on assets An interesting feature of Fig.1 is that the leverage/trust trajectories are upward sloping in the $T>L$ domain, while they are downward sloping in the $T<L$ domain. The higher $T_{\text{stationary}}=L_{\text{stationary}}$, the higher the stationary ROA is. Hence, regular economies ($\tilde{g}>\tilde{r}$) that first establish trust and then increase leverage are rewarded with the highest steady-state (long-term) return on assets, while economies that significantly increase leverage before having established trust are penalized with a lower long-term return on assets. Since $\tilde{r}_{A}|_{T=L}\propto\frac{1}{1-L}$ the reward for the patient economy that first builds trust is substantial. This existence of a reward when trust growth precedes leverage growth (and $T>L$) and the presence of penalty when $T<L$ are not always prevalent. In the case where $\tilde{g}<\tilde{r}$ with $\tilde{g}$ negative (see Fig.2), it can be observed that, in economies where trust is established first ($T>L$), the trajectories eventually reach a steady-state point $(T,L)=(1,L_{0})$ where the return on assets is negative ($\tilde{r}_{A}|_{(T=1,L=L_{0})}=-\tilde{a}$). Some economies in the $T<L$ regime are better off, namely those in the light green area in Fig.2 that converge to the attractive part of the axis $T=L$ where the ROA is still negative but is larger than $-\tilde{a}$. The other economies in the $T<L$ regime (light blue area in Fig.2) are, however, in a very detrimental regime. 
The trajectories there will eventually be expelled from the domain $L\in[0,1]$. Furthermore, the trust will be rapidly destroyed and will reach values close to zero, which implies that virtually no credit is available in the economy (if the trust is 0, there are no assets that can qualify as collateral to obtain a loan). On top of that, as can be observed from the contour plot of Fig.2, the return on assets is negative in the light blue area. In the special case where $\tilde{g}=\tilde{r}=0$ (Fig.3), there is no reward or penalty whatsoever, since all trajectories move to the steady-state line $T=L$ where the return on assets is zero. This steady-state is reminiscent of the economic state in Japan since 1990 (often referred to as the “two lost decades”) and in the Eurozone since the US-based subprime crisis in 2008 and the sovereign debt crisis in Europe starting in 2010. Japan and the Eurozone have been characterised by essentially vanishing interest rates and very low economic growth. In January 2015, the European Central Bank (ECB) announced that it will start to buy €60bn of bonds each month (“ECB announces expanded asset purchase programme”, 2015) from March 2015 for a determined period; this quantitative easing (QE) programme is a measure meant to revitalise the economy of the Eurozone. In the context of our model in terms of interacting assets, leverage and trust, this QE policy aims at boosting the parameter $\tilde{g}$, which ensures that the long-term ROA (on the axis $T=L$) increases. We will analyse the impact of negative shocks and policy response within the context of our model in section 4. 3.3 Transient costs before convergence to the beneficial fixed points for $\tilde{g}>\tilde{r}$ In all cases, by definition of what is a fixed point, the steady-state growth of leverage is zero (i.e. $L_{\text{stationary}}=L^{*}=\text{constant}$), or equivalently (based on Eq.(5)) the asset and debt value grow or shrink at the same pace in the stationary (long-term) situation. In the short-run however, the leverage is non constant and its dynamics may lead to non-intuitive effects. For example, for $\tilde{g}>\tilde{r}$ (Fig.1), in the $T>L$ regime, the leverage increases in the short-run. The long-term beneficial state of high ROA thus comes at a transient cost ($\tilde{r}_{L}>0\;\Leftrightarrow\;\tilde{r}_{D}>\tilde{r}_{A}$ in the short-run). The transient cost is the result of a convergence of the economy to a sustainable stationary fixed point (where the ROA is positive) with maximum output ($L_{\text{stationary}}=T_{\text{stationary}}$ with a value close to 1). The leverage increases in the short-run, which leads to a beneficial stationary state with a high ROA; this makes the transient fast growth of debt tolerable in view of the beneficial long-term goal. 4 Leverage/trust trajectories with regime shifts This section addresses the question of when and how should a central bank intervene in the face of a shock to an economy. Should a central bank directly act or should it best be patient and intervene relatively late? In the context of our model, we explore these questions by studying the trajectories in the leverage/trust phase portrait under different regime shifts of the model parameters $\tilde{g}$ and $\tilde{r}$. The model parameters $\tilde{r}$ and $\tilde{g}$ can be targeted by the central bank using conventional policy tools (open market operations, standing facilities, minimum reserve requirements) or unconventional tools (quantitative easing). 
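Before turning to specific interventions, the baseline behaviour of the leverage/trust subsystem of Eqs. (25)-(26) can be made concrete with a short numerical sketch. The snippet below (Python, forward Euler) integrates the subsystem from a few initial conditions and reports where each trajectory ends up, which is essentially the information summarised by the basin maps in panels (b) of Fig.1-3; the parameter values and initial conditions are illustrative assumptions, not the ones used to produce the figures.

```python
import numpy as np

def rhs(T, L, g, r, a):
    """Right-hand sides of the non-dimensional leverage/trust subsystem, Eqs. (25)-(26)."""
    dT = T * (T - L) * (1.0 - T)
    dL = (T - L) * ((g - r * L + a * (1.0 - L)) / (1.0 - T) + (1.0 - L) * T)
    return dT, dL

def integrate(T0, L0_init, g, r, a, dtau=1e-3, n_steps=200_000):
    """Forward-Euler integration; stops if the trajectory leaves L in [0,1]
    or gets too close to T = 1, where the 1/(1-T) terms make the system stiff."""
    T, L = T0, L0_init
    for _ in range(n_steps):
        dT, dL = rhs(T, L, g, r, a)
        T, L = T + dtau * dT, L + dtau * dL
        if not (0.0 <= L <= 1.0) or 1.0 - T < 1e-6:
            break
    return T, L

# Illustrative parameter values (assumptions, not taken from the figures):
g, r, a = 0.04, 0.02, 0.05     # g > r, so L0 = (g + a)/(r + a) > 1 and the axis T = L is attractive
for T0, L0_init in [(0.6, 0.2), (0.2, 0.6), (0.05, 0.9)]:
    T_end, L_end = integrate(T0, L0_init, g, r, a)
    print(f"start (T,L) = ({T0}, {L0_init})  ->  end (T,L) = ({T_end:.3f}, {L_end:.3f})")
```

The regime shifts studied in this section amount to changing the triplet ($\tilde{g},\tilde{r},\tilde{a}$) at a chosen time along such a trajectory; the policy levers that produce these changes are described next.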
For instance, a standard monetary intervention of a central bank at a time of crisis is to target a decrease of the (short-term) interest rate. According to general economic theory, a lower interest rate results in a lower cost of borrowing, making borrowing more attractive. The goal is to boost the economy by enticing economic agents to take more risks by investing in potential sources of economic growth. A short-term effect is also to encourage consumption, which has an immediate effect on GDP (see e.g. Dalio (2015)). Fig.4-6 depict single leverage/trust trajectories and their corresponding ROA and ROE. Regime shifts in the model parameters $\tilde{g}$ and $\tilde{r}$ are introduced along the trajectory, which flows in the direction given by the arrows. The relevant model parameters are given above each plot. The regime shifts in the exogenous model parameters $\tilde{g}$ and $\tilde{r}$ attempt to capture the main aspects of economic reality. For example, the EBITA/Assets ratio ($\tilde{g}$) of firms might abruptly change due to market changes or as a result of being (indirectly) targeted through a central bank’s policy. The interest rate ($\tilde{r}$) can also abruptly change as a result of a central bank’s policy to change its interest rate target. Panels (a) depict the leverage/trust trajectories (in black), with the initial condition $(T,L)=(0.26,0)$. Regime shifts in $\tilde{g}$ and $\tilde{r}$ are imposed along the trajectory. The black dot indicates the steady-state point associated with the last values of the parameters. The dashed coloured lines show the continuation of the trajectories if no regime shift had occurred (i.e., for constant $\tilde{g}$ and $\tilde{r}$). The values of the exogenous parameters $\tilde{a},\tilde{g},\tilde{r}$ are shown above each curve segment. The regime shifts are visualised by the kinks of the leverage/trust trajectory. They are also visible as jumps in the ROA and ROE plots shown in panels (b) as a function of leverage. Note that, in each of the panels, it can be observed that the return on assets is equal to the return on equity in the steady-state (where $T_{\text{stationary}}=L_{\text{stationary}}$). Fig.4 corresponds to the situation where the intervention (second kink), corresponding to an increase of $\tilde{g}$ and a decrease of $\tilde{r}$, occurs at an intermediate time $\tau=\tau_{1}$. To study how the timing of the intervention affects the steady-state return on assets, two other cases are studied: Fig.5 corresponds to a central bank intervention occurring earlier (at time $\tau=\tau_{0}<\tau_{1}$), while Fig.6 shows the case of an intervention occurring later ($\tau=\tau_{2}>\tau_{1}$). The effect of the timing of intervention is then summarised in Fig.7. Panels (a) and (b) show respectively $\tilde{r}_{A}$ and $\tilde{r}_{A}(\tau)\cdot\tau$ as a function of $\tau$. The quantity $\tilde{r}_{A}(\tau)\cdot\tau$ can be interpreted as the natural logarithm of ${A(\tau)}/{A(0)}$ (the scaled asset value at time $\tau$), since $A(\tau)=A(0)\mathrm{e}^{\tilde{r}_{A}(\tau)\tau}$, which yields $\ln\left(\frac{A(\tau)}{A(0)}\right)=\tilde{r}_{A}(\tau)\tau$. Of course, this holds sensu stricto only for constant $\tilde{r}_{A}$, so that $\tilde{r}_{A}(\tau)\tau$ is only a (good) approximation of $\ln(A(\tau)/A(0))$. The main paradoxical conclusion obtained from Fig.7 is that optimising the long-term ROA comes at the cost of extending the period of crisis.
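The comparison behind Fig.7 can be sketched numerically. The snippet below (a minimal sketch with illustrative parameter triplets, not those of the figures) Euler-integrates Eqs. (25)-(26) with a single parameter switch at an intervention time $\tau_{\mathrm{int}}$ and accumulates the exact log-growth $\ln(A(\tau)/A(0))=\int_{0}^{\tau}\tilde{r}_{A}\,\mathrm{d}\tau^{\prime}$ implied by Eq. (24), so that the approximation $\tilde{r}_{A}(\tau)\cdot\tau$ discussed above can be checked directly.

```python
import numpy as np

def r_A(T, L, g, r, a):
    """Non-dimensional return on assets, Eq. (27)."""
    return (g - r * L + a * (T - L)) / (1.0 - T) + (T - L) * T

def run(tau_int, crisis, recovery, T0=0.26, L0_init=0.0, tau_max=25.0, dtau=1e-3):
    """Euler-integrate Eqs. (25)-(26) with a parameter switch at tau_int, accumulating
    ln A(tau)/A(0) as the integral of r_A over tau (exact, from Eq. (24))."""
    T, L, logA = T0, L0_init, 0.0
    for i in range(int(tau_max / dtau)):
        tau = i * dtau
        g, r, a = crisis if tau < tau_int else recovery
        ra = r_A(T, L, g, r, a)
        dT = T * (T - L) * (1.0 - T)
        dL = (T - L) * ((g - r * L + a * (1.0 - L)) / (1.0 - T) + (1.0 - L) * T)
        T, L, logA = T + dtau * dT, L + dtau * dL, logA + dtau * ra
        if 1.0 - T < 1e-6:   # the 1/(1-T) terms make the system stiff close to T = 1
            break
    return tau, T, L, logA, ra

# Illustrative (g, r, a) triplets: the crisis regime has g < 0 < r, the
# post-intervention regime has g > r, as in the scenarios of Fig.4-6.
crisis, recovery = (-0.03, 0.03, 0.05), (0.06, 0.01, 0.05)
for tau_int in (2.0, 6.0, 12.0):          # early, intermediate, late intervention
    tau, T, L, logA, ra = run(tau_int, crisis, recovery)
    print(f"intervention at tau = {tau_int:>4}: (T,L) = ({T:.3f},{L:.3f}), "
          f"ln A/A0 = {logA:.3f}, approximation r_A*tau = {ra * tau:.3f}")
```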
In the scenarios represented in Fig.4-6, we have assumed that the first regime change, in which $\tilde{g}$ dropped from a positive value to a negative value, is completely exogenous (e.g. due to some external shock). After this first shock, a need for intervention arises. As can be observed, the drop in $\tilde{g}$ immediately leads to a negative return on assets and return on equity and, more importantly, the trajectory would now head towards a disadvantageous stationary state, i.e. the attractive fixed point $(T,L)=(1,L_{0})$ with a corresponding negative stationary ROA (equal to $-\tilde{a}$). In this stationary state, trust exceeds leverage (i.e. $T_{\text{stationary}}>L_{\text{stationary}}$). This represents an economy in which not all resources are used to their full potential (the economy is “under-leveraged”). There could be more credit available in the economy to fund profitable projects if the leverage were equal to the trust (full potential corresponding to $T_{\text{stationary}}=L_{\text{stationary}}$). Given that intervention in the model economy is needed, the question then arises of what the optimal time to intervene is. Determining the optimal intervention time in our model economy is analogous to studying optimal delays in engineering control systems (Fridman, 2014). Reacting too fast can lead to over-control and unwanted oscillations or instabilities. Reacting too slowly may let unwanted regimes develop. Fig.7 shows that the highest steady-state return on assets is obtained in the case of late intervention ($\tau=\tau_{2}$, shown in Fig.6). Fig.6 elucidates the mechanism driving this result: postponing the intervention long enough allows for a relatively strong increase in the trust variable together with some decrease in leverage (leverage slightly decreases in the last part of the path before intervention in Fig.6). The strong increase in trust is the most notable difference observed when comparing with Fig.4 and 5, and it plays a crucial role in attaining the highest steady-state return on assets. This is because it is most effective to intervene when trust is high. Intervening when trust is high ensures convergence towards a fixed point $T_{\text{stationary}}=L_{\text{stationary}}$ that is high on the diagonal, and this translates (for $\tilde{g}>\tilde{r}>0$) into a high stationary ROA. From a dynamical-system point of view, the fact that trust increases when $\tilde{g}$ drops (representing a crisis) results from $L_{0}$ being “pushed” inside the domain $L\in[0,1]$, so that the trajectories “bend” towards the attractive fixed point $(T,L)=(1,L_{0})$, which simultaneously means that trust grows. Structurally, the increase in trust results from our assumption in the fundamental trust Eq.(16) that trust increases when $T>L$ (and decreases when $T<L$). In other words, we have engineered an economy in which there is an innate propensity for trust to develop up to its maximum, allowing in turn leverage to grow so that the economy can attain its full potential of maximum growth. One could imagine regimes in which leverage $L$ grows faster than $T$ and overtakes it, which would then lead to a subsequent decrease in trust. However, in the present formulation of our model, this is forbidden by the “barrier” at $T=L$ (the fixed axis) that cannot be crossed by any trajectory.
This property comes from the assumption underlying Eq.(17) in Appendix A that the difference between debt and its maximum available amount tends to relax exponentially fast with rate $\tilde{a}$. Thus, if at some time, trust is larger than leverage, it can only increase and remain in this region $T\geq L$ at all times. The above results thus suggest that the optimal policy intervention strategy is to accept the economic downturn for some time, thereby allowing trust to increase and leverage to decrease somewhat, and then to intervene relatively late by boosting $\tilde{g}$ and lowering interest rates $\tilde{r}$. One may question whether it makes sense that an economy in crisis can increase trust in the absence of intervention. An increasing trust implies that the maximum sustainable leverage increases. With the lower utilisation of means of production, this might actually reflect a reality. But the psychology of economic agents in general dominates, with strong risk aversion developing during crisis, leading to freezing of capital and under utilisation of resources. Such sub-optimal behaviour can be captured mathematically by modifying the optimal learning embodied in Eq.(17) of Appendix A to a weaker one that assumes less perfect anticipation of the optimal leverage. Relaxing the rigid structure of Eq.(17) will have the immediate consequence that the axis $T=L$ is no more a fixed axis under all circumstances, so that more complex dynamics that can cross it could develop. This will be studied in a subsequent publication. 5 Calibration of the model and performance In this section, the model of the joint dynamics of assets, leverage and trust is calibrated to ROE data of the EURO STOXX 50 over the time period 2000-2013, using Bayesian inference (i.e. the Gibbs sampler). 5.1 Set-up of the calibration exercise The specification of the model for the purpose of calibrating its ROE equation to ROE data is now described. It consists in an observation equation complemented by state equations, together with the specification of the prior distributions for the model parameters. The observation equation is the discrete time version of Eq.(28) to which a stochastic residual is added. The state equations are the discrete time versions of the model equations Eq.(25) and Eq.(26) for $L$ and $T$ studied previously. In the state equations, we allow for two regimes to co-exist, corresponding to states $s_{1}$ and $s_{2}$ respectively associated with two distinct values for the parameter $g_{s}$. The transition between the two regimes is described by a standard Markov Switching model with transition matrix $Q$. 
Observation Equation $$\displaystyle[r_{E}]_{t}=g_{s_{i}}+\frac{L_{t}}{1-L_{t}}(g_{s_{i}}-r_{t})+% \epsilon_{t},\hskip 199.169291pt\epsilon_{t}\sim\mathcal{N}(0,\sigma_{\epsilon% }^{2}),$$ (33) $$\displaystyle\text{where }\;\;t\in\mathbb{Z}^{+}\text{ and }s_{i}=s_{i}(t)\in% \{s_{1},s_{2}\}\text{ indicates the state at time }t.$$ State Equations $$\displaystyle L_{t+1}=L_{t}+(T_{t}-L_{t})\left(\frac{g_{s_{i}}-r_{t}L_{t}+a(1-% L_{t})}{1-T_{t}}+k(1-L_{t})T_{t}\right)\Delta t,$$ (34) $$\displaystyle T_{t+1}=T_{t}+kT_{t}\left(T_{t}-L_{t}\right)(1-T_{t})\Delta t,$$ (35) $$\displaystyle g_{s_{i}}=\begin{cases}c_{1}&\mbox{in }s_{1},\\ c_{2}&\mbox{in }s_{2},\end{cases}\;\;\;\text{(Markov Switching Model).}$$ (36) Priors $$\displaystyle\sigma_{\epsilon}^{2}\sim\mathcal{IG}(10^{-2},10^{-2}),\;\;c_{1},% \;c_{2}\sim\mathcal{U}(-0.25,0.25).$$ ($$\mathcal{IG}$$ denotes the inverse gamma distribution and $$\mathcal{U}$$ denotes the uniform distribution). $$\displaystyle\text{Transition rate matrix:}\;\;Q=\left[\begin{matrix}-\lambda&% \lambda\\ \mu&-\mu\end{matrix}\right],\;\;\;\;\;\;\;\lambda,\mu\sim\mathcal{U}(0,100).$$ $$\displaystyle\text{Initial conditions:}\;\;L_{1}\sim\mathcal{U}(0.2,0.3),\;T_{% 1}\sim\mathcal{U}(0.3,0.4).$$ $$\displaystyle\textbf{Parameter values:}\;\;a=0.05,\;k=0.05,\;\Delta t=0.1.$$ We sample 200 times and a burn-in sample of 100 is chosen (i.e. the first 100 samples are discarded). Uninformative priors for the variance of the noise term $\epsilon_{t}$ and most other parameters ($c_{1},c_{2},\lambda,\mu$) are chosen. The inverse gamma distribution is commonly used as a prior for the variance of noise terms. Restrictions are imposed on $L_{1}$ and $T_{1}$ to ensure that all variables remain finite and that $T>L$. The parameters that govern time scales ($a,k,\Delta t$) are chosen to be constant and such that all variables remain finite. The time step $\Delta t=0.1$ (year) corresponds to approximately one month, in line with the fact that we supply one data point for $[r_{E}]_{t}$ and $r_{t}$ per month (12 data points per year). We could have chosen to estimate the model parameters $a$ and $k$ as well, as long as $k$ and $a$ remain positive and do not cause variables to become infinite. However, estimating $a$ and $k$ (e.g. $a\sim U(0,10)$ and $k\sim U(0,1)$) will cause the dynamics of leverage and trust to become somewhat redundant due to the fact that in this way $a$, $k$ will be such that leverage and trust very quickly become constant. This would effectively reduce the system of equations that is used for fitting to only Eq.(33) and Eq.(36) with $r_{t}$ supplied as data and $L_{t}$ constant. To preclude that leverage and trust very quickly reach a steady-state, we assign constant values to $a$ and $k$. The parameter $a$ is chosen to be small (and of the same order of magnitude as $g$) to make sure that $g+a$ and thus $L_{0}$ changes significantly when the state is changed (from $s_{1}$ to $s_{2}$ or vice versa). Indeed, recall that $L_{0}$ controls the position of the attractive fixed point $(1,L_{0})$. This results in a leverage $L_{t}$ that fluctuates (decreases in one state, increases in the other state). Furthermore, $k$ is chosen such that $k\cdot\Delta t$ is small, which precludes that $L\to 1$ and $T\to 1$ so as to avoid the divergences associated with the terms $1/(1-L)$ and $1/(1-T)$ in Eq.(33) and (34). The empirical time series $r_{t}$ is obtained from the Federal Reserve Bank of St. Louis (“Interest Rates, Discount Rate for Euro Area”, n.d.) 
and the time series $[r_{E}]_{t}$ is calculated based on data obtained from Thomson Reuters of the adjusted closing prices of the EURO STOXX 50. The supplied return on equity values are monthly averaged yearly returns. 5.2 ROE and EBITA/Assets ratio: estimated distributions Fig.8 and 9 present the calibrated distribution of the EBITA/Assets ratio $g$ and the return on equity (ROE) $r_{E}$ respectively, based on the model specification of section 5.1. The light red shade is used to distinguish states 1 and 2 and to delineate the switching events between the states. The probability distribution is shown with the different shades of blue, that is: 20% of the observations lie within the dark blue region, another 20% in the lighter blue region that surrounds the dark blue one, and so on. The probability distribution of $g$ is narrow and the different shades cannot be (easily) observed. Fig.9 shows that the model as presented in section 5.1 is successful in fitting the return on equity of the EURO STOXX 50 for the time period 2000-2013. It can be observed that the switches between the states, which are endogenously determined in the model, occur at the same times as the shocks/jumps in the actual ROE data. Furthermore, the light red shade corresponds to the periods of negative equity returns (crisis). The proposed dynamics of the EBITA/Assets ratio $g$ shown in Fig.8 is crucial to obtain a good fit of the ROE data and it is clear that the Markov Switching Model with two states is successful. 5.3 Leverage and Trust: estimated distributions Fig.10 presents the resulting plots of the leverage (panel (a)) and trust (panel (b)) as a function of time, obtained using the model set-up of section 5.1. Fig.10 shows that the leverage decreases in the state shaded with light red (crisis state) and increases in the other state. This is a consequence of the fact that $L_{0}:=\frac{g+a}{r+a}$ is negative in the state shaded with light red ($g+a\approx-0.16+0.05=-0.11$ with $g$ taken from Fig.8), while it is positive in the other state ($g+a\approx 0.1+0.05=0.15$). Recall that $(T,L)=(1,L_{0})$ is an attractive fixed point. The trust/leverage trajectory lies in the $T>L$ regime as a consequence of the priors chosen for $L_{1}$ and $T_{1}$. As discussed previously in section 4, the increase in trust is the result of the assumption in the fundamental trust equation (Eq.(16)) that trust increases when $T>L$ (and decreases when $T<L$) together with the fact that agents anticipate optimally the evolution of leverage. This leads to an uncrossable “barrier” at the fixed axis $T=L$, so that trust can only increase when starting from an initial condition with $T>L$. Fig.10 furthermore shows that trust is almost constant. This is due to the fact that the “updating term” in Eq.(35) is relatively small ($k\cdot\Delta t=0.005$), making trust essentially constant. Fig.10 also illustrates that $L=1$ and $T=1$ are avoided. This is important because the model blows up whenever $T\to 1$ or $L\to 1$, since its equations exhibit terms proportional to $1/(1-L)$ and $1/(1-T)$ (see Eq.(33) and (34)). 6 Conclusion A macroeconomic model has been proposed based on the economic variables (i) assets, (ii) leverage and (iii) trust. The main motivation to use these three variables is to focus on the role of credit in the dynamics of economic growth, and to investigate how credit may be associated with both economic performance and confidence. 
Fundamental economic relations and assumptions have been used to describe the joint dynamics of these three variables. The interplay between assets, leverage and trust has been presented in leverage/trust trajectory plots, accompanied by contour plots of the return on assets. Several interesting features of the assets, leverage and trust model have been discussed. The first notable insight is the mechanism of reward/penalty associated with patience, as quantified by the return on assets. In regular economies ($g>r$), starting with a trust higher than leverage results in the highest long-term return on assets (which can be seen as a proxy for economic growth). Therefore, patient economies that first build trust and then increase leverage are positively rewarded. We also find that a positive development does not need to be monotonous: before reaching the happy positive growth steady state: an economy can live through transient regimes during which debt growth exceeds asset growth in the short-run, before converging to the most favourable long-term state. Our second main finding concerns a recommendation for the reaction of a central bank to an external shock that affect negatively the economic growth. For this, regime shifts associated with exogenous changes of model parameters have been studied for different leverage/trust trajectories. The regime shifts represent sudden changes in economic parameters, as a result of a crisis, or due to the intervention of a central bank. Based on the sample trajectories, the effect of the timing of policy intervention has been studied. It was found that late policy intervention in the model economy results in the highest long-term return on assets and largest asset value. Of course, this comes at the cost of suffering longer from the crisis until the intervention occurs. The phenomenon that late intervention is most effective to attain a high long-term return on assets can be ascribed to the fact that postponing intervention allows trust to increase first, and it is most effective to intervene when trust is high. These results derive from our first assumption that trust tends to increase when it is above leverage together with our second assumption that economic agents use an optimal learning embodied in Eq.(17) of Appendix A of what should be the utilisation of debt for a given level of trust and amount of assets. Relaxing this with less optimal learning may lead to more complex dynamics, which will be studied in a subsequent publication. We have also presented a calibration of the model to empirical data. A calibration set-up has been proposed, based on the Euler discretisation of our differential equations governing the dynamics of asset, leverage and trust. By specifying a Markov Switching Model for the EBITA/Assets ratio $g$, the model was shown to be very successful in fitting the empirical data of the return on equity of the EURO STOXX 50 for the time period 2000-2013. The fitted distribution of leverage was found to decrease in the state corresponding to crises and to increase in the other growing economy state. In the calibrated distribution of the trust variable, it can be observed that there is no time at which trust decreases. This is again a consequence of the assumption in the fundamental trust equation that trust increases when $T>L$ (and decreases when $T<L$) together with the assumption of optimal learning of the optimal level of debt. 
The presented figures also show that the dynamics of leverage and trust can be highly non-monotonous with curved trajectories, as a result of the nonlinear coupling between the variables. This has an important implication for policy makers, suggesting that simple linear forecasting can be deceiving in some regimes and may lead to inappropriate policy decisions. Appendix A Closed-form solution of leverage as a function of trust The leverage/trust trajectories can be solved analytically (closed-form) based on Eq.(25) and (26). We will outline the derivation and provide the result. The first step in deriving the closed form solution is to divide Eq.(25) by Eq.(26): $$\frac{\mathrm{d}L}{\mathrm{d}T}=\frac{1}{T(1-T)}\left(\frac{\tilde{g}-\tilde{r% }L+\tilde{a}(1-L)}{1-T}+(1-L)T\right).$$ (37) In terms of $\beta$ and $L_{0}$, Eq.(37) is given by: $$\displaystyle\frac{\mathrm{d}L}{\mathrm{d}T}$$ $$\displaystyle=\frac{1}{T(1-T)}\left(\beta\frac{L_{0}-L}{1-T}+T(1-L)\right),$$ (38) $$\displaystyle=\frac{\beta}{T}\cdot\frac{L_{0}-1}{(1-T)^{2}}+\left[\frac{\beta}% {T(1-T)^{2}}+\frac{1}{1-T}\right]\left(1-L\right).\;\;\;\;\;(\beta\neq 0,\pm\infty)$$ (39) Eq.(39) is a first order linear differential equation. Solving Eq.(39) yields: $$\displaystyle L(T)$$ $$\displaystyle=1-K\frac{(1-T)^{1+\beta}}{T^{\beta}}\mathrm{e}^{-\frac{\beta}{1-% T}}+(L_{0}-1)\bigg{\{}\left[\frac{\beta}{1+\beta}+\frac{1-T}{1+\beta}\right]$$ $$\displaystyle\;\;\;+\frac{\beta}{1+\beta}\frac{T^{2}}{(1-T)}\int_{0}^{1}{(1-y)% ^{\beta+1}}{\mathrm{e}^{-\frac{\beta T}{1-T}y}\mathrm{d}y}\bigg{\}},$$ (40) where $K$ is an integration constant. Appendix B Proof of Theorems 1-3 B.1 Determination of fixed points and Jacobian The fixed (stationary) points are the set $(T,L)$ for which $\frac{\mathrm{d}L}{\mathrm{d}\tau}=0$ and $\frac{\mathrm{d}T}{\mathrm{d}\tau}=0$. From Eq.(26) and Eq.(22) (which are valid for $k\neq 0$), it follows that the axis $T=L$ and the point $(T,L)=(0,L_{0})$ satisfy this condition. Furthermore, $(T,L)=(1,L_{0})$ is a fixed point. To show this, consider (14), (18), (16) in non-dimensional time for $T=1$: $$\begin{dcases}&\frac{\mathrm{d}A}{\mathrm{d}\tau}\bigg{|}_{T=1}=\tilde{g}A-% \tilde{r}D+\frac{\mathrm{d}D}{\mathrm{d}\tau},\\ &\frac{\mathrm{d}D}{\mathrm{d}\tau}\bigg{|}_{T=1}=\tilde{a}(A-D)+\frac{\mathrm% {d}A}{\mathrm{d}\tau},\\ &\frac{\mathrm{d}T}{\mathrm{d}\tau}\bigg{|}_{T=1}=0,\end{dcases}$$ (41) so: $\frac{\mathrm{d}T}{\mathrm{d}\tau}=0$ (as required to classify as fixed point) and one can substitute the first equation of system (41) into the second to find under what condition a solution is admitted: $$\displaystyle\frac{\mathrm{d}A}{\mathrm{d}\tau}=\tilde{g}A-\tilde{r}D+\tilde{a% }(A-D)+\frac{\mathrm{d}A}{\mathrm{d}\tau},$$ (42) $$\displaystyle\Leftrightarrow$$ $$\displaystyle L:=D/A=\frac{\tilde{g}+\tilde{a}}{\tilde{r}+\tilde{a}}=\frac{g+a% }{r+a}:=L_{0}.$$ (43) To summarize, the fixed points are: the axis $T=L$, the point $(T,L)=(0,L_{0})$, and the point $(T,L)=(1,L_{0})$. Now in order to be able to analyse the stability of the fixed points, define $h:=\frac{\mathrm{d}T}{\mathrm{d}\tau}$ and $g:=\frac{\mathrm{d}L}{\mathrm{d}\tau}$. Furthermore, assume that $(T^{*},L^{*})$ is an arbitrary fixed point. 
Now the linear system for $(T,L)$ close to $(T^{*},L^{*})$ can be represented by a Taylor expansion around the fixed point: $$\displaystyle\frac{\mathrm{d}T}{\mathrm{d}\tau}=h(T^{*},L^{*})+(T-T^{*})\cdot% \frac{\partial h}{\partial T}\bigg{|}_{T^{*},L^{*}}+(L-L^{*})\cdot\frac{% \partial h}{\partial L}\bigg{|}_{T^{*},L^{*}}+\dots\,,$$ (44) $$\displaystyle\frac{\mathrm{d}L}{\mathrm{d}\tau}=g(T^{*},L^{*})+(T-T^{*})\cdot% \frac{\partial g}{\partial T}\bigg{|}_{T^{*},L^{*}}+(L-L^{*})\cdot\frac{% \partial g}{\partial L}\bigg{|}_{T^{*},L^{*}}+\dots\,.$$ (45) Note that: $\displaystyle h(T^{*},L^{*})=\frac{\mathrm{d}T}{\mathrm{d}\tau}\bigg{|}_{T^{*}% ,L^{*}}=0$ and $\displaystyle g(T^{*},L^{*})=\frac{\mathrm{d}L}{\mathrm{d}\tau}\bigg{|}_{T^{*}% ,L^{*}}=0$ (the point $(T^{*},L^{*})$ was defined to be a fixed point), so the resulting system is given by: $$\displaystyle{\displaystyle\left[\begin{matrix}\frac{\mathrm{d}T}{\mathrm{d}% \tau}\\ \frac{\mathrm{d}L}{\mathrm{d}\tau}\end{matrix}\right]={\left[\begin{matrix}% \frac{\partial h}{\partial T}\big{|}_{T^{*},L^{*}}&\frac{\partial h}{\partial L% }\big{|}_{T^{*},L^{*}}\\ \frac{\partial g}{\partial T}\big{|}_{T^{*},L^{*}}&\frac{\partial g}{\partial L% }\big{|}_{T^{*},L^{*}}\end{matrix}\right]}\left[\begin{matrix}T-T^{*}\\ L-L^{*}\end{matrix}\right],}$$ (46) where the 2 by 2 matrix is the Jacobian evaluated at the fixed point $(T^{*},L^{*})$. The Jacobian matrix $J$ in its most general variant is given by: $$J=\left[\begin{matrix}J_{11}&J_{12}\\ J_{21}&J_{22}\end{matrix}\right]=\left[\begin{matrix}\partial_{T}\frac{\mathrm% {d}T}{\mathrm{d}\tau}&\partial_{L}\frac{\mathrm{d}T}{\mathrm{d}\tau}\\ \partial_{T}\frac{\mathrm{d}L}{\mathrm{d}\tau}&\partial_{L}\frac{\mathrm{d}L}{% \mathrm{d}\tau}\end{matrix}\right],$$ (47) which in this case becomes (using Eq.(26) and Eq.(22)): $$\displaystyle J=J(T,L)=\left[\begin{matrix}(2T-L)(1-T)-T(T-L)&-T(1-T)\\ (1-L)\left[\beta\frac{L_{0}-L}{(1-T)^{2}}+2T-L\right]&-\beta\frac{L_{0}+T-2L}{% 1-T}-T(1+T-2L)\end{matrix}\right].$$ (48) The Jacobian in Eq.(48) can be used to determine the (linear) stability of the fixed points. B.2 $T=L$ analysed: stability and ROA Stability The Jacobian matrix (Eq.(48)) evaluated at $T=L$ is given by: $$J(L,L)=\left[\begin{matrix}L(1-L)&-L(1-L)\\ (1-L)\left[\beta\frac{L_{0}-L}{(1-L)^{2}}+L\right]&-\beta\frac{L_{0}-L}{1-L}-L% (1-L)\end{matrix}\right].$$ (49) The eigenvalues (denoted $\lambda_{1},\lambda_{2}$) are: $\lambda_{1}=0$,  $\lambda_{2}=x+y=-\beta\frac{L_{0}-L}{(1-L)}$. 444The eigenvalues of the matrix $J(L,L)$ can be calculated by solving the following equation for $\lambda$: $\det(J(L,L)-\lambda I)=0,$ where $I$ is the identity matrix. Both eigenvalues are real. If both eigenvalues are negative, then the axis $T=L$ is attractive/stable, while the axis is repulsive/unstable when at least one of the eigenvalues is positive. From $\lambda_{2}$, it then follows that (assuming $L\in[0,1]$ and $\beta>0$): $$\begin{cases}\mathrm{For}\;L<L_{0},\;\mathrm{the\;axis}\;T=L\;\mathrm{is\;% attractive.}\\ \mathrm{For}\;L>L_{0},\;\mathrm{the\;axis}\;T=L\;\mathrm{is\;repulsive.}\end{cases}$$ Note that the above implies that, if $g>r$ (then: $L_{0}>1$), the axis $T=L$ will be entirely attractive for $L\in[0,1]$. 
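These statements are easy to check numerically. The short sketch below (Python; the values chosen for $\beta$ and $L_{0}$ are illustrative assumptions) builds the Jacobian of Eq. (48), evaluates it on the fixed axis $T=L$ and at $(T,L)=(0,L_{0})$, and confirms the eigenvalue pattern derived above: one vanishing eigenvalue and $\lambda_{2}=-\beta(L_{0}-L)/(1-L)$ on the axis, and two eigenvalues of opposite sign, $-L_{0}$ and $\beta L_{0}$, at $(0,L_{0})$ (cf. subsection B.3 below).

```python
import numpy as np

def jacobian(T, L, beta, L0):
    """Jacobian of the (T, L) subsystem, Eq. (48)."""
    J11 = (2 * T - L) * (1 - T) - T * (T - L)
    J12 = -T * (1 - T)
    J21 = (1 - L) * (beta * (L0 - L) / (1 - T) ** 2 + 2 * T - L)
    J22 = -beta * (L0 + T - 2 * L) / (1 - T) - T * (1 + T - 2 * L)
    return np.array([[J11, J12], [J21, J22]])

# Illustrative values (assumptions): beta = a~ + r~ and L0 = (g~ + a~)/(r~ + a~)
beta, L0 = 0.08, 0.6

# On the fixed axis T = L (Eq. (49)): expect {0, -beta*(L0 - L)/(1 - L)}
for L in (0.3, 0.8):                     # one point below L0 (attractive), one above (repulsive)
    eig = np.linalg.eigvals(jacobian(L, L, beta, L0))
    print(f"T = L = {L}: eigenvalues {np.sort(eig)}, "
          f"predicted lambda_2 = {-beta * (L0 - L) / (1 - L):+.4f}")

# At (T, L) = (0, L0) (Eq. (54)): expect a saddle with eigenvalues {-L0, beta*L0}
eig = np.linalg.eigvals(jacobian(0.0, L0, beta, L0))
print(f"(T,L) = (0,{L0}): eigenvalues {np.sort(eig)} (expected {-L0} and {beta * L0})")
```

The eigenvalue reported as zero is only numerically zero (of order machine precision), as expected for the fixed axis.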
ROA The corresponding non-dimensional return on assets on the axis $T=L$ is given by: $$\displaystyle\tilde{r}_{A}|_{T=L}\stackrel{{\scriptstyle\eqref{r_A_general2}}}% {{=}}\tilde{g}\frac{1}{1-L}-\tilde{r}\frac{L}{1-L},$$ (50) $$\displaystyle\Leftrightarrow$$ $$\displaystyle\tilde{r}_{A}|_{T=L}=\tilde{g}+\frac{L}{1-L}(\tilde{g}-\tilde{r})% \stackrel{{\scriptstyle\eqref{r_E_expr}}}{{=}}\tilde{r}_{E}.$$ (51) From Eq.(51), it follows that on the axis $T=L$ the return on equity and the return on assets are equal. This result $\tilde{r}_{A}=\tilde{r}_{E}$ on $T=L$ economically are intuitive. The axis $T=L$ is a steady-state axis and thus describes long-run behaviour. Hence the result can be interpreted as that in the long-run the return on financial investments (i.e. the return on equity) should equal the growth of the economy (return on assets). This is in line with the reasoning by Sornette and Cauwels (2014) and Dalio (2015). By expressing Eq.(51) as follows: $$\displaystyle\tilde{r}_{A}|_{T=L}=\tilde{r}+(\tilde{g}-\tilde{r})\frac{1}{1-L},$$ (52) the derivative with respect to $L$ can be easily computed. The derivative of Eq.(52) with respect to $L$ indicates whether $\tilde{r}_{A}|_{T=L}$ increases, decreases or stays constant for increasing $L$. Taking the derivative with respect to $L$ of Eq.(52) gives: $$\displaystyle\frac{\mathrm{d}}{\mathrm{d}L}\tilde{r}_{A}|_{T=L}$$ $$\displaystyle=\frac{\tilde{g}-\tilde{r}}{(1-L)^{2}},$$ (53) which expresses that: if $\tilde{g}>\tilde{r}$ ($\tilde{g}<\tilde{r}$) then: $\tilde{r}_{A}|_{T=L}$ increases (decreases) when $L$ increases. When $\tilde{g}=\tilde{r}$ then $\tilde{r}_{A}|_{T=L}$ is constant with respect to $L$. Note that the trajectories can never move on the axis $T=L$ (since points on the axis $T=L$ are in steady-state); however the result is useful to compare the different steady-state points on the axis $T=L$. B.3 $(T,L)=(0,L_{0})$ analysed: stability and ROA Stability The Jacobian matrix (Eq.(48)) evaluated at $T=0$, $L=L_{0}$ is given by: $$J(0,L_{0})=\left[\begin{matrix}-L_{0}&0\\ -(1-L_{0})L_{0}&\beta L_{0}\end{matrix}\right],$$ (54) with eigenvalues $\lambda_{1}=-L_{0}$, and $\lambda_{2}=\beta L_{0}$. The eigenvalues are of opposite sign. Hence, the stationary point is a saddle point, which is unstable. ROA The non-dimensional return on assets in the fixed point $(T,L)=(0,L_{0})$ is given by: $$\displaystyle\tilde{r}_{A}|_{T=0,L=L_{0}}$$ $$\displaystyle\stackrel{{\scriptstyle\eqref{r_A_general2}}}{{=}}\tilde{g}-% \tilde{r}{L_{0}}-\tilde{a}L_{0}=-\tilde{a},$$ (55) which is negative since $\tilde{a}>0$. B.4 $(T,L)=(1,L_{0})$ analysed: stability and ROA Stability In order to investigate the stability of the point $(T,L)=(1,L_{0})$, it is not possible to consider the Jacobian since this would presume $T\neq 1$ (the Jacobian contains elements $\propto\frac{1}{1-T}$). With perturbation analysis, it is however possible to show that ${(T,L)=(1,L_{0})}$ is an attractive fixed point. To do so, consider a small perturbation from the point $T=1$: $$T=1+\epsilon_{{T}},$$ (56) where $\epsilon_{\mathrm{T}}$ denotes the perturbation. 
Then by inserting Eq.(56) and $L=L_{0}$ into Eq.(26), the following equation is obtained: $$\displaystyle\frac{\mathrm{d}\epsilon_{T}}{\mathrm{d}\tau}=-\epsilon_{T}\cdot% \big{(}\epsilon_{T}+(1-L_{0})\big{)}(1+\epsilon_{T})\approx-\epsilon_{T}\cdot% \big{(}\epsilon_{T}+(1-L_{0})\big{)}\approx-\epsilon_{T}\cdot(1-L_{0}),$$ (57) where a first order approximation is taken (the terms proportional to $\epsilon_{T}^{2}$ or higher order terms are neglected). By separating variables and integrating Eq.(57), one can find that: $$\displaystyle\epsilon_{T}=C\mathrm{e}^{-(1-L_{0})\tau},$$ (58) showing that $\epsilon_{T}\to 0$ when $\tau\to\infty$ if $L_{0}<1$, which shows that $T=1$ is attractive if $L_{0}<1$. Eq.(43) showed that $L=L_{0}$ is the only solution admitted for $L$, hence: ${(T,L)=(1,L_{0})}$ is an attractive fixed point if $L_{0}<1$. ROA To determine the non-dimensional return on assets at the fixed point $(T,L)=(1,L_{0})$, note that Eq.(27) can also be expressed as: $$\displaystyle\tilde{r}_{A}=-\tilde{a}+\beta\frac{L_{0}-L}{1-T}+(T-L)T.$$ (59) In order to be able to take the limit $T\to 1$, a double integration by parts of Eq.(40) can be performed to arrive at: $$\displaystyle L(T)$$ $$\displaystyle=L_{0}-K\frac{(1-T)^{1+\beta}}{T^{\beta}}\mathrm{e}^{-\frac{\beta% }{1-T}}+(L_{0}-1)(1-T)\left\{-\frac{1}{\beta}+\int_{0}^{1}(1-y)^{\beta-1}% \mathrm{e}^{-\frac{\beta T}{1-T}y}\mathrm{d}y\right\}.$$ (60) One can now substitute Eq.(60) into Eq.(59) to obtain: $$\displaystyle\tilde{r}_{A}=-\tilde{a}+\beta\left(K\frac{(1-T)^{\beta}}{T^{% \beta}}e^{-\frac{\beta}{1-T}}+(1-L_{0})\left[-\frac{1}{\beta}+\int_{0}^{1}(1-y% )^{\beta-1}e^{-\frac{\beta T}{1-T}y}\,\mathrm{d}y\right]\right)+(T-L)T,$$ (61) and therefore: $$\displaystyle\lim_{T\to 1}\tilde{r}_{A}|_{L=L_{0}}$$ $$\displaystyle=-\tilde{a}-\beta(1-L_{0})\frac{1}{\beta}+(1-L_{0})=-\tilde{a}.$$ (62) References (1) Bacchetta et al. (2015) Bacchetta, P., Perazzi, E., & Van Wincoop, E. (2015). Self-Fulfilling Debt Crises: Can Monetary Policy Really Help? (No. w21158). National Bureau of Economic Research. DOI: 10.3386/w21158. Bernanke et al. (2001) Bernanke, B. S., & Gertler, M. (2001). Should central banks respond to movements in asset prices?. American Economic Review, 91(2), 253-257. DOI: 10.1257/aer.91.2.253. Beugelsdijk et al. (2004) Beugelsdijk, S., De Groot, H. L., & Van Schaik, A. B. (2004). Trust and economic growth: a robustness analysis. Oxford Economic Papers, 56(1), 118-134. DOI: 10.1093/oep/56.1.118. Bjørnskov (2012) Bjørnskov, C. (2012). How does social trust affect economic growth?. Southern Economic Journal, 78(4), 1346-1368. DOI: 10.4284/0038-4038-78.4.1346. Borio and Lowe (2002) Borio, C. E., & Lowe, P. W. (2002). Asset prices, financial and monetary stability: exploring the nexus. BIS Working Papers, No 114. DOI: 10.2139/ssrn.846305. Brunnermeier and Sannikov (2012) Brunnermeier, M. K., & Sannikov, Y. (2012). A macroeconomic model with a financial sector. American Economic Review, 104(2), 379-421. DOI: 10.1257/aer.104.2.379. Dalio (2015) Dalio, R. (2015). Economic Principles - How the economic machine works. Research by Bridgewater Associates. Retrieved October 18, 2015, from http://www.bwater.com/Uploads/FileManager/research/how-the-economic-machine-works/ray_dalio__how_the_economic_machine_works__leveragings_and_deleveragings.pdf. DeAngelo et al. (2011) DeAngelo, H., DeAngelo, L., & Whited, T. M. (2011). Capital structure dynamics and transitory debt. Journal of Financial Economics, 99(2), 235-261. DOI: 10.2139/ssrn.1262464. 
Dincer and Uslaner (2010) Dincer, O. C., & Uslaner, E. M. (2010). Trust and growth. Public Choice, 142(1-2), 59-67. DOI: 10.1007/s11127-009-9473-4. Dovidio et al. (2006) Dovidio, J. F., Piliavin, J. A., Schroeder, D. A., & Penner, L. (2006). The social psychology of prosocial behavior. Lawrence Erlbaum Associates Publishers. Chicago “ECB announces expanded asset purchase programme” (2015) ECB announces expanded asset purchase programme. (2015, January 22). Retrieved October 18, 2015, from https://www.ecb.europa.eu/press/pr/date/2015/html/pr150122_1.en.html. Fridman (2014) Fridman, E. (2014). Introduction to Time-Delay Systems: Analysis and Control (Systems & Control: Foundations & Applications), Birkhäuser. Galí (2009) Galí, J. (2009). Monetary Policy, inflation, and the Business Cycle: An introduction to the new Keynesian Framework. Princeton University Press. Geanakoplos (2010) Geanakoplos, J. (2010). The leverage cycle. In NBER Macroeconomics Annual 2009, Volume 24 (pp. 1-65). University of Chicago Press. Retrieved October 18, 2015, from nber.org/chapters/c11786. Gilchrist (2003) Gilchrist, S. (2003). Financial markets and financial leverage in a two-country world-economy (Vol. 228). Banco Central de Chile. Graeber (2011) Graeber, D. (2011). Debt: The first 5,000 years. Brooklyn, N.Y: Melville House. “Gross domestic product 2014” (2015) Gross domestic product 2014. (2015, September 18). Retrieved October 18, 2015, from http://databank.worldbank.org/data/download/GDP.pdf. He and Krishnamurthy (2011) He, Z., & Krishnamurthy, A. (2011). A model of capital and crises. The Review of Economic Studies, rdr036. DOI: 10.1093/restud/rdr036. “Interest Rates, Discount Rate for Euro Area” (n.d.) Interest Rates, Discount Rate for Euro Area. (n.d.). Retrieved October 18, 2015, from https://research.stlouisfed.org/fred2/series/INTDSREZQ193N. Isohätälä et al. (2015) Isohätälä, J., Klimenko, N., & Milne, A. (2015). Post-crisis Macrofinancial Modelling: Continuous Time Approaches. Forthcoming in The Handbook of Post-Crisis Financial Modelling. Palgrave-MacMillan. Knack and Keefer (1997) Knack, S., & Keefer, P. (1997). Does social capital have an economic payoff? A cross-country investigation. The Quarterly journal of economics, 1251-1288. DOI: 10.1162/003355300555475. Lang et al. (1996) Lang, L., Ofek, E., & Stulz, R. (1996). Leverage, investment, and firm growth. Journal of financial Economics, 40(1), 3-29. DOI: 10.1016/0304-405X(95)00842-3. Lorenzoni and Werning (2013) Lorenzoni, G., & Werning, I. (2013). Slow moving debt crises (No. w19228). National Bureau of Economic Research. DOI: 10.3386/w19228. Merton (1968) Merton, R. K. (1968). The Matthew effect in science. Science, 159(3810), 56-63. DOI: 10.1126/science.159.3810.56. Miller and Modigliani (1961) Miller, M. H., & Modigliani, F. (1961). Dividend policy, growth, and the valuation of shares. The Journal of Business, 34(4), 411-433. DOI: 10.1086/294442. Putnam et al. (1994) Putnam, R. D., Leonardi, R., & Nanetti, R. Y. (1994). Making democracy work: Civic traditions in modern Italy. Princeton university press. Reinhart and Rogoff (2009) Reinhart, C. M., & Rogoff, K. (2009). This time is different: eight centuries of financial folly. Princeton university press. Sbordone et al. (2010) Sbordone, A. M., Tambalotti, A., Rao, K., & Walsh, K. J. (2010). Policy analysis using DSGE models: an introduction. Economic Policy Review, 16(2). DOI: 10.2139/ssrn.1692896. Sornette and Cauwels (2014) Sornette, D., & Cauwels, P. (2014). 
1980–2008: The illusion of the perpetual money machine and what it bodes for the future. Risks, 2(2), 103-131. DOI: 10.3390/risks2020103. “Summary for ESTX50 EUR P” (n.d.) Summary for ESTX50 EUR P. (n.d.). Retrieved October 18, 2015, from http://finance.yahoo.com/q?s=^STOXX50E. Taylor (2002) Taylor, A. M. (2002). A century of current account dynamics. Journal of International Money and Finance, 21(6), 725-748. DOI: 10.1016/S0261-5606(02)00020-7. “The global debt clock” (n.d.) The global debt clock. (n.d.). Retrieved October 18, 2015, from http://www.economist.com/content/global_debt_clock. Tobin (1969) Tobin, J. (1969). A general equilibrium approach to monetary theory. Journal of money, credit and banking, 1(1), 15-29. 10.2307/1991374. Von der Becke and Sornette (2014) Von der Becke, S., & Sornette, D. (2014). Toward a Unified Framework of Credit Creation. Swiss Finance Institute Research Paper, (14-07). DOI: 10.2139/ssrn.2395272. Woodford (2003) Woodford, M. (2003). Interest and prices: Foundations of a theory of monetary policy. Princeton University Press.
Probing the fourth neutrino existence by neutral current oscillometry in the spherical gaseous TPC J.D. Vergados${}^{1}$, Y. Giomataris${}^{2}$ and Yu.N. Novikov${}^{3}$ 1 University of Ioannina, Ioannina, GR 45110, Greece. E-mail (corresponding author): Vergados@uoi.gr 2 CEA, Saclay, DAPNIA, Gif-sur-Yvette, Cedex, France 3 Petersburg Nuclear Physics Institute, 188300, Gatchina, Russia and St.Petersburg State University, 199034 St.Petersburg, Russia (January 17, 2021) Abstract It is shown that, if the “new neutrino” implied by the Reactor Neutrino Anomaly exists and is in fact characterized by the suggested relatively high mass squared difference and reasonably large mixing angle, it should clearly reveal itself in oscillometry measurements. For a judicious neutrino source the “new oscillation length” $L_{14}$ is expected to be shorter than 3 m. Thus the needed measurements can be implemented with a gaseous spherical TPC of modest dimensions with very good energy and position resolution, detecting the nuclear recoils following coherent neutrino-nucleus elastic scattering. The best candidates for oscillometry, yielding monochromatic neutrinos as well as antineutrinos, are discussed. A sensitivity in the mixing angle $\theta_{14}$, $\sin^{2}{(2\theta_{14})}=0.1$ (99%), can be reached after a few months of data handling. keywords: sterile neutrinos, oscillometry, neutral currents, spherical gaseous TPC. journal: Nuclear Physics B 1 Introduction. A recent analysis of the Reactor Neutrino Anomaly (RNA) RNA11 led to a challenging claim that this anomaly can be explained in terms of a new fourth neutrino with a mass squared difference much larger than the ones encountered in standard neutrino oscillations. In fact, assuming that the neutrino mass eigenstates are non-degenerate, one finds RNA11: $$\Delta m^{2}_{24}=|m_{2}^{2}-m_{4}^{2}|\approx\Delta m^{2}_{14}=|m_{1}^{2}-m_{4}^{2}|\geq 1.5\mbox{(eV)}^{2}$$ (1) and a mixing angle $$\sin^{2}{2\theta_{14}}=0.17\pm 0.1\;(95\%).$$ (2) It is obvious that this new neutrino should contribute to the oscillation phenomenon. In the present paper we will assume that the new neutrino is sterile, that is, it does not participate in the weak interaction. Even then, however, it has an effect on neutrino oscillations, since it will tend to decrease the electron neutrino flux. This makes the analysis of oscillation experiments more complicated. In all the previous experiments the oscillation length is much larger than the size of the detector, so one is able to see the effect only if the detector is placed at the right distance from the source. It is, however, possible to design an experiment with an oscillation length of the order of the size of the detector, as was proposed in VERGIOM06, VERNOV10. This is equivalent to many standard experiments done simultaneously. In a previous paper VerGiomNov11 we studied the possibility of investigating low energy neutrino oscillations by measuring the electron recoils following neutrino-electron scattering. In the present study we will explore the neutral current interaction to measure nuclear recoils. The main requirements are the same as given previously VERNOV10, with some modifications pertinent to the neutrino-nucleus interaction. More specifically: The neutrinos should have low energy, so that the oscillation length is smaller than the size of the detector. At the same time the energy should be sufficiently high, so that the neutrino-nucleus elastic scattering yields recoils above threshold with a sizable cross section.
A monoenergetic neutrino source is preferred, since it has the advantage that some of the features of the oscillation pattern are not affected as they might be by the averaging over a continuous neutrino spectrum. Antineutrino sources with a relatively high energy can also be employed. The ”wave form” of the oscillation is not much different than that of the monochromatic source, since only the small portion of the spectrum at the high energy end becomes relevant. The lifetime of the source should be suitable for the experiment to be performed. Clearly a compromise has to be made in the selection of the source. In this article we will show that, unlike the standard neutrino case, for a sterile neutrino one can observe neutrino diasapperance oscillations via the neutral current interaction. Furthermore the aim of this article is to show that the existence of a new fourth neutrino can be verified experimentally by the direct measurements of the oscillation curves for the monoenergetic neutrino-nucleus elastic scattering. It can be done point-by-point within the dimensions of the detector, thus providing what we call neutrino oscillometry VERNOV10 ,VERGIOMNOV . 2 Neutrino oscillations and neutral current detection Suppose in addition to the three standard neutrinos we have a 4th sterile neutrino. In the neutral current detection all contributing neutrinos have the same cross section, say $\sigma$. Let us suppose that we initially have electronic neutrinos $\nu_{e}$. Then we distinguish the following cases: 1. All four neutrinos are active. Then $$\sigma_{\mbox{tot}}=\left(P(\nu_{e}\rightarrow\nu_{e})+P(\nu_{e}\rightarrow\nu% _{\mu})+P(\nu_{e}\rightarrow\nu_{\tau})+P(\nu_{e}\rightarrow\nu_{4})\right)\sigma,$$ (3) but $$P(\nu_{e}\rightarrow\nu_{e})=1-\left(P(\nu_{e}\rightarrow\nu_{\mu})+P(\nu_{e}% \rightarrow\nu_{\tau})+P(\nu_{e}\rightarrow\nu_{4})\right),$$ (4) i.e. $$\sigma_{\mbox{tot}}=\sigma,$$ (5) no oscillation is observed. 2. The fourth neutrino is sterile. Then $$\sigma_{\mbox{tot}}=\left(P(\nu_{e}\rightarrow\nu_{e})+P(\nu_{e}\rightarrow\nu% _{\mu})+P(\nu_{e}\rightarrow\nu_{\tau})\right)\sigma,$$ (6) i.e. the sterile neutrino does not contribute. Eq. 4, however, is still valid (neutinos are lost from the flux). Thus $$\sigma_{\mbox{tot}}=\left(1-P(\nu_{e}\rightarrow\nu_{4})\right)\sigma.$$ (7) If, in addition, the new oscillation length is much smaller than the other two, one finds: $$\sigma_{\mbox{tot}}=\left(1-\sin^{2}{2\theta_{14}}\sin^{2}{\pi\frac{L}{L_{14}}% }\right)\sigma.$$ (8) It is worth comparing the neutral current situation with neutrino-electron elastic scattering previously considered VerGiomNov11 . Then for the standard neutrinos we have $$\sigma_{\mbox{tot}}=P(\nu_{e}\rightarrow\nu_{e})\sigma+\left(P(\nu_{e}% \rightarrow\nu_{\mu})+P(\nu_{e}\rightarrow\nu_{\tau})\right)\sigma^{\prime},$$ (9) where $\sigma$ is the $(\nu_{e},e)$ cross section, while $\sigma^{\prime}$ is the cross section for the other two flavors,$(\nu_{\alpha},e)$, $\alpha=\mu,\tau$,. Furthermore $$P(\nu_{e}\rightarrow\nu_{e})=1-\left(P(\nu_{e}\rightarrow\nu_{\mu})+P(\nu_{e}% \rightarrow\nu_{\tau})\right).$$ (10) If, in addition, the new oscillation length is much smaller than the other two, one finds: $$\sigma_{\mbox{tot}}\approx\left(1-\chi(E_{\nu})\sin^{2}{2\theta_{13}}\sin^{2}{% \pi\frac{L}{L_{13}}}\right)\sigma,,\quad\chi(E_{\nu})=\left(1-\frac{\sigma^{% \prime}}{\sigma}\right),$$ (11) i.e. one observes oscillations, since VERGIOM06 ,VERNOV10 $\sigma^{\prime}\neq\sigma$. 
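As a numerical illustration of the disappearance factor in Eq. (8), the short sketch below evaluates the relative neutral-current rate as a function of the source-detector distance. It is a minimal sketch under stated assumptions: the mixing and mass splitting are the RNA-motivated values of Eqs. (1)-(2), and a 0.75 MeV monochromatic line (of the order of the energies of the candidate sources discussed below) is assumed. The oscillation phase is written in the standard form $1.27\,\Delta m^{2}[\mbox{eV}^{2}]\,L[\mbox{m}]/E_{\nu}[\mbox{MeV}]$, which for $\Delta m^{2}=1.5$ eV${}^{2}$ reproduces (to rounding) the numerical constant used later in Eq. (15).

```python
import numpy as np

def nc_relative_rate(L_m, E_nu_MeV, sin2_2theta14=0.17, dm2_eV2=1.5):
    """Relative NC rate of Eq. (8): 1 - sin^2(2 theta_14) * sin^2(pi L / L_14),
    with the phase expressed as 1.27 * dm^2[eV^2] * L[m] / E[MeV]."""
    phase = 1.27 * dm2_eV2 * L_m / E_nu_MeV
    return 1.0 - sin2_2theta14 * np.sin(phase) ** 2

E_nu = 0.75                               # MeV, assumed monochromatic line
L14 = np.pi * E_nu / (1.27 * 1.5)         # distance over which sin^2 completes one period
print(f"L_14 ~ {L14:.2f} m")
for L in np.linspace(0.0, 4.0, 9):        # radial positions inside a detector of a few metres
    print(f"L = {L:3.1f} m : sigma(L)/sigma(0) = {nc_relative_rate(L, E_nu):.3f}")
```

The printed oscillation length of roughly 1.2 m for these assumed inputs is consistent with the statement in the abstract that $L_{14}$ is expected to be shorter than 3 m for a judicious source.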
Furthermore, if there exists an additional sterile neutrino, we find that the corresponding oscillation cross section in $(\nu,e)$ scattering is still given by Eq. (8). 3 The differential and total cross section The elastic neutrino-nucleus scattering due to the neutral current interaction has been previously considered for the detection of sky VERGIOM06, JDVPARIS10 and earth VerAvGiom09 neutrinos of appreciably higher energies. It has never been considered in the context of neutrino oscillations, since for standard neutrinos, as we have seen, oscillations in such a channel are not expected. The differential cross section for a given neutrino energy $E_{\nu}$ can be cast in the form VERGIOM06: $$\left(\frac{d\sigma}{dT_{A}}\right)(T_{A},E_{\nu})=\frac{G^{2}_{F}Am_{N}}{2\pi}\,(N^{2}/4)\,F_{coh}(T_{A},E_{\nu}),$$ (12) with $$F_{coh}(T_{A},E_{\nu})=F^{2}(q^{2})\left(1+\left(1-\frac{T_{A}}{E_{\nu}}\right)^{2}-\frac{Am_{N}T_{A}}{E^{2}_{\nu}}\right),$$ (13) where $N$ is the neutron number and $F(q^{2})=F(T_{A}^{2}+2Am_{N}T_{A})$ is the nuclear form factor. The effect of the nuclear form factor depends on the target. Integrating the differential cross section of Eq. (12) from $T_{A}=T_{th}$ up to the maximum recoil energy allowed by the neutrino energy, we obtain the total cross section. The threshold energy $T_{th}$ depends on the detector. Since the favorable neutrino energies are of order 1 MeV, the results are not sensitive to the nuclear form factor. They crucially depend on the detector threshold, since the energy of the recoiling nucleus is quite low. This retardation becomes more severe in the presence of quenching. Indeed, for a real detector the expected nuclear recoil signals are quenched, especially at low energies. The quenching factor for a given detector is the ratio of the signal height for a recoil track to that of an electron signal with the same energy. We should not forget that the signal heights depend on the recoil velocity and on how the signals are extracted experimentally. The actual quenching factors must be determined experimentally for each target. In the case of NaI the quenching factor is 0.05, while for Ge and Si it is 0.2-0.3. Thus the measured recoil energy is typically reduced by a factor of about 3 for a Si detector, when compared with the electron energy. For our purposes it is adequate to multiply the energy scale by a recoil-energy-dependent quenching factor $Q_{\mbox{\tiny{fac}}}(T_{A})$, adequately described by the Lindhard theory LINDHARD, SIMON03. More specifically, in our estimate of $Q_{\mbox{\tiny{fac}}}(T_{A})$ we assumed a quenching factor appropriate for a gaseous ${}^{4}$He target, which fits well the data SANTOS08 in the energy region of 2 to 50 keV: $$Q_{\mbox{\tiny{fac}}}(T_{A})=r_{1}\left[\frac{T_{A}}{1\,\mbox{keV}}\right]^{r_{2}},\quad r_{1}\simeq 0.620,\quad r_{2}\simeq 0.070.$$ (14) The quenching factor as given by Eq. (14) is exhibited in Fig. 1(a) for the energies of interest to us. We will assume that it is the same for all noble gases of interest in the present work. Due to quenching the threshold energy is shifted upwards, from $T_{th}$ to $T^{\prime}_{th}$ (see Fig. 1(b)). The minimum neutrino energy required as a function of threshold is presented in Fig. 2. Furthermore, even if the neutrino energy is above this minimum and the detection is allowed with a coherent cross section ($\propto N^{2}$), the heavier the target, the smaller the recoil energy is and the more effective the retardation due to the threshold becomes.
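The quantities entering this discussion are simple enough to be evaluated in a few lines. The sketch below (Python) implements Eqs. (12)-(14) with the nuclear form factor set to one, and integrates the differential cross section above a measured-energy threshold, with and without quenching. The choice of a ${}^{4}$He target, a 0.75 MeV monochromatic line and a 0.1 keV threshold is illustrative only, meant to mimic the light-target case discussed in the text; no attempt is made to reproduce the figures or tables of the paper.

```python
import numpy as np

G_F = 1.166e-11        # Fermi constant, MeV^-2 (natural units)
HBARC_CM = 1.973e-11   # hbar*c in MeV*cm, used to convert MeV^-2 to cm^2
M_N = 939.0            # nucleon mass, MeV

def dsigma_dT(T_A, E_nu, A, N):
    """Coherent NC differential cross section, Eqs. (12)-(13), with F(q^2) = 1
    (adequate for E_nu of order 1 MeV, as argued in the text). Units: MeV^-3."""
    kin = 1.0 + (1.0 - T_A / E_nu) ** 2 - A * M_N * T_A / E_nu ** 2
    return G_F ** 2 * A * M_N / (2.0 * np.pi) * (N ** 2 / 4.0) * np.maximum(kin, 0.0)

def quenching(T_A_keV, r1=0.620, r2=0.070):
    """Lindhard-type quenching factor of Eq. (14); recoil energy in keV."""
    return r1 * T_A_keV ** r2

def sigma_above_threshold(E_nu, A, N, T_th_keV=0.1, quenched=True, n=20000):
    """Total cross section (cm^2) for recoils whose measured (possibly quenched)
    energy lies above the detector threshold."""
    T_max = 2.0 * E_nu ** 2 / (A * M_N + 2.0 * E_nu)        # maximum recoil energy, MeV
    T = np.linspace(1e-9, T_max, n)
    visible_keV = 1e3 * T * (quenching(1e3 * T) if quenched else 1.0)
    integrand = dsigma_dT(T, E_nu, A, N) * (visible_keV >= T_th_keV)
    # trapezoidal integration over T, then conversion MeV^-2 -> cm^2
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)) * HBARC_CM ** 2

# Illustrative case: a 4He target (A = 4, N = 2) and a 0.75 MeV monochromatic line.
A, N, E_nu = 4, 2, 0.75
print(f"T_max = {2e3 * E_nu ** 2 / (A * M_N + 2 * E_nu):.3f} keV")
print(f"sigma above threshold, no quenching  : {sigma_above_threshold(E_nu, A, N, quenched=False):.2e} cm^2")
print(f"sigma above threshold, with quenching: {sigma_above_threshold(E_nu, A, N, quenched=True):.2e} cm^2")
```

The quenched result is smaller than the unquenched one because the visible-energy threshold cuts deeper into the recoil spectrum, illustrating the retardation described above.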
Once the usual neutrino-nucleus cross sections $\sigma_{A}(E_{\nu},0)$ are known one can show that, due to the oscillation, they depend on the distance of the obsevation point from the source. One finds that the neutrino disappearance as seen in the neutrino-nucleus cross section can be cast in the form: $$\sigma_{A}(E_{\nu},L)=\sigma_{A}(E_{\nu},0)\left[1-\sin^{2}{2\theta_{14}}\sin^% {2}{\left(3.72451L\frac{m_{e}}{E_{\nu}}\right)}\right].$$ (15) The obtained results crucially depend on the enegy threshold and the quenching factor. Even though the threshold achieved by the gaseous spherical time projection counter (STPC) is impressive, 0.1 keV, since the neutrino recoiling energy is extremely low, in particilar for heavy targets, most of the neutrino sources do not pass the test. Some of those which pass the test are included in table 1. From these sources we will consider for further analysis those, which can be employed at present, namely ${}^{37}$Ar, ${}^{51}$Cr, ${}^{65}$Zn and ${}^{32}$P. 4 Results with monochromatic sources We will begin with the cross sections and proceed to the event rates. 4.1 Coherent cross sections For a light target like He, a variety of sources pass the test, but one loses the benefit of large coherence. In comparing these results with those of neutrino-electron scattering we should keep in mind that the present cross sections explicitly contain the $N^{2}$ term due to coherence, in the case of neutrino-electron scattering the enhancement $Z$ is not included in the cross section, but it has been contained in the electron densiity, which is $Z$ times the number of nuclei per unit volume. For the sources and the targets that pass the test we present our results in Figs 3-4. 4.2 The event rate We will consider a spherical detector with the source at the origin and will assume that the volume of the source is much smaller than the volume of the detector. The event rate $dI$ between $L$ and $L+dL$ is given by: $$dI=N_{\nu}n_{A}\frac{4\pi L^{2}dL}{4\pi L^{2}}\sigma(L,x)=N_{\nu}n_{A}dL\sigma% (L,x),$$ (16) where $N_{\nu}$ is the neutrino intensity and $n_{A}$ is the number of the target nuclei: $$n_{A}=\frac{P}{kT_{0}},$$ (17) where $P$ is its pressure and $T_{0}$ its temperature. Thus $$\frac{dI}{dL}=N_{\nu}n_{A}\sigma(L,x)$$ (18) or $$R_{0}\frac{dI}{dL}=\Lambda\tilde{\sigma}(L,x),$$ (19) where $$\Lambda=\frac{G^{2}_{F}m^{2}_{e}}{2\pi}R_{0}N_{\nu}n_{A}$$ (20) $R_{0}$ the radius of the target and $\tilde{\sigma}(L,x)$ is the neutrino-nucleus cross section in units of $(G_{F}m_{e})^{2}/(2\pi)=2.29\times 10^{-49}$m${}^{2}$. The total number of events per unit length after running for time $t_{r}$ will be given by $$\frac{dN}{dL}=\Lambda_{1}\tilde{\sigma}(L,x)\left(1-e^{-t_{r}/\tau}\right),% \quad\Lambda_{1}=\frac{G^{2}_{F}m^{2}_{e}}{2\pi}\tau N_{\nu}n_{A}$$ (21) where $\tau$ is the lifetime of the source. Integrating Eq. (21) over $L$ from 0 to $R_{0}$ we obtain the total number of events, which can be cast in the form: $$N=A+B\sin^{2}{2\theta_{14}}.$$ (22) The parameters $A$ and $B$ for some cases of interest are presented in table 2. We have included here the relevant results even for the pairs source-target not discussed further in the present work in the context of oscillometry222 Due to lack of space the expected oscillation patterns are not provided in this paper. 
They can be obtained by communicating directly with the authors, just to give an estimate on the uncertainties expected in the extraction of $\sin^{2}{2\theta_{14}}$ from the total number of events The goal of the experiment is to scan the monoenergetic neutrino nucleus elastic scattering events by measuring the nuclear recoils as a function of distance from the neutrino source prepared in advance at the reactor/s. This scan means point-by-point determination of scattering events along the detector dimensions within its position resolution. These events can be observed as a smooth curve, which reproduces the neutrino disappearance probability. It is worthwhile to note again that the oscillometry is suitable for monoenergetic neutrino, since it deals with a single oscillation length or $L_{14}$(see table 1). This is obviously not a case for antineutrino, since, in this instance, one extracts only an effective oscillation length. This could be a serious problem in neutrino electron scattering. In the case of nuclear recoils, as we will see below, with a judicious choice of the target-source pair, not much information is lost due to the folding, since only a narrow band in the high energy tail of the continuous neutrino energy spectrum contributes to nuclear recoils, even though the assumed threshold for noble gas targets is quite low, 0.1 keV. Table 1 clearly shows that the oscillation lengths for a new neutrino proposed in RNA11 are much smaller compared to those previously considered VERGIOMNOV in connection with $\theta_{13}$. They can thus be directly measured within the dimensions of detector of reasonable sizes. One of the very promising options could be the Spherical Time Projection Counter (STPC) proposed in VERGIOM06 . If necessary, a spherical Micromegas based on the micro-Bulk technology ADRIAm10 , which will be developed in the near future, can be employed in the STPC. A thin 50 micron polyamide foil will be used as bulk material to fabricate the detector structure. This detector provides an excellent energy resolution, can reach high gains at high gas pressure (up to 10 Atm) and has the advantage that its radioactivity level CEBRIAN10 should fulfill the requirements of the proposed experiment. In this spherical chamber with a modest radius, assumed to be 4m, the neutrino source can be situated in the center of the sphere and the detector for recoils is also placed around the source in the smaller sphere with radius $r\approx 0.5$ m. The chamber outside this small sphere is filled with a gas (a noble gas such as Ar, Ne, or He (preferably Ar), if the neutrino source is of sufficiently high energy). In the present work we assumed the gas is at room temperature under a pressure of 10 Atm. The nuclear recoils are guided by a Micromegas-detector Giomataris ,GIOMVER08 . Such type of device has an advantage in precise position determination (better than 0.1 m) and in detection of very low nuclear recoils in 4$\pi$-geometry (down to a few hundreds of eV, that well suits the nuclides from Table 1). The results obtained in the presence of quenching are presented in Figs 5-6 5 Antineutrino Sources The monochromatic neutrino sources considered above, unfortunately, have the disadvantage that the neutrino energy is lower than required to meet the experimental requirements for a neutral current (NC) detector with high neutron number. 
Thus one may have to resort to antineutrino sources, paying the price of a distortion of the oscillation pattern due to the integration over the antineutrino spectrum. The NC cross section for an ${}^{40}$Ar target with a threshold of 0.1 keV as a function of neutrino energy is shown in Fig. 7. This cross section must be folded with the energy spectrum of the source. From an experimental point of view ${}^{32}$P is the best antineutrino source (see table 1). In principle, even though it has not been included in table 1, ${}^{90}$Sr can also be used as an antineutrino source. In the case of ${}^{32}$P the normalized spectrum is exhibited in Fig. 8. One sees that, due to threshold effects, only a portion of the spectrum can be exploited. Since the regions of large cross section have a small probability in the spectrum, the integrated cross section is not very large. In fact, folding Eq. (15) over this spectrum we obtain the results shown in Figs. 9-10. From the cross sections thus obtained, proceeding as above, we obtain the differential number of events $dN/dL$ exhibited in Fig. 11. Integrating over $L$ the results of Fig. 11, and the corresponding results without quenching (not shown here), we obtain the parameters $A$ and $B$ shown in table 2. 6 Discussion We have seen that neutrino oscillometry will lead to a direct observation of the fourth, sterile neutrino in electron-neutrino disappearance experiments, if such a neutrino exists. The calculations and analysis show that the gaseous STPC is a powerful tool for the identification of a new neutrino in two ways: (i) by observing the electron recoils in neutrino-electron scattering, recently discussed by us VerGiomNov11 ; and (ii) by measuring the nuclear recoils in coherent neutrino-nucleus scattering via the neutral current interaction, discussed in this work. The latter method can draw on the wide experience obtained in other experiments attempting to measure nuclear recoils, such as dark matter searches (quenching factors, background minimization, etc.). We have seen that, since the expected mass-squared difference for this neutrino is rather high, the corresponding oscillation length is sufficiently small for 1 MeV neutrino energy that it can be fitted within the dimensions of a spherical detector with a radius of a few meters. Neutrino oscillometry can be implemented in this detector with the use of intense neutrino sources placed at the origin of the sphere. The gaseous STPC with Micromegas detection has a big advantage in its 4$\pi$ geometry and in its very good position resolution (better than 0.1 m) with a very low energy threshold ($\approx$ 100 eV). The most promising candidates for oscillometry have been considered. These are the monochromatic sources ${}^{37}$Ar, ${}^{51}$Cr and ${}^{65}$Zn, as well as ${}^{32}$P as an antineutrino source. As an example, for the ${}^{65}$Zn source the sensitivity to the mixing angle $\theta_{14}$ is estimated as $\sin^{2}{(2\theta_{14})}=0.1$ at 99% confidence, arising from the total number of events collected during only a few months of data taking. The observation of the expected characteristic oscillometry curve will provide much more precise information on the oscillation length and thus constitute a definite manifestation of the existence of a new type of neutrino, as very recently proposed by the analysis of the reactor antineutrino anomaly. Unfortunately, however, one cannot fully benefit from the large size of the coherent cross section for large neutron number $N$ of the target; the reason is spelled out after the following illustrative sketch of the folding procedure.
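As an illustration of the folding procedure described in the previous section, the sketch below (illustrative only: it uses an allowed-shape ${}^{32}$P spectrum without the Fermi function, ignores quenching, takes the distance $L$ in meters in Eq. (15), and sets the overall normalization $\Lambda_{1}$ of Eq. (21) to unity) folds the oscillated coherent cross section of Eq. (15) over the spectrum, integrates over the detector radius, and extracts the constants $A$ and $B$ of Eq. (22).

import numpy as np

# --- illustrative constants (energies in MeV, lengths in meters) ---
m_e, m_N = 0.511, 939.0
A_mass, N_neut = 40, 22          # 40Ar target (placeholder choice)
Q_beta = 1.711                   # 32P endpoint energy
T_th   = 1.0e-4                  # 0.1 keV recoil threshold (quenching ignored here)
R0     = 4.0                     # detector radius
s2     = 0.1                     # assumed sin^2(2 theta_14)

def trap(y, x):                  # trapezoidal rule along the last axis
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def sigma0(E):
    """Unoscillated coherent cross section (Eqs. (12)-(13), F(q^2)=1),
    integrated over recoil energies above threshold; units (G_F m_e)^2/(2 pi)."""
    T_max = 2.0 * E ** 2 / (A_mass * m_N + 2.0 * E)
    if T_max <= T_th:
        return 0.0
    T = np.linspace(T_th, T_max, 400)
    F_coh = 1.0 + (1.0 - T / E) ** 2 - A_mass * m_N * T / E ** 2
    return A_mass * m_N * (N_neut ** 2 / 4.0) * trap(F_coh, T) / m_e ** 2

# allowed-shape 32P antineutrino spectrum (Fermi function omitted), normalized to 1
E = np.linspace(0.01, Q_beta - 1e-3, 600)
W_e = Q_beta + m_e - E                               # electron total energy
lam = np.sqrt(np.maximum(W_e ** 2 - m_e ** 2, 0.0)) * W_e * E ** 2
lam /= trap(lam, E)

w = lam * np.array([sigma0(e) for e in E])           # spectrum-weighted cross section
L = np.linspace(1e-3, R0, 400)                       # distance from the source, in meters
phase = np.sin(3.72451 * np.outer(L, m_e / E)) ** 2  # oscillating factor of Eq. (15)

A_coef = trap(w, E) * R0                     # part independent of sin^2(2 theta_14)
B_coef = -trap(trap(w * phase, E), L)        # coefficient of sin^2(2 theta_14), Eq. (22)
print("A = %.3e, B = %.3e (arbitrary overall normalization)" % (A_coef, B_coef))
print("N proportional to A + B sin^2(2 theta_14) = %.3e at sin^2 = %.1f"
      % (A_coef + B_coef * s2, s2))

Only the high-energy tail of the spectrum, for which the recoil endpoint exceeds the threshold, contributes to the folded cross section, in line with the remarks made above.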
The reason is that larger $N$ implies larger nuclear mass number $A$, i.e. lower recoil energy. Thus, even though the gaseous STPC detector has already achieved the impressive threshold of 0.1 keV, this is still high enough to reduce the cross sections for neutrinos in the MeV range for heavy targets, or even to exclude such targets altogether. It is clear that the accuracy of the extraction of the above parameters will substantially increase only if the threshold can be further reduced, since it does not seem feasible to utilize a monochromatic neutrino source with energy higher than the 1343 keV line associated with ${}^{65}$Zn.
References
(1) G. Mention et al., The Reactor Neutrino Anomaly, arXiv:submit/01/925 [hep-ex].
(2) Y. Giomataris and J.D. Vergados, Phys. Lett. B 634, 23 (2006), arXiv:hep-ex/0503029.
(3) J.D. Vergados and Yu.N. Novikov, Nucl. Phys. B 839, 1 (2010), arXiv:1006.3862 [hep-ph].
(4) J.D. Vergados, Y. Giomataris and Yu.N. Novikov, to be published.
(5) J.D. Vergados, Y. Giomataris and Yu.N. Novikov, Proceedings of PASCOS10 (Valencia, Spain) and Neutrino 2010 (Athens, Greece), arXiv:1010.4388 [hep-ph].
(6) J.D. Vergados, Fifth symposium on large TPCs for low energy rare event detection and workshop on neutrinos from supernovae, Paris, December 14-17, 2010, arXiv:1103.1107 [hep-ph].
(7) J.D. Vergados, F.T. Avignone and I. Giomataris, Phys. Rev. D 79, 113001 (2009), arXiv:0902.1055 [hep-ph].
(8) J. Lindhard et al., Mat. Fys. Medd. Dan. Vid. Selsk. 33, 1 (1963).
(9) E. Simon et al., Nucl. Instr. Meth. A 507, 643 (2003).
(10) D. Santos, F. Mayet, O. Guillaudin, Th. Lamy, S. Ranchon, A. Trichet, P. Colas and I. Giomataris, Ionization Quenching Factor Measurement of Helium 4, arXiv:0810.1137 [astro-ph].
(11) G. Audi et al., Nucl. Phys. A 729, 3 (2003).
(12) C. Giunti and M. Laveder, Phys. Rev. D 82, 05005 (2010), arXiv:1010.1395 [hep-ph].
(13) S. Andriamonje et al., JINST 5, P02001 (2010).
(14) C. Cebrian et al., JCAP 1010, 010 (2010).
(15) Y. Giomataris et al., Nucl. Instr. Meth. A 376, 29 (1996).
(16) I. Giomataris et al., JINST 3, P09007 (2008), arXiv:0807.2802 [physics.ins-det].
WHEN RESPONSE VARIABILITY INCREASES NEURAL NETWORK ROBUSTNESS TO SYNAPTIC NOISE Gleb Basalyga and Emilio Salinas Department of Neurobiology and Anatomy Wake Forest University School of Medicine Winston-Salem, NC 27157-1010 E-mail: gbasalyg@wfubmc.edu, esalinas@wfubmc.edu December 5, 2020 Preliminary version of paper to appear in Neural Computation Abstract Cortical sensory neurons are known to be highly variable, in the sense that responses evoked by identical stimuli often change dramatically from trial to trial. The origin of this variability is uncertain, but it is usually interpreted as detrimental noise that reduces the computational accuracy of neural circuits. Here we investigate the possibility that such response variability might, in fact, be beneficial, because it may partially compensate for a decrease in accuracy due to stochastic changes in the synaptic strengths of a network. We study the interplay between two kinds of noise, response (or neuronal) noise and synaptic noise, by analyzing their joint influence on the accuracy of neural networks trained to perform various tasks. We find an interesting, generic interaction: when fluctuations in the synaptic connections are proportional to their strengths (multiplicative noise), a certain amount of response noise in the input neurons can significantly improve network performance, compared to the same network without response noise. Performance is enhanced because response noise and multiplicative synaptic noise are in some ways equivalent. So, if the algorithm used to find the optimal synaptic weights can take into account the variability of the model neurons, it can also take into account the variability of the synapses. Thus, the connection patterns generated with response noise are typically more resistant to synaptic degradation than those obtained without response noise. As a consequence of this interplay, if multiplicative synaptic noise is present, it is better to have response noise in the network than not to have it. These results are demonstrated analytically for the most basic network consisting of two input neurons and one output neuron performing a simple classification task, but computer simulations show that the phenomenon persists in a wide range of architectures, including recurrent (attractor) networks and sensory-motor networks that perform coordinate transformations. The results suggest that response variability could play an important dynamic role in networks that continuously learn. 1 Introduction Neuronal networks face an inescapable tradeoff between learning new associations and forgetting previously stored information. In competitive learning models, this is sometimes referred to as the stability-plasticity dilemma (Carpenter and Grossberg, 1987; Hertz et al., 1991): in terms of inputs and outputs, learning to respond to new inputs will interfere with the learned responses to familiar inputs. A particularly severe form of performance degradation is known as catastrophic interference (McCloskey and Cohen, 1989). It refers to situations in which the learning of new information causes the virtually complete loss of previously stored associations. Biological networks must face a similar problem, because once a task has been mastered, plasticity mechanisms will inevitably produce further changes in the internal structural elements, leading to decreased performance. 
That is, within sub-networks that have already learned to perform a specific function, synaptic plasticity must at least partly appear as a source of noise. In the cortex, this problem must be quite significant, given that even primary sensory areas show a large capacity for reorganization (Wang et al., 1995; Kilgard and Merzenich, 1998; Crist et al., 2001). Some mechanisms, such as homeostatic regulation (Turrigiano and Nelson, 2000) and specific types of synaptic modification rules (Hopfield and Brody, 2004), may help alleviate the problem, but by and large, how nervous systems cope with it remains unknown. Another factor that is typically considered as a limitation for neural computation capacity is response variability. The activity of cortical neurons is highly variable, as measured either by the temporal structure of spike trains produced during constant stimulation conditions, or by spike counts collected in a given time interval and compared across identical behavioral trials (Dean, 1981; Softky and Koch, 1992, 1993; Holt et al., 1996). Some of the biophysical factors that give rise to this variability, such as the balance between excitation and inhibition, have been identified (Softky and Koch, 1993; Shadlen and Newsome, 1994; Stevens and Zador, 1998). But its functional significance, if any, is not understood. Here we consider a possible relationship between the two sources of randomness just discussed, whereby response variability helps counteract the destabilizing effects of synaptic changes. Although noise generally hampers performance, recent studies have shown that in nonlinear dynamical systems such as neural networks this is not always the case. The best known example is stochastic resonance, in which noise enhances the sensitivity of sensory neurons to weak periodic signals (Levin and Miller, 1996; Gammaitoni et al., 1998; Nozaki et al., 1999), but noise may play other constructive roles as well. For instance, when a system has an internal source of noise, an externally added noise can reduce the total noise of the output (Vilar and Rubi, 2000). Also, adding noise to the synaptic connections of a network during learning produces networks that, after training, are more robust to synaptic corruption and have a higher capacity to generalize (Murray and Edwards, 1994). In this paper we study another beneficial effect of noise on neural network performance. In this case, adding randomness to the neural responses reduces the impact of fluctuations in synaptic strength. That is, here, performance depends on two sources of variability, response noise and synaptic noise, and adding some amount of response noise produces better performance than having synaptic noise alone. The reason for this paradoxical effect is that response noise acts as a regularization factor that favors connectivity matrices with many small synaptic weights over connectivity matrices with few large weights, and this minimizes the impact of a synapse that is lost or has a wrong value. We study this regularization effect in three different cases: (1) a classification task, which in its simplest instantiation can be studied analytically, (2) a sensory-motor transformation, and (3) an attractor network that produces self-sustained activity. For the latter two, the interaction between noise terms is demonstrated by extensive numerical simulations. 2 General Framework First we consider networks with two layers, an input layer that contains $N$ sensory neurons and an output layer with $K$ output neurons. 
A matrix $r$ is used to denote the firing rates of the input neurons in response to $M$ stimuli, so $r_{ij}$ is the firing rate of input unit $i$ when stimulus $j$ is presented. These rates have a mean component ${\overline{\mbox{\boldmath$r$}}}$ plus noise, as described in detail below. The output units are driven by the first layer responses, such that the firing rate of output unit $k$ evoked by stimulus $j$ is $$R_{kj}=\sum_{i=1}^{N}w_{ki}\,r_{ij},$$ (1) or in matrix form, $\mbox{\boldmath$R$}=\mbox{\boldmath$w$}\mbox{\boldmath$r$}$, where $w$ is the $K\!\times\!N$ matrix of synaptic connections between input and output neurons. The output neurons also have a set of desired responses $F$, where $F_{kj}$ is the firing rate that output unit $k$ should produce when stimulus $j$ is presented. In other words, $F$ contains target values that the outputs are supposed to learn. The error $E$ is the mean squared difference between the actual driven responses $R_{kj}$ and the desired ones, $$E=\left<\frac{1}{KM}\,\sum_{k=1}^{K}\sum_{j=1}^{M}\left(R_{kj}-F_{kj}\right)^{% 2}\right>,$$ (2) or in matrix notation, $$E=\frac{1}{KM}\left<\mbox{Tr}\left[(\mbox{\boldmath$w$}\mbox{\boldmath$r$}-% \mbox{\boldmath$F$})(\mbox{\boldmath$w$}\mbox{\boldmath$r$}-\mbox{\boldmath$F$% })^{\mathrm{T}}\right]\right>.$$ (3) Here, $\mbox{Tr}(\mbox{\boldmath$A$})=\sum_{i}A_{ii}$ is the trace of a matrix and the angle brackets indicate an average over multiple trials, which corresponds to multiple samples of the noise in the inputs $r$. The optimal synaptic connections ${\overline{\mbox{\boldmath$W$}}}$ are those that make the error as small as possible. These can be found by computing the derivative of Equation (3) with respect to $w$ (or with respect to $w_{ab}$, if the summations are written explicitly) and setting the result equal to zero (see e.g., Golub and van Loan, 1996). These steps give $${\overline{\mbox{\boldmath$W$}}}=\mbox{\boldmath$F$}\,{\overline{\mbox{% \boldmath$r$}}}^{\mathrm{T}}\mbox{\boldmath$C$}^{-1},$$ (4) where ${\overline{\mbox{\boldmath$r$}}}\!=\!\left<\mbox{\boldmath$r$}\right>$ and $\mbox{\boldmath$C$}^{-1}$ is the inverse (or the pseudo-inverse) of the correlation matrix $\mbox{\boldmath$C$}=\left<\mbox{\boldmath$r$}\mbox{\boldmath$r$}^{\mathrm{T}}\right>$. The general outline of the computer experiments proceeds in five steps as follows. First, the matrix ${\overline{\mbox{\boldmath$r$}}}$ with the mean input responses is generated together with the desired output responses $F$. These two quantities define the input-output transformation that the network is supposed to implement. Second, response noise is added to the mean input rates, such that $$r_{ij}={\overline{r}}_{ij}(1+\eta_{ij}).$$ (5) The random variables $\eta_{ij}$ are independently drawn from a distribution with zero mean and variance $\sigma_{r}^{2}$, $$\displaystyle\left<\eta_{ij}\right>$$ $$\displaystyle=$$ $$\displaystyle 0$$ $$\displaystyle\left<\eta^{2}_{ij}\right>$$ $$\displaystyle=$$ $$\displaystyle\sigma_{r}^{2},$$ (6) where the brackets again denote an average over trials. We refer to this as multiplicative noise. Third, the optimal connections are found using Equation (4). Note that these connections take into account the response noise through its effect on the correlation matrix $C$. 
Fourth, the connections are corrupted by multiplicative synaptic noise with variance $\sigma_{W}^{2}$, that is $$W_{ij}={\overline{W}}_{ij}(1+\epsilon_{ij}),$$ (7) where $$\displaystyle\left<\epsilon_{ij}\right>$$ $$\displaystyle=$$ $$\displaystyle 0$$ $$\displaystyle\left<\epsilon^{2}_{ij}\right>$$ $$\displaystyle=$$ $$\displaystyle\sigma_{W}^{2}.$$ (8) Finally, the network’s performance is evaluated. For this, we measure the network error $E_{W}$, which is the square error obtained with the optimal but corrupted weights $W$, averaged over both types of noise, $$E_{W}=\frac{1}{KM}\left<\mbox{Tr}\left[(\mbox{\boldmath$W$}\mbox{\boldmath$r$}% -\mbox{\boldmath$F$})(\mbox{\boldmath$W$}\mbox{\boldmath$r$}-\mbox{\boldmath$F% $})^{\mathrm{T}}\right]\right>.$$ (9) Thus, the brackets in this case indicate an average over multiple trials and multiple networks, i.e., multiple corruptions of the optimal weights ${\overline{\mbox{\boldmath$W$}}}$. The main result we report here is an interaction between the two types of noise: in all the network architectures that we have explored, for a fixed amount of synaptic noise $\sigma_{W}$, the best performance is typically found when the response noise has a certain nonzero variance. So, given that there is synaptic noise in the network, it is better to have some response noise rather than to have none. Before addressing the first example, we should highlight some features of the chosen noise models. Regarding response noise, Equations (5, 6), other models were tested in which the fluctuations were additive rather than multiplicative. Also, Gaussian, uniform and exponential distributions were tested. The results for all combinations were qualitatively the same, so the shape of the response noise distribution does not seem to play an important role; what counts is mainly the variance. On the other hand, the benefit of response noise is observed only when the synaptic noise is multiplicative; it disappears with additive synaptic noise. However, we do test several variants of the multiplicative model, including one in which the random variables $\epsilon_{ij}$ are drawn from a Gaussian distribution and another in which they are binary, 0 or -1. The latter case represents a situation in which connections are eliminated randomly with a fixed probability. 3 Noise Interactions in a Classification Task First we consider a task in which the two-layer, fully connected network is used to approximate a binary function. The task is to classify $M$ stimuli on the basis of the $N$ input firing rates evoked by each stimulus. Only one output neuron is needed, so $K\!=\!1$. The desired response of this output neuron is the classification function $$F_{j}=\left\{\begin{array}[]{l}1\ \>\>\mbox{if }j\leq M/2\\ 0\ \>\>\mbox{else},\end{array}\right.$$ (10) where $j$ goes from 1 to $M$. Therefore, the job of the output unit is to produce a 1 for the first $M/2$ input stimuli and a 0 for the rest. 3.1 A Minimal Network In order to obtain an analytical description of the noise interactions, we first consider the simplest possible network that exhibits the effect, which consists of two input neurons and two stimuli. Thus, $N\!=\!M\!=\!2$ and the desired output is $\mbox{\boldmath$F$}=\left(1,0\right)$. Note that, with a single output neuron, the matrices $W$ and $F$ become row vectors. 
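Before specializing to this minimal case, note that the five-step procedure of the preceding section can be written down directly in a few lines. The sketch below (network sizes, noise levels and numbers of trials are arbitrary illustrative choices, not those used for the figures) does so for the classification task of Equation (10), estimating the correlation matrix by sampling and evaluating the error of Equation (9) by Monte Carlo.

import numpy as np

rng = np.random.default_rng(0)

# classification task of Eq. (10): N input units, M stimuli, one output unit
N, M, K = 10, 10, 1                      # illustrative sizes
r_mean  = rng.uniform(0.0, 1.0, (N, M))  # step 1: mean input rates
F       = np.zeros((K, M)); F[0, :M // 2] = 1.0
sigma_W = 0.5                            # multiplicative synaptic noise SD (illustrative)

def run(sigma_r, n_trials=2000, n_nets=200):
    """Steps 1-5 of the framework for a given response-noise SD sigma_r."""
    def rates():                         # step 2: multiplicative response noise, Eq. (5)
        return r_mean * (1.0 + sigma_r * rng.standard_normal((N, M)))
    # step 3: optimal weights, Eq. (4), with C = <r r^T> estimated by sampling
    C = np.zeros((N, N))
    for _ in range(n_trials):
        r = rates()
        C += r @ r.T
    C /= n_trials
    W_opt = F @ r_mean.T @ np.linalg.pinv(C)
    # steps 4-5: corrupt the weights (Eq. (7)) and average the error of Eq. (9)
    err = 0.0
    for _ in range(n_nets):
        W = W_opt * (1.0 + sigma_W * rng.standard_normal(W_opt.shape))
        err += np.mean([np.sum((W @ rates() - F) ** 2) / (K * M) for _ in range(20)])
    return err / n_nets

for s in (0.0, 0.2, 0.4):
    print("sigma_r = %.1f  ->  E_W = %.4f" % (s, run(s)))

The same steps are carried out analytically for the two-neuron network in what follows.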
Now we proceed according to the five steps outlined in the preceding section — the goal is to show analytically that, in the presence of synaptic noise, performance is typically better for a nonzero amount of response noise. The matrix of mean input firing rates is set to $${\overline{\mbox{\boldmath$r$}}}=\left(\begin{array}[]{cc}1&r_{0}\\ r_{0}&1\\ \end{array}\right),$$ (11) where $r_{0}$ is a parameter that controls the difficulty of the classification. When it is close to 1, the pairs of responses evoked by the two stimuli are very similar and large errors in the output are expected; when it is close to 0, the input responses are most different and the classification should be more accurate. After combining the mean responses with multiplicative noise, as prescribed by Equation (5), the input responses in a given trial become $$\mbox{\boldmath$r$}=\left(\begin{array}[]{cc}1+\eta_{11}&r_{0}(1+\eta_{12})\\ r_{0}(1+\eta_{21})&1+\eta_{22}\\ \end{array}\right).$$ (12) Assuming that the fluctuations are independent across neurons, the correlation matrix is, therefore, $$\mbox{\boldmath$C$}=\left<\mbox{\boldmath$r$}\mbox{\boldmath$r$}^{\mathrm{T}}% \right>=\left(\begin{array}[]{cc}(1+r_{0}^{2})(1+\sigma_{r}^{2})&2r_{0}\\ 2r_{0}&(1+r_{0}^{2})(1+\sigma_{r}^{2})\\ \end{array}\right).$$ (13) Next, after calculating the inverse of $C$, Equation (4) is used to find the optimal weights, which are $$\displaystyle{\overline{W}}_{1}$$ $$\displaystyle=$$ $$\displaystyle\frac{\sigma_{r}^{2}(1+r_{0}^{2})+(1-r_{0}^{2})}{(1+\sigma_{r}^{2% })^{2}\,(1+r_{0}^{2})^{2}-4r_{0}^{2}}$$ $$\displaystyle{\overline{W}}_{2}$$ $$\displaystyle=$$ $$\displaystyle\frac{\sigma_{r}^{2}(1+r_{0}^{2})-(1-r_{0}^{2})}{(1+\sigma_{r}^{2% })^{2}\,(1+r_{0}^{2})^{2}-4r_{0}^{2}}\>r_{0}\,.$$ (14) Notice that these connections take into account the response variability through their dependence on $\sigma_{r}$. The next step is to corrupt these synaptic weights as prescribed by Equation (7), and substitute the resulting expressions into Equation (9). After making all the substitutions, calculating the averages and simplifying, we obtain the average error, $$E_{W}=\frac{1}{2}\left(\sigma_{W}^{2}({\overline{W}}^{2}_{1}+{\overline{W}}^{2% }_{2})(1+\sigma_{r}^{2})(1+r_{0}^{2})-{\overline{W}}_{1}-r_{0}{\overline{W}}_{% 2}+1\right).$$ (15) This is the average square difference between the desired and actual responses of the output neuron given the two types of noise. It is a function only of three parameters, $\sigma_{r}$, $\sigma_{W}$ and $r_{0}$, because the optimal weights themselves depend on $\sigma_{r}$ and $r_{0}$. The interaction between noise terms for this simple $N\!=\!K\!=\!2$ case is illustrated in Fig. 1A, which plots the error as a function of $\sigma_{r}$ with and without synaptic variability. Here, dashed and solid lines represent the theoretical results given by Equations (14, 15) and symbols correspond to simulation results averaged over $1000$ networks and $100$ trials per network. Without synaptic noise (dashed line), the error increases monotonically with $\sigma_{r}$, as one would normally expect when adding response variability. In contrast, when $\sigma_{W}\!=\!0.15$, 0.2 or 0.25 (solid lines), the error initially decreases and then starts increasing again, slowly approaching the curve obtained with response noise alone. Figure 1B shows how the optimal weights depend on $\sigma_{r}$. The solid lines were obtained from Equations (14) above. 
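Equations (14) and (15) can be evaluated directly; a minimal sketch (the value of $r_{0}$ and the set of $\sigma_{W}$ values are illustrative) that locates the minimum of the error as a function of $\sigma_{r}$, as in Figure 1A:

import numpy as np

def optimal_weights(sig_r, r0):
    """Optimal weights of Eq. (14) for the two-neuron network."""
    den = (1 + sig_r ** 2) ** 2 * (1 + r0 ** 2) ** 2 - 4 * r0 ** 2
    W1 = (sig_r ** 2 * (1 + r0 ** 2) + (1 - r0 ** 2)) / den
    W2 = (sig_r ** 2 * (1 + r0 ** 2) - (1 - r0 ** 2)) / den * r0
    return W1, W2

def error(sig_r, sig_W, r0):
    """Average squared error of Eq. (15)."""
    W1, W2 = optimal_weights(sig_r, r0)
    return 0.5 * (sig_W ** 2 * (W1 ** 2 + W2 ** 2) * (1 + sig_r ** 2) * (1 + r0 ** 2)
                  - W1 - r0 * W2 + 1)

r0 = 0.8                                          # overlap parameter (illustrative)
sig_r_grid = np.linspace(0.0, 1.5, 301)
for sig_W in (0.0, 0.15, 0.25):
    E = np.array([error(s, sig_W, r0) for s in sig_r_grid])
    i = int(np.argmin(E))
    print("sigma_W = %.2f:  E at sigma_r=0 is %.4f, minimum %.4f at sigma_r = %.2f"
          % (sig_W, E[0], E[i], sig_r_grid[i]))

With $\sigma_{W}=0$ the minimum sits at $\sigma_{r}=0$, whereas for sufficiently large $\sigma_{W}$ it moves to a nonzero $\sigma_{r}$.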
The curves show that the effect of response noise is to decrease the absolute values of the optimal synaptic weights. Intuitively, that is why response variability is advantageous; smaller synaptic weights also mean smaller synaptic fluctuations, because their standard deviation (SD) is proportional to the mean values. So, there is a tradeoff: the intrinsic effect of increasing $\sigma_{r}$ is to increase the error, but with synaptic noise present, $\sigma_{r}$ also decreases the magnitude of the weights, which lowers the impact of the synaptic fluctuations. That the impact of synaptic noise grows directly with the magnitude of the weights is also apparent from the first term in Equation (15). The magnitude of the noise interaction can be quantified by the ratio $E_{\mathrm{min}}$$/E_{0}$, where the numerator is the minimal value of the error curve and the denominator is the error obtained when only synaptic noise is present, that is, when $\sigma_{r}\!=\!0$. The minimum error $E_{\mathrm{min}}$ occurs at the optimal value of $\sigma_{r}$, denoted as $\sigma_{\mathrm{min}}$. The ratio $E_{\mathrm{min}}$$/E_{0}$ is equal to 1 if response variability provides no advantage and approaches 0 as $\sigma_{\mathrm{min}}$ cancels more of the error due to synaptic noise. For the lowest solid curve in Fig. 1A the ratio is approximately 0.8, so response variability cancels about 20% of the square error generated by synaptic fluctuations. Note, however, that in these examples the error is below $E_{0}$ for a large range of values of $\sigma_{r}$, not only near $\sigma_{\mathrm{min}}$, so response noise may be beneficial even if it is not precisely matched to the amount of synaptic noise. Figure 2 further characterizes the strength of the interaction between the two types of noise. Figures 2A, B show how the error and the optimal amount of response variability vary as functions of $\sigma_{W}$. These graphs indicate that the fraction of the error that $\sigma_{r}$ is able to compensate for, as well as the optimal amount of response noise, increases with the SD of the synaptic noise. The minimum error, $E_{\mathrm{min}}$, grows steadily with $\sigma_{W}$ — clearly, $\sigma_{r}$ cannot completely compensate for synaptic corruption. Also, $\sigma_{W}$ has to be bigger than a critical value for the noise interaction to be observed ($\sigma_{W}\!>\!0.1$, approximately). However, except when synaptic noise is very small, the optimal strategy is to add some response noise to the network. As in the previous figure, symbols and lines in Fig. 2 correspond to simulation and theoretical results, respectively. To obtain the latter, the key is to calculate $\sigma_{\mathrm{min}}$. This is done by, first, substituting the optimal synaptic weights of Equation (14) into the expression for the average error, Equation (15), and second, calculating the derivative of the error with respect to $\sigma_{r}^{2}$ and equating it to zero. The resulting expression gives $\sigma^{2}_{\mathrm{min}}$ as a function of the only two remaining parameters, $\sigma_{W}$ and $r_{0}$. 
The dependence, however, is highly nonlinear, so in general the solution is implicit: $$\displaystyle\sigma_{r}^{8}\,(1-\sigma_{W}^{2})+2\sigma_{r}^{6}\,(1+a^{2}(1-2% \sigma_{W}^{2}))+6\sigma_{r}^{4}a^{2}\,(1-\sigma_{W}^{2})+\mbox{}$$ (16) $$\displaystyle 2\sigma_{r}^{2}a^{2}\,(1+a^{2}+2a^{2}\sigma_{W}^{2}-4\sigma_{W}^% {2})+a^{4}(1+3\sigma_{W}^{2})-4a^{2}\sigma_{W}^{2}\>\>=\>\>0\,,$$ where $$a\equiv\frac{1-r_{0}^{2}}{1+r_{0}^{2}}\,.$$ (17) The value of $\sigma_{r}$ that makes Equation (16) true is $\sigma_{\mathrm{min}}$. For Figs. 2A, B, the zero of the polynomial was found numerically for each combination of $r_{0}$ and $\sigma_{W}$. Figures 2C, D show how $E_{\mathrm{min}}$, $E_{\mathrm{min}}$/$E_{0}$ and $\sigma_{\mathrm{min}}$ depend on the separation between evoked input responses, as parameterized by $r_{0}$. For these two plots, we chose a special case in which $\sigma_{\mathrm{min}}$ can be obtained analytically from Equation (16): $\sigma_{W}\!=\!1$. In this particular case the dependence of $\sigma_{\mathrm{min}}$ on $r_{0}$ has a closed form, $$\sigma_{\mathrm{min}}^{2}=\frac{(1-r_{0}^{2})^{2/3}}{1+r_{0}^{2}}\left((1+r_{0% })^{2/3}+(1-r_{0})^{2/3}\right).$$ (18) This function is shown in Fig. 2D. In general, the numerical simulations are in good agreement with the theory, except that the scatter in Fig. 2D tends to increase as $r_{0}$ approaches 0. This is due to a key feature of the noise interaction, which is that it depends on the overlap between input responses across stimuli. This can be seen as follows. First, notice that in Fig. 2C the relative error approaches 1 as $r_{0}$ gets closer to 0. Thus, the noise interaction becomes weaker when there is less overlap between input responses, which is precisely what $r_{0}$ represents in Equation (11). If there is no overlap at all, the benefit of response noise vanishes. This fact explains why more than one neuron is needed to observe the noise interaction in the first place. This observation can be demonstrated analytically by setting $r_{0}\!=\!0$ in Equations (14) and (15), in which case the average square error becomes $$E_{W}(r_{0}\!=\!0)=\frac{1}{2}\left(\frac{\sigma_{W}^{2}-1}{1+\sigma_{r}^{2}}+% 1\right).$$ (19) This result has interesting implications. If $\sigma_{W}^{2}\!=\!1$, response noise makes no difference, so there is no optimal value. If $\sigma_{W}^{2}\!<\!1$, the error increases monotonically with response noise, so the optimal value is 0. And if $\sigma_{W}^{2}\!>\!1$, the optimal strategy is to add as much noise as possible! In this case, the variance of the output neuron is so high that there is no hope of finding a reasonable solution; the best thing to do is set the mean weights to zero, disconnecting the output unit. Thus, without overlap, either the synaptic noise is so high that the network is effectively useless, or, if $\sigma_{W}$ is tolerable, response noise does not improve performance. At $r_{0}\!=\!0$, the numerical solutions oscillate between these two extremes, producing an average error of 0.5 (leftmost point in Fig. 2C). In general, however, with non-zero overlap there is a true optimal amount of response noise, and the more overlap there is, the larger its benefit, as shown in Fig. 2C. The simulation data points in Fig. 2 were obtained using fluctuations $\epsilon$ and $\eta$ in Equations (7) and (12), respectively, sampled from Gaussian distributions. The results, however, were virtually identical when the distribution functions were either uniform or exponential. 
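Since Equation (16) is a quartic in $\sigma_{r}^{2}$, $\sigma_{\mathrm{min}}$ can be obtained with a standard polynomial root finder. A small sketch (the values of $r_{0}$ and $\sigma_{W}$ are illustrative) that also evaluates the closed form (18) for the special case $\sigma_{W}=1$:

import numpy as np

def sigma_min_sq(sigma_W, r0):
    """Positive real root of Eq. (16), viewed as a quartic in x = sigma_r^2.
    Below a critical sigma_W there is no positive root and the optimum is sigma_r = 0."""
    a2 = ((1 - r0 ** 2) / (1 + r0 ** 2)) ** 2      # a^2 of Eq. (17)
    s2 = sigma_W ** 2
    coeffs = [1 - s2,
              2 * (1 + a2 * (1 - 2 * s2)),
              6 * a2 * (1 - s2),
              2 * a2 * (1 + a2 + 2 * a2 * s2 - 4 * s2),
              a2 ** 2 * (1 + 3 * s2) - 4 * a2 * s2]
    roots = np.roots(coeffs)
    pos = roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)].real
    return pos.min() if pos.size else 0.0          # smallest positive root, if several

r0 = 0.5                                           # illustrative overlap parameter
for sigma_W in (0.2, 0.5, 1.0):
    print("sigma_W = %.1f  ->  sigma_min = %.4f"
          % (sigma_W, np.sqrt(sigma_min_sq(sigma_W, r0))))

# closed form of Eq. (18), valid for sigma_W = 1
closed = ((1 - r0 ** 2) ** (2.0 / 3) / (1 + r0 ** 2)
          * ((1 + r0) ** (2.0 / 3) + (1 - r0) ** (2.0 / 3)))
print("Eq. (18) at sigma_W = 1:  sigma_min = %.4f" % np.sqrt(closed))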
Thus, as noted earlier, the exact shapes of the noise distributions do not restrict the observed effect. 3.2 Regularization by Noise Above, we mentioned that response noise tends to decrease the absolute value of the optimal synaptic weights. Why is this? The reason is that minimization of the mean square error in the presence of response noise is mathematically equivalent to minimization of the same error without response noise but with an imposed constraint forcing the optimal weights to be small. This is as follows. Consider Equation (4), which specifies the optimal weights in the two-layer network. Response noise enters into the expression through the correlation matrix. By separating the input responses into mean plus noise, we have $$C$$ $$\displaystyle=$$ $$\displaystyle\left<({\overline{\mbox{\boldmath$r$}}}+\mbox{\boldmath$\eta$})({% \overline{\mbox{\boldmath$r$}}}+\mbox{\boldmath$\eta$})^{{\mathrm{T}}}\right>$$ (20) $$\displaystyle=$$ $$\displaystyle{\overline{\mbox{\boldmath$r$}}}\,{\overline{\mbox{\boldmath$r$}}% }^{{\mathrm{T}}}+\left<\mbox{\boldmath$\eta$}\mbox{\boldmath$\eta$}^{{\mathrm{% T}}}\right>$$ $$\displaystyle=$$ $$\displaystyle{\overline{\mbox{\boldmath$r$}}}\,{\overline{\mbox{\boldmath$r$}}% }^{{\mathrm{T}}}+\mbox{\boldmath$D$}_{\!\sigma}\,,$$ where we have assumed that the noise is additive and uncorrelated across neurons (additivity is considered for simplicity but is not necessary). This results in the diagonal matrix $\mbox{\boldmath$D$}_{\!\sigma}$ containing the variances of individual units, such that element $j$ along the diagonal is the total variance, summed over all stimuli, of input neuron $j$. Thus, uncorrelated response noise adds a diagonal matrix to the correlation between average responses. In that case, Equation (4) can be rewritten as $${\overline{\mbox{\boldmath$W$}}}=\mbox{\boldmath$F$}\,{\overline{\mbox{% \boldmath$r$}}}^{\mathrm{T}}\left({\overline{\mbox{\boldmath$r$}}}\,{\overline% {\mbox{\boldmath$r$}}}^{{\mathrm{T}}}+\mbox{\boldmath$D$}_{\!\sigma}\right)^{-% 1}.$$ (21) Now consider the mean square error without any noise but with an additional term that penalizes large weights. To restrict, for instance, the total synaptic weight provided by each input neuron, add the penalty term $$\frac{1}{KM}\sum_{i,j}\lambda_{i}\,w_{ij}^{2}$$ (22) to the original error expression, Equation (3). Here, $\lambda_{i}$ determines how much input neuron $i$ is taxed for its total synaptic weight. Rewriting this as a trace, the total error to be minimized in this case becomes $$E=\frac{1}{KM}\left(\left<\mbox{Tr}\left[(\mbox{\boldmath$w$}{\overline{\mbox{% \boldmath$r$}}}-\mbox{\boldmath$F$})(\mbox{\boldmath$w$}{\overline{\mbox{% \boldmath$r$}}}-\mbox{\boldmath$F$})^{\mathrm{T}}\right]\right>+\mbox{Tr}\left% (\mbox{\boldmath$w$}^{{\mathrm{T}}}\mbox{\boldmath$D$}_{\!\lambda}\mbox{% \boldmath$w$}\right)\right).$$ (23) where $\mbox{\boldmath$D$}_{\!\lambda}$ is a diagonal matrix that contains the penalty coefficients $\lambda_{i}$ along the diagonal. The synaptic weights that minimize this error function are given by $$\mbox{\boldmath$F$}\,{\overline{\mbox{\boldmath$r$}}}^{\mathrm{T}}\left({% \overline{\mbox{\boldmath$r$}}}\,{\overline{\mbox{\boldmath$r$}}}^{{\mathrm{T}% }}+\mbox{\boldmath$D$}_{\!\lambda}\right)^{-1}\!.$$ (24) But this solution has exactly the same form as Equation (21), which minimizes the error in the presence of response noise alone, without any other constraints. 
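This identity is easy to confirm numerically: weights computed from responses corrupted by additive noise, so that $\langle rr^{\mathrm{T}}\rangle$ acquires the diagonal term of Equation (20), agree up to Monte Carlo error with the penalized solution (24) with $\lambda_{i}=M\sigma^{2}$. A small sketch, with sizes and noise level chosen arbitrarily and equal noise variance for all input units:

import numpy as np

rng = np.random.default_rng(1)

N, M, K = 8, 12, 1                       # illustrative sizes
r_mean  = rng.uniform(0.0, 1.0, (N, M))
F       = np.zeros((K, M)); F[0, :M // 2] = 1.0
sigma   = 0.3                            # additive response-noise SD, equal for all units

# route 1: weights from sampled noisy responses, W = F r_mean^T <r r^T>^{-1} (Eq. (21))
n_trials = 40000
C = np.zeros((N, N))
for _ in range(n_trials):
    r = r_mean + sigma * rng.standard_normal((N, M))
    C += r @ r.T
C /= n_trials
W_noise = F @ r_mean.T @ np.linalg.inv(C)

# route 2: penalized (ridge) solution of Eq. (24) with lambda_i = M sigma^2
W_ridge = F @ r_mean.T @ np.linalg.inv(r_mean @ r_mean.T + M * sigma ** 2 * np.eye(N))

W_plain = F @ r_mean.T @ np.linalg.pinv(r_mean @ r_mean.T)   # no noise, no penalty
print("max |W_noise - W_ridge| = %.3g" % np.max(np.abs(W_noise - W_ridge)))
print("||W_ridge|| = %.3f,  ||W_plain|| = %.3f"
      % (np.linalg.norm(W_ridge), np.linalg.norm(W_plain)))

The last line also shows the shrinkage of the regularized weights relative to the unpenalized solution.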
Therefore, adding response noise is equivalent to imposing a constraint on the magnitude of the synaptic weights, with more noise corresponding to smaller weights. The penalty term in Equation (22) can also be interpreted as a regularization term, which refers to a common type of constraint used to force the solution of an optimization problem to vary smoothly (Hinton, 1989; Haykin, 1999). Therefore, as has been pointed out previously (Bishop, 1995), the effect of response fluctuations can be described as regularization by noise. In our model, we assumed that the fluctuations in synaptic connections are proportional to their size. What happens, then, is that response noise forces the optimal weights to be small, and this significantly decreases the part of the error that depends on $\sigma_{W}$. In this way, smaller synaptic weights — and therefore a nonzero $\sigma_{r}$ — typically lead to smaller output errors. Another way to look at the relationship between the two types of noise is to calculate the optimal mean synaptic weights taking the synaptic variability directly into account. For simplicity, suppose that there is no response noise. Substitute Equation (7) directly into Equation (3) and minimize with respect to ${\overline{\mbox{\boldmath$W$}}}$, now averaging over the synaptic fluctuations. With multiplicative noise the result is again an expression similar to Equations (21) and (24), where a correction proportional to the synaptic variance is added to the diagonal of the correlation matrix. In contrast, with additive synaptic noise the resulting optimal weights are exactly the same as without any variability, because this type of noise cannot be compensated for. Therefore, the recipe for counteracting response noise is equivalent to the recipe for counteracting multiplicative synaptic noise. An argument outlining why this is generally true is presented in the Discussion, Section 6.1. 3.3 Classification in Larger Networks When the simple classification task is extended to larger numbers of first-layer neurons ($N\!>2$) and more input stimuli to classify ($M\!>2$), an important question can be studied: how does the interaction between synaptic and response noise depend on the dimensionality of the problem, that is, on $N$ and $M$? To address this issue we did the following. Each entry in the $N\times M$ matrix ${\overline{\mbox{\boldmath$r$}}}$ of mean responses was taken from a uniform distribution between 0 and 1. The desired output still consisted of a single neuron’s response given by Equation (10), as before. So, each one of the $M$ input stimuli evoked a set of $N$ neuronal responses, each set drawn from the same distribution, and the output neuron had to divide the $M$ evoked firing rate patterns into two categories. The optimal amount of response noise was found, and the process was repeated for different combinations of $N$ and $M$. The results from these simulations are shown in Fig. 3. All data points were obtained with the same amount of synaptic variability, $\sigma_{W}\!=\!0.5$. Each point represents an average over 1000 networks for which the optimal connections were corrupted. The amount of response noise that minimized the error, averaged over those 1000 corruption patterns, was found numerically by calculating the average error with the same mean responses and corruption patterns but different $\sigma_{r}$. For each combination of $N$ and $M$, this resulted in $\sigma_{\mathrm{min}}$, which is shown in panel B. 
The actual average error obtained with $\sigma_{r}\!=\!$ $\sigma_{\mathrm{min}}$ divided by the error for $\sigma_{r}\!=\!0$ is shown in panel A, as in the previous figure. Interestingly, the benefit conferred by response noise depends strongly on the difference between $N$ and $M$. With $M\!=\!10$ input stimuli, the effect of response noise is maximized when $N\!=\!10$ neurons are used to encode them (Fig. 3A); and viceversa, when there are $N\!=\!10$ neurons in the network, the maximum effect is seen when they encode $M\!=\!10$ stimuli (Fig. 3C). Results with other numbers (5, 20 and 40 stimuli or neurons) were the same: response noise always had a maximum impact when $N\!=\!M$. This is not unreasonable. When there are many more neurons than stimuli, a moderate amount of synaptic corruption causes only a small error, because there is redundancy in the connectivity matrix. On the other hand, when there are many more input stimuli than neurons, the error is large anyway, because the $N$ neurons cannot possibly span all the required dimensions, $M$. Thus, at both extremes, the impact of synaptic noise is limited. In contrast, when $N\!=\!M$ there is no redundancy but the output error can potentially be very small, so the network is most sensitive to alterations in synaptic connectivity. Thus, response noise makes a big difference when the number of responses and the number of independent stimuli encoded are equal or nearly so. In Figs. 3A, C, the relative error is not zero for $N\!=\!M$, but it is quite small ($E_{\mathrm{min}}$ $\!=\!0.23$, $E_{\mathrm{min}}$$/E_{0}\!=\!0.004$). This is primarily because the error without any response noise, $E_{0}$, can be very large. Interestingly, the optimal amount of response noise also seems to be largest when $N\!=\!M$, as suggested by Figs. 3B, D. In contrast to previous examples, for all data points in Fig. 3 the fluctuations in the synapses and in the firing rates, $\epsilon$ and $\eta$, were drawn from uniform rather than Gaussian distributions. As mentioned before, the variances of the underlying distributions should matter but their shapes should not. Indeed, with the same variances, results for Fig. 3 were virtually identical with Gaussian or exponential distributions. A potential concern in this network is that, although the variability of the output neuron depends on the interaction between the two types of noise, perhaps the interaction is of little consequence with respect to actual classification performance. The relevant measure for this is the probability of correct classification, $p_{c}$. This probability is obtained by comparing the distributions of output responses to stimuli in one category versus the other, which is typically done using standard methods from signal detection theory (Dayan and Abbott, 2001). The algorithm underlying the calculation is quite simple: in each trial, the stimulus is assumed to belong to class 1 if the output firing rate is below a threshold, otherwise the stimulus belongs to class 2. To obtain $p_{c}$, the results should be averaged over trials and stimuli. Finally, note that an optimal threshold should be used to obtain the highest possible $p_{c}$. We performed this analysis on the data in Fig. 3. Indeed, $p_{c}$ also depended non-monotonically on response variability. For instance, for $N\!=\!M\!=\!10$ the values with and without response noise were $p_{c}(\sigma_{r}\!=$$\sigma_{\mathrm{min}}$$)\!=\!0.83$ and $p_{c}(\sigma_{r}\!=\!0)\!=\!0.75$, where chance performance corresponds to 0.5. 
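The threshold-based estimate of $p_{c}$ described above takes only a few lines. In the sketch below the two sets of output rates are synthetic stand-ins (Gaussian samples with arbitrary means and spread) for the responses of the trained network to the two stimulus categories; in an actual run they would be replaced by the rates of the output unit collected across trials and stimuli.

import numpy as np

rng = np.random.default_rng(2)

def p_correct(out_class1, out_class2):
    """Best-threshold classification probability: a trial is assigned to class 1
    when the output rate falls on one side of the threshold (both orientations tried)."""
    lo = min(out_class1.min(), out_class2.min())
    hi = max(out_class1.max(), out_class2.max())
    best = 0.5
    for th in np.linspace(lo, hi, 501):
        p = 0.5 * np.mean(out_class1 < th) + 0.5 * np.mean(out_class2 >= th)
        best = max(best, p, 1.0 - p)
    return best

# synthetic stand-ins for the output rates on the two stimulus classes
out1 = 1.0 + 0.6 * rng.standard_normal(4000)   # trials whose desired output is 1
out2 = 0.0 + 0.6 * rng.standard_normal(4000)   # trials whose desired output is 0
print("p_c = %.3f (chance level 0.5)" % p_correct(out1, out2))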
Also, the maximum benefit of response noise occurred for $N\!=\!M$ and decreased quickly as the difference between $N$ and $M$ grew, as in Figs. 3A, C. However, the amount of response noise that maximized $p_{c}$ was typically about one third of the amount that minimized the mean square error. Thus, the best classification probability for $N\!=\!M\!=\!10$ was $p_{c}(\sigma_{r}\!=\!0.13)\!=\!0.91$. Maximizing $p_{c}$ is not equivalent to minimizing the mean square error; the two quantities weight differently the bias and variance of the output response (see Haykin, 1999). Nevertheless, response noise can also counteract part of the decrease in $p_{c}$ due to synaptic noise, so its beneficial impact on classification performance is real. 4 Noise Interactions in a Sensory-Motor Network To illustrate the interactions between synaptic and response noise in a more biologically realistic situation, we apply the general approach outlined in Section 2 to a well-known model of sensory-motor integration in the brain. We consider the classic coordinate transformation problem in which the location of an object, originally specified in retinal coordinates, becomes independent of gaze angle. This type of computation has been thoroughly studied both experimentally (Andersen et al., 1985; Brotchie et al., 1995) and theoretically (Zipser and Andersen, 1988; Salinas and Abbott, 1995; Pouget and Sejnowski, 1997), and is thought to be the basis for generating representations of object location relative to the body or the world. Also, the way in which visual and eye-position signals are integrated here is an example of what seems to be a general principle for combining different information streams in the brain (Salinas and Thier, 2000; Salinas and Sejnowski, 2001). Such integration by ’gain modulation’ may have wide applicability in diverse neural circuits (Salinas, 2004), so it represents a plausible and general situation in which computational accuracy is important. From the point of view of the phenomenon at hand, the constructive effect of response noise, this example addresses an important issue: whether the noise interaction is still observed when network performance depends on a population of output neurons. In the classification task, performance was quantified through a single neuron’s response, but in this case it depends on a nonlinear combination of multiple firing rates, so maybe the impact of response noise washes out in the population average. As shown below, this is not the case. The sensory-motor network has, as before, a feedforward architecture with two layers. The first layer contains $N$ gain-modulated sensory units and the second or output layer contains $K$ motor units. Each sensory neuron is connected to all output neurons through a set of feedforward connections, as illustrated in Fig. 4B. The sensory neurons are sensitive to two quantities, the location (or direction) of a target stimulus $x$, which is in retinal coordinates, and the gaze (or eye-position) angle $y$. The network is designed so that the motor layer generates or encodes a movement in a direction $z$, which represents the direction of the target relative to the head. The idea is that the profile of activity of the output neurons should have a single peak centered at direction $z$. 
The correct (i.e., desired) relationship between inputs and outputs is $z\!=\!x\!-\!y$, which is approximately how the angles $x$ and $y$ should be combined in order to generate a head-centered representation of target direction (Zipser and Andersen, 1988; Salinas and Abbott, 1995; Pouget and Sejnowski, 1997). In other words, $z$ is the quantity encoded by the output neurons and it should relate to the quantities encoded by the sensory neurons through the function $z(x,y)\!=\!x\!-\!y$. Many other functions are possible, but as far as we can tell, the choice has little impact on the qualitative effect of response noise. In this model, the mean firing rate of sensory neuron $i$ is characterized by a product of two tuning functions, $f_{i}(x)$ and $g_{i}(y)$, such that $${\overline{r}}_{i}(x,y)=r_{\mathrm{max}}\,f_{i}(x)\left(1-D+D\,g_{i}(y)\right)% +r_{B},$$ (25) where $r_{B}\!=\!4$ spikes/s is a baseline firing rate, $r_{\mathrm{max}}\!=\!35$ spikes/s and $D$ is the modulation depth, which is set to 0.9 throughout. The sensory neurons are gain modulated because they combine the information from their two inputs nonlinearly. The amplitude — but not the selectivity — of a visually-triggered response, represented by $f_{i}(x)$, depends on the direction of gaze (Andersen et al., 1985; Brotchie et al., 1995; Salinas and Thier, 2000). Note that, in the expression above, the second index of the mean rate ${\overline{r}}_{ij}$ has been replaced by parentheses indicating a dependence on $x$ and $y$. This is to simplify the notation; the responses can still be arranged in a matrix ${\overline{\mbox{\boldmath$r$}}}$ if each value of the second index is understood to indicate a particular combination of values of $x$ and $y$. For example, if the rates were evaluated in a grid with 10 $x$ points and 10 $y$ points, the second index would run from 1 to 100, covering all combinations. Indeed, this is how it is done in the computer. For simplicity, the tuning curves for different neurons in a given layer are assumed to have the same shape but different preferred locations or center points, which are always between $-25$ and $25$. Visual responses are modeled as Gaussian tuning functions of stimulus location $x$, $$f_{i}(x)=\exp\left(-\frac{\left(x-a_{i}\right)^{2}}{2\sigma_{f}^{2}}\right),$$ (26) where $a_{i}$ is the preferred location and $\sigma_{f}\!=\!4$ is the tuning curve width. The dependence on eye position is modeled using sigmoidal functions of the gaze angle $y$, $$g_{i}(y)=\frac{1}{1+\exp(-(b_{i}-y)/d_{i})}\,,$$ (27) where $b_{i}$ is the center point of the sigmoid and $d_{i}$ is chosen randomly between $-7$ and $+7$ to make sure that the curves $g_{i}(y)$ have different slopes for different neurons in the array. In each trial of the task, response variability is included by applying a variant of Equation (5), $$r_{ij}={\overline{r}}_{ij}+\sqrt{{\overline{r}}_{ij}}\,\eta_{ij}.$$ (28) This makes the variance of the rates proportional to their means, which in general is in good agreement with experimental data (Dean, 1981; Softky and Koch, 1992, 1993; Holt et al., 1996). This choice, however, is not critical (see below). The desired response for each output neuron is also described by a Gaussian, $$F_{k}(z)=r_{\mathrm{max}}\,\exp\!\left(-\frac{\left(z-c_{k}\right)^{2}}{2% \sigma_{F}^{2}}\right)+r_{B},$$ (29) where $\sigma_{F}\!=\!4$ and $c_{k}$ is the preferred target direction of motor neuron $k$. This expression gives the intended response of output unit $k$ in terms of the encoded quantity $z$. 
Keep in mind, however, that the desired dependence on the sensory inputs is obtained by setting $z\!=\!x\!-\!y$. When driven by the first-layer neurons, the output rates are still calculated through a weighted sum, $$R_{k}(z)=R_{k}(x,y)=\sum_{i=1}^{N}W_{ki}\,r_{i}(x,y).$$ (30) This is equivalent to Equation (1) but with the second index defined implicitly through $x$ and $y$, as mentioned above. The optimal synaptic connections ${\overline{W}}_{ki}$ are determined exactly as before, using Equation (4). Typical profiles of activity for input and output neurons are shown in Figs. 4A, C for a trial with $x\!=\!-10$ and $y\!=\!10$. The sensory neurons are arranged according to their preferred stimulus location $a_{i}$, whereas the motor neurons are arranged according to their preferred movement direction $c_{k}$. For this sample trial no variability was included; the firing rate values in Fig. 4A are scattered under a Gaussian envelope (given by Equation (26)) because the gaze-dependent gain factors vary across cells. Also, the output profile of activity is Gaussian and has a peak at the point $z\!=\!-20$, which is exactly where it should be given that the correct input-output transformation is $z\!=\!x\!-\!y$. With noise, the output responses would be scattered around the Gaussian profile and the peak would be displaced. The error used to measure network performance is, in this case, $$E_{\mathrm{pop}}=\left<\,\left|z-Z\right|\,\right>.$$ (31) This is the absolute difference, averaged over trials and networks, between the desired movement direction $z$ — the actual head-centered target direction — and the direction $Z$ that is encoded by the center of mass of the output activity, $$Z=\frac{\sum_{i}\,(R_{i}-r_{\!B})^{2}\,c_{i}}{\sum_{k}\,(R_{k}-r_{\!B})^{2}}\,.$$ (32) Therefore, Equation (31) gives the accuracy with which the whole motor population represents the head-centered direction of the target, whereas Equation (32) provides the recipe to read out such output activity. Now the idea is to corrupt the optimal connections and evaluate $E_{\mathrm{pop}}$ using various amounts of response noise to determine whether there is an optimum. Relative to the previous examples, the key differences are, first, that the error in (31) represents a population average, and second, that although the connections are set to minimize the average difference between desired and driven firing rates, the performance criterion is not based directly on it. Simulation results for this sensory-motor model are presented in Fig. 5. A total of 400 sensory and 25 output neurons were used. These units were tested with all combinations of 20 values of $x$ and 20 values of $y$, uniformly spaced (thus, $M\!=\!400$). Synaptic noise was generated by random weight elimination. This means that, after having set the connections to their optimal values given by Equation (4), each one was reset to zero with a probability $p_{W}$. Thus, on average, a fraction $p_{W}$ of the weights in each network was eliminated. As shown in Fig. 5A, when $p_{W}\!>\!0$, the error between the encoded and the true target direction has a minimum with respect to $\sigma_{r}$. These error curves represent averages over 100 networks. Interestingly, the benefit of noise does not decrease when more sensory units are included in the first layer (Fig. 5B). That is, if $p_{W}$ is constant, the proportion of eliminated synapses does not change, so the error caused by synaptic corruption cannot be reduced simply by adding more neurons. 
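A reduced version of this simulation fits in a short script. The sketch below (smaller population, coarser stimulus grid and illustrative tuning parameters; not the code used for Figure 5) builds the gain-modulated responses of Equations (25)-(28), obtains the connections from Equation (4) with the correlation matrix appropriate for the noise model of Equation (28), eliminates synapses with probability $p_{W}$, and evaluates the readout error of Equations (31)-(32) for several values of $\sigma_{r}$.

import numpy as np

rng = np.random.default_rng(3)

# reduced population sizes and a coarse (x, y) grid, for a quick illustrative run
N, K, n_grid = 100, 25, 12
r_max, r_B, D = 35.0, 4.0, 0.9
sig_f = sig_F = 4.0
a = np.linspace(-25, 25, N)                    # preferred stimulus locations, Eq. (26)
b = np.linspace(-25, 25, N)                    # sigmoid centers, Eq. (27)
d = rng.uniform(-7, 7, N); d[np.abs(d) < 0.5] = 0.5   # numerical guard against zero slope
c = np.linspace(-25, 25, K)                    # preferred movement directions, Eq. (29)

x = np.linspace(-12, 12, n_grid)
y = np.linspace(-12, 12, n_grid)
X, Y = [u.ravel() for u in np.meshgrid(x, y, indexing="ij")]   # all (x, y) combinations
Z = X - Y                                      # encoded head-centered direction, z = x - y

f = np.exp(-(X[None, :] - a[:, None]) ** 2 / (2 * sig_f ** 2))          # Eq. (26)
g = 1.0 / (1.0 + np.exp(-(b[:, None] - Y[None, :]) / d[:, None]))       # Eq. (27)
r_mean = r_max * f * (1 - D + D * g) + r_B                              # Eq. (25)
F = r_max * np.exp(-(Z[None, :] - c[:, None]) ** 2 / (2 * sig_F ** 2)) + r_B   # Eq. (29)

def optimal_weights(sigma_r):
    """Eq. (4); for the noise of Eq. (28), <r r^T> = r_mean r_mean^T + sigma_r^2 diag(sum_j r_mean_ij)."""
    C = r_mean @ r_mean.T + sigma_r ** 2 * np.diag(r_mean.sum(axis=1))
    return F @ r_mean.T @ np.linalg.pinv(C)

def readout_error(W, sigma_r, n_trials=30):
    """E_pop of Eqs. (31)-(32): mean |z - decoded Z| over trials and stimuli."""
    err = 0.0
    for _ in range(n_trials):
        r = r_mean + np.sqrt(r_mean) * sigma_r * rng.standard_normal(r_mean.shape)  # Eq. (28)
        R = W @ r
        w2 = (R - r_B) ** 2
        Z_dec = np.sum(w2 * c[:, None], axis=0) / np.maximum(np.sum(w2, axis=0), 1e-12)
        err += np.mean(np.abs(Z - Z_dec))
    return err / n_trials

p_W = 0.3                                      # probability of synaptic elimination
for sigma_r in (0.0, 0.5, 1.0, 1.5):
    W_opt, err, n_nets = optimal_weights(sigma_r), 0.0, 20
    for _ in range(n_nets):
        mask = rng.random(W_opt.shape) >= p_W  # random weight elimination
        err += readout_error(W_opt * mask, sigma_r)
    print("sigma_r = %.1f  ->  E_pop = %.2f" % (sigma_r, err / n_nets))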
Figure 5C shows the minimum and relative errors as functions of $p_{W}$. This graph highlights the substantial impact that response noise has on this network: the relative error stays below 0.2 even when about a third of the synapses are eliminated. This is not only because the error without response noise is high, but also because the error with an optimal amount of noise stays low. For instance, with $p_{W}\!=\!0.3$ and $\sigma_{r}\!=\!$ $\sigma_{\mathrm{min}}$, the typical deviation from the correct target direction is about 2 units, whereas with $\sigma_{r}\!=\!0$ the typical deviation is about 10. Response noise thus cuts the deviation by about a factor of five, and importantly, the resulting error is still small relative to the range of values of $z$, which spans 50 units. Also, as observed in the classification task, in general it is better to include response noise even if $\sigma_{r}$ is not precisely matched to the amount of synaptic variability (Fig. 5A). Figure 5D plots $\sigma_{\mathrm{min}}$ as a function of the probability of synaptic elimination. The optimal amount of response noise increases with $p_{W}$ and reaches fairly high levels. For instance, at a value of 1, which corresponds to $p_{W}$ near 0.15, the variance of the firing rates is equal to their mean, because of Equation (28). We wondered whether the scaling law of the response noise would make any difference, so we reran the simulations with either additive noise (SD independent of mean) or noise with an SD proportional to the mean, as in Equation (5). Results in these two cases were very similar: $E_{\mathrm{min}}$ and $E_{\mathrm{min}}$$/E_{0}$ varied very much like in Fig. 5C, and the optimal amount of noise grew monotonically with $p_{W}$, as in Fig. 5D. 5 Noise Interactions in a Recurrent Network The networks discussed in the previous sections had a feedforward architecture, and in those cases the contribution of response noise to the correlation matrix between neuronal responses could be determined analytically. In contrast, in recurrent networks the dynamics are more complex and the effects of random fluctuations more difficult to ascertain. To investigate whether response noise can still counteract some of the effects of synaptic variability, we consider a recurrent network with a well-defined function and relatively simple dynamics characterized by attractor states. When the firing rates in this network are initialized at arbitrary values, they eventually stop changing, settling down at certain steady-state points in which some neurons fire intensely and others do not. The optimal weights sought are those that allow the network to settle at predefined sets of steady-state responses, and the error is thus defined in terms of the difference between the desired steady states and the observed ones. As before, response noise is taken into account when the optimal synaptic weights are generated, although in this case the correction it introduces (relative to the noiseless case) is an approximation. The attractor network consists of $N$ continuous-valued neurons, each of which is connected to all other units via feedback synaptic connections (Hertz et al., 1991). With the proper connectivity, such network can generate, without any tuned input, a steady-state profile of activity with a cosine or Gaussian shape (Ben-Yishai et al., 1995; Compte et al., 2000; Salinas, 2003). 
Such stable ‘bump’-shaped activity is observed in various neural models, including those for cortical hypercolumns (Hansel and Sompolinsky, 1998), head-direction cells (Zhang, 1996; Laing and Chow, 2001) and working memory circuits (Compte et al., 2000). Below, we find the connection matrix that allows the network to exhibit a unimodal activity profile centered at any point within the array. 5.1 Optimal Synaptic Weights in a Recurrent Architecture The dynamics of the network are determined by the equation $$\tau\frac{dr_{i}}{dt}=-r_{i}+h\!\left(\sum_{j}W_{ij}\,r_{j}\right)+\eta_{i}\,,$$ (33) where $\tau\!=\!10$ is the integration time constant, $r_{i}$ is the response of neuron $i$, and $h$ is the activation function of the cells, which relates total current to firing rate. The sigmoid function $h(x)=1/(1+\exp(-x))$ is used, but this choice is not critical. As before, $\eta_{i}$ represents the response fluctuations, which are drawn independently for each neuron in every time step. In this case they are Gaussian, with zero mean and a variance $\sigma_{r}^{2}/\Delta t$. The variance of $\eta_{i}$ is divided by the integration time step $\Delta t$ to guarantee that the variance of the rate $r_{i}$ remains independent of the time step (van Kampen, 1992). For our purposes, manipulating this type of network is easier if the equations are expressed in terms of the total input currents to the cells (Hertz et al., 1991; Dayan and Abbott, 2001). If the current for neuron $i$ is $u_{i}\!=\!\sum_{j}W_{ij}\,r_{j}$, then $$\tau\frac{du_{i}}{dt}=-u_{i}+\sum_{j}W_{ij}\left(h(u_{j})+\eta_{j}\right),$$ (34) is equivalent to Equation (33) above. A stationary solution of Equation (34) without input noise is such that all derivatives become zero. This corresponds to an attractor state $\alpha$ for which $$u_{i}^{\alpha}=\sum_{j}W_{ij}\,h(u_{j}^{\alpha}).$$ (35) The label $\alpha$ is used because the network may have several attractors or sets of fixed points. The desired steady-state currents are denoted as $U_{i}^{\alpha}$. These are Gaussian profiles of activity such that, during steady state $\alpha\!=\!1$, neuron 1 is the most active (i.e., the Gaussian is centered at neuron 1), during steady state $\alpha\!=\!2$, neuron 2 is the most active, and so on. Figure 6 illustrates the activity of the network at four steady states in the absence of noise ($\sigma_{W}\!=\!0\!=\!\sigma_{r}$). To make the network symmetric, the neurons were arranged in a ring, so their activity profiles wrap around. Because of this, each neuron is labeled with an angle. The observed currents $u_{i}$ settle down at values that are almost exactly equal to the desired ones, $U_{i}^{\alpha}$. The synaptic connections that achieved this match were found by enforcing the steady-state condition (35) for the desired attractors. That is, we minimized $$E=\frac{1}{N_{A}}\sum_{\alpha=1}^{N_{A}}\sum_{i}\left(U_{i}^{\alpha}-\sum_{j}W% _{ij}\,h(U_{j}^{\alpha})\right)^{\!2},$$ (36) where $U_{i}^{\alpha}$ is a (wrap-around) Gaussian function of $i$ centered at $\alpha$ and $N_{A}$ is the number of attractors; in the simulations $N_{A}$ is always equal to the number of neurons, $N$. This procedure leads to an expression for the optimal weights equivalent to Equation (4). 
Thus, without response noise, $$\overline{\mbox{\boldmath$W$}}=\mbox{\boldmath$L$}\,\mbox{\boldmath$C$}^{-1},$$ (37) where $$L_{ij}=\frac{1}{N_{A}}\sum_{\alpha}U_{i}^{\alpha}\,h(U_{j}^{\alpha}),\qquad C_{ij}=\frac{1}{N_{A}}\sum_{\alpha}h(U_{i}^{\alpha})\,h(U_{j}^{\alpha})\,.$$ (38) To include the effects of response noise, we add a correction to the diagonal of the correlation matrix, as in the previous cases (see Section 3.2). We thus set $$C_{ij}=\frac{1}{N_{A}}\sum_{\alpha}h(U_{i}^{\alpha})h(U_{j}^{\alpha})+\delta_{ij}\,a\,\frac{\sigma_{r}^{2}}{2\tau},$$ (39) where $a$ is a proportionality constant. The rationale for this is as follows. Strictly speaking, Equation (34) with response noise does not have a steady state. But consider the simpler case of a single variable $u$ with a constant asymptotic value $u_{\infty}$, such that $$\tau\frac{du}{dt}=-u+u_{\infty}+\eta.$$ (40) If the trajectory $u(t)$ from $t\!=\!0$ to $t\!=\!T$ is calculated many times, starting from the same initial condition, the distribution of endpoints $u(T)$ has a well-defined mean and variance, which vary smoothly as functions of $T$. The mean is always equal to the endpoint that would be observed without noise, whereas for $T$ much longer than the integration time constant $\tau$, the variance is equal to the variance of the fluctuations on the right hand side of Equation (40) divided by $2\tau$ (van Kampen, 1992). These considerations suggest that we minimize $$E=\frac{1}{N_{A}}\sum_{\alpha,i}\left(U_{i}^{\alpha}-\sum_{j}W_{ij}\,\left(h(U_{j}^{\alpha})+a\,\tilde{\eta}_{j}\right)\right)^{\!2},$$ (41) where the variance of $\tilde{\eta}_{j}$ is $\sigma_{r}^{2}/(2\tau)$. This leads to Equation (37) with the corrected correlation matrix given by (39). 5.2 Performance of the Attractor Network To evaluate the performance of this network, we compare the center of mass of the desired activity profile to that of the observed profile tracked during a period of time. For a particular attractor $\alpha$, the network is first initialized very close to that desired steady state, then Equation (34) is run for 1000 ms (100 time constants $\tau$), and the absolute difference between the initial and the current centers of mass is recorded during the last 500 ms. The error for the recurrent networks $E_{\mathrm{rec}}$ is defined as the absolute difference averaged over this time period and all attractor states, i.e., all values of $\alpha$. Also, when there is synaptic noise, an additional average over networks is performed. This error function is similar to Equation (31), except that the circular topology is taken into account. Thus, $E_{\mathrm{rec}}$ is the mean absolute difference between desired and observed centers of mass. It is expressed in degrees. Before exploring the interaction between synaptic and response noise, we used $E_{\mathrm{rec}}$ to test whether the noise-dependent correction to the correlation matrix in Equation (39) was appropriate. To do this, a recurrent network without synaptic fluctuations was simulated multiple times with different values of the parameter $a$ and various amounts of response noise. The desired attractors were kept constant. The resulting error curves are shown in Fig. 7A. Each one gives the average absolute deviation between desired and observed centers of mass as a function of $\sigma_{r}$ for a different value of $a$. The dependence on $a$ was non-monotonic. (A schematic numerical version of this weight construction and simulation protocol is sketched below.) 
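A minimal numpy sketch of this recipe follows, with placeholder parameters (network size, shape and scale of the desired current profiles, value of $a$); it assembles the matrices of Equations (38) and (39), computes the weights of Equation (37), and integrates Equation (34) with forward Euler steps. It illustrates the construction and is not the code used for Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 60                                   # neurons (here N_A = N attractors)
tau, dt = 10.0, 0.5                      # time constant and Euler step (ms)
sigma_r, a = 0.5, 0.5                    # response-noise SD and correction constant

def h(x):
    return 1.0 / (1.0 + np.exp(-x))     # sigmoid activation

# Desired steady-state currents: a wrap-around Gaussian centered on each neuron.
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, N - dist)
U = 4.0 * np.exp(-dist**2 / (2 * 5.0**2)) - 2.0     # U[alpha, i], arbitrary scale

# Equations (37)-(39): W = L C^{-1}, with a noise-dependent diagonal correction.
H = h(U)
L = U.T @ H / N
C = H.T @ H / N + np.eye(N) * a * sigma_r**2 / (2 * tau)
W = L @ np.linalg.inv(C)

# Forward Euler integration of Equation (34), starting at attractor alpha = 0.
u = U[0].copy()
for _ in range(int(1000 / dt)):                      # 1000 ms of simulated time
    eta = rng.normal(0.0, sigma_r / np.sqrt(dt), N)  # variance sigma_r^2 / dt
    u += dt / tau * (-u + W @ (h(u) + eta))

print(np.argmax(u))   # crude readout: the peak should stay near neuron 0 if the
                      # fitted attractor is stable for these placeholder values
```

The error $E_{\mathrm{rec}}$ described above would replace this crude peak readout with a circular center of mass, averaged over the last 500 ms and over all attractors.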
The optimal value we found was 0.5, which corresponds to the lowest curve (dashed) in the figure. This curve was well below the one observed without adjusting the synaptic weights. Therefore, the correction was indeed effective. Figure 7B shows $E_{\mathrm{rec}}$ as a function of $\sigma_{r}$ when synaptic noise is also present in the recurrent network. The three solid curves correspond to nets in which synapses were randomly eliminated with probabilities $p_{W}\!=\!0.005$, 0.015 and 0.025. As with previous network architectures, a non-zero amount of response noise improves performance relative to the case where no response noise is injected. In this case, however, the mean absolute error is already about 25${}^{\circ}$ at the point at which response noise starts making a difference, around $p_{W}\!=\!0.005$ (Fig. 7C). This is not surprising: these types of networks are highly sensitive to changes in their synapses, so even small mismatches can lead to large errors (Seung et al., 2000; Renart et al., 2003). Also, Fig. 7C shows that the ratio $E_{\mathrm{min}}/E_{0}$ does not fall below 0.6, so the benefit of noise is not as large as in previous examples. The effect was somewhat weaker when synaptic variability was simulated using Gaussian noise with SD $\sigma_{W}$ instead of random synaptic elimination. Nevertheless, it is interesting that the interaction between synaptic and response noise is observed at all under these conditions, given that the response dynamics are richer and that the minimization of Equation (41) may not be the best way to produce the desired steady-state activity. 6 Discussion 6.1 Why are Synaptic and Response Fluctuations Equivalent? We have investigated the simultaneous action of synaptic and response fluctuations on the performance of neural networks and found an interaction or equivalence between them: when synaptic noise is multiplicative, its effect is similar to that of response noise. At heart, this is a simple consequence of the product of responses and synaptic weights contained in most neural models, which has the form $\sum_{j}W_{j}r_{j}$. With multiplicative noise in one of the variables, this weighted sum turns into $\sum_{j}W_{j}(1+\xi_{j})r_{j}$, which is the same whether it is the synapse or the response that fluctuates. In either case, the total stochastic component $\sum_{j}W_{j}\xi_{j}r_{j}$ scales with the synaptic weights. The same result is obtained with additive response noise. Additive synaptic noise behaves differently, however. It instead leads to a total fluctuation $\sum_{j}\xi_{j}r_{j}$ that is independent of the mean weights. Evidently, in this case the mean values of the weights have no effect on the size of the fluctuations. Thus, the key requirement for some form of equivalence between the two noise sources is that the synaptic fluctuations must depend on the strength of the synapses. This condition was applied to the three sets of simulations presented above, which corresponded to the classification of arbitrary response patterns, a sensory-motor transformation, and the generation of multiple self-sustained activity profiles. This selection of problems was meant to illustrate the generality of the observations outlined in the above paragraph. And indeed, although the three problems differed in many respects, the results were qualitatively the same. We should also point out that, in all the simulations, the criterion used to determine the optimality of the synaptic weights was based on a mean square error. (The weight-scaling contrast between multiplicative and additive fluctuations is illustrated numerically below.) 
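As a concrete check of the weighted-sum argument above, the following numpy snippet (with made-up numbers) compares the size of the total stochastic component under multiplicative noise, which is the same whether the factor $(1+\xi_{j})$ multiplies the weights or the responses, with its size under additive synaptic noise; only the former grows when the weights are scaled up.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_inputs = 20_000, 50
r = rng.random(n_inputs)                  # fixed presynaptic rates
W = rng.normal(0.0, 1.0, n_inputs)        # mean synaptic weights
sigma = 0.2
xi = rng.normal(0.0, sigma, (n_trials, n_inputs))   # fluctuations xi_j, one row per trial

def sd_of_sum(fluctuation_terms):
    """SD across trials of the summed stochastic component."""
    return fluctuation_terms.sum(axis=1).std()

for scale in (1.0, 10.0):
    Ws = scale * W
    multiplicative = sd_of_sum(Ws * xi * r)   # sum_j W_j xi_j r_j (synaptic or response noise)
    additive = sd_of_sum(xi * r)              # sum_j xi_j r_j (additive synaptic noise)
    print(scale, multiplicative, additive)
# The multiplicative fluctuation scales with the weights; the additive one does not.
```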
But perhaps the noise interaction changes when a different criterion is used. To investigate this, we performed additional simulations of the small $2\!\times\!1$ network in which the optimal synaptic weights were those that minimized a mean absolute deviation; thus, the square in Equation (2) was substituted with an absolute value. In this case everything proceeded as before, except that the mean weight values ${\overline{W}}$ had to be found numerically. For this, the averages were performed explicitly and the downhill simplex method was used to search for the best weights (Press et al., 1992). The results, however, were very similar to those in Fig. 2A. Although the shapes of the curves were not exactly the same, the relative and minimum errors found with the absolute value varied very much like with the mean-square error criterion as functions of $\sigma_{W}$. Therefore, our conclusions do not seem to depend strongly on the specific function used to weight the errors and find the best synaptic connection values. 6.2 When Should Response Noise Increase? According to the argument above, the most general way to state our results is this: assuming that neuronal activities are determined by weighted sums, any mechanism that is able to dampen the impact of response noise will automatically reduce the impact of multiplicative synaptic noise as well. Furthermore, we suggest that under some circumstances it is better to add more response noise and increase the dampening factor, than ignore the synaptic fluctuations altogether. There are two conditions for this scenario to make sense. (1) The network must be highly sensitive to changes in connectivity. This can be seen, for instance, in Fig. 3A, which shows that the highest benefit of response noise occurs when the number of neurons matches the number of conditions to be satisfied — it is at this point that the connections need to be most accurate. (2) The fluctuations in connectivity cannot be evaluated directly. That is, why not take into account the synaptic noise in exactly the same way as the response noise when the optimal connections are sought? For example, the average in Equation (3) could also include an average over networks (synaptic fluctuations), in which case the optimal mean weights would depend not only on $\sigma_{r}$ but also on $\sigma_{W}$. In the simulations this could certainly be done, and would lead to smaller errors. But we explicitly consider the possibility that either $\sigma_{W}$ is unknown a priori, or there is no separate biophysical mechanism for implementing the corresponding corrections to the synaptic connections. Condition number 2 is not unreasonable. Realistic networks with high synaptic plasticity must incorporate mechanisms to ensure that ongoing learning does not disrupt their previously acquired functionality. Thus, synaptic modifications rules need to achieve two goals: to establish new associations that are relevant for the current behavioral task, and to make adjustments to prevent interference from other, future associations. The latter may be particularly difficult to achieve if learning rates change unpredictably with time. 
It is not clear whether plausible (e.g., local) synaptic modification mechanisms could solve both problems simultaneously (see Hopfield and Brody, 2004), but the present results suggest an alternative: synaptic modification rules could be used exclusively to learn new associations based on current information, whereas response noise could be used to indirectly make the connectivity more robust to synaptic fluctuations. Although this mechanism evidently does not solve the problem of combining multiple learned associations, it might alleviate it. Its advantage is that, assuming that neural circuits have evolved to adaptively optimize their function in the face of true noise, simply increasing their response variability would generate synaptic connectivity patterns that are more resistant to fluctuations. 6.3 When is Synaptic Noise Multiplicative? The condition that noise should be multiplicative means that changes in synaptic weight should be proportional to the magnitude of the weight. Evidently, not all types of synaptic modification processes lead to fluctuations that can be statistically modeled as multiplicative noise; for instance, saturation may prevent positive increases, thus restricting the variability of strong synapses. However, synaptic changes that generally increase with initial strength should be reasonably well approximated by the multiplicative model. Random synapse elimination fits this model because, if a weak synapse disappears, the change is small, whereas if a strong synapse disappears, the change is large. Thus, the magnitude of the changes correlates with initial strength. Another procedure that corresponds to multiplicative synaptic noise is this. Suppose the size of the synaptic changes is fixed, so that weights can only vary by $\pm\delta w$, but suppose also that the probability of suffering a change increases with initial synaptic strength. In this case, all changes are equal, but on average a population of strong synapses would show higher variability than a population of weak ones. In simulations, the disruption caused by this type of synaptic corruption is indeed lessened by response noise (data not shown). 6.4 Final Remarks To summarize, the scenario we envision rests on five critical assumptions: (1) the activity of each neuron depends on synaptically-weighted sums of its (noisy) inputs, (2) network performance is highly sensitive to changes in synaptic connectivity, (3) synaptic changes unrelated to a function that has already been learned can be modeled as multiplicative noise, (4) synaptic modification mechanisms are able to take into account response noise, so synaptic strengths are adjusted to minimize its impact, but (5) synaptic modification mechanisms do not directly account for future learning. Under these conditions, our results suggest that increasing the variability of neuronal responses would, on average, result in more accurate performance. Although some of these assumptions may be rather restrictive, the diversity of synaptic plasticity mechanisms together with the high response variability observed in many areas of the brain make this constructive noise effect worth considering. Acknowledgments. Research was supported by NIH grant NS044894. References Andersen et al. (1985) Andersen, R. A., Essick, G. K., and Siegel, R. M. (1985). Encoding of spatial location by posterior parietal neurons. Science, 230:450–458. Ben-Yishai et al. (1995) Ben-Yishai, R., Bar-Or, R. L., and Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. 
PNAS, 92:3844–3848. Bishop (1995) Bishop, C. M. (1995). Training with noise is equivalent to tikhonov regularization. Neural Computation, 7:108–116. Brotchie et al. (1995) Brotchie, P. R., Andersen, R. A., Snyder, L. H., and Goodman, S. J. (1995). Head position signals used by parietal neurons to encode locations of visual stimuli. Nature, 375:232–235. Carpenter and Grossberg (1987) Carpenter, G. A. and Grossberg, S. (1987). Art2: Self-organization of stable category recognition codes for analog input patterns. Applied Optics, 26:4919–4930. Compte et al. (2000) Compte, A., Brunel, N., Goldman-Rakic, P., and Wang, X.-J. (2000). Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral Cortex, 10:910–23. Crist et al. (2001) Crist, R. E., Li, W., and D.Gilbert, C. (2001). Learning to see: experience and attention in primary visual cortex. Nature Neuroscience, 4(4):519–525. Dayan and Abbott (2001) Dayan, P. and Abbott, L. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. MIT Press. Dean (1981) Dean, A. (1981). The variability of discharge of simple cells in the cat striate cortex. Exp Brain Res, 44:437–440. Gammaitoni et al. (1998) Gammaitoni, L., Hänggi, P., Jung, P., and Marchesoni, F. (1998). Stochastic resonance. Rev. Mod. Phys., 70:223–287. Golub and van Loan (1996) Golub, G. H. and van Loan, C. F. (1996). Matrix Computations. The John Hopkins University Press, Baltimore, 3 edition. Hansel and Sompolinsky (1998) Hansel, D. and Sompolinsky, H. (1998). Modeling feature selectivity in local cortical circuits. In Koch, C. and Segev, I., editors, Methods in Neuronal Modeling: From Synapse to Networks., pages 499–567. MIT Press, Cambridge, MA. Haykin (1999) Haykin, S. (1999). Neural Networks. A Comprehensive Foundation. Upper Saddle River, NJ: Prentice Hall. Hertz et al. (1991) Hertz, J., Krogh, A., and Palmer, R. G. (1991). Introduction to the Theory of Neural Computation. Addison-Wesley, New York. Hinton (1989) Hinton, G. E. (1989). Connectionist learning procedures. Artificial Intelligence, 40:185–234. Holt et al. (1996) Holt, G. R., Softky, W. R., Koch, C., and Douglas, R. J. (1996). Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons. Journal Neurophysiology, 75:1806–1814. Hopfield and Brody (2004) Hopfield, J. J. and Brody, C. D. (2004). Learning rules and network repair in spike-timing-based computation networks. Proc Natl Acad Sci USA, 101:337–342. Kilgard and Merzenich (1998) Kilgard, M. P. and Merzenich, M. M. (1998). Plasticity of temporal information processing in the primary auditory cortex. Nature Neuroscience, 1:727–731. Laing and Chow (2001) Laing, C. R. and Chow, C. C. (2001). Stationary bumps in networks of spiking neurons. Neural Computation, 13(7):1473–1494. Levin and Miller (1996) Levin, J. E. and Miller, J. P. (1996). Broadband neural encoding in the cricket cercal sensory system enhanced by stochastic resonance. Nature, 380:165–168. McCloskey and Cohen (1989) McCloskey, M. and Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. The Psychology of Learning and Motivation, 24:109–165. Murray and Edwards (1994) Murray, A. F. and Edwards, P. J. (1994). Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. IEEE Transactions on Neural Networks, 5(5):792–802. Nozaki et al. (1999) Nozaki, D., Mar, D. J., Grigg, P., and Collins, J. J. (1999). 
Effects of colored noise on stochastic resonance in sensory neurons. Physical Review Letters, 82:2402––2405. Pouget and Sejnowski (1997) Pouget, A. and Sejnowski, T. J. (1997). Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience, 9:222–237. Press et al. (1992) Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P. (1992). Numerical Recipes in C. Cambridge University Press, New York. Renart et al. (2003) Renart, A., Song, P., and Wang, X. J. (2003). Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron, 38:473–485. Salinas (2003) Salinas, E. (2003). Background synaptic activity as a switch between dynamical states in a network. Neural Computation, 15(7):1439–1475. Salinas (2004) Salinas, E. (2004). Context-dependent selection of visuomotor maps. BMC Neuroscience, 5(1):47. Salinas and Abbott (1995) Salinas, E. and Abbott, L. F. (1995). Transfer of coded information from sensory to motor networks. Journal of Neuroscience, 15:6461–6474. Salinas and Sejnowski (2001) Salinas, E. and Sejnowski, T. J. (2001). Gain modulation in the central nervous system: where behavior, neurophysiology and computation meet. Neuroscientist, 2:539–550. Salinas and Thier (2000) Salinas, E. and Thier, P. (2000). Gain modulation: a major computational principle of the central nervous system. Neuron, 27:15–21. Seung et al. (2000) Seung, H. S., Lee, D. D., Reis, B. Y., and Tank, D. W. (2000). Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron, 26:259–271. Shadlen and Newsome (1994) Shadlen, M. N. and Newsome, W. T. (1994). Noise, neural codes and cortical organization. Curr. Opin. Neurobiol., 4:569–579. Softky and Koch (1992) Softky, W. P. and Koch, C. (1992). Cortical cells should fire regularly, but do not. Neural Computation, 4(5):643–646. Softky and Koch (1993) Softky, W. R. and Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random epsps. Journal of Neuroscience, 13:334–350. Stevens and Zador (1998) Stevens, C. F. and Zador, A. M. (1998). Input synchrony and the irregular firing of cortical neurons. Nature Neuroscience, 1:210–217. Turrigiano and Nelson (2000) Turrigiano, G. G. and Nelson, S. B. (2000). Hebb and homeostasis in neuronal plasticity. Curr Opin Neurobiol, 10:358–364. van Kampen (1992) van Kampen, N. G. (1992). Stochastic Processes in Physics and Chemistry. Elsevier, Amsterdam. Vilar and Rubi (2000) Vilar, J. M. G. and Rubi, J. M. (2000). Scaling of Noise and Constructive Aspects of Fluctuations. Lecture Notes in Physics, Berlin Springer Verlag, 557:121. Wang et al. (1995) Wang, X., Merzenich, M. M., Sameshima, K., and Jenkins, W. (1995). Remodelling of hand representation in adult cortex determined by timing of tactile stimulation. Nature, 378:71–75. Zhang (1996) Zhang, K. (1996). Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. Journal of Neuroscience, 16(6):2112–2126. Zipser and Andersen (1988) Zipser, D. and Andersen, R. A. (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331:679–684.
LPTENS-11/04 SYMMETRIES AND THE WEAK INTERACTIONS JOHN ILIOPOULOS Laboratoire de Physique Théorique de L’Ecole Normale Supérieure 75231 Paris Cedex 05, France Talk given at the Cabibbo Memorial Symposium Rome, November 12, 2010 It is a great honour for me to talk in a Symposium in memory of Nicola Cabibbo for whom I had great respect and friendship. My talk will be a talk about history, given by a non-historian. My purpose is to trace the origin of the concept which was immediately named by the high energy physics community the Cabibbo angle[1]. In doing so, I will occasionally talk about the evolution of other related subjects. Many of these ideas became part of our common heritage and shaped our understanding of the fundamental forces of Nature. There are many dangers lying in wait for the amateur who attempts to write on the history of science. One is to read the old scientific articles with the light of today’s knowledge, to assume, even sub-consciously, that whatever is clear now was also clear then. A second is more specific to recent history. Because we talk about a period we have witnessed, we tend to trust our memory, or that of our colleagues. But, as real historians know, and as I have discovered experimentally, human memory, including one’s own, is partial and selective, especially for events in which one has taken part, even marginally. Actors make poor historians, so one should rather try to put his personal recollections aside. I do not expect to succeed in producing a work a real historian would approve, but I hope that the material I have collected could provide the background notes he could, eventually, find useful. By “symmetries” in the weak interactions we mean (i) space-time symmetries, (ii) global internal symmetries and (iii) gauge symmetries. In all three fronts the effort to understand their significance has been one of the most exciting and most rewarding enterprises in modern physics. It gave rise to the development of novel ideas and concepts whose importance transcends the domain of weak interactions and encompasses all fundamental physics. Covering the entire field would be the subject of a book, so here I will only touch upon a few selected topics which are more directly related to Nicola’s work. I will not talk about the first part, the establishment of the $V-A$ nature of the weak current and I will not describe the more modern developments which led to the formulation of the Standard Model. I will mention some contributions in gauge theories partly because some of them are not generally known and partly because they touch upon the concept of universality which is a central theme in my talk. Although many versions of the history of gauge theories exist already in the recent literature[2], the message has not yet reached the textbooks students usually read. I quote a comment from the review by J.D. Jackson and L.B. Okun: “… it is amusing how little the authors of textbooks know about the history of physics.” Here I shall just mention some often forgotten contributions. The vector potential was introduced in classical electrodynamics during the first half of the nineteenth century, either implicitly or explicitly, by several authors independently. It appears in some manuscript notes by Carl Friedrich Gauss as early as 1835 and it was fully written by Gustav Kirchoff in 1857, following some earlier work by Franz Neumann and, especially, Wilhelm Weber of 1846. It was soon noticed that it carried redundant variables and several “gauge conditions” were used. 
The condition, which in modern notation is written as $\partial_{\mu}A^{\mu}=0$, was proposed by the Danish mathematical physicist Ludvig Valentin Lorenz in 1867. Incidentally, most physics books misspell Lorenz’s name as Lorentz, thus erroneously attributing the condition to the famous Dutch H.A. Lorentz, of the Lorentz transformations111In French: On ne prête qu’aux riches.. However, for internal symmetries, the concept of gauge invariance, as we know it to-day, belongs to Quantum Mechanics. It is the phase of the wave function, or that of the quantum fields, which is not an observable quantity and produces the internal symmetry transformations. The local version of these symmetries are the gauge theories of the Standard Model. The first person who realised that the invariance under local transformations of the phase of the wave function in the Schrödinger theory implies the introduction of an electromagnetic field was Vladimir Aleksandrovich Fock in 1926[3], just after Schrödinger wrote his equation. Naturally, one would expect non-Abelian gauge theories to be constructed following the same principle immediately after Heisenberg introduced the concept of isospin in 1932. But here history took a totally unexpected route. The development of the General Theory of Relativity offered a new paradigm for a gauge theory. The fact that it can be written as the theory invariant under local translations was certainly known to Hilbert[4]. For the next decades it became the starting point for all studies on theories invariant under local transformations. The attempt to unify gravitation and electromagnetism via a five dimensional theory of general relativity is well known under the names of Theodor Kaluza and Oscar Benjamin Klein[5]. What is less known is that the idea was introduced earlier by the Finnish Gunnar Nordström[6] who had constructed a scalar theory of gravitation. In 1914 he wrote a five-dimensional theory of electromagnetism and showed that, if one assumes that the fields are independent of the fifth coordinate, the assumption made later by Kaluza, the electromagnetic vector potential splits into a four dimensional one and a scalar field identified to his scalar graviton. An important contribution from this period is due to Hermann Klaus Hugo Weyl[7]. He is more known for his 1918 unsuccessful attempt to enlarge diffeomorphisms to local scale transformations, but, in fact, a byproduct of this work was a different form of unification between electromagnetism and gravitation. In his 1929 paper, which contains the gauge theory for the Dirac electron, he introduced many concepts which have become classic, such as the Weyl two-component spinors and the vierbein and spin-connection formalism. Although the theory is no more scale invariant, he still used the term gauge invariance, a term which has survived ever since. In particle physics we put the birth of non-Abelian gauge theories in 1954, with the fundamental paper of Chen Ning Yang and Robert Laurence Mills[8]. It is the paper which introduced the $SU(2)$ gauge theory and, although it took some years before interesting physical theories could be built, it is since that date that non-Abelian gauge theories became part of high energy physics. It is not surprising that they were immediately named Yang-Mills theories. The influence of this work in High Energy Physics has often been emphasised, but here I want to mention some earlier and little known attempts which, according to present views, have followed a quite strange route. 
The first is due to Oscar Klein. In an obscure conference in 1938 he presented a paper with the title: On the theory of charged fields [9] in which he attempts to construct an $SU(2)$ gauge theory for the nuclear forces. This paper is amazing in many ways. First, of course, because it was done in 1938. He starts from the discovery of the muon, misinterpreted as the Yukawa meson, in the old Yukawa theory in which the mesons were assumed to be vector particles. This provides the physical motivation. The aim is to write an $SU(2)$ gauge theory unifying electromagnetism and nuclear forces. Second, and even more amazing, because he follows an incredibly circuitous road: He considers General Relativity in a five dimensional space, he compactifies à la Kaluza-Klein222He refers to his 1928 paper but he does not refer to Kaluza’s 1921 paper. Kaluza is never mentioned. In the course of this work I discovered the great interest for the historian of the way people cite their own as well as other people’s work., but he takes the $g_{4\mu}$ components of the metric tensor to be 2x2 matrices. He wants to describe the $SU(2)$ gauge fields but the matrices he is using, although they depend on three fields, are not traceless. In spite of this problem he finds the correct expression for the field strength tensor of $SU(2)$. In fact, answering an objection by Møller, he added a fourth vector field, thus promoting his theory to $U(1)\times SU(2)$. He added mass terms by hand and it is not clear whether he worried about the resulting breaking of gauge invariance. I cannot find out whether this paper has inspired anybody else’s work because the proceedings of this conference are not included in the citation index. As far as I know, Klein himself did not follow up on this idea333He mentioned this work in a 1956 Conference in Berne[10]. The second work in the same spirit is due to Wolfgang Pauli[11] who in 1953, in a letter to Abraham Pais, as well as in a series of seminars, developed precisely this approach: the construction of the $SU(2)$ gauge theory as the flat space limit of a compactified higher dimensional theory of General Relativity. He was closer to the approach followed to-day because he considered a six dimensional theory with the compact space forming an $S_{2}$. He never published this work and I do not know whether he was aware of Klein’s 1938 paper. He had realised that a mass term for the gauge bosons breaks the invariance[11] and he had an animated argument during a seminar by Yang in the Institute for Advanced Studies in Princeton in 1954[12]. What I find surprising is that Klein and Pauli, fifteen years apart one from the other, decided to construct the $SU(2)$ gauge theory for strong interactions and both choose to follow this totally counter-intuitive method. It seems that the fascination which General Relativity had exerted on this generation of physicists was such that, for many years, local transformations could not be conceived independently of general coordinate transformations. Yang and Mills were the first to understand that the gauge theory of an internal symmetry takes place in a fixed background space which can be chosen to be flat, in which case General Relativity plays no role. With the work of Yang and Mills gauge theories entered particle physics. Although the initial motivation was a theory of the strong interactions, the first semi-realistic models aimed at describing the weak and electromagnetic interactions. 
This story, which led a few years later to the Standard Model, has been told several times over the last years[13], so I shall not follow it up here. I shall only mention a paper by Sheldon Lee Glashow and Murray Gell-Mann[14] of 1961 which is often left out from the history articles. This paper has two parts: The first extends the Yang-Mills construction, which was originally done for $SU(2)$, to arbitrary Lie algebras. The well-known result of associating a coupling constant to every simple factor in the algebra appeared for the first time in this paper. Even the seed for a grand unified theory was there. In a footnote they say: “The remarkable universality of the electric charge would be better understood were the photon not merely a singlet, but a member of a family of vector mesons comprising a simple partially gauge invariant theory.” In the second part the authors attempt to apply these ideas to the strong and weak interactions with interesting implications to the notion of universality to which I shall come shortly. By the late fifties the $V-A$ theory was firmly established. The weak current could be written as a sum of a hadronic and a leptonic part. $${\cal L}_{W}=\frac{G_{F}}{\sqrt{2}}J^{\mu}(x)J^{\dagger}_{\mu}(x)~{}~{}~{};~{}% ~{}~{}J^{\mu}(x)=h^{\mu}(x)+{\ell}^{\mu}(x)$$ (1) where $G_{F}$ denotes the Fermi coupling constant and $h^{\mu}(x)$ and ${\ell}^{\mu}(x)$ the hadronic and leptonic parts of the weak current. It was easy to guess the form of the leptonic part in terms of the field operators of known leptons. By analogy to the electromagnetic current, we could write: $${\ell}^{\mu}(x)=\bar{e}(x)\gamma^{\mu}(1+\gamma^{5})\nu_{(e)}(x)+...$$ (2) where we have used the symbols $e$ and $\nu$ to denote the Dirac spinors of the corresponding particles and the dots stand for the term involving the muon444The separate identity of the electron and muon neutrinos was not yet established, but it was assumed by some physicists, including Julian Schwinger and Sheldon Glashow.. There was no corresponding simple form for the hadronic part, since such a form would depend on the knowledge of the dynamics of the strong interactions, in particular the notion of “elementarity” of the various hadrons. Looking back at the Fermi Lagrangian of equation (1), we see that, since it is a non-renormalisable theory, it can be taken, at best, as an effective theory, in other words only the lowest order terms can be considered. For the leptonic processes this is easy to understand, but for the processes involving $h^{\mu}(x)$ it implies that we should take the matrix elements between eigenstates of the entire strong interaction Hamiltonian. It follows that the statement find the form of $h^{\mu}(x)$ is, in fact, equivalent to the one identify it with a symmetry current of the strong interactions. As we shall see, this simple fact, which I heard Nicola explaining very clearly in a School in Gif-sur-Yvette[15], was not understood in the early days. An important step was the hypothesis of the Conserved Vector Current (C.V.C.)[16]. It allowed to identify the strangeness conserving part of the vector current with the charged components of the isospin current. Furthermore, it explained the near equality of the coupling constant measured in muon decay with that of the vector part of nuclear $\beta$-decay. The non-renormalisation of the latter by the strong interactions, represented by the pion-nucleon interactions, was correctly attributed to the conservation of the current. 
It was the first concrete realisation of the concept of universality, which, at this stage, was taken to mean “equal couplings for all processes”. However, the connection with the algebraic properties of the currents came a bit later. With the introduction of strange particles the picture became more complicated. Several schemes were proposed to extend isospin to a symmetry including the strange particles, which I will not review here. I will concentrate on the evolution of the ideas referring to the weak interactions. The first contribution I want to mention comes from the young CERN Theory Group which was established in Geneva in 1954 and it is the work of Bernard d’Espagnat and Jacques Prentki. They had already worked in various higher symmetry schemes and in 1958 they addressed the question of the weak interactions[17]. The title of the paper is A tentative general scheme for weak interactions and, for the first time, a comprehensive picture of the whole hierarchy of symmetries for all interactions is clearly presented. They were working under the assumption of $O(4)$ being the higher symmetry group. Four levels were considered: (i) The very strong interactions are invariant under $O(4)$. (ii) The medium strong interactions break $O(4)$ and leave one $SU(2)$ (isospin) and the third component of the other (strangeness), invariant. (iii) The electromagnetic interactions conserve only the third components of the two $SU(2)$’s (electric charge and strangeness). (iv) Finally, the weak interactions which, like the medium strong ones, conserve an $SU(2)$, but which is a different subgroup of $O(4)$. Thus, strangeness violation is presented as the result of a mismatch between the medium strong and the weak interactions. Let me add here that the idea of introducing medium strong interactions was already known, but in its early versions it was supposed to describe the interactions of $K$ mesons as opposed to those of pions which were the very strong ones. The correct scheme, as we know it to-day, appeared for the first time in d’Espagnat and Prentki’s paper. In reading this very lucid and beautiful paper one may not understand why these authors failed to discover the Cabibbo theory immediately after the introduction of $SU(3)$. This is an example of the first danger for the historian I mentioned in the introduction, namely reading old papers with to-day’s knowledge. As we shall see shortly, matters were not that simple. I skip a couple of other contributions which should be included in a complete history article and I come to a very important paper by M. Gell-Mann and Maurice Lévy[18] with the title The Axial Vector Current in Beta Decay. It presented the well-known $\sigma$-model, both the linear and non-linear versions, which became the paradigm for chiral symmetry. In the Introduction section they note that the radiative corrections to the $\mu$-decay amplitude which had just been computed gave a slight discrepancy with C.V.C., namely $G_{V}/G_{\mu}=0.97\pm 0.01$. In the published version at this point there is a Note added in proof. I copy: “Should this discrepancy be real ($g_{V}\neq 1$) it would probably indicate a total or partial failure of the conserved vector current idea. It might also mean, however, that the current is conserved but with $g_{V}<1$. 
Such a situation is consistent with universality if we consider the vector current for $\Delta S=0$ and $\Delta S=1$ together to be something like: $$GV_{\alpha}+GV_{\alpha}^{(\Delta S=1)}=G_{\mu}\bar{p}\gamma_{\alpha}(n+% \epsilon\Lambda)(1+\epsilon^{2})^{-1/2}+...$$ and likewise for the axial vector current. If $(1+\epsilon^{2})^{-1/2}$ =0.97, then $\epsilon^{2}$=.06, which is of the right order of magnitude for explaining the low rate of $\beta$ decay of the $\Lambda$ particle. There is, of course, a renormalization factor for that decay, so we cannot be sure that the low rate really fits in with such a picture.” (my italics). We see that the idea of considering a linear combination of the strangeness conserving and strangeness changing currents is there with the correct order of magnitude for the coefficients, but this is presented as a coincidence which was expected to be spoiled by uncontrollable renormalisation effects. Let me notice here that the paper was meant to be a model for the $\Delta S$=0 axial vector current and the properties of the $\Delta S$=1 vector current were just a side remark in a footnote. Continuing in chronological order, I come to the 1961 paper by Glashow and Gell-Mann[14]. After looking at gauge theories for higher groups I mentioned above, the authors try to apply the non-Abelian gauge theories to particle physics. They study both strong interactions, for which they attempt to identify the gauge bosons with the vector resonances which had just been discovered, as well as weak interactions. The currents were written in the Sakata model[19], although no reference to Sakata is given. Notice also that Gell-Mann had just written the paper on The eightfold way[20], but here they do not want to commit themselves on $SU(3)$ as the symmetry group of strong interactions, so they do not exploit the property of the currents to belong to an octet. The paper is remarkable in many aspects, besides the ones I mentioned already in extending Yang-Mills to higher groups. For the weak interactions it considers the Glashow $SU(2)\times U(1)$ model[21] and it correctly identifies the problems related to the absence of strangeness changing neutral currents and the small value of the $K^{0}_{1}-K^{0}_{2}$ mass difference. The question of universality is addressed in a footnote (remember that they were working in the Sakata model): “Observe that the sum of the squares of the coupling strengths to strangeness-saving charged currents and to strangeness-changing charged currents is just the square of the universal coupling strength. Should the gauge principle be extended to leptons - at least for the charged currents - the equality between $G_{V}$ and $G_{\mu}$ is no longer the proper statement of universality, for in this theory $G_{V}^{2}+G_{\Lambda}^{2}=G_{\mu}^{2}$ ($G_{\Lambda}$ is the unrenormalized (their italics) coupling strength for $\beta$-decay of $\Lambda$)”. I do not know why this paper has not received the attention it deserves, but this is partly due to the authors themselves, especially Gell-Mann, who rarely referred to it555There is a reference in the G.I.M. paper[22] where the related problems were solved.. As I said above, it was the time $SU(3)$ was introduced[20], so, it was natural to apply it to weak interactions. I want first to present an attempt by d’Espagnat and Prentki[23] who followed the lines of their 1958 paper. They reconsider their former $O(4)$ theory and tried to adapt it to $SU(3)$. 
In the Introduction they make their assumptions explicit and write: “Before we begin, it is proper to insist on how uncertain and speculative such attempts necessarily are: a) it is not at all proved that the strong interactions really have anything to do with $SU(3)$, b) even if they have, the statement that the same is true, in some way, for weak interactions is just a guess….” (my italics). Of course, to-day such a statement sounds strange, but we must remember we are in 1962. The simple fact we explained above, namely that in the effective Fermi theory the hadronic weak current is an operator acting in the space of hadrons, i.e. in the space of eigenstates of the strong interaction Hamiltonian, was not fully understood. d’Espagnat and Prentki had not realised that their two assumptions were not independent. In spite of that and taking into account their beautiful paper of 1958, I would expect them to go ahead and assign $SU(3)$ transformation properties to the weak current. In fact they start this way and in section 3 of their paper we can find the correct form of $h_{\mu}(x)$ as a superposition of a $\Delta S$=0 and a $\Delta S$=1 part with an angle they call $\alpha$. Then they proceed to show that in a current x current theory the two empirical selection rules $|\Delta S|\leq 1$ and $|\Delta I|<3/2$ for non-leptonic processes are related. And they stop there! They do not look at all at the semi-leptonic processes. In their paper leptons are mentioned only at the last paragraph. It seems that they were misled by an erroneous experiment claiming evidence for $\Delta S=-\Delta Q$ decays. Indeed, there was a single event $\Sigma^{+}\rightarrow\mu^{+}+n+\nu$ reported in an emulsion experiment[24] which dates from that period, but I do not know whether it is the right one, since they do not mention it anywhere in the paper. The following year a short paper appeared as a CERN preprint[1]. The author was a young visitor from Italy, Nicola Cabibbo. He took over the idea of a current which forms an angle with respect to medium strong interactions, but he carried it to its logical conclusion. This form allows for the only consistent definition of universality. Using modern quark language his remark can be translated into the statement that, with one quark of charge 2/3 and two quarks of charge -1/3, one could always construct one hadronic current which is coupled to leptons and one which is not. He naturally defined universality by the assumption that the coupled current involves the same coupling constant as the purely leptonic processes. As Glashow and Gell-Mann, he assigned the current to an octet of $SU(3)$, but he was in the eightfold way scheme and not in the Sakata model. This allowed him to compare strangeness conserving and strangeness changing semi-leptonic decays and show that the scheme agreed with experiment, thus putting the final stone into the $SU(3)$ edifice[25]. The name which was attached to this paper was The Cabibbo angle, but in fact, the most important point was the proof that the hadronic weak current has the right transformation properties under $SU(3)$. I do not know whether he was unaware of the wrong experimental result or whether he showed the good physical judgement to ignore it, but he clearly understood all the underlying physics. 
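To make the construction just described explicit (this is a standard modern rendering, not a quotation from Cabibbo’s paper [1]): the hadronic current is a unit-normalized combination of the strangeness-conserving and strangeness-changing octet currents, $$h^{\mu}(x)=\cos\theta\;j^{\mu}_{\Delta S=0}(x)+\sin\theta\;j^{\mu}_{\Delta S=1}(x),$$ and universality is the assumption that this combination enters $J^{\mu}$ of Equation (1) on the same footing as the leptonic current, so that $G_{V}=G_{\mu}\cos\theta$, $G_{\Lambda}=G_{\mu}\sin\theta$ and $G_{V}^{2}+G_{\Lambda}^{2}=G_{\mu}^{2}$, which is precisely the statement quoted above from Glashow and Gell-Mann; in the Gell-Mann and Lévy parametrization it corresponds to $\tan\theta=\epsilon$.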
Since it appeared as a CERN preprint, I included it in my 1996 article on the Physics in the CERN Theory Division[26], where I wrote: “There are very few articles in the scientific literature in which one does not feel the need to change a single word and Cabibbo’s is definitely one of them. With this work he established himself as one of the leading theorists in the domain of weak interactions.” The total number of citations is not a reliable criterion for the importance of a scientific article, but, not surprisingly, Cabibbo’s paper is among the most cited ones in high energy physics. Concerning citations, I found it interesting to see how each article was cited from the protagonists in this field. Just two examples: Gell-Mann, in his 1964 article on current algebras[27] refers to himself and Cabibbo. At the same year d’Espagnat gave a very beautiful set of lectures which were published as a CERN report[28]. The title was $SU(3)$ et Interactions Faibles. He cites Feynman and Gell-Mann for CVC and Cabibbo666It is remarkable that he does not cite any of his own papers.. Cabibbo’s scientific work spans five decades, until the very last days of his illness. His name will remain in the Physics text books and he will continue to inspire the young physicists. But those of us who had the good fortune to know him will miss his sound judgement, his enthusiasm for physics, but also his gentle and friendly manners. He would never get angry and shout, only his polite smile would occasionally show disapproval. Cabibbo will be always with us, but we shall miss Nicola. References [1] N. Cabibbo, Phys. Rev. Lett. 10, 531 (1963) [2] O. Darrigol, Electrodynamics from Ampère to Einstein, Oxford University Press, 2000 ; L. O’Raifeartaigh, The Dawning of gauge theory, Princeton University Press, 1997 ; L. O’Raifeartaigh and N. Straumann, Rev. Mod. Phys. 72, 1 (2000) ; J.D. Jackson and L.B. Okun, Rev. Mod. Phys. 73, 663 (2001) [3] V. Fock, Z. Phys 39, 226 (1926) [4] D. Hilbert, Gött. Nachr. (1915) p. 395 [5] Th. Kaluza, K. Preuss. Akad. Wiss. (1921), p. 966 ; O. Klein, Z. Phys. 37, 895 (1926) [6] G. Nordström, Phys. Z. 15, 504 (1914) [7] H. Weyl, Deutsch Akad. Wiss. Berlin (1918), p. 465 ; Z. Phys. 56, 330 (1929) [8] C.N. Yang and R.L. Mills, Phys. Rev. 96, 191 (1954) ; It seems that similar results were also obtained by R. Shaw in his thesis at Imperial College. [9] O. Klein, in Les Nouvelles Théories de la Physique, Paris 1939, p. 81. Report in a Conference organised by the “Institut International de Coopération Intellectuelle”, Warsaw 1938 [10] O. Klein, Helv. Phys. Acta Suppl. IV, 58 (1956) [11] W. Pauli, Unpublished. It is summarised in a letter to A. Pais dated July 22-25 1953. A. Pais, Inward Bound, Oxford Univ. Press, 1986, p. 584 [12] C.N. Yang, Selected Papers 1945-1980 with Commentary Published by Freeman, San Francisco, p. 525 [13] See, for example, The Rise of the Standard Model, Ed. by L. Hoddeson et al, Cambridge University Press, 1997 [14] S.L. Glashow and M. Gell-Mann, Ann. of Phys. 15, 437 (1961) ; see also: R. Utiyama, Phys. Rev. 101, 1597 (1956) [15] N. Cabibbo, Les Courants Faibles, École d’été de Physique des Particules, Gif-sur-Yvette, 1974 ; N. Cabibbo and M. Veltman, CERN Report 65-30 (1965) [16] S.S. Gershtein and Ya.B. Zeldovich, JETP 29, 698 (1955), Translation: Soviet Physics JETP 2, 576 (1958) ; R.P. Feynman and M. Gell-Mann, Phys. Rev. 109, 193 (1958) [17] B. d’Espagnat and J. Prentki, Nucl. Phys. 6, 596 (1958) [18] M. Gell-Mann and M. Lévy, Nuov. Cim. 16, 705 (1960) [19] S. 
Sakata, Prog. Theor. Phys. 16, 686 (1956) [20] M. Gell-Mann, The Eightfold Way: A Theory of Strong Interaction Symmetry, Unpublished ; Y. Ne’eman, Nucl. Phys. 26, 222 (1961). See the reprint volume The Eightfold Way, W.A. Benjamin, 1964 [21] S.L. Glashow, Nucl. Phys. 22, 579 (1961) [22] S.L. Glashow, J. Iliopoulos and L. Maiani, Phys. Rev. D2, 1285 (1970) [23] B. d’Espagnat and J. Prentki, Nuov. Cim. 24, 497 (1962) [24] A. Barbaro-Galtieri et al., Phys. Rev. Lett. 9, 26 (1962) [25] M. Veltman, Report in this Conference [26] J. Iliopoulos, in History of CERN, Elsevier 1996, Vol 3, p. 277 [27] M. Gell-Mann, Phys. 1, 63 (1964) [28] B. d’Espagnat, CERN Report 64-42, 24 Sept. 1964
Integrating Runtime Values with Source Code to Facilitate Program Comprehension (This work was supported by project KEGA 047TUKE-4/2016 Integrating software processes into the teaching of programming.) Matúš Sulír Department of Computers and Informatics Faculty of Electrical Engineering and Informatics Technical University of Košice Košice, Slovakia matus.sulir@tuke.sk Abstract The inherently abstract nature of source code makes programs difficult to understand. In our research, we designed three techniques utilizing concrete values of variables and other expressions during program execution. RuntimeSearch is a debugger extension searching for a given string in all expressions at runtime. DynamiDoc generates documentation sentences containing examples of arguments, return values and state changes. RuntimeSamp augments source code lines in the IDE (integrated development environment) with sample variable values. In this post-doctoral article, we briefly describe these three approaches and related motivational studies, surveys and evaluations. We also reflect on the PhD study, providing advice for current students. Finally, short-term and long-term future work is described. integrated development environment, documentation, debugging, dynamic analysis, variables © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This is the accepted version of: M. Sulír. Integrating Runtime Values with Source Code to Facilitate Program Comprehension. 2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), IEEE, 2018, pp. 743–748. http://doi.org/10.1109/ICSME.2018.00093 I Introduction In this article, we would like to summarize some of the main results of the thesis [1]. We also describe the lessons learned and directions for future research. I-A Background Maintenance of existing software systems requires the developers to understand the programs of interest. This is accomplished by gradually building a mental model of selected parts of the program [2]. One way to build such a mental model is to read the source code lines in the editor. However, the source code provides only a static and abstract view of the program, separated from its runtime properties. To connect these two separate worlds, there exists a large variety of methods, approaches and tools. In our research, we are particularly interested in three types of activities related to program comprehension. First, there is a need to find the relevant pieces of code. This process is known as concept location (or feature location [3]). Second, to gain an overview of the behavior of the individual methods in the code, developers often read API (Application Programming Interface) documentation [4]. Third, to understand the details of a particular method, the developers can read the source code of the method definition. To alleviate this, many tools try to visually augment the source code directly in the editor to provide additional information in-place [5]. Dynamic analysis, i.e., the analysis of a running program, is a well-known approach to facilitate software comprehension and maintenance (see, e.g., [6], [7], [8]). 
However, the program execution is usually captured at a high level. The execution is often perceived only as a sequence of method calls, object creations or line executions. For example, none of the feature location approaches described in the articles surveyed by Dit et al. [3] analyzed concrete values of local or member variables during executions. I-B Synopsis The main goal of our research is to ease program understanding by integrating runtime information with the source code. Particularly, we are focused on concrete values of individual variables and expressions (such as local variables, arguments, return values or member variables). We designed three techniques aiming to help the developers to perform the three aforementioned activities – searching, documentation reading, and source code reading: • RuntimeSearch, a debugger extension which allows for searching a given text in all string expressions in a running program [9], • DynamiDoc, an automated documentation generator producing sentences with examples of arguments, return values and object state changes collected during executions [10], • RuntimeSamp – an IDE (integrated development environment) plugin showing a sample value for each variable at the end of each line in the source code editor [11]. Along with the design of these three tools, we performed supporting empirical studies and conducted related surveys. In the following chapters, we will briefly describe each of the approaches and related findings. In Fig. 1, there is an example of how the three designed techniques might be used together. However, note that each tool is also useful on its own. II Searching An important task during software maintenance is to find where the given functionality is implemented. Especially if the software is large, finding an initial investigation point in the codebase is difficult. Although there exists a large number of feature location methods, they are rarely used in practice – industrial developers prefer traditional approaches such as a textual search in the source code [12, 13]. II-A Empirical Study The search queries of developers often contain terms obtained by an observation of a running program. For instance, a developer can try to search for a label displayed in the graphical user interface (GUI) of a running application [14]. A programmer also tends to ask what part of the code generated the displayed error message [15]. A naive strategy is to statically search the displayed string in the code as-is. In our small-scale study, we aimed to find to what extent this strategy is sufficient [16]. Four desktop Java applications were scraped to produce a list of strings and words displayed in their GUIs, such as menu items or button labels. We found that about 11% of strings displayed in the GUIs of running programs were not found in the source code at all, making this strategy ineffective in these cases. More than 24% of them had more than 100 occurrences, which can be considered too much to be practical for the inspection of all results. II-B RuntimeSearch Given this motivation, we designed RuntimeSearch – a variation of a traditional text search, but for a running program instead of the static source code [9]. The target application is executed in the debug mode. At any time, the programmer can enter a string into a text field provided by RuntimeSearch. It is subsequently searched in all string-typed expressions being evaluated, such as all string variables and method return values. 
When a match is found, the program is paused and the traditional IDE debugger is opened, offering all standard debugging possibilities, including the inspection of current variable values, stepping and resuming the program. If the current location is irrelevant, we can continue by finding the next occurrences. In contrast to conditional breakpoints, RuntimeSearch searches in all expressions in the program (or the selected packages/classes), not only at the selected lines. On the other hand, its capabilities are currently limited. In particular, it supports only simple string matching. More search options, such as regular expressions, are planned for the future. II-C Evaluation First, in a case study on a 350 kLOC (thousands of lines of code) program, we found RuntimeSearch can be useful [9]: • to find an initial point of investigation (e.g., search for a text displayed in the GUI), • to search for multiple occurrences of the same string across multiple layers, such as from the GUI through helper methods to file-related routines, • to search for non-GUI strings, e.g., texts located in files, • to confirm programmer’s hypotheses (for instance, trying to find the string “https://” if the HTTPS connection is used). The second point can also be achieved by using a technique we called the “fabricated text technique”: to enter a dummy text into a part of the program accepting textual input (e.g., a text field) and observe the data flow through multiple layers by finding its occurrences using RuntimeSearch. Next, to validate our approach, we performed a (not yet published) controlled experiment with 40 human participants [1]. One group used RuntimeSearch to perform simple search-focused program maintenance tasks, while the other group could use only standard IDE features. The results of the experiment are in Fig. 2. The treatment group achieved a 60% higher median efficiency in terms of tasks per hour. The difference was statistically significant. The participants of the experiment were master’s students. We received positive feedback from them; multiple students asked whether the tool is publicly available (like the other tools mentioned in this paper, RuntimeSearch is available online: http://sulir.github.io/runtimesearch). We consider RuntimeSearch a tool which can soon be ready for an industrial transfer; we plan to publish it on the JetBrains Plugin Repository (http://plugins.jetbrains.com). However, first we should make the plugin more production-ready: clean the code, reduce the manual steps required to set up the plugin, write the documentation, etc. III Documentation While API documentation is a useful resource for programmers, writing it and keeping it consistent with the source code requires considerable effort. Therefore, many automated approaches to documentation generation have been devised. However, they traditionally process only the static source code or artifacts like mailing lists [17]. Although there exist documentation generators utilizing runtime information, they are specialized – e.g., FailureDoc [18] for failing unit tests or SpyREST [19] for RESTful (Representational State Transfer) APIs. III-A DynamiDoc We designed DynamiDoc, an example-based documentation generator utilizing runtime information collected during unit test executions or debugging [10]. For each method (function), it collects: • string representations of arguments and return values, • the string representations of the target object (this) before and after calling the given method, • and thrown exception types.
The representations of objects are obtained using the standard toString() method in Java, which has a counterpart in almost all languages. Then, using a decision table with sentence templates, DynamiDoc generates documentation sentences containing examples of these values. For instance, an excerpt from the documentation of the method Range.lowerBoundType() from Google Guava (https://github.com/google/guava) may look like this: When called on (5..8), the method returned OPEN. When called on [5..8), the method returned CLOSED. III-B Evaluation Using a qualitative evaluation [10], we found that DynamiDoc is particularly useful for the documentation of utility methods and data structures. On the other hand, methods that manipulate classes without a meaningfully overridden toString() method, as well as methods interacting with the external world, are not the best candidates for DynamiDoc documentation. We also performed a preliminary quantitative evaluation [20]. We found that, on average, one documentation sentence has 10% of the length of the method it describes, so it is sufficiently succinct. By manually inspecting a sample of documentation sentences, we found that 88% of the described objects have the toString() method overridden. Therefore, the basic prerequisites for the usefulness of this approach are fulfilled. IV Augmentation Since understanding a program only by reading its source code is difficult, many tools augment it with various metadata – from manually written notes through performance data to information about related emails. IV-A Surveys In our article [21], we described a taxonomy of source code labeling. The taxonomy consists of four dimensions: source (where the metadata come from, such as static or dynamic analysis), target (granularity – whole method, line, etc.), presentation (in the editor or a separate tool) and persistence. Then we performed a systematic mapping study [5], summarizing existing tools which visually augment the textual source code editor with various icons, graphics and textual labels. We found more than 20 tools augmenting the code with runtime information, but very few of them aim to display examples of concrete variable values. IDE sparklines [22] are limited to numeric variables, Debugger Canvas [23] requires the developer to manually select individual states during debugging, and the prototype by Krämer et al. [24] suffers from scalability issues. Tralfamadore [25, 26] displays only arguments and return values. IV-B RuntimeSamp Our IDE extension RuntimeSamp [11] collects a few sample values of each variable during normal executions of a program by a developer, such as testing or debugging. Then, at the end of each line, one sample value is shown for each variable read or written on the given line. A demonstration, showing an excerpt from the Apache Commons Lang (https://commons.apache.org/lang/) library, can be seen in Fig. 3. The idea behind the tool is that concrete values should help the developers to get a “feeling” of runtime and concreteness in the inherently abstract and static source code. Compared to DynamiDoc, RuntimeSamp provides more fine-grained data – it displays information for individual lines and variables instead of whole methods. Furthermore, it is an interactive IDE extension, while DynamiDoc generates static textual documentation. In our article [11], we asked seven questions which should be answered for RuntimeSamp to be useful in practice: • How to represent complicated objects succinctly?
• When should we capture the variable values (e.g., is one value per line sufficient)? • If one line is executed more than once, how to decide which iteration to display? • How to detect and present such iterations? • How to keep the time overhead reasonable during the data collection? • Is it necessary to filter the displayed variables? • When to invalidate the data? For now, we have answered these questions mainly in naive ways. To display the values of objects, we use their standard string representations (toString). We capture the values at the end of each line. Since we consider the caret (text cursor) to be an implicit pointer to the programmer’s focus point, the first iteration which covers the line at the cursor is always displayed. An iteration is defined as a forward execution (without backward jumps) within one method. When collecting one sample value for each variable, the time overhead is about 78–213%, which is not prohibitive, but certainly requires improvement. The measurement was performed using the DaCapo benchmark [27]. We filter the displayed data using a simple rule to prevent redundancy, and we invalidate all data on any edit (which is only a preliminary solution). V Lessons Learned In this section, we would like to describe reflections on the PhD study and advice for other students. V-A Seek Collaboration Some of the most valuable publications (e.g., [28]) during the PhD study were written in collaboration with other members of our research group. A larger team can complete more time-consuming tasks; this is particularly true if the tasks can be easily divided into sub-tasks, such as certain kinds of controlled experiments or systematic reviews. Since international collaboration is not an integral part of the research process at our institution, and we did not actively seek such a collaboration, none of the papers included in the dissertation was co-authored by people outside our research group. Therefore, cooperation with other institutions is planned in the near future. A good piece of advice for students is to actively search for opportunities to collaborate with people with similar research ideas during their studies, e.g., at conferences. V-B Focus on Your Topic Although collaboration is useful, it can also be considered a double-edged sword. Since the people you collaborate with may have slightly different research interests than you, the cooperation with them can act as a distraction from the main goals of your thesis. This may make completing your dissertation challenging: you will be left with the option of either making your dissertation topic too broad or excluding a large number of valuable papers from the thesis. Of course, collaboration is not the sole cause of distraction from the thesis topic. During the initial period of the PhD study, we had multiple potential ideas for the dissertation topic and we even tried to pursue some of them although they had little in common. While this resulted in some interesting research results (e.g., about build system failures [29]), it also delayed the progress on the main topic. VI Future Work Finally, we will present our short-term future research tasks and long-term visions. VI-A Short-Term Goals Currently, we are working on the first question mentioned in Section IV-B: How to represent an object, consisting of many properties, on a limited space? Before considering graphical representations, let us focus on the textual ones.
The solution used in RuntimeSamp (and also in DynamiDoc) is to convert the object to a string using a standard “toString”-like method, available in many languages, including Java. However, this representation must be written manually by the programmers, which is one reason why it is sometimes left with its default (useless) implementation. Using machine learning, we are trying to automatically generate string representations of objects, listing only the subsets of their member variables that programmers consider important. Another short-term goal is to evaluate DynamiDoc and RuntimeSamp using experiments with human participants. VI-B Long-Term Goals The first long-term goal is to extend the object representation question to graphical representations. We can recognize two extremes: On one end, there are generic tree-based and graph-based visualizations (such as DDD [30]) displaying all properties of the objects, suitable for any kind of data, but revealing little domain-specific information. On the other end, approaches such as the Moldable Inspector allow the developers to craft graphical representations perfectly suited to a particular domain, but they require manual coding effort [31]. Finding the right compromise between these two extremes is the challenge we would like to address next. This can be even more complicated if we consider not only one state, but also the difference between two or more states. Our main long-term goal is to blend the activities of source code reading/editing and the observation of the runtime properties of the application, so that the line between them becomes almost indistinguishable. One of the research areas aiming to blur this boundary is the area of live programming systems. A large amount of work has been done in this field – from the design of live programming languages [32, 33] and their visual augmentation [34] to experiments [24] and integration with unit testing [35], just to name a few advances. Although live-coding ideas are innovative and exciting, a majority of the approaches look at live programming from the “clean slate” perspective: They do not try to integrate live features into existing mainstream programming languages and IDEs. Even when they do, the ideas are often presented on “toy examples”, with their applicability to large industrial systems being questionable. Note that in reality, it is impractical to throw away existing systems, libraries and the knowledge of programmers and begin from scratch. Therefore, our vision is to gradually improve the experience of developers regarding the connection of source code and runtime in the existing languages and IDEs, without disrupting their current workflow. We consider RuntimeSamp to be the first step toward this ambitious goal. After improving the object representation, we would like to focus on the data invalidation problem. Instead of deleting all data dependent on the changed parts, we would like to recompute them whenever possible. To prevent cognitive overload, showing only task-relevant runtime information will be necessary. Finally, sufficient performance improvements could make the approach suitable for industrial use. References [1] M. Sulír, “Integrating runtime metadata with source code to facilitate program comprehension,” Ph.D. dissertation, Technical University of Košice, 2018. [Online]. Available: http://sulir.github.io/other/Thesis.pdf [2] A. von Mayrhauser and A. Vans, “Program comprehension during software maintenance and evolution,” Computer, vol. 28, no. 8, pp. 44–55, Aug. 1995. [3] B. Dit, M.
Revelle, M. Gethers, and D. Poshyvanyk, “Feature location in source code: a taxonomy and survey,” Journal of Software: Evolution and Process, vol. 25, no. 1, pp. 53–95, 2013. [4] E. Duala-Ekoko and M. P. Robillard, “Asking and answering questions about unfamiliar APIs: An exploratory study,” in 2012 34th International Conference on Software Engineering (ICSE), June 2012, pp. 266–276. [5] M. Sulír, M. Bačíková, S. Chodarev, and J. Porubän, “Visual augmentation of source code editors: A systematic review,” Computer Languages, Systems & Structures, 2018, submitted (arXiv:1804.02074). [6] B. Cornelissen, A. Zaidman, A. van Deursen, L. Moonen, and R. Koschke, “A systematic survey of program comprehension through dynamic analysis,” Software Engineering, IEEE Transactions on, vol. 35, no. 5, pp. 684–702, Sep. 2009. [7] D. Röthlisberger, M. Härry, W. Binder, P. Moret, D. Ansaloni, A. Villazón, and O. Nierstrasz, “Exploiting dynamic information in IDEs improves speed and correctness of software maintenance tasks,” Software Engineering, IEEE Transactions on, vol. 38, no. 3, pp. 579–591, May 2012. [8] F. Beck, O. Moseler, S. Diehl, and G. Rey, “In situ understanding of performance bottlenecks through visually augmented code,” in Program Comprehension (ICPC), 2013 IEEE 21st International Conference on, May 2013, pp. 63–72. [9] M. Sulír and J. Porubän, “RuntimeSearch: Ctrl+F for a running program,” in Proceedings of the 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE), 2017, pp. 388–393. [10] M. Sulír and J. Porubän, “Generating method documentation using concrete values from executions,” in 6th Symposium on Languages, Applications and Technologies (SLATE 2017), ser. OpenAccess Series in Informatics (OASIcs), vol. 56.   Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2017, pp. 3:1–3:13. [11] M. Sulír and J. Porubän, “Augmenting source code lines with sample variable values,” in Proceedings of the 2018 26th IEEE/ACM International Conference on Program Comprehension (ICPC), May 2018. [12] K. Damevski, D. Shepherd, and L. Pollock, “A field study of how developers locate features in source code,” Empirical Software Engineering, vol. 21, no. 2, pp. 724–747, 2016. [13] J. Wang, X. Peng, Z. Xing, and W. Zhao, “An exploratory study of feature location process: Distinct phases, recurring patterns, and elementary actions,” in Proceedings of the 2011 27th IEEE International Conference on Software Maintenance, ser. ICSM ’11.   Washington, DC, USA: IEEE Computer Society, 2011, pp. 213–222. [14] T. Roehm, “Two user perspectives in program comprehension: End users and developer users,” in Proceedings of the 2015 IEEE 23rd International Conference on Program Comprehension, ser. ICPC ’15.   Piscataway, NJ, USA: IEEE Press, 2015, pp. 129–139. [15] J. Sillito, G. Murphy, and K. De Volder, “Asking and answering questions during a programming change task,” Software Engineering, IEEE Transactions on, vol. 34, no. 4, pp. 434–451, Jul. 2008. [16] M. Sulír and J. Porubän, “Locating user interface concepts in source code,” in 5th Symposium on Languages, Applications and Technologies (SLATE’16), ser. OpenAccess Series in Informatics (OASIcs), vol. 51.   Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2016, pp. 6:1–6:9. [17] N. Nazar, Y. Hu, and H. Jiang, “Summarizing software artifacts: A literature review,” Journal of Computer Science and Technology, vol. 31, no. 5, pp. 883–909, 2016. [18] S. Zhang, C. Zhang, and M. D. 
Ernst, “Automated documentation inference to explain failed tests,” in Proceedings of the 2011 26th IEEE/ACM International Conference on Automated Software Engineering, ser. ASE ’11.   Washington, DC, USA: IEEE Computer Society, 2011, pp. 63–72. [19] S. M. Sohan, C. Anslow, and F. Maurer, “SpyREST: Automated RESTful API documentation using an HTTP proxy server,” in Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), ser. ASE ’15.   Washington, DC, USA: IEEE Computer Society, 2015, pp. 271–276. [20] M. Sulír and J. Porubän, “Source code documentation generation using program execution,” Information, vol. 8, no. 4, p. 148, 2017. [21] M. Sulír and J. Porubän, “Labeling source code with metadata: A survey and taxonomy,” in 2017 Federated Conference on Computer Science and Information Systems (FedCSIS), Sep. 2017, pp. 721–729. [22] F. Beck, F. Hollerich, S. Diehl, and D. Weiskopf, “Visual monitoring of numeric variables embedded in source code,” in Software Visualization (VISSOFT), 2013 First IEEE Working Conference on, Sep. 2013, pp. 1–4. [23] R. DeLine, A. Bragdon, K. Rowan, J. Jacobsen, and S. P. Reiss, “Debugger Canvas: Industrial experience with the Code Bubbles paradigm,” in Proceedings of the 34th International Conference on Software Engineering, ser. ICSE ’12.   Piscataway, NJ, USA: IEEE Press, 2012, pp. 1064–1073. [24] J.-P. Krämer, J. Kurz, T. Karrer, and J. Borchers, “How live coding affects developers’ coding behavior,” in 2014 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Jul. 2014, pp. 5–8. [25] G. Lefebvre, B. Cully, M. J. Feeley, N. C. Hutchinson, and A. Warfield, “Tralfamadore: Unifying source code and execution experience,” in Proceedings of the 4th ACM European Conference on Computer Systems, ser. EuroSys ’09.   New York, NY, USA: ACM, 2009, pp. 199–204. [26] A. Bradley, “IDE integration for execution mining data,” University of British Columbia, Tech. Rep. CPSC 538W Final Project Report, Apr. 2010. [Online]. Available: http://www.cs.ubc.ca/~awjb/pubs/2010-cs538w-final-report.pdf [27] S. M. Blackburn et al., “The DaCapo benchmarks: Java benchmarking development and analysis,” in Proceedings of the 21st Annual ACM SIGPLAN Conference on Object-oriented Programming Systems, Languages, and Applications.   New York, NY, USA: ACM, 2006, pp. 169–190. [28] M. Sulír, M. Nosáľ, and J. Porubän, “Recording concerns in source code using annotations,” Computer Languages, Systems & Structures, vol. 46, pp. 44–65, Nov. 2016. [29] M. Sulír and J. Porubän, “A quantitative study of Java software buildability,” in Proceedings of the 7th International Workshop on Evaluation and Usability of Programming Languages and Tools, ser. PLATEAU 2016.   New York, NY, USA: ACM, 2016, pp. 17–25. [30] A. Zeller and D. Lütkehaus, “DDD—a free graphical front-end for UNIX debuggers,” ACM SIGPLAN Notices, vol. 31, no. 1, pp. 22–27, Jan. 1996. [31] A. Chiş, O. Nierstrasz, A. Syrel, and T. Gîrba, “The moldable inspector,” in 2015 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, ser. Onward! 2015.   New York, NY, USA: ACM, 2015, pp. 44–60. [32] S. McDirmid, “Living it up with a live programming language,” in Proceedings of the 22nd Annual ACM SIGPLAN Conference on Object-oriented Programming Systems and Applications, ser. OOPSLA ’07.   New York, NY, USA: ACM, 2007, pp. 623–638. [33] A. Sorensen and H. 
Gardner, “Programming with time: Cyber-physical programming with Impromptu,” in Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications, ser. OOPSLA ’10.   New York, NY, USA: ACM, 2010, pp. 822–834. [34] B. Swift, A. Sorensen, H. Gardner, and J. Hosking, “Visual code annotations for cyberphysical programming,” in Live Programming (LIVE), 2013 1st International Workshop on, May 2013, pp. 27–30. [35] T. Imai, H. Masuhara, and T. Aotani, “Making live programming practical by bridging the gap between trial-and-error development and unit testing,” in Companion Proceedings of the 2015 ACM SIGPLAN International Conference on Systems, Programming, Languages and Applications: Software for Humanity, ser. SPLASH Companion 2015.   New York, NY, USA: ACM, 2015, pp. 11–12.
Context-Sensitive Languages, Rational Graphs and Determinism Arnaud Carayol (Irisa – Campus de Beaulieu – 35042 Rennes Cedex – France, Arnaud.Carayol@irisa.fr) and Antoine Meyer (Liafa – Université de Paris 7 – 2 place Jussieu, case 7014, 75251 Paris Cedex 05 – France, Antoine.Meyer@liafa.jussieu.fr) Abstract. We investigate families of infinite automata for context-sensitive languages. An infinite automaton is an infinite labeled graph with two sets of initial and final vertices. Its language is the set of all words labelling a path from an initial vertex to a final vertex. In 2001, Morvan and Stirling proved that rational graphs accept the context-sensitive languages between rational sets of initial and final vertices. This result was later extended to sub-families of rational graphs defined by more restricted classes of transducers. Our contribution is to provide syntactical and self-contained proofs of the above results, whereas earlier constructions relied on a non-trivial normal form of context-sensitive grammars defined by Penttonen in the 1970s. These new proof techniques enable us to summarize and refine these results by considering several sub-families defined by restrictions on the type of transducers, the degree of the graph or the size of the set of initial vertices. Key words and phrases: language theory, infinite graphs, automata, determinism 1991 Mathematics Subject Classification: F.4.1 Logical Methods in Computer Science, Vol. 2 (2:6), 2006. Submitted Jan. 31, 2005; published Jul. 19, 2006. 1. Introduction One of the cornerstones of formal language theory is the well-known hierarchy introduced by Chomsky in [Cho59]. It consists of the regular, context-free, context-sensitive and recursively enumerable languages. This hierarchy was originally defined by imposing syntactical restrictions on the rules of grammars generating the languages. These four families of languages, as well as some of their sub-families, have been extensively studied. In particular, they were given alternative characterizations in terms of finite acceptors. They are respectively accepted by finite automata, pushdown automata, linearly bounded automata and Turing machines. Recently, these families of languages have been characterised by families of infinite automata. An infinite automaton is a labelled countable graph together with a set of initial and a set of final vertices. The language it accepts (or simply its language) is the set of all words labelling a path from an initial vertex to a final vertex. In [CK02], a summary of four families of graphs corresponding to the four families in the Chomsky hierarchy was given: they are respectively the finite graphs, prefix-recognisable graphs [Cau96, Cau03a], rational graphs [Mor00] and transition graphs of Turing machines [Cau03b] (for a survey, see for instance [Tho01]). This work specifically deals with a family of infinite automata for context-sensitive languages. The first result on this topic is due to Morvan and Stirling [MS01], who showed that the languages accepted by rational graphs, whose vertices are words and whose edges are defined by rational transducers, taken between rational or finite sets of vertices, are precisely the context-sensitive languages. This result was later extended by Rispal [Ris02] to the more restricted families of synchronized rational graphs, and even to synchronous graphs. A summary can be found in [MR04].
All proofs provided in these works use context-sensitive grammars in Penttonen normal form [Pen74] to characterize context-sensitive languages, which has two main drawbacks. First, this normal form is far from being obvious, and the proofs and constructions provided in [Pen74] are known to be difficult. Second, and more importantly, there is no grammar-based characterization of deterministic context-sensitive languages, which prevents one from adapting these results to the deterministic case. Our main contributions are a new syntactical proof of the theorem by Morvan and Stirling, based on the tight correspondence between tiling systems and synchronized graphs, and an in-depth study of the trade-off between the structure of the rational graphs (number of initial vertices and out-degree), the transducers defining them, and the family of languages they accept, as summarized in Table 1. Each row of the table concerns a family or sub-family of rational graphs, and each column corresponds to a structural restriction of that family with respect to sets of initial vertices and degree. The first case is that of rational (infinite) sets of initial vertices, while the second case only considers the fixed rational initial set $\{a\}^{*}$ over a single letter $a$. The two remaining cases concern graphs with a unique initial vertex, with respectively arbitrary and finite out-degree. A cell containing an equality symbol indicates that the languages accepted by the considered family of graphs (row) from the considered set of initial vertices (column) are the context-sensitive languages. An inclusion symbol indicates that their languages are strictly included in the context-sensitive languages. A question mark denotes a conjecture. When relevant, we give a reference to the proposition, theorem or remark which states each result. Finally, we investigate the case of deterministic languages. A long-standing open problem in language theory is the equivalence between deterministic and non-deterministic (or even unambiguous) context-sensitive languages [Kur64]. Thanks to our constructions, we characterize two syntactical sub-families of rational graphs respectively accepting the unambiguous and the deterministic context-sensitive languages. Outline. Our presentation is structured along the following lines. The definitions of rational graphs and context-sensitive languages are given in Section 2. The results concerning languages accepted by rational and synchronous rational graphs are given in Section 3. In Section 4, we investigate rational graphs under structural constraints, and finally Section 5 is devoted to deterministic context-sensitive languages. 2. Definitions 2.1. Notations First of all, we fix notations for words, languages and automata, as well as directed graphs and the languages they accept. For a more thorough introduction to monoids and rationality, the interested reader is referred to [Ber79, Sak03]. 2.1.1. Languages and Automata We consider finite sets of symbols, or letters, called alphabets. In the following, $\Sigma$ and $\Gamma$ always denote finite alphabets. Tuples of letters are called words, and sets of words are called languages. The word $u$ corresponding to the tuple $(u_{1},\ldots,u_{n})$ is written $u_{1}\ldots u_{n}$. Its $i$-th letter is denoted by $u(i)=u_{i}$. The set of all words over $\Sigma$ is written $\Sigma^{*}$. The number of letter occurrences of $u$ is its length, written $|u|=n$. The unique word of length $0$ is written $\varepsilon$.
The concatenation of two words $u=u_{1}\ldots u_{n}$ and $v=v_{1}\ldots v_{m}$ is the word $uv=u_{1}\ldots u_{n}v_{1}\ldots v_{m}$. This operation extends to sets of words: for all $A,B\subseteq\Sigma^{*}$, $AB$ stands for the set $\{uv\;|\;u\in A\;\textrm{and}\;v\in B\}$. By a slight abuse of notation, we will usually denote by $u$ both the word $u$ and the singleton $\{u\}$. A monoid is composed of a set $M$ together with an associative internal binary law on $M$ called product, with a neutral element in $M$. The product of two elements $x$ and $y$ of $M$ is written $x\cdot y$. An automaton over $M$ is a tuple $A=(L,Q,q_{0},F,\delta)$ where $L\subseteq M$ is a finite set of labels, $Q$ a finite set of control states, $q_{0}\in Q$ is the initial state, $F\subseteq Q$ is the set of final states and $\delta\subseteq Q\times L\times Q$ is the transition relation of $A$. A run of $A$ is a sequence of transitions $(q_{0},l_{1},q_{1})\ldots(q_{n-1},l_{n},q_{n})$. It is associated to the element $m=l_{1}\cdot\ldots\cdot l_{n}\in M$. If $q_{n}$ belongs to $F$, the run is accepting (or successful), and $m$ is accepted, or recognized, by $A$. The set of elements accepted by $A$ is written $L(A)$. $A$ is unambiguous if there is only one accepting run for each element in $L(A)$. The star of a set $X\subseteq M$ is defined as $X^{*}:=\bigcup_{k\geq 0}X^{k}$ with $X^{0}=\{\varepsilon\}$ and $X^{k+1}=X\cdot X^{k}$. Similarly, we write $X^{+}:=\bigcup_{k\geq 1}X^{k}$. The set of rational subsets of a monoid is the smallest set containing all finite subsets and closed under union, product and star. The set of all words over $\Sigma$ together with the concatenation operation forms the so-called free monoid whose neutral element is the empty word $\varepsilon$. Finite automata over the free monoid $\Sigma^{*}$ are known to accept the rational subsets of $\Sigma^{*}$, also called rational languages. 2.1.2. Graphs A labeled, directed and simple graph is a set $G\subseteq V\times\Gamma\times V$ where $\Gamma$ is a finite set of labels and $V$ a countable set of vertices. An element $(s,a,t)$ of $G$ is an edge of source $s$, target $t$ and label $a$, and is written $s\overset{a}{\underset{G}{\longrightarrow}}t$ or simply $s\overset{a}{\longrightarrow}t$ if $G$ is understood. The set of all sources and targets of a graph form its support $V_{G}$. A sequence of edges $s_{1}\overset{a_{1}}{\longrightarrow}t_{1},\ldots,s_{k}\overset{a_{k}}{% \longrightarrow}t_{k}$ with $\forall i\in[2,k],\ s_{i}=t_{i-1}$ is called a path. It is written $s_{1}\overset{u}{\longrightarrow}t_{k}$, where $u=a_{1}\ldots a_{k}$ is the corresponding path label. A graph is deterministic if it contains no pair of edges having the same source and label. The path language of a graph $G$ between two sets of vertices $I$ and $F$ is the set $$L(G,I,F)\ :=\ \{\ w\ |\ s\overset{w}{\underset{G}{\longrightarrow}}t,\ s\in I,% \ t\in F\}.$$ If two infinite automata recognize the same language, we say they are trace-equivalent. In this paper, we consider infinite automata: infinite graphs together with sets of initial and final vertices. We will no longer distinguish the notion of graph with initial and final vertices from the notion of automaton. However, as we will see in Section 3, with no restriction on the set of initial vertices and on the structure of the graph this might not provide a reasonable extension of finite automata. 2.2. Word transducers Automata can be used to accept more than languages. 
In particular, when the edges of an automaton are labelled with pairs of letters (with an appropriate product operation), its language is a set of pairs of words, which can be seen as a binary relation on words. Such automata are called finite automata with output, or transducers, and they recognize rational relations. We will now recall their definition as well as some of their important properties. For a detailed presentation of transducers, see for instance [Ber79, Pri00, Sak03]. Consider the monoid whose elements are the pairs of words $(u,v)$ in $\Sigma^{*}$, and whose composition law is defined by $(u_{1},v_{1})\cdot(u_{2},v_{2})=(u_{1}u_{2},v_{1}v_{2})$, generally called the product monoid and written $\Sigma^{*}\times\Sigma^{*}$. A transducer $T$ over a finite alphabet $\Sigma$ is a finite automaton over $\Sigma^{*}\times\Sigma^{*}$ with labels in $(\Sigma\cup\{\varepsilon\})\times(\Sigma\cup\{\varepsilon\})$. Finite transducers accept the rational subsets of $\Sigma^{*}\times\Sigma^{*}$. We do not distinguish a transducer from the relation it accepts and write $(w,w^{\prime})\in T$ if $(w,w^{\prime})$ is accepted by $T$. The domain $\mathrm{Dom}(T)$ (resp. range $\mathrm{Ran}(T)$) of a transducer $T$ is the set $\{w\;|\;(w,w^{\prime})\in T\}$ (resp. $\{w^{\prime}\;|\;(w,w^{\prime})\in T\}$). We also write $T(L)$ the set of all vertices $v$ such that $(u,v)\in T$ for some $u\in L$. A transducer accepting a function is called functional. In general, there is no bound on the size difference between input and output in a transducer. Interesting subclasses are obtained by enforcing some form of synchronization. For instance, length-preserving rational relations are recognized by transducers with labels in $\Sigma\times\Sigma$, called synchronous transducers. Such relations only pair words of the same size. A more relaxed form of synchronization was introduced by Elgot and Mezei [EM65]: a transducer over $\Sigma$ with initial state $q_{0}$ is left-synchronized if for every path $$q_{0}\overset{x_{0}/y_{0}}{\longrightarrow}q_{1}\ldots q_{n-1}\overset{x_{n}/y% _{n}}{\longrightarrow}q_{n},$$ there exists $k\in[0,n]$ such that for all $i\in[0,k]$, $x_{i}$ and $y_{i}$ belong to $\Sigma$ and either $x_{j}=\varepsilon$ for all $j>k$ or $y_{j}=\varepsilon$ for all $j>k$. In other terms, a left-synchronized relation is a finite union of relations of the form $S\cdot F$ where $S$ is a synchronous relation and $F$ is either equal to $\{\varepsilon\}\times R$ or $R\times\{\varepsilon\}$ where $R$ is a rational language. Right-synchronized transducers are defined similarly. In the following, unless otherwise stated, we will refer to left-synchronized transducers simply as synchronized transducers. The standard notion of determinism for automata does not have much meaning in the case of transducers because it does not rely only on the input but on both the input and the output. A more refined notion is that of sequentiality: a transducer $T$ with states $Q$ is sequential if for all $q,q^{\prime}$ and $q^{\prime\prime}$ in $Q$, if $q\overset{x/y}{\longrightarrow}q^{\prime}$ and $q\overset{x^{\prime}/y^{\prime}}{\longrightarrow}q^{\prime\prime}$ then either $x=x^{\prime}$, $y=y^{\prime}$ and $q^{\prime}=q^{\prime\prime}$, or $x\neq\varepsilon$, $x^{\prime}\neq\varepsilon$ and $x\neq x^{\prime}$. 
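As a simple illustration of these notions, consider, over $\Sigma=\{a,b\}$, the suffixing relation $\mathrm{Suf}=\{(u,uv)\mid u,v\in\Sigma^{*}\}$. It is left-synchronized, since $\mathrm{Suf}=S\cdot F$ where $S=\{(u,u)\mid u\in\Sigma^{*}\}$ is a synchronous (length-preserving) relation and $F=\{\varepsilon\}\times\Sigma^{*}$. It is not synchronous itself, since it pairs words of different lengths. The identity relation $S$ is realized by the one-state transducer with transitions $q_{0}\overset{x/x}{\longrightarrow}q_{0}$ for every $x\in\Sigma$, which is both synchronous and sequential, whereas the natural transducer for $\mathrm{Suf}$, which copies $u$ letter by letter and then appends $v$ using transitions labelled $\varepsilon/a$ and $\varepsilon/b$, is neither synchronous nor sequential.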
{rem} The standard determinization procedure applied to a synchronous transducer yields an equivalent unambiguous synchronous transducer (i.e for every pair of words $(u,v)$ accepted by the transducer there is exactly one accepting run of the transducer labelled by $u/v$). This remains true for synchronized transducers. It is well-known that there is a close relationship between rational languages and rational transductions. In particular, rational relations have rational domains and ranges, and are closed under restriction to a rational domain or range. Moreover, the restriction of a sequential (resp. synchronous) transducer to a rational domain is still sequential (resp. synchronous) (see for instance [Ber79]). 2.3. Rational graphs The Chomsky-like hierarchy of graphs presented in [CK02] uses words to represent vertices. Each of these graphs is thus a finite union of binary relations on words, each relation corresponding to a given edge label. In particular, the family of rational graphs owes its name to the fact that their sets of edges are given by rational relations on words, i.e. relations recognized by word transducers. {defi} [[Mor00]] A rational graph labelled by $\Sigma$ with vertices in $\Gamma^{*}$ is given by a tuple of transducers $(T_{a})_{a\in\Sigma}$ over $\Gamma$. For all $a\in\Sigma$, $G$ has an edge labelled by $a$ between vertices $u$ and $v\in\Gamma^{*}$ if and only if $(u,v)\in T_{a}$. For $w\in\Sigma^{+}$ and $a\in\Sigma$, we write $T_{wa}=T_{w}\circ T_{a}$, and $u\overset{w}{\longrightarrow}v$ if and only if $(u,v)\in T_{w}$. Note that $T_{w}\circ T_{a}$ stands for the set of all pairs $(u,v)$ such that $(u,x)\in T_{w}$ and $(x,v)\in T_{a}$ for some $x$. Figure 1 shows an example of rational graph, the infinite grid, with the rational transducers which define its edges. By the properties of rational relations, the support of a rational graph is a rational subset of $\Gamma^{*}$. The rational graphs with synchronized transducers were already defined by Blumensath and Grädel in [BG00] under the name automatic graphs and by Rispal in [Ris02] under the name synchronized rational graphs. It follows from the definitions (see Section 2.2) that sequential synchronous, synchronous, synchronized and rational graphs form an increasing hierarchy. This hierarchy is strict (up to isomorphism): first, sequential synchronous graphs are deterministic graphs whereas synchronous graphs can be non-deterministic. Second, synchronous graphs have a finite degree whereas synchronized graphs can have an infinite degree. Finally, to separate synchronized graphs from rational graphs, we can use the following properties on the growth rate of the out-degree in the case of graphs of finite out-degree. {prop} [Mor01] For any rational graph $G$ of finite out-degree and any vertex $x$, there exists $c\in\mathbb{N}$, such that the out-degree of vertices at distance $n$ of $x$ is at most $c^{c^{n}}$. This upper bound can be reached: consider the unlabeled rational graph $G_{0}=\{T\}$ where $T$ is the transducer over $\Gamma=\{A,B\}$ with one state $q_{0}$ which is both initial and final and a transition $q_{0}\overset{X/YZ}{\longrightarrow}q_{0}$ for all $X,Y$ and $Z\in\Gamma$. It has an out-degree of $2^{2^{n+1}}$ at distance $n$ of $A$. In the case of synchronized graphs of finite out-degree, the bound on the out-degree is simply exponential. 
{prop} [Ris02] For any synchronized graph $G$ of finite out-degree and vertex $x$, there exists $c\in\mathbb{N}$ such that the out-degree of vertices at distance $n>0$ of $x$ is at most $c^{n}$. It follows from the above proposition that $G_{0}$ is rational but not synchronized. Hence, the synchronized graphs form a strict sub-family of rational graphs. 2.4. Context sensitive languages In this work, we are concerned with the family of context-sensitive languages111In order to simplify our presentation, we only consider context-sensitive languages that do not contain the empty word $\varepsilon$ (this is a standard restriction).. Several finite formalisms are known to accept this family of languages, the most common being linearly bounded machines (LBM), which are Turing machines working in linear space. Less well-known acceptors for these languages are bounded tiling systems, which are not traditionally studied as language recognizers. However, one can show that these formalisms are equivalent, and that syntactical translations exist between them. Since they are at the heart of our proof techniques, we now give a detailed definition of tiling systems. For more information about linearly bounded machines the reader is referred to [HU79]. Tiling systems were originally defined to recognize or specify picture languages, i.e. two-dimensional words on finite alphabets [GR96]. They can be seen as a normalized form of dominos systems [LS97b]. Such sets of pictures are called local picture languages. However, by only looking at the words contained in the first row of each picture of a local picture language, one obtains a context-sensitive language [LS97a]. A $(n,m)$-picture $p$ over an alphabet $\Gamma$ is a two dimensional array of letters in $\Gamma$ with $n$ rows and $m$ columns. We denote by $p(i,j)$ the letter occurring in the $i$th row and $j$th column starting from the top-left corner, by $\Gamma^{n,m}$ the set of $(n,m)$-pictures and by $\Gamma^{**}$ the set of all pictures222We do not consider the empty picture.. Given a $(n,m)$-picture $p$ over $\Gamma$ and a letter $\text{\small$\#$}\not\in\Gamma$, we denote by ${p}_{\text{\tiny$\#$}}$ the $(n+2,m+2)$-picture over $\Gamma\cup\{\#\}$ defined by: • ${p}_{\text{\tiny$\#$}}(i,1)={p}_{\text{\tiny$\#$}}(i,m+2)=\text{\small$\#$}$ for $i\in[1,n+2]$, • ${p}_{\text{\tiny$\#$}}(1,j)={p}_{\text{\tiny$\#$}}(n+2,j)=\text{\small$\#$}$ for $j\in[1,m+2]$, • ${p}_{\text{\tiny$\#$}}(i+1,j+1)=p(i,j)$ for $i\in[1,n]$ and $j\in[1,m]$. For any $n,m\geq 2$ and any $(n,m)$-picture $p$, $T(p)$ is the set of $(2,2)$-pictures appearing in $p$. A $(2,2)$-picture is also called a tile. A picture language $K\subseteq\Gamma^{**}$ is local if there exists a symbol $\text{\small$\#$}\not\in\Gamma$ and a finite set of tiles $\Delta$ such that $K=\{p\in\Gamma^{**}\mid T({p}_{\text{\tiny$\#$}})\subseteq\Delta\}$. To any set of pictures over $\Gamma$, we can associate a language of words by looking at the frontiers of the pictures. The frontier of a $(n,m)$-picture $p$ is the word $\mathrm{fr}\left(p\right)=p(1,1)\ldots p(1,m)$ corresponding to the first row of the picture. {defi} A tiling system $S$ is a tuple $(\Gamma,\Sigma,\text{\small$\#$},\Delta)$ where $\Gamma$ is a finite alphabet, $\Sigma\subset\Gamma$ is the input alphabet, $\text{\small$\#$}\not\in\Gamma$ is a frame symbol and $\Delta$ is a finite set of tiles over $\Gamma\cup\{\text{\small$\#$}\}$. 
It recognizes the local picture language $P(S)=\{p\in\Gamma^{**}\mid T({p}_{\text{\tiny$\#$}})\subseteq\Delta\}$ and the word language $L(S)=\mathrm{fr}\left(P(S)\right)\cap\Sigma^{*}$. A tiling system $S$ recognizes a language $L\subseteq\Sigma^{+}$ in height $f(n)$ for some mapping $f:\mathbb{N}\mapsto\mathbb{N}$ if for all $w\in L(S)$ there exists a $(n,m)$-picture $p$ in $P(S)$ such that $w=\mathrm{fr}\left(p\right)$ and $n\leq f(m)$. We can now precisely state the following well-known equivalence result. {thm} The following simulations link linearly bounded machines and tiling systems: (1) A linearly bounded Turing machine ${T}$ working in $f(n)$ reversals can be simulated by a tiling system of height $f(n)+2$. (2) A tiling system of height $f(n)$ can be simulated by a linearly bounded Turing machine working in $f(n)$ reversals. {exa} Figure 2 shows the set of tiles $\Delta$ of a tiling system $S$ over $\Gamma=\{a,b,\bot\}$, $\Sigma=\{a,b\}$ and the border symbol $\#$. The language $L(S)$ is exactly the set $\{a^{n}b^{n}\;|\;n\geq 1\}$. A context-sensitive language is called deterministic if it can be accepted by a deterministic LBM or tiling system, where a tiling system is deterministic if one can infer from each row in a picture a single possible next row. 3. The languages of rational graphs In this section, we consider the languages accepted by rational graphs and their sub-families from and to a rational set of vertices. We give a simplified presentation of the result by Morvan and Stirling [MS01] stating that the family of rational graphs accepts the context-sensitive languages. This is done in several steps. First, Proposition 3.1 states that the rational graphs are trace-equivalent to the synchronous rational graphs. Then, Proposition 3.2 and Proposition 3.2 establish a very tight relationship between synchronous graphs and tiling systems. It follows that the languages of synchronous rational graphs are also the context-sensitive languages (Theorem 6). The original result is given as Corollary 3.2. Finally, Proposition 3.3 establishes that even the smallest sub-family we consider, the family of sequential synchronous rational graphs, accepts all context-sensitive languages. The various transformations presented in this section are summarized in Figure 3. 3.1. From rational graphs to synchronous graphs We present an effective construction that transforms a rational graph $G$ with two rational sets $I$ and $F$ of initial and final vertices into a synchronous graph $G^{\prime}$ trace-equivalent between two rational sets $I^{\prime}$ and $F^{\prime}$. The construction is based on replacing the symbol $\varepsilon$ in the transitions of the transducers defining $G$ by a fresh symbol $\#$. Let $(T_{a})_{a\in\Sigma}$ be the set of transducers over $\Gamma$ characterizing $G$ and let $\#$ be a symbol not in $\Gamma$. For all $a$, we define $\bar{a}$ to be equal to $a$ if $a\in\Gamma$, and to $\varepsilon$ if $a=\text{\small$\#$}$. We extend this to a projection from $(\Gamma\cup\text{\small$\#$})^{*}$ to $\Gamma^{*}$ in the standard way. 
We define $G^{\prime}$ as the rational graph defined by the set of transducers $(T^{\prime}_{a})_{a\in\Sigma}$ where $T^{\prime}_{a}$ has the same set of control states $Q_{a}$ as $T_{a}$ and a set of transitions given by $$\big{\{}\,p\overset{a/b}{\longrightarrow}q\;|\;p\overset{\bar{a}/\bar{b}}{% \longrightarrow}q\in T_{a}\,\big{\}}\ \cup\ \big{\{}\,p\overset{\text{\tiny$\#% $}/\text{\tiny$\#$}}{\longrightarrow}p\;|\;p\in Q_{a}\,\big{\}}.$$ By definition of each $T^{\prime}_{a}$, $G^{\prime}$ is a synchronous rational graph. Let $I^{\prime}$ and $F^{\prime}$ be the two rational sets such that $I^{\prime}=\{u\;|\;\bar{u}\in I\}$ and $F^{\prime}=\{v\;|\;\bar{v}\in F\}$ (the automaton accepting $I^{\prime}$ (resp. $F^{\prime}$) is obtained from the automaton accepting $I$ (resp. $F$) by adding a loop labeled by $\#$ on each control state). We claim that $G^{\prime}$ accepts between $I^{\prime}$ and $F^{\prime}$ the same language as $G$ between $I$ and $F$. For example, Figure 4 illustrates the previous construction applied to the graph of Figure 1. Only one connected component of the obtained graph is shown. Before we prove the correctness of this construction, we need to establish a couple of technical lemmas. Let $\mathcal{B}$ be the set of all mappings from $\mathbb{N}$ to $\mathbb{N}$. To any mapping $\delta\in\mathcal{B}$, we associate a mapping from $(\Gamma\cup\{\text{\small$\#$}\})^{*}$ to $(\Gamma\cup\{\text{\small$\#$}\})^{*}$ defined as follows: for all $w=\text{\small$\#$}^{i_{0}}a_{1}\text{\small$\#$}^{i_{1}}\ldots a_{n}\text{% \small$\#$}^{i_{n}}$ with $a_{1},\ldots,a_{n}\in\Gamma$, let $\delta w=\text{\small$\#$}^{i_{0}+\delta(0)}a_{1}\text{\small$\#$}^{i_{1}+% \delta(1)}\ldots a_{n}\text{\small$\#$}^{i_{n}+\delta(n)}$. Before proceeding, we state two properties of these mappings with respect to the sets of transducers $(T_{a})$ and $(T^{\prime}_{a})$. {lem} We have the following properties: $$\displaystyle\forall u,v\in\Gamma^{*},\ \,(u,v)\in T_{a}\ \iff\ \exists\delta_% {u},\delta_{v}\in\mathcal{B},$$ $$\displaystyle\ (\delta_{u}u,\delta_{v}v)\in T^{\prime}_{a},$$ (1) $$\displaystyle\forall(u,v)\in T^{\prime}_{a},\ \forall\delta\in\mathcal{B},\ % \exists\delta^{\prime}\in\mathcal{B},$$ $$\displaystyle\ (\delta u,\delta^{\prime}v)\in T^{\prime}_{a},$$ (2) $$\displaystyle\text{and dually \quad}\forall(u,v)\in T^{\prime}_{a},\ \forall% \delta\in\mathcal{B},\ \exists\delta^{\prime}\in\mathcal{B},$$ $$\displaystyle\ (\delta^{\prime}u,\delta v)\in T^{\prime}_{a}.$$ (3) We can now prove the correctness of the construction: a word $w$ is accepted by $G$ between $I$ and $F$ if and only if it is accepted by $G^{\prime}$ between $I^{\prime}$ and $F^{\prime}$. {prop} For every rational graph $G$ and rational sets of vertices $I$ and $F$, there is a synchronous graph $G^{\prime}$ and two rational sets $I^{\prime}$ and $F^{\prime}$ such that $L(G,I,F)=L(G^{\prime},I^{\prime},F^{\prime})$. Proof. We show by induction on $n$ that for all $u_{0},\ldots,u_{n}\in\Gamma^{*}$, if there is a path $$u_{0}\overset{w(1)}{\underset{G}{\longrightarrow}}u_{1}\ldots u_{n-1}\overset{% w(n)}{\underset{G}{\longrightarrow}}u_{n},$$ then there exist words $u^{\prime}_{0},\ldots,u^{\prime}_{n}\in(\Gamma\cup\{\text{\small$\#$}\})^{*}$ such that for all $i$, $\bar{u}^{\prime}_{i}=u_{i}$, and $$u^{\prime}_{0}\overset{w(1)}{\underset{G^{\prime}}{\longrightarrow}}u^{\prime}% _{1}\ldots u^{\prime}_{n-1}\overset{w(n)}{\underset{G^{\prime}}{% \longrightarrow}}u^{\prime}_{n}.$$ The case where $n=0$ is trivial. 
Suppose the property is true for all paths of length at most $n$, and consider a path $$u_{0}\overset{w(1)}{\underset{G}{\longrightarrow}}\ldots\overset{w(n)}{% \underset{G}{\longrightarrow}}u_{n}\overset{w(n+1)}{\underset{G}{% \longrightarrow}}u_{n+1}.$$ By induction hypothesis, one can find mappings $\delta_{0},\ldots\delta_{n}$ such that $$\delta_{0}u_{0}\overset{w(1)}{\underset{G^{\prime}}{\longrightarrow}}\ldots% \overset{w(n)}{\underset{G^{\prime}}{\longrightarrow}}\delta_{n}u_{n}.$$ We now use the properties of mappings stated in Lemma 3.1. By (1), there exist $\delta^{\prime}_{n}$ and $\delta^{\prime}_{n+1}$ such that $\delta^{\prime}_{n}u_{n}\overset{w(n+1)}{\longrightarrow}\delta^{\prime}_{n+1}% u_{n+1}\in G^{\prime}$. Let $\gamma_{n}$ and $\gamma^{\prime}_{n}$ be two elements of $\mathcal{B}$ such that $\delta^{\prime}_{n}\circ\gamma^{\prime}_{n}=\delta_{n}\circ\gamma_{n}$. By Lemma (2) and (3), we can find mappings $\gamma^{\prime}_{n+1}$ and $\gamma_{0}$ to $\gamma_{n-1}$ such that: $$\gamma_{0}\delta_{0}u_{0}\overset{w(1)}{\underset{G^{\prime}}{\longrightarrow}% }\ldots\overset{w(n)}{\underset{G^{\prime}}{\longrightarrow}}\gamma_{n}\delta_% {n}u_{n}=\gamma^{\prime}_{n}\delta^{\prime}_{n}u_{n}\overset{w(n+1)}{\underset% {G^{\prime}}{\longrightarrow}}\gamma^{\prime}_{n+1}\delta^{\prime}_{n+1}u_{n+1}$$ which concludes the proof by induction. If we suppose that $u_{0}\in I$ and $u_{n}\in F$, then necessarily $u^{\prime}_{0}\in I^{\prime}$ and $u^{\prime}_{n}\in F^{\prime}$. It follows that for every path in $G$ between $I$ and $F$, there is a path in $G^{\prime}$ between $I^{\prime}$ and $F^{\prime}$ with the same path label. Conversely, by (1), for any such path in $G^{\prime}$, erasing the occurrences of $\#$ from its vertices yields a valid path in $G$ between $I$ and $F$. Hence $L(G,I,F)=L(G^{\prime},I^{\prime},F^{\prime})$. ∎ 3.2. Equivalence between synchronized graphs and tiling systems The following propositions establish the tight relationship between tiling systems and synchronous rational graphs. Proposition 3.2 presents an effective transformation of a tiling system into a synchronous rational graph. {prop} Given a tiling system $S=(\Gamma,\Sigma,\#,\Delta)$, there exists a synchronous rational graph $G$ and two rational sets $I$ and $F$ such that $L(G,I,F)={L}(S)$. Proof. 
Consider the finite automaton $A$ on $\Gamma$ with a set of states $Q=\Gamma\cup\{\#\}$, initial state $\#$, a set of final states $F$ and a set of transitions $\delta$ given by:
$$F\ :\ a\quad\text{such that}\quad\begin{array}{|c|c|}\hline a&\#\\\hline\#&\#\\\hline\end{array}\ \in\Delta,$$
$$\delta\ :\ \#\overset{a}{\underset{A}{\longrightarrow}}a,\quad a\overset{b}{\underset{A}{\longrightarrow}}b\quad\text{for all}\quad\begin{array}{|c|c|}\hline\#&\#\\\hline a&\#\\\hline\end{array},\quad\begin{array}{|c|c|}\hline a&\#\\\hline b&\#\\\hline\end{array}\in\Delta\text{ (respectively).}$$
Call $M$ the language recognized by $A$; $M$ represents the set of possible last columns of pictures of ${P}(S)$. Note that this does not imply that each word of $M$ actually is the last column of a picture in ${P}(S)$, only that it is compatible with the right border tiles of $\Delta$. Let us build a synchronous rational graph $G$ and two rational sets $I$ and $F$ such that $L(G,I,F)=L(S)$.
The transitions of the set of transducers $(T_{e})_{e\in\Sigma}$ of $G$ are:
$$(\#,\#)\overset{c/d}{\underset{T_{d}}{\longrightarrow}}(c,d)\quad\text{for all}\quad\begin{array}{|c|c|}\hline\#&\#\\\hline c&d\\\hline\end{array}\in\Delta,\ d\neq\#,$$
$$(a,b)\overset{c/d}{\underset{T_{e}}{\longrightarrow}}(c,d)\quad\text{for all}\quad\begin{array}{|c|c|}\hline a&b\\\hline c&d\\\hline\end{array}\in\Delta,\ b,d\neq\#,\ e\in\Sigma,$$
where $(\#,\#)$ is the unique initial state of each transducer and the set of final states $F$ of each transducer is given by:
$$F\ :\ (a,b)\in(\Gamma\cup\{\#\})\times\Gamma\quad\text{such that}\quad\begin{array}{|c|c|}\hline a&b\\\hline\#&\#\\\hline\end{array}\in\Delta.$$
A pair of words $(s,t)$ is accepted by the transducer $T_{e}$ if and only if $e$ is the first letter of $t$, and either $s$ and $t$ are two adjacent columns of a picture in ${P}(S)$ or $s\in\#^{*}$ and $t$ is the first column of a picture in ${P}(S)$. As a consequence, $L(S)=L(G,\#^{*},M)$. ∎ {exa} Figure 5 shows the transducers obtained using the previous construction on the tiling system of Figure 2. They define a rational graph whose path language between $\#^{*}$ and $b^{*}\bot$ is $\{a^{n}b^{n}\mid n\geq 1\}$. Figure 6 presents the corresponding synchronous graph, whose vertices are the rational set of words $\#^{\geq 2}\;\cup\;a^{+}\bot^{+}\;\cup\;b^{+}\bot^{+}$, the set of initial vertices is $\#^{\geq 2}$ and the set of final vertices is $b^{+}\bot$. Note that in this example, the set of vertices accessible from the initial vertices is rational: this is not true in the general case. {rem} The correspondence between a tiling system $S$ and the synchronous graph $G$ constructed from $S$ in Proposition 3.2 is tight: each picture $p$ with frontier $w$ can be mapped to a unique accepting path for $w$ in $G$ (and conversely). Conversely, Proposition 3.2 states that the languages accepted by synchronous rational graphs between rational sets of vertices can be accepted by a tiling system. To make the construction simpler, we first prove that the sets of initial and final vertices can be chosen over a one-letter alphabet without loss of generality.
{lem} For every synchronous rational graph $G$ with vertices in $\Gamma^{*}$ and rational sets $I$ and $F$, one can find a synchronous rational graph $H$ and two symbols $i$ and $f\notin\Gamma$ such that $L(G,I,F)=L(H,i^{*},f^{*})$. Proof. Let $G=(K_{a})_{a\in\Sigma}$ be a synchronous rational graph with vertices in $\Gamma^{*}$. For $i,f$ two new distinct symbols, we define a new synchronous rational graph $H$ characterized by the set of transductions $\big{(}T_{a}=(T_{I}\circ K_{a})\ \cup\ K_{a}\ \cup\ (K_{a}\circ T_{F})\big{)}_% {a\in\Sigma}$ where $T_{I}=\{(i^{n},u)\;|\;n\geq 0,\ u\in I,\ |u|=n\}$ and $T_{F}=\{(v,f^{n})\;|\;n\geq 0,\ v\in F,\ |v|=n\}$. For all vertices $u\in I,\ v\in F$ we have $u\overset{w}{\underset{G}{\longrightarrow}}v$ if and only if $i^{|u|}\overset{w}{\underset{H}{\longrightarrow}}f^{|u|}$, i.e. $L(G,I,F)=L(H,i^{*},f^{*})$. ∎ We are now able to establish the converse of Proposition 3.2, which states that all the languages accepted by synchronous rational graphs between rational sets of vertices can be accepted by a tiling system. {prop} Given a synchronous rational graph $G$ and two rational sets $I$ and $F$, there exists a tiling system $S$ such that ${L}(S)={L}(G,I,F)$. Proof. Let $G=(T_{a})_{a\in\Sigma}$ be a synchronous rational graph with vertices in $\Gamma^{*}$ (with $\Sigma\subseteq\Gamma$). By Lemma 3.2, we can consider without loss of generality that $I=i^{*}$ and $F=f^{*}$ for some distinct letters $i$ and $f$, and that neither $i$ nor $f$ occurs in any vertex which is not in $I$ or $F$. Furthermore by Remark 2.2, we can assume that $T_{a}$ is non-ambiguous for all $a\in\Sigma$. We write $Q_{a}$ the set of control states of $T_{a}$. We suppose that all control state sets are disjoint, and designate by $q_{0}^{a}\in Q_{a}$ the unique initial state of each transducer $T_{a}$, and by $Q_{F}$ the set of final states of all $T_{a}$. Let $a,b,c,d\in\Sigma$, $x,x^{\prime},y,y^{\prime},z,z^{\prime}\in\Gamma$, and $p,p^{\prime},q,q^{\prime},r,r^{\prime},s,s^{\prime}\in\bigcup_{a\in\Sigma}Q_{a}$. We define a tiling system $S=(\Gamma,\Sigma,\text{\small$\#$},\Delta)$, where $\Delta$ is the set of tiles from Figure 7. By construction, ${P}(S)$ is in exact bijection with the set of accepting paths in $G$ with respect to $I$ and $F$. Let $\phi$ be the function associating to a picture $p\in P(S)$ with columns $a_{1}w_{1},\ldots,a_{n}w_{n}$, the path $i^{|w_{1}|}\overset{a_{1}}{\longrightarrow}\widetilde{w_{1}}\ldots\overset{a_{% n}}{\longrightarrow}\widetilde{w_{n}}$ where $\widetilde{w}$ is obtained by removing the control states from $w$. By construction of $S$, the function $\phi$ is well defined. It is easy to check that $\phi$ is an onto function. As the transducers defining $G$ are non-ambiguous, two distinct pictures have distinct images by $\phi$ and therefore $\phi$ is an injection. Hence, the tiling system $(\Gamma,\Sigma,\text{\small$\#$},\Delta)$ exactly recognizes $L(G,I,F)$. ∎ {rem} As in Remark 3.2, the set of paths in $G$ from $I$ to $F$ and the set of pictures $P(S)$ accepted by $S$ are in bijection, and the length of the vertices along the path is equal to the height of the corresponding picture. Putting together Propositions 3.2 and 3.2 and Theorem 2.4, we obtain the following result concerning the path languages of synchronous rational graphs. {thm} [[Ris02]] The languages accepted by synchronous rational graphs between rational sets of initial and final vertices are the context-sensitive languages. 
Note that this formulation of the theorem could be made a bit more precise by recalling that initial and final sets of vertices of the form $x^{*}$, where $x$ is a letter, are already sufficient to accept all context-sensitive languages, as stated in Lemma 3.2. By Proposition 3.1, this implies as a corollary the original result by Morvan and Stirling [MS01]. {cor} The languages accepted by rational graphs between rational sets of initial and final vertices are the context-sensitive languages. If we transform a rational graph into a Turing machine by applying successively the constructions of Proposition 3.1, Proposition 3.2 and Theorem 2.4, we obtain the same Turing machine as in [MS01]. 3.3. Sequential synchronous graphs are enough Theorem 6 shows that when considering rational sets of initial and final vertices, synchronous graphs are enough to accept all context-sensitive languages. Interestingly, in this setting the even more restricted class of sequential synchronous transducers is already sufficient. {prop} The languages accepted by sequential synchronous rational graphs between rational sets of initial and final vertices are the context-sensitive languages. Proof. Thanks to Proposition 3.2, it suffices to prove that any context-sensitive language $L\subseteq\Sigma^{*}$ is accepted by a synchronous sequential rational graph. By Theorem 2.4, we know that there exists a tiling system $S=(\Gamma,\Sigma,\text{\small$\#$},\Delta)$ such that ${L}(S)=L$. Let $\Lambda=\Gamma\cup\{\text{\small$\#$}\}$ and let $[$ and $]$ be two symbols that do not belong to $\Lambda$. We associate to each picture $p\in\Lambda^{**}$ with rows $l_{1},\ldots,l_{n}$ the word $[l_{1}]\ldots[l_{n}]$. We are going to define a set of sequential synchronous transducers that, when iterated, recognize the words corresponding to pictures in $P(S)$. First, for any finite set of tiles $\Delta$, we construct a transducer $T_{\Delta}$ which checks that a word in $([\Lambda^{\geq 3}])^{\geq 2}$ represents a picture with tiles in $\Delta$. The checking is done column by column, and we introduce marked letters to keep track of the column being checked. Let $\widetilde{\Lambda}$ be a finite alphabet in bijection with but disjoint from $\Lambda$. For all $x\in\Lambda$ we write $\widetilde{x}\in\widetilde{\Lambda}$ for the marked version of $x$. For every word $w=u\widetilde{x}v\in\Lambda^{*}\widetilde{\Lambda}\Lambda^{*}$, we write $\pi(w)$ for the word $uxv\in\Lambda^{*}$, and $\rho(w)=|u|+1$ denotes the position of the marked letter in the word. We consider words in $[\Lambda^{*}\widetilde{\Lambda}\Lambda^{*}]^{\geq 2}$. Let $\mathrm{Shift}$ be the relation that shifts all marks in a word one letter to the right. More precisely, $\mathrm{Shift}$ satisfies $\mathrm{Dom}(\mathrm{Shift})=([\Lambda^{*}\widetilde{\Lambda}\Lambda^{+}])^{\geq 2}$, and $\mathrm{Shift}([w_{1}]\ldots[w_{n}])=[w^{\prime}_{1}]\ldots[w^{\prime}_{n}]$ with $\pi(w^{\prime}_{i})=\pi(w_{i})$ and $\rho(w^{\prime}_{i})=\rho(w_{i})+1$ for all $i\in[1,n]$. The rational relation $\mathrm{Shift}$ can be realized by a synchronous sequential transducer $T_{\mathrm{Sh}}$. 
Consider the following rational language:
$$R_{\Delta}=\left\{\,[w_{1}x_{1}\widetilde{y_{1}}w^{\prime}_{1}]\ldots[w_{n}x_{n}\widetilde{y_{n}}w^{\prime}_{n}]\ \middle|\ n\geq 2\ \text{and}\ \forall i\in[2,n],\ \begin{array}{|c|c|}\hline x_{i-1} & y_{i-1} \\\hline x_{i} & y_{i} \\\hline\end{array}\in\Delta\,\right\}.$$
The transducer $T_{\Delta}$ obtained by restricting $T_{\mathrm{Sh}}$ to the domain $R_{\Delta}$ is both synchronous and sequential. For all $w=[w_{1}]\ldots[w_{n}]\in([\Lambda\widetilde{\Lambda}\Lambda^{*}])^{\geq 2}$, if $w^{\prime}=T_{\Delta}^{N}(w)$ then $w^{\prime}=[w^{\prime}_{1}]\ldots[w^{\prime}_{n}]$ with $\pi(w_{i})=\pi(w^{\prime}_{i})$ and $\rho(w_{i}^{\prime})=N+2$ for all $i\in[1,n]$. Let $r_{i}$ be the word consisting of the first $N+1$ letters of $w^{\prime}_{i}$; a straightforward induction on $N$ shows that the picture $p$ formed of the rows $r_{1},\ldots,r_{n}$ only has tiles in $\Delta$. In particular, $T_{\Delta}^{N}(w)$ belongs to $([\Lambda^{*}\widetilde{\Lambda}])^{*}\cap R_{\Delta}$ if and only if $\pi(w)$ represents a picture $p$ of width $N+2$ such that $T(p)\subseteq\Delta$. We now define more precisely the sequential rational graph $G=(T_{a})_{a\in\Sigma}$ accepting $L$. For all $a\in\Sigma$, the transducer $T_{a}$ is obtained by restricting the domain of $T_{\Delta}$ to the set of words representing pictures whose marked symbol on the second row is $a$, i.e. to the set $[(\Lambda\cup\widetilde{\Lambda})^{*}][\Lambda^{*}\widetilde{a}\Lambda^{*}][(\Lambda\cup\widetilde{\Lambda})^{*}]^{*}$. $T_{a}$ can be chosen synchronous and sequential. The set of initial vertices $I$ is $[\text{\small$\#$}\widetilde{\text{\small$\#$}}\text{\small$\#$}^{*}]([\text{\small$\#$}\widetilde{\Gamma}\Gamma^{*}\text{\small$\#$}])^{*}[\text{\small$\#$}\widetilde{\text{\small$\#$}}\text{\small$\#$}^{*}]$ and the set of final vertices $F$ is $[\text{\small$\#$}^{*}\widetilde{\text{\small$\#$}}]([\text{\small$\#$}\Gamma^{*}\widetilde{\text{\small$\#$}}])^{*}[\text{\small$\#$}^{*}\widetilde{\text{\small$\#$}}]$. ∎ {exa} Figure 8 shows a part of the result of the previous construction when applied to the language $\{a^{n}b^{n}\mid n\geq 1\}$ as recognized by the tiling system of Figure 2. Each vertex is represented by the corresponding picture, instead of the word coding for it. Also, only one connected component of the graph is shown. The other connected components all have the same linear structure: the degree of the graph is bounded by 1. The leftmost vertex belongs to the set $I$, and the rightmost one to the set $F$; hence the word $a^{2}b^{2}$ is accepted. {rem} In the case of synchronized transducers, it has been shown in Lemma 3.2 that $I$ could be taken over a one-letter alphabet without loss of generality. This does not seem to hold for sequential transducers, as the proof we present relies on the expressiveness of the initial set of vertices. In fact, as shown in Proposition 5.2, the languages recognized by sequential synchronous graphs from $i^{*}$ are deterministic context-sensitive languages. 4. Rational graphs seen as automata The structure of the graphs obtained in the previous section (Propositions 3.2 and 3.3) is very poor. 
Synchronous graphs are by definition composed of a possibly infinite set of finite connected components. In the case of Proposition 3.3, we obtain an even more restricted family of graphs since both their in-degree and out-degree are bounded by 1. However, when considering accepted languages from a possibly infinite rational set of vertices, even this extremely restricted family accepts the same languages as the most general rational graphs, namely all context-sensitive languages. This is why, in order to compare the expressiveness of the different sub-families of rational graphs and to obtain graphs with richer structures, we need to impose structural restrictions. We first consider graphs with a single initial vertex, but this restriction alone is not enough. In fact, both synchronized and rational graphs with a rational set of initial vertices accept the same languages as their counterparts with a single initial vertex. {lem} For every rational graph (resp. synchronized graph) $G$ and for every pair of rational sets $I$ and $F$, there exists a rational graph (resp. a synchronized graph) $G^{\prime}$, a vertex $i$ and a rational set $F^{\prime}$ such that $L(G,I,F)=L(G^{\prime},\{i\},F^{\prime})$. Proof. Let $G=(T_{a})_{a\in\Sigma}$ be a rational graph with vertices in $\Gamma^{*}$, let $i$ be a symbol which does not belong to $\Gamma$, and let $\Gamma^{\prime}=\Gamma\cup\{i\}$. For all $a\in\Sigma$, let $T^{\prime}_{a}$ be a transducer recognizing the rational relation $T_{a}\cup\{(i,w)\mid w\in T_{a}(I)\}$. Remark that if $T_{a}$ is synchronized then $T^{\prime}_{a}$ can also be chosen synchronized. If $\varepsilon\not\in L(G,I,F)$, we set $F^{\prime}=F$; otherwise $F^{\prime}=F\cup\{i\}$. It is straightforward to show that $L(G,I,F)=L(G^{\prime},\{i\},F^{\prime})$. ∎ It follows from Proposition 3.2 and Lemma 4 that the synchronized rational graphs with one initial vertex accept the context-sensitive languages [Ris02]. {rem} It is fairly obvious that this result does not hold for synchronous graphs: indeed, the restriction of a synchronous rational graph to the vertices reachable from a single vertex is finite. Hence, the languages of synchronous graphs from a single vertex are rational. Similarly, as any rational language is accepted by a deterministic finite graph, it can also be accepted by a sequential synchronous graph with a single initial vertex. Note that the construction of Lemma 4 relies on infinite out-degree to transform a synchronous graph with a rational set of initial vertices into a rational one with a single initial vertex. In order to obtain more satisfactory notions of infinite automata, we now restrict our attention to graphs of finite out-degree with a single initial vertex. 4.1. Rational graphs of finite out-degree with one initial vertex. We present a syntactical transformation of a synchronous rational graph with a rational set of initial vertices into a rational graph of finite out-degree with a unique initial vertex accepting the same language. The construction relies on the fact that for a synchronous graph to recognize a word of length $n>0$, it is only necessary to consider vertices whose length is smaller than $c^{n}$ (where $c$ is a constant depending only on the graph). We first establish a similar result for tiling systems and conclude using the close correspondence between synchronous graphs and tiling systems established in Proposition 3.2. 
{lem} For any tiling system ${S}=(\Gamma,\Sigma,\#,\Delta)$, if $p\in{P}({S})$ then there exists an $(n,m)$-picture $p^{\prime}$ such that $\mathrm{fr}\left(p\right)=\mathrm{fr}\left(p^{\prime}\right)$ and $n\leq|\Gamma|^{m}$. Proof. Suppose for a contradiction that the smallest picture $p^{\prime}$ in ${P}({S})$ with frontier $\mathrm{fr}\left(p\right)$ is an $(n,m)$-picture with $n>|\Gamma|^{m}$. Let $l_{1},\ldots,l_{n}$ be the rows of $p^{\prime}$. As $n>|\Gamma|^{m}$, there exist $j>i\geq 1$ such that $l_{i}=l_{j}$. Let $p^{\prime\prime}$ be the picture with rows $l_{1},\ldots,l_{i},l_{j+1},\ldots,l_{n}$. It is easy to check that $T({p^{\prime\prime}}_{\text{\tiny$\#$}})\subset T({p^{\prime}}_{\text{\tiny$\#$}})$, hence $p^{\prime\prime}\in{P}({S})$; as $p^{\prime\prime}$ has a smaller height than $p^{\prime}$ but the same frontier, we obtain a contradiction. ∎ We know from Remark 3.2 that for every synchronous rational graph $G=(T_{a})_{a\in\Sigma}$ and two rational sets $I$ and $F$, there exists a tiling system ${S}$ such that $i\overset{w}{\underset{G}{\longrightarrow}}f$ with $i\in I$ and $f\in F$ if and only if there exists $p\in P(S)$ such that $\mathrm{fr}\left(p\right)=w$ and $p$ has height $|i|=|f|$. Hence, as a direct consequence of Lemma 4.1, one gets: {lem} For every synchronous rational graph $G$ and rational sets $I$ and $F$, there exists $k\geq 1$ such that: $$\forall w\in L(G,I,F),\exists i\in I,f\in F\text{ such that }i\overset{w}{\underset{G}{\longrightarrow}}f\text{ and }|i|=|f|\leq k^{|w|}.$$ We can now construct a rational graph of finite out-degree accepting, from a single vertex, the same language as a synchronous graph with a rational set of initial vertices. {prop} For every synchronous rational graph $G$ and rational sets $I$ and $F$ such that $I\cap F=\emptyset$, there is a rational graph $H$ of finite out-degree and a vertex $i$ such that $L(G,I,F)=L(H,\{i\},F)$. Proof. According to Lemma 3.2, there exists a synchronous rational graph $R$ described by a set of transducers $(T_{a})_{a\in\Sigma}$ over $\Gamma^{*}$ such that $L(G,I,F)=L(R,\text{\small$\#$}^{*},F)$. Note that for all $w\in\text{\small$\#$}^{*}$ and $w^{\prime}\in\Gamma^{*}$, if $w\underset{R}{\longrightarrow}w^{\prime}$ then $w^{\prime}$ does not contain $\#$. We define a graph $H$ such that $L(G,I,F)=L(H,\{i\},F)$ for some vertex $i$ of $H$. Let $k$ be the constant involved in Lemma 4.1, and let $T$ and $T^{\prime}$ be two transducers realizing the rational relations $\left\{(\text{\small$\#$}^{n},\text{\small$\#$}^{kn})\;|\;n\in\mathbb{N}\right\}$ and $\left\{(\text{\small$\#$}^{n},\text{\small$\#$}^{m})\;|\;n\in\mathbb{N},\ m\in[1,n]\right\}$ respectively. 
For all $a,b,c\in\Sigma$ and $u\in\Sigma^{*}$, $H$ has edges:
$$\begin{array}{lrcll}
\forall n\in\mathbb{N}, & u\,|\,\#^{n} & \overset{a}{\longrightarrow} & ua\,|\,T\circ T(\#^{n}) & \text{(Type 1)}\\
\forall n\in\mathbb{N}, & bu\,|\,\#^{n} & \overset{a}{\longrightarrow} & ua\,|\,T\circ T^{\prime}\circ T_{b}(\#^{n}) & \text{(Type 2)}\\
\forall n\in\mathbb{N}, & bcu\,|\,\#^{n} & \overset{a}{\longrightarrow} & ua\,|\,T^{\prime}\circ T_{b}\circ T_{c}(\#^{n}) & \text{(Type 3)}\\
\forall w\in\left(\Gamma\setminus\{\#\}\right)^{*}, & bcu\,|\,w & \overset{a}{\longrightarrow} & ua\,|\,T_{b}\circ T_{c}(w) & \text{(Type 4)}\\
\forall w\in\left(\Gamma\setminus\{\#\}\right)^{*}, & b\,|\,w & \overset{a}{\longrightarrow} & T_{b}\circ T_{a}(w) & \text{(Type 5)}\\
 & |\,\# & \overset{a}{\longrightarrow} & T\circ T^{\prime}\circ T_{a}(\#) & \text{(Type 6)}
\end{array}$$
The graph $H$ is clearly rational and of finite out-degree. We take $i=|\text{\small$\#$}$ as initial vertex. Remark that in $H$ an edge of type 2 or 3 cannot be followed by edges of type 1, 2 or 3, and at most one edge of type 2 or 3 and of type 5 or 6 can be applied. Moreover, an edge of type 1 increases the length of the left part of the word by one, and an edge of type 4 decreases it by one. Also, in any accepting path, the last edge is of type 5 or 6. Figure 9 illustrates the structure of the obtained graph. It is technical but straightforward to show a correspondence between accepting paths in $H$ and $R$, and to conclude that $L(R,\text{\small$\#$}^{*},F)=L(H,\{i\},F)$. ∎ From Proposition 3.1 and Proposition 4.1, we deduce that the rational graphs of finite out-degree with one initial vertex accept all context-sensitive languages. This result was proved in [MS01] using the Penttonen normal form of context-sensitive grammars [Pen74]. {thm} The path languages of rational graphs of finite out-degree from a unique initial vertex to a rational set of final vertices are the context-sensitive languages. 4.2. Synchronized graphs of finite out-degree with one initial vertex We now consider the languages of synchronized graphs of finite out-degree with one initial vertex. First, we characterize them as the languages recognized by tiling systems with square pictures (i.e. for which there exists $c\in\mathbb{N}$ such that for every word $w\in{L}(S)$, there exists an $(n,m)$-picture in $P(S)$ with $n\leq cm$ and with frontier $w$). A slight adaptation of the construction of Proposition 4.1 gives the first inclusion. {prop} Let $S=(\Gamma,\Sigma,\#,\Delta)$ be a tiling system with square pictures. There exists a synchronized rational graph of finite degree accepting ${L}(S)$ from one initial vertex. Proof. Let $G=(T_{a})_{a\in\Sigma}$ be the synchronized graph obtained from $S$ in Proposition 3.2. 
In the construction from the proof of Proposition 4.1, if we replace the transducer $T$ by a transducer realizing the synchronized relation $\{(\#^{n},\#^{n+c})\;|\;n\in\mathbb{N}\}$, we obtain a synchronized graph $H$, a vertex $i$ and a set $F$ such that ${L}(H,i,F)={L}({S})$. ∎ Before proceeding with the converse, we state a result similar to Lemma 4.1 for synchronized graphs of finite out-degree: when recognizing a word $w$ from a unique initial vertex $i$, the vertices involved have length at most linear in the size of $w$. {lem} For any synchronized rational graph $G$ of finite out-degree with vertices in $\Gamma^{*}$, for every vertex $i$ and every rational set $F$, there exists a constant $k$ such that for all $w$ in $L(G,\{i\},F)$, there exists a path from $i$ to some $f\in F$, labeled by $w$, and with vertices of size at most $k\cdot|w|$. Proof. It follows from the definition of synchronized transducers that for every synchronized transducer $T$ of finite out-degree there exists $c\in\mathbb{N}$ such that $(x,y)\in T$ implies $|y|\leq|x|+c$ (see [Sak03] for a proof of this result). We take $k$ to be the maximum of these constants over the set of transducers defining $G$. The result follows by a straightforward induction on the size of $w$. ∎ The converse inclusion is obtained by remarking that composing the constructions of Proposition 3.1 and Proposition 3.2 gives a tiling system with square pictures when applied to a synchronized graph of finite out-degree. {prop} Let $G=(T_{a})_{a\in\Sigma}$ be a synchronized graph of finite out-degree. For every initial vertex $i$ and set of final vertices $F$, there exists a tiling system ${S}$ with square pictures such that ${L}({S})={L}(G,\{i\},F)$. Proof. Let $G^{\prime}$, $I^{\prime}$ and $F^{\prime}$ be the synchronous graph and the rational sets of initial and final vertices obtained by applying the constructions of Proposition 3.1 to $G$, $\{i\}$ and $F$. It is easy to show that for every word $w\in{L}(G^{\prime},I^{\prime},F^{\prime})$, there exist $i^{\prime}\in I^{\prime}$ and $f^{\prime}\in F^{\prime}$ such that $i^{\prime}\overset{w}{\Longrightarrow}f^{\prime}$ with $|i^{\prime}|=|f^{\prime}|\leq k|w|$, where $k$ is the constant of Lemma 4.2 for $G$. We conclude by Proposition 3.2, which states the existence of a tiling system $S$ such that ${L}(S)={L}(G^{\prime},I^{\prime},F^{\prime})$. By Remark 3.2, $S$ is a tiling system with square pictures. ∎ Putting together the two previous propositions, and using the simulation result from Theorem 2.4, we obtain the following theorem. {thm} The languages accepted by synchronized graphs of finite out-degree from a unique vertex to a rational set of vertices are the context-sensitive languages recognized by non-deterministic linearly bounded machines with a linear number of head reversals. We conjecture that this class is strictly contained in the context-sensitive languages. However, few separation results exist for complexity classes defined by time and space restrictions (see for example [vM04]). In particular, the diagonalization techniques (see [For00]) used to prove that the polynomial time hierarchy (with no space restriction) is strict do not apply for lack of a suitable notion of universal LBM. 4.3. Bounding the out-degree It is natural to wonder if the rational graphs still accept the context-sensitive languages when considering bounded out-degree. 
This is a difficult question, to which we only provide a partial answer here, concerning synchronized graphs of bounded out-degree. It follows from Lemma 4.2 that the vertices used to accept a word $w$ in a synchronized rational graph have length at most linear in the length of $w$ and can therefore be stored on the tape of an LBM. Moreover, if the graph is deterministic, we can construct a deterministic LBM accepting its language. {prop} The language accepted by a deterministic synchronized graph from a unique initial vertex is deterministic context-sensitive. Proof. Let $G=(T_{a})_{a\in\Sigma}$ be a deterministic synchronized graph over $\Gamma$, $i$ a vertex and $F$ a rational set of vertices. We define a deterministic LBM $M$ accepting ${L}(G,\{i\},F)$. On input $w=a_{1}\ldots a_{n}$, $M$ starts by writing $i$ on its tape. It then successively applies $T_{a_{1}},\ldots,T_{a_{n}}$ to the tape content. If the image of the current tape content by one of these transducers is not defined, the machine rejects. Otherwise, it checks whether the final tape content represents a vertex which belongs to $F$. We now detail how the machine $M$ can apply one of the transducers $T$ of $G$ to a word $x$ in a deterministic manner. As $T$ has a finite image, we can assume without loss of generality that $T=(\Gamma,Q,q_{0},Q_{F},\delta)$ is in real-time normal form: $\delta\subset Q\times\Gamma\times\Gamma^{*}\times Q$ (see for instance [Ber79] for a presentation of this result). The machine enumerates all paths in $T$ of length less than $c|x|$ in lexicographic order, where $c$ is the constant associated to $G$ in Lemma 4.2. For each such path $\rho$, it checks if it is an accepting path for input $x$, and in that case replaces $x$ by the output of $\rho$. The space used by $M$ when starting with a word $w$ is bounded by $(2c+1)|w|$. Moreover, if $M$ accepts $w$, then there exists a path from $i$ to a vertex of $F$ in $G$ labeled by $w$. Conversely, if $w$ belongs to ${L}(G,\{i\},F)$ then, by Lemma 4.2, there exists a path in $G$ from $i$ to $F$ with vertices of length at most $c|w|$, and by construction $M$ accepts $w$. Hence, $M$ is a deterministic linearly bounded Turing machine accepting ${L}(G,\{i\},F)$. ∎ {rem} The result of Proposition 4.3 extends to any deterministic rational graph satisfying the property expressed by Lemma 4.2. The previous result can be extended to synchronized graphs of bounded out-degree thanks to a uniformization result by Weber. First observe that a rational graph is of out-degree bounded by some constant $k$ if and only if it is defined by transducers which associate at most $k$ distinct images to any input word. The relations realized by these transducers are called $k$-valued rational relations. {prop} [[Web96]] For any $k$-valued rational relation $R$, there exist $k$ functional rational relations $F_{1},\ldots,F_{k}$ such that $R=\bigcup_{i\in[1,k]}F_{i}$. Note that even if $R$ is a synchronized relation, the $F_{i}$’s are not necessarily synchronized. However, they still satisfy the inequality $|y|\leq|x|+c$ for all $(x,y)\in F_{i}$. To any synchronized graph $G$ with an out-degree bounded by $k$ defined by a set of transducers $(T_{a})_{a\in\Sigma}$, we associate the deterministic rational graph $H$ defined by $(F_{a_{i}})_{a\in\Sigma,i\in[1,k]}$ where for all $a\in\Sigma$, $(F_{a_{i}})_{i\in[1,k]}$ is the set of rational functions associated to $T_{a}$ by Proposition 4.3. 
According to Proposition 4.3 and to Remark 4.3, ${L}(H,\{i\},F)$ is a deterministic context-sensitive language. Let $\pi$ be the alphabetical projection defined by $\pi(a_{i})=a$ for all $a\in\Sigma$ and $i\in[1,k]$; it is straightforward to establish that $\pi\left({L}(H,\{i\},F)\right)={L}(G,\{i\},F)$. As deterministic context-sensitive languages are closed under alphabetical projections, ${L}(G,\{i\},F)$ is a deterministic context-sensitive language. {thm} The language accepted by a synchronized graph of bounded out-degree from a unique initial vertex is deterministic context-sensitive. The converse result is not clear, for reasons similar to those presented in the previous section for synchronized graphs of finite degree. A precise characterization of the family of languages accepted by synchronized rational graphs of bounded degree would be interesting. 5. Notions of determinism In this last part of the section on rational graphs, we investigate families of graphs which accept the deterministic context-sensitive languages. First of all, we examine the family yielded by the previous constructions when applied to deterministic languages. Then, we propose a global property over sets of transducers characterizing a sub-family of rational graphs whose languages are precisely the deterministic context-sensitive languages. 5.1. Unambiguous context-sensitive languages When applying the construction of Proposition 3.2 to a deterministic tiling system ${S}$, one obtains a synchronous rational graph $G$ (which is non-deterministic in general) and two rational sets of vertices $I$ and $F$ such that $L(G,I,F)=L({S})$, with the particularity that for every word $w$ in $L({S})$, there is exactly one path labeled by $w$ leading from some vertex in $I$ to a vertex in $F$: $G$ is unambiguous with respect to $I$ and $F$. However, the converse is not guaranteed: given a graph $G$ and two rational sets $I$ and $F$ such that $G$ is unambiguous with respect to $I$ and $F$, we cannot ensure that $L(G,I,F)$ is a deterministic context-sensitive language. Rather, the obtained languages can be accepted by unambiguous linearly bounded machines. This class of languages is called $\mathrm{USPACE}(n)$, and it is not known whether it coincides with either the context-sensitive or deterministic context-sensitive languages. {thm} Let $L$ be a language; the following properties are equivalent: (1) $L$ is an unambiguous context-sensitive language. (2) There exist a rational graph $G$ with unambiguous transducers and two rational sets $I$ and $F$ with respect to which $G$ is unambiguous, such that $L={L}(G,I,F)$. This result only holds if one considers unambiguous transducers, i.e. transducers in which there is at most one accepting path per pair of words. The reason is that ambiguity in the transducers would induce ambiguity in the machine. However, since synchronized transducers can be made unambiguous (cf. Remark 2.2), we can drop this requirement in the case of synchronized graphs. Note that the unambiguity of rational or synchronized graphs with respect to rational sets of vertices is undecidable. However, since any rational function can be realized by an unambiguous transducer [Kob69, Sak03], the language of any deterministic rational graph is, by Theorem 5.1, unambiguous. {cor} The languages of deterministic rational graphs from an initial vertex $i$ to a rational set $F$ of vertices are unambiguous context-sensitive languages. 5.2. 
Globally deterministic sets of transducers We just saw an attempt at characterizing natural families of graphs whose languages are the deterministic context-sensitive languages, which was based on a restriction of previous constructions to the deterministic case, but failed to meet its objective because of a slight nuance between the notions of determinism and unambiguity for tiling systems. First, we naturally consider the class of sequential synchronous automata with an initial set of the form $\{a\}^{*}$, where $a$ is a letter of the vertex alphabet (in other words, a given initial vertex does not code for any information besides its length). It is easy to check that when applying the construction of Proposition 3.2 to one of these automata, we obtain a deterministic tiling system. {prop} The languages of sequential synchronous graphs from $\{a\}^{*}$ are deterministic context-sensitive languages. The converse result seems difficult to prove due to the local nature of the determinism involved in this class. Hence, we consider a global property of the set of transducers characterizing a rational graph, so as to ensure that each accepting path corresponds to the run of a deterministic linearly bounded machine on the corresponding input, or equivalently that each accepting path corresponds to a picture recognized by a deterministic tiling system and whose upper frontier is the path label under consideration. For any rational language $L$, we write $T_{L}$ for the minimal synchronous transducer recognizing the identity relation over $L$. {defi} Let $T$ be a set of synchronous transducers over $\Gamma$. We say $T$ is globally deterministic with respect to two rational languages $I$ and $F\subseteq\Gamma^{*}$ if all transducers in $T$ are deterministic (i.e. whenever $q\overset{a/b}{\longrightarrow}q^{\prime}$ and $q\overset{c/d}{\longrightarrow}q^{\prime\prime}$ with $q^{\prime}\not=q^{\prime\prime}$, then $(a,b)\neq(c,d)$), and if for every pair of transducers $T_{1}\in T\cup\{T_{I}\}$ and $T_{2}\in T\cup\{T_{F}\}$, and every pair of control states $q_{1}\in Q_{T_{1}}$ and $q_{2}\in Q_{T_{2}}$, there is at most one $b$ such that $$q_{1}\overset{a/b}{\underset{T_{1}}{\longrightarrow}}q^{\prime}_{1}\ \land\ q_{2}\overset{b/c}{\underset{T_{2}}{\longrightarrow}}q^{\prime}_{2}\quad\text{ for some }a,c\in\Gamma,\ q^{\prime}_{1}\in Q_{T_{1}},\ q^{\prime}_{2}\in Q_{T_{2}}.$$ Intuitively, this condition states that, whenever a part of the output of one transducer can be read as input by a second transducer, there is only one way to add a letter to this word such that it is still compatible with both transducers. This property of sets of transducers is trivially decidable, since it is sufficient to check the above condition for every pair of control states of transducers in $(T\cup\{T_{I}\})\times(T\cup\{T_{F}\})$. This allows us to capture a sub-family of rational graphs whose languages are the deterministic context-sensitive languages. {thm} Let $L$ be a language; the following two properties are equivalent: (1) $L$ is a deterministic context-sensitive language. (2) There is a synchronous rational graph $G$ and two rational sets $I$ and $F$ such that $L={L}(G,I,F)$ and $G$ is globally deterministic between $I$ and $F$. Proof. Let $G=(T_{a})_{a\in\Sigma}$ be a synchronous rational graph which is globally deterministic between $I$ and $F$. The graph $H=(T^{\prime}_{a})_{a\in\Sigma}$ obtained by applying Lemma 3.2 to $G$ is such that $L(H,i^{*},f^{*})=L(G,I,F)$. 
Moreover, $H$ is globally deterministic between $i^{*}$ and $f^{*}$. We will show that the construction of Proposition 3.2, when applied to the graph $H$ between $i^{*}$ and $f^{*}$, yields a deterministic tiling system. Suppose that this is not the case. Then, by definition of a non-deterministic tiling system, there must be words $u$, $v_{1}$ and $v_{2}$ with $v_{1}\neq v_{2}$ such that the two-row pictures $p_{1}$ and $p_{2}$ with first row $\text{\small$\#$}u\text{\small$\#$}$ and second rows $\text{\small$\#$}v_{1}\text{\small$\#$}$ and $\text{\small$\#$}v_{2}\text{\small$\#$}$ respectively only have tiles in $\Delta$. Since $v_{1}\neq v_{2}$, let $i$ be the smallest index such that $v_{1}(i)\neq v_{2}(i)$. Let $v_{1}(i)=xp$, $v_{2}(i)=x^{\prime}p^{\prime}$. By the construction of Prop. 3.2, there are two transducers $T_{a}$ and $T_{b}$ such that $$q_{a}\overset{y/x}{\underset{T_{a}}{\longrightarrow}}p\ \land\ q_{b}\overset{x/z}{\underset{T_{b}}{\longrightarrow}}q^{\prime}_{b}\ \land\ q_{a}\overset{y/x^{\prime}}{\underset{T_{a}}{\longrightarrow}}p^{\prime}\ \land\ q_{b}\overset{x^{\prime}/z^{\prime}}{\underset{T_{b}}{\longrightarrow}}q^{\prime\prime}_{b}$$ for some symbols $y,z,z^{\prime}\in\Gamma$ and control states $q_{a},q_{b},q^{\prime}_{b}$ and $q^{\prime\prime}_{b}$. As $T_{a}$ is deterministic, if $x$ is equal to $x^{\prime}$, then $p=p^{\prime}$ and $v_{1}(i)=v_{2}(i)$. Hence $x\not=x^{\prime}$, and the above relations contradict the global determinism of $H$. To prove the converse, we introduce yet another family of acceptors for context-sensitive languages, namely cellular automata. A cellular automaton is a tuple $(\Gamma,\Sigma,F,[,],\delta)$ where $\Gamma$ and $\Sigma\subseteq\Gamma$ are the work and input alphabets, $F\subseteq\Gamma$ is a set of accepting letters, $[$ and $]$ are two border symbols not in $\Gamma$, and $\delta$ is a set of 4-tuples over $\Gamma$ called transition rules. These rules induce a transition relation over words of the form $[u]\in[\Gamma^{*}]$: $c^{\prime}=[v]$ is a successor of $c=[u]$ if $|c|=|c^{\prime}|=n$ and for all $i\in[2,n-1]$, $(c(i-1),c(i),c(i+1),c^{\prime}(i))\in\delta$. A word $w$ is accepted if, starting from $[w]$, one can derive a word $[u]$ with $u\in F^{*}$. A cellular automaton is deterministic if for all $A,B,C$ there is at most one $D$ such that $(A,B,C,D)\in\delta$. The equivalence of (deterministic) cellular automata with (deterministic) LBMs or tiling systems is well known. Let $L$ be any deterministic context-sensitive language; there exists a deterministic cellular automaton ${C}=(\Gamma,\Sigma,\bot,\delta,[,])$ recognizing $L$. One can easily build two rational languages $I$ and $F$ and a set of transducers $T$ globally deterministic with respect to $I$ and $F$ such that $L(G,I,F)=L$ where $G$ is the rational graph defined by $T$. The work alphabet of $T$ is $\Gamma^{\prime}=\Sigma\cup\{[,]\}\cup\delta$. The set of control states of transducer $T_{a}\in T$ is $\{q_{0}^{a}\}\cup\{q^{a}_{AB}\mid A,B\in\Gamma\cup\{[\}\}$, where $q_{0}^{a}$ is the unique initial state. 
Its transitions are: $$\displaystyle\forall a,b\in\Sigma,\quad q_{0}^{a}\overset{[/a}{\longrightarrow% }q^{a}_{[a}\quad\text{ and }\quad q_{0}^{a}\overset{b/a}{\longrightarrow}q^{a}% _{ba}$$ $$\displaystyle\forall d_{1}=([,A,B,A^{\prime})\in\delta,\quad q^{a}_{[A}% \overset{[/d_{1}}{\longrightarrow}q^{a}_{[A^{\prime}}$$ $$\displaystyle\forall d_{1}=(A,B,C,B^{\prime}),d_{2}=(B,C,D,C^{\prime})\in% \delta,\quad q^{a}_{BC}\overset{d_{1}/d_{2}}{\longrightarrow}q^{a}_{B^{\prime}% C^{\prime}}$$ The terminal states of $T_{a}$ are $q_{[\bot}$ and $q_{\bot\bot}$. Now let $I=([)^{*}$ and $F=\Sigma R^{*}$ where $R=\{(a,b,],b^{\prime})\in\delta\mid a,b,b^{\prime}\in\Gamma\}$. By construction and since ${C}$ is deterministic, $T$ is globally deterministic with respect to $I$ and $F$. One can easily verify that $L(G,I,F)=L$. ∎ 6. Conclusion This work is a summary of new and existing results concerning rational graphs and their relation to context-sensitive languages. Its main contributions are, first, to show the language equivalence between rational graphs and synchronous rational graphs, and second to establish a tight connection between synchronous rational graphs and finite tiling systems. Since tiling systems accept precisely the context-sensitive languages, this yields a new and simpler proof that the languages of rational graphs coincide with this family. Thanks to this, we studied the impact of structural restrictions on the obtained family of languages, in particular when considering finite or bounded degree and a single initial vertex. This approach also enables us to consider the case of deterministic languages. We show how one can define sub-families of rational graphs whose languages are precisely the unambiguous or deterministic context-sensitive languages. However, due to their syntactical nature, these results brings little new insight as to the difficult question of the strictness of inclusions between deterministic, unambiguous and general context-sensitive languages. This presentation gives rise to a few interesting open questions. A thorough study of graphs of bounded degree seems necessary, albeit difficult. More generally, the question of knowing whether any “tractable” family of graphs accepting the context-sensitive languages exists remains. We saw that synchronous graphs are not a good option since they lose all their expressive power when only a finite number of initial vertices are considered. Synchronized graphs form an interesting class, especially since their first order theory is decidable, but it seems reasonable to believe that they require infinite out-degree to accept all context-sensitive languages. Another question is to compare the rational graphs with the transition graphs of linearly bounded machines [KP99, Pay00]. This last point is addressed to some extent in [CM05], where it is shown that all bounded degree rational graphs are isomorphic to transition graphs of linearly bounded machines. Acknowledgement. The authors would like to thank Kamal Lodaya for his comments, and for pointing out the interest of using tiling systems, and Didier Caucal for his general advice and support. References [Ber79] J. Berstel. Transductions and Context-Free Languages. Teubner Verlag, 1979. [BG00] A. Blumensath and E. Grädel. Automatic structures. In Proceedings of the 15th IEEE Symposium on Logic in Computer Science (LICS 2000), pages 51–62. IEEE, 2000. [Cau96] D. Caucal. On infinite transition graphs having a decidable monadic theory. In ICALP, pages 194–205, 1996. [Cau03a] D. 
Caucal. On infinite transition graphs having a decidable monadic theory. Theoretical Computer Science, 290:79–115, 2003. [Cau03b] D. Caucal. On the transition graphs of Turing machines. Theoretical Computer Science, 296:195–223, 2003. [Cho59] N. Chomsky. On certain formal properties of grammars. Information and Control, 2:137–167, 1959. [CK02] D. Caucal and T. Knapik. A Chomsky-like hierarchy of infinite graphs. In Mathematical Foundations of Computer Science 2002, 27th International Symposium (MFCS 2002), volume 2420 of Lecture Notes in Computer Science, pages 177–187, 2002. [CM05] A. Carayol and A. Meyer. Linearly bounded infinite graphs. In Mathematical Foundations of Computer Science 2005, 30th International Symposium (MFCS 2005), volume 3618 of Lecture Notes in Computer Science, pages 180–191. Springer Verlag, 2005. Long version to appear in Acta Informatica. [EM65] C. Elgot and J. Mezei. On relations defined by finite automata. IBM Journal of Research and Development, 9:47–68, 1965. [For00] L. Fortnow. Diagonalization. Bulletin of the European Association for Theoretical Computer Science, 71:102–112, 2000. [GR96] D. Giammarresi and A. Restivo. Handbook of Formal Languages, volume 3, chapter Two-dimensional languages. Springer Verlag, 1996. [HU79] J. Hopcroft and J. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, 1979. [Kob69] K. Kobayashi. Classification of formal languages by functional binary transductions. Information and Control, 15:95–109, 1969. [KP99] T. Knapik and É. Payet. Synchronized product of linear bounded machines. In Fundamentals of Computation Theory, 12th International Symposium (FCT 1999), volume 1684 of Lecture Notes in Computer Science, pages 362–373, 1999. [Kur64] S. Kuroda. Classes of languages and linear-bounded automata. Information and Control, 7(2):207–223, 1964. [LS97a] M. Latteux and D. Simplot. Context-sensitive string languages and recognizable picture languages. Information and Computation, 138(2):160–169, 1997. [LS97b] M. Latteux and D. Simplot. Recognizable picture languages and domino tiling. Theoretical Computer Science, 178(1-2):275–283, 1997. [Mor00] C. Morvan. On rational graphs. In Foundations of Software Science and Computation Structures, Third International Conference (FoSSaCS 2000), volume 1784 of Lecture Notes in Computer Science, pages 252–266, 2000. [Mor01] C. Morvan. Les graphes rationnels. PhD thesis, IFSIC, Université de Rennes 1, 2001. [MR04] C. Morvan and C. Rispal. Families of automata characterizing context-sensitive languages. to appear in Acta Informatica, 2004. [MS01] C. Morvan and C. Stirling. Rational graphs trace context-sensitive languages. In Mathematical Foundations of Computer Science 2001, 26th International Symposium (MFCS 2001), volume 2136 of Lecture Notes in Computer Science, pages 548–559, 2001. [Pay00] E. Payet. Thue Specifications, Infinite Graphs and Synchronized Product. PhD thesis, Université de la Réunion, 2000. [Pen74] M. Penttonen. One-sided and two-sided context in formal grammars. Information and Control, 25(4):371–392, 1974. [Pri00] Ch. Prieur. Fonctions rationnelles de mots infinis et continuité. PhD thesis, Université de Paris 7, 2000. [Ris02] C. Rispal. The synchronized graphs trace the context-sensitive languages. In 4th International Workshop on Verification of Infinite-State Systems (Infinity 2002), volume 68 of Electronic Notes in Theoretical Computer Science, 2002. [Sak03] J. Sakarovitch. Éléments de théorie des automates. Éditions Vuibert, 2003. [Tho01] W. 
Thomas. A short introduction to infinite automata. In Developments in Language Theory, 5th International Conference (DLT 2001), volume 2295 of Lecture Notes in Computer Science, pages 130–144, 2001. [vM04] D. van Melkebeek. Time-space lower bounds for NP-complete problems. Current Trends in Theoretical Computer Science, pages 265–291, 2004. [Web96] A. Weber. Decomposing a k-valued transducer into k unambiguous ones. Informatique Théorique et Applications, 30(5):379–413, 1996.
Data and Incentives
(We are grateful to Eduardo Azevedo, Dirk Bergemann, Alessandro Bonatti, Sylvain Chassang, Yash Deshpande, Ben Golub, Yizhou Jin, Navin Kartik, Rishabh Kirpalani, Alessandro Lizzeri, Steven Matthews, Xiaosheng Mu, Larry Samuelson, Juuso Toikka, and Weijie Zhong for useful conversations, and to National Science Foundation Grant SES-1851629 for financial support. We thank Changhwa Lee for valuable research assistance on this project.)
Annie Liang (Department of Economics, University of Pennsylvania) and Erik Madsen (Department of Economics, New York University)
(April 26, 2020)
Abstract
Many firms, such as banks and insurers, condition their level of service on a consumer’s perceived “quality,” for instance their creditworthiness. Increasingly, firms have access to consumer segmentations derived from auxiliary data on behavior, and can link outcomes across individuals in a segment for prediction. How does this practice affect consumer incentives to exert (socially-valuable) effort, e.g. to repay loans? We show that the impact of an identified linkage on behavior and welfare depends crucially on the structure of the linkage—namely, whether the linkage reflects quality (via correlations in types) or a shared circumstance (via common shocks to observed outcomes). JEL classification: D62, D83, D40.
1 Introduction
Many important economic transactions involve provision of a service whose profitability depends on an unobserved characteristic, or “quality,” of the recipient. For instance, the profitability of a car insurance policy depends on the insuree’s driving ability, while the profitability of issuing a credit card or personal loan depends on the borrower’s creditworthiness. While a recipient’s quality is not directly observable, service providers can use data to help forecast it, with these forecasts used to set the terms of service—e.g. an insurance premium or interest rate. This paper is about the interaction between two kinds of data which inform these forecasts: traditional past outcome data—e.g. insurance claims rates or credit card repayment—and novel consumer segment data identifying “similar” individuals based on aggregated online activity and other digitally tracked behaviors. (See Appendix A for a list of actual consumer segments compiled by data brokers.) To understand the interaction between these two kinds of data, consider two prototypical consumers, Alice and Bob. In the absence of any identified linkages between these individuals, each recipient’s past outcomes are useful only for predicting his or her own quality: e.g. if Alice was in an automobile accident last claims cycle, that event is informative about her driving ability, but not about Bob’s. If, on the other hand, the provider learns that Alice and Bob both enjoy extreme sports, then Alice’s accident from the last claims cycle may be informative about Bob’s future accident risk as well. 
Our goal is to build a general model of identified “data linkages” across individuals, and to characterize how these linkages reshape incentives for productive effort—for instance, driving more carefully in an auto insurance context, or exercising financial prudence in a consumer credit market. (Our assumption that effort is socially valuable distinguishes our setting from Frankel and Kartik (2019) and Ball (2019), who model effort as a “gaming” device that degrades signal quality but yields no social value.) We then use the model to shed light on the welfare implications of these emerging practices. (A number of instances of organizations bolstering prediction with novel datasets have already come to light. In 2008, the subprime lender CompuCredit was revealed to have reduced credit lines based on visits to various “red flag” establishments, including marriage counselors and nightclubs (see https://www.bloomberg.com/news/articles/2008-06-18/your-lifestyle-may-hurt-your-credit). Several health insurance companies have reportedly purchased datasets on purchasing and consumption habits from data brokers like LexisNexis to help predict anticipated healthcare costs (see https://www.pbs.org/newshour/health/why-health-insurers-track-when-you-buy-plus-size-clothes-or-binge-watch-tv). The car insurance company Allstate recently filed a patent for adjusting insurance rates based on routes and historical accident patterns (see https://www.usatoday.com/story/money/personalfinance/2016/11/14/route-risk-patent--car-insurance-rate-price/93287372). And perhaps most strikingly, China’s “social credit” system determines whether an individual is a good citizen based on detailed attributes ranging from the size of their social network to how often they play video games (see https://foreignpolicy.com/2018/04/03/life-inside-chinas-social-credit-laboratory).) The main takeaway of our analysis is that the structure of an identified linkage across individuals matters for predicting behavior and welfare—in particular, linkages identifying correlations across persistent traits (i.e. intrinsic quality) have very different consequences than do ones identifying correlations across transient shocks (i.e. shared circumstances). Thus, regulations which treat all “big data” homogeneously are too crude to achieve socially optimal outcomes, and should be tailored based on the role that data plays in forecasting. Our framework is a multiple-agent version of the classic career concerns model (Holmström, 1999). Each agent has an unknown type (e.g. creditworthiness), which a principal (a bank) would like to predict. Agents choose whether to opt-in to interaction with the principal (sign up for a credit card), and any agent opting-in receives a transfer from the principal. The principal observes an outcome (the agent’s past repayment behavior) from each agent who opts in, which is informative about the agent’s underlying type, but also perhaps about the types of others in his segment. The agent can manipulate his own outcome via costly effort (exercising financial prudence), with the goal of improving his perceived type and accruing a reputational payoff. (The desire of an agent to be perceived as a high type contrasts with models of price discrimination, in which agents prefer to be perceived as a low type and receive a lower price.) If agents were unrelated, each agent’s type would be forecast on the basis of their own past outcome alone. 
We consider an environment in which a data linkage identifies correlations between the outcomes of agents in the population, so that each agent’s perceived quality is determined by the outcomes of other participating agents, in addition to the agent’s own outcome. In general the correlation structure between outcomes and types can be quite complex. Moreover, consumers may have uncertainty about which segments are used by the provider, or precisely what correlations are implied by those segments. But we show that this potentially complex structure boils down to two forces captured by two distinct kinds of linkages. First, some linkages identify agents with correlated types—we call these quality linkages. This linkage may be a lifestyle pattern (e.g. “Frequent Flier,” “Fitness Enthusiast”) or personal characteristic (e.g. “Working-class Mom,” “Spanish Speaker”). Second, some linkages identify agents who have encountered similar shocks to their outcomes. We refer to these as circumstance linkages. For example, drivers who commute on the same roads to work are exposed to similar variations in local road conditions, e.g. construction or bad weather. Quality and circumstance linkages turn out to have opposing effects on incentives for effort, with quality linkages depressing effort while circumstance linkages encourage it. This is because under quality linkages, outcomes are substitutes for inferring an agent’s type, while under circumstance linkages, they are complements. Consider first the impact of a quality linkage. In this case, observation of outcomes from other agents in the segment helps the principal to learn an average quality for the segment, reducing the marginal informativeness of a given agent’s outcome about his type. Exerting effort to distort one’s outcome thus has a smaller influence on the principal’s perception about one’s type. In contrast, under a circumstance linkage, observation of outcomes for other agents in the segment is informative about the size of the average shock to outcomes. Each agent’s outcome therefore becomes more informative about his type—once debiased by the estimated common shock—increasing the value of exerting effort to improve one’s outcome. We establish these effort comparative statics in a model with very general type and noise distributions, imposing only a standard log-concavity condition ensuring that posterior estimates of latent variables are monotone in signal realizations. Reasoning about incentives for effort distortion in such an environment is challenging, because outside special cases the principal’s posterior expectation is a complex nonlinear function of signal realizations. A technical contribution of our paper is the development of techniques for establishing comparative statics of marginal incentives for effort as the number of correlated signals grows. (Our comparative statics can be viewed as adapting the results of Dewatripont et al. (1999) to an additive signal structure and generalizing them to many signals, although we derive our results independently using different techniques.) The effort comparative statics outlined above have direct implications for consumer payoffs from participation: In our model, as in Holmström (1999), the principal correctly infers the equilibrium level of effort and can de-bias observed outcomes. (Frankel and Kartik (2019) introduces uncertainty in the ability of agents to manipulate outcomes, so that the principal cannot perfectly de-bias the impact of effort.) 
In such settings a reduction in incentives for effort improves the precision of forecasts, creating a tradeoff for the principal when effort and precise forecasts are both valuable. Since effort is costly, higher equilibrium effort necessarily means lower payoffs for agents. (This need not imply lower social welfare, as we discuss below.) Thus under a quality linkage, agent participation decisions are strategic complements: Participation by one agent improves the payoffs to participation for other agents by decreasing equilibrium effort. We show existence of a unique equilibrium in which all agents choose to opt-in to interaction with the principal. In contrast, under a circumstance linkage, participation creates a negative externality on other agents by increasing equilibrium effort. For small populations, all agents opt-in in equilibrium, while for large populations, agents must mix over entry in the unique symmetric equilibrium. We next use these equilibrium characterizations to analyze the impact of data sharing on consumer and social welfare. As a benchmark, we consider a “no linkages” environment in which the principal is permitted to use only an agent’s own past data to predict his type. (This may correspond either to absence of consumer segment data, or to an environment in which use of consumer segment data has been prohibited by regulation.) We compare equilibrium outcomes against this benchmark, under different assumptions about how transfers are determined. We first suppose transfers are held fixed when a linkage is introduced, and that the transfer is generous enough that all agents would participate in the absence of a linkage. This assumption reflects regulated environments in which service providers can’t discriminate toward or against consumers solely on the basis of a data linkage. When agents share a quality linkage, aggregation of data across agents leads to a reduction in both consumer and social welfare. In contrast, when agents share a circumstance linkage, consumer welfare declines while social welfare increases for small populations. These results suggest that the type of data being used to link agents is a crucial determinant of the welfare effect of data linkages. We next suppose that the principal is a monopolist who freely sets the transfer to maximize profits, potentially adjusting it in response to a linkage. As agents possess no private information about their type, they are always held to their outside option, and the principal extracts all surplus whether or not a linkage is present. Thus in this environment, the principal chooses a transfer which maximizes social welfare. We find that welfare rises under a circumstance linkage and falls under a quality linkage for any population size. Additionally, we show that while full participation is ensured under a circumstance linkage, the principal may optimally induce only partial entry under a quality linkage. 
Finally, we consider an environment in which multiple principals compete to serve agents, and use the results to comment on a current policy debate regarding whether firms should have proprietary ownership of their data, or if this data should be shared across an industry (as for example recently recommended by the European Commission). (As reported in European Commission (2020): “[T]he Commission will explore the need for legislative action on issues that affect relations between actors in the data-agile economy to provide incentives for horizontal data sharing across sectors.” Such action might “support business-to-business data sharing, in particular addressing issues related to usage rights for co-generated data…typically laid down in private contracts. The Commission will also seek to identify and address any undue existing hurdles hindering data sharing and to clarify rules for the responsible use of data (such as legal liability). The general principle shall be to facilitate voluntary data sharing.”) To model competition between principals, we extend our model by having several principals each set a transfer simultaneously, after which agents choose which firm (if any) to participate with. We consider two different data regimes—under proprietary data, an agent’s reputational payoff is determined exclusively based on the outcomes of other agents participating at their chosen firm, while under data sharing, the outcomes of all agents are shared across firms for use in forecasting types. We show that regardless of whether agents are linked by quality or circumstance, data sharing leads to an increase in consumer welfare. Market forces play a key role in this result: in particular, if firms were not able to freely choose transfers, then the welfare implications of data sharing would depend on the nature of the linkage. 1.1 Related Literature Our paper contributes to an emerging literature regarding the welfare consequences of data markets and algorithmic scoring. This literature has tackled several important social questions, such as whether predictive algorithms discriminate (Chouldechova, 2017; Kleinberg et al., 2017); how to protect consumers from loss of privacy (Acquisti et al., 2015; Dwork and Roth, 2014; Fainmesser et al., 2019; Eilat et al., 2019); how to price data (Bergemann et al., 2018; Agarwal et al., 2019); whether seller or advertiser access to big data harms consumers (Jullien et al., 2018; Gomes and Pavan, 2019); and how to aggregate big data into market segments or consumer scores (Ichihashi, 2019; Bonatti and Cisternas, 2019; Yang, 2019; Hidir and Vellodi, 2019; Elliott and Galeotti, 2019). There is additionally a growing literature about strategic interactions with machine learning algorithms: see Eliaz and Spiegler (2018) on the incentives to truthfully report characteristics to a machine learning algorithm, and Olea et al. (2018) on how economic markets select certain models for making predictions over others. In particular, Acemoglu et al. (2019) and Bergemann et al. (2019) also consider externalities created by social data. Different from us, these papers study data sharing in environments where consumers may sell their data. In Bergemann et al. (2019), one agent’s information improves a firm’s ability to price-discriminate against other agents, which can decrease consumer surplus. In Acemoglu et al. (2019), agents value privacy, and thus information collected about one agent imposes a direct negative externality on other agents when types are correlated. 
The externality of interest in the present paper is how information provided by other agents reshapes incentives to exert costly effort. As we show, this externality can be positive or negative—in particular, when agents are connected by a quality linkage, their equilibrium payoffs turn out to be increasing in other agents’ participation. At a theoretical level, our paper builds on the career concerns model of Holmström (1999), the classic framework for analyzing the role of reputation-building in motivating effort. The interaction of this incentive effect with informational externalities from other agents’ behavior is the main focus of our analysis. The literature following Holmström (1999) has largely focused on signal extraction about a single agent’s type in dynamic settings,101010A small set of papers, e.g. Auriol et al. (2002), study career concerns in a multiple-agent setting. These papers typically look at effort externalities instead of informational externalities. One exception is Meyer and Vickers (1997), which considers the impact of adding an additional agent with correlated outcomes in the context of a ratchet effect model with incentive contracts. while we are interested in the externalities of social data in a multiple-agent setting. Our paper is most closely related to Dewatripont et al. (1999), which studies how auxiliary data impacts agents’ incentives for effort. That paper considers the externality of a single exogenous auxiliary signal, while we endogenize the auxiliary data as information from other players, who strategically decide whether or not to provide data. Thus, the number of auxiliary signals is determined in equilibrium, and may also be uncertain; this requires comparison of equilibrium actions across various information structures. Our circumstance linkage model, in which the principal uses outcomes from some agents to help de-bias the outcomes of other agents, is reminiscent of the team production and tournament literatures (Holmström, 1982; Lazear and Rosen, 1981; Green and Stokey, 1983; Shleifer, 1985). In these papers, the observable output of each agent depends both on the agent’s effort as well as on a common shock experienced by all agents. In such environments the relative output of an agent is a more precise signal of effort than the absolute output. Thus the principal may be able to extract more effort through rewarding good relative outcomes rather than good absolute outcomes. Although we do not consider a contracting environment here, similar forces in our model permit the principal to extract more effort from agents when their outcomes are related by correlated shocks. Finally, our paper contributes to work on strategic manipulation of information. Recent papers in this category include: Frankel and Kartik (2020) and Ball (2019), which characterize the degree to which a principal with commitment power should link his decision to a manipulated signal about the agent’s type; Hu et al. (2019), which shows that heterogeneous manipulation costs across different social groups can lead to inequities in outcomes; and Georgiadis and Powell (2019), which studies optimal information acquisition for a designer setting a wage contract. Our paper contributes to this literature by exploring the role of correlations across data in an individual’s incentives to manipulate an observed outcome. 2 Model A single principal interacts with $N<\infty$ agents, who have been identified as belonging to a common population segment. 
Each agent $i$ has a type $\theta_{i}\in\mathbb{R}$, which is unknown to all parties (including agent $i$) and is commonly believed to be drawn from the distribution $F_{\theta}$ with mean $\mu>0$ and finite variance $\sigma_{\theta}^{2}>0$.111111None of our results would change if we gave the principal access to additional privately observed covariates for use in forecasting. Specifically, we could allow $\theta_{i}$ to be decomposable as $\theta_{i}=\theta^{0}_{i}+\Delta\theta_{i},$ where $\theta^{0}_{i}$ is commonly unobserved with mean 0 while $\Delta\theta_{i}$ is an idiosyncratic type shifter, independent of $\theta^{0}_{i}$ with mean $\mu$, which is privately observed by the principal. Types are drawn symmetrically but may not be independent across agents. As in the classic career concerns model of Holmström (1999), each agent’s payoffs are increasing in the principal’s perception of his type, and the agent can exert costly effort to influence an outcome realization that the principal observes (Section 2.2). Different from Holmström (1999), we introduce a preliminary stage at which the agent first chooses whether to opt-in or out of interaction with the principal (Section 2.1), and—most importantly—we allow the principal to aggregate the outcomes of multiple agents for prediction (Section 2.3). The model unfolds over three periods, with opt-in/out decisions made in period $t=0,$ effort exerted in period $t=1,$ and forecasts of each agent’s type based on outcomes updated in period $t=2.$ 2.1 Period 0—Opt-In/Opt-Out At period $t=0$, each agent $i$ first chooses whether to opt-in or opt-out of an interaction with the principal, where this decision is observed by the principal, but not by other agents. Opting out yields a payoff that we normalize to zero. The set of agents who opt-in is denoted $\mathscr{I}_{\text{opt-in}}\subseteq\{1,\dots,N\}$. 2.2 Period 1—Choice of Costly Effort to Influence Outcome In period $t=1$, each agent $i\in\mathscr{I}_{\text{opt-in}}$ privately chooses a costly effort level $a_{i}\in\mathbb{R}_{+}$ to influence an observable outcome. The outcome, $S_{i}$, is related to the agent’s type and effort level via $$S_{i}=\theta_{i}+a_{i}+\varepsilon_{i},$$ where $\varepsilon_{i}\sim F_{\varepsilon}$ is a noise shock with mean $\mathbb{E}[\varepsilon_{i}]=0$ and finite variance $\mathbb{E}[\varepsilon_{i}^{2}]=\sigma_{\varepsilon}^{2}>0$. Noise shocks are drawn symmetrically but not necessarily independently across agents. We describe the correlation structure across shocks in Section 2.3. The agent’s payoff in this period is $$R-C(a_{i})$$ where $R\in\mathbb{R}$ is a monetary opt-in reward from the principal (possibly negative), and $C(a_{i})$ is the cost to choosing effort $a_{i}$. We suppose that the cost function is twice continuously differentiable and satisfies $\lim_{a_{i}\rightarrow\infty}C^{\prime}(a_{i})>1$, $C(0)=C^{\prime}(0)=0$, and $C^{\prime\prime}(a_{i})>0$ for all $a_{i}$. 2.3 Period 2—Principal’s Forecast of Agent’s Type In a second (and final) period, each agent $i\in\mathscr{I}_{\text{opt-in}}$ receives the principal’s forecast of the agent’s type $\theta_{i}$. 
The principal’s forecast is based on the observed outcomes of all agents who have opted-in; thus, agent $i$’s payoff in the second period is $$\mathbb{E}\left[\theta_{i}\mid S_{j},j\in\mathscr{I}_{\text{opt-in}}\right].$$ (2.1) Note that since each agent’s effort choice is private, the forecast is based on a conjectured effort choice, which in equilibrium is simply the equilibrium effort level. This payoff is a stand-in for the reputational consequences of the agent’s period-1 outcome.121212Formally, one could view this payoff as representing the agent’s payoff in a second-period market where multiple firms compete to serve the agent. Note that the agent’s payoff is increasing in the principal’s forecast of their type, reflecting the role of $\theta_{i}$ as a quality variable determining average outcomes. The quantity in (2.1) depends on the (random) realizations of output; thus, the agent optimizes over his expectation of (2.1). We will discuss this iterated expectation of $\theta_{i}$ in detail in Section 3.1. Finally, the agent’s total payoff is the sum of his expected payoffs across the two periods. This timeline is summarized in Figure 1. So far we have not described how agent outcomes are correlated, a specification which is crucial for computing the posterior expectation in (2.1). Our main analysis contrasts two kinds of relationships across agents, one in which agents within a segment have related qualities, and another in which they share a related circumstance: Quality Linkage. Suppose first that agents within the segment have correlated qualities. We model this by decomposing $\theta_{i}$ as $$\theta_{i}=\overline{\theta}+\theta^{\perp}_{i},$$ where $\overline{\theta}\sim F_{\overline{\theta}}$ is a common component of the type and $\theta^{\perp}_{i}\sim F_{\theta^{\perp}}$ is a personal or idiosyncratic component, with each $\theta^{\perp}_{i}$ independent of $\overline{\theta}$ and all $\theta^{\perp}_{j}$ for $j\neq i.$ Without loss, we assume $\mathbb{E}[\overline{\theta}]=\mu$ while $\mathbb{E}[\theta^{\perp}_{i}]=0.$ In contrast, the shocks $\varepsilon_{i}$ are mutually independent. Circumstance Linkage. Another possibility is that agents within the segment don’t have qualities which are intrinsically related, but instead have experienced a shared shock to outcomes. Formally, we suppose that the noise shock can be decomposed as $$\varepsilon_{i}=\overline{\varepsilon}+\varepsilon^{\perp}_{i}$$ where $\overline{\varepsilon}\sim F_{\overline{\varepsilon}}$ is shared across agents and $\varepsilon^{\perp}_{i}\sim F_{\varepsilon^{\perp}}$ is idiosyncratic, with each $\varepsilon^{\perp}_{i}$ independent of $\overline{\varepsilon}$ and all $\varepsilon^{\perp}_{j}$ for $j\neq i$. In contrast, agents’ types $\theta_{i}$ are mutually independent. The distinction between quality and circumstance linkages can be interpreted in at least two ways. One interpretation is that $\theta_{i}$ is the portion of the outcome that is valuable to the principal, while $\varepsilon_{i}$ is a confounder that has an effect on the observed outcome, but is not payoff-relevant. Another interpretation is that the type $\theta_{i}$ is a permanent component of the agent’s performance while $\varepsilon_{i}$ is a shock that affects performance only temporarily. 
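To make the two correlation structures concrete, the following minimal simulation sketch draws outcomes $S_{i}=\theta_{i}+a^{*}+\varepsilon_{i}$ under each model and reports cross-agent correlations. It assumes Gaussian components and purely illustrative parameter values (the variable names are ours, not the paper's); the variances are chosen so that the marginal distributions of $\theta_{i}$ and $\varepsilon_{i}$ coincide across the two models.

```python
import numpy as np

rng = np.random.default_rng(0)
N, draws = 5, 100_000    # illustrative segment size and number of Monte Carlo draws
mu, a_star = 1.0, 0.3    # prior mean of the type; conjectured (equilibrium) effort

def simulate(model):
    """Draw (theta, eps, S) with S_i = theta_i + a* + eps_i under the two linkage models."""
    if model == "quality":
        theta = rng.normal(mu, 0.8, (draws, 1)) + rng.normal(0.0, 0.6, (draws, N))   # shared + idiosyncratic type
        eps = rng.normal(0.0, 1.0, (draws, N))                                       # independent shocks
    else:  # circumstance linkage
        theta = rng.normal(mu, 1.0, (draws, N))                                      # independent types
        eps = rng.normal(0.0, 0.8, (draws, 1)) + rng.normal(0.0, 0.6, (draws, N))    # shared + idiosyncratic shock
    return theta, eps, theta + a_star + eps

corr = lambda x: np.corrcoef(x[:, 0], x[:, 1])[0, 1]   # correlation across two agents in the segment
for model in ("quality", "circumstance"):
    theta, eps, S = simulate(model)
    print(f"{model:12s} corr(theta)={corr(theta):.2f} corr(eps)={corr(eps):.2f} corr(S)={corr(S):.2f}")
```

The two models generate the same correlation in observed outcomes; they differ only in whether that co-movement is attributable to the payoff-relevant type or to the transient shock, which is what determines how the principal should use other agents' outcomes.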
The examples of circumstance linkages in Section 2.4 follow the latter interpretation, with $\varepsilon_{i}$ reflecting a transient characteristic that affected outcomes in a previous observation cycle, but is no longer present in future interactions. For example, if an agent was pregnant during the determination of $S_{i}$, but has since given birth, then the principal should optimally de-noise the “pregnancy effect” from the prior outcome when predicting future behaviors. Throughout the paper, we consider these two models of linkage separately in order to clarify the difference between them. Note that while the correlation structure across agent outcomes differs in the two models, we will hold the marginal distributions of each agent’s type and noise shock fixed across models (see Assumption 2.2). 2.4 Examples Commuters and auto-insurers. The principal is an auto-insurer and the agents are commuters. Agent $i$’s type $\theta_{i}$ is a function of his accident risk while driving, with higher-type commuters experiencing a lower risk of accidents while driving to work. Each commuter decides whether to own a car versus commuting via rideshares or public transit. Conditional on owning a car, the commuter then chooses how much effort to exert to drive safely. The insurance company observes his claims rate during an initial enrollment period, and uses that outcome to predict his future claims rates. Examples of quality linkage segments include drivers who share similar commutes to work, e.g. routes primarily through surface streets or via highways, where these routes are discoverable from geolocational data. If we suspect that commutes are stable and that the route taken contributes to the risk of accident, then claims rates for other drivers in the segment are directly informative about the future accident risk for a given driver. Examples of circumstance linkage segments include drivers who passed through routes that were previously affected by unusual road or weather conditions. Crucially, these conditions are not expected to persist into the subsequent period.131313If the conditions are persistent, we would consider the consumers instead to be related by a quality linkage. The principal can use claims rates from drivers in this segment to learn the size and direction of the “road shock” or “weather shock,” allowing them to de-bias observed accident rates. Consumers and credit-card issuers. The principal is a bank issuing a credit card and agents are consumers. Agent $i$’s type $\theta_{i}$ is his creditworthiness, with more creditworthy consumers being better able to pay back short-term loans. Each agent decides whether to sign up for a credit card versus making payments by debit card or cash. If an agent signs up for a credit card, he decides how much effort to exert in order to ensure repayment (e.g. by increasing income or avoiding activities that risk financial loss), and the card issuer observes his repayment behavior during an initial enrollment period. Quality linkages relevant to creditworthiness include lifestyles (“Frequent Flier”) and financial sophistication (“Subscriber to Financial Newsletter”), categories which can be revealed by social media usage and online subscription databases. Circumstance linkages include whether a consumer’s child was previously attending college (but has since graduated) and whether a family member was previously experiencing a serious illness (but has since improved), as inferred for example from purchasing and travel histories. 
2.5 Solution Concept We study Nash equilibria in which agents choose symmetric participation strategies and pure strategies in effort. Our focus on symmetric participation reflects the ex-ante symmetry of consumers in our model, and their anonymity with respect to one another in most data markets. In the absence of a centralized mechanism, we expect that consumers would find it challenging to coordinate asymmetric participation. Our restriction to equilibria with deterministic effort follows the career concerns literature, and plays an important role in maintaining tractability.14 When agents mix over effort, then even under the assumptions imposed in Section 2.6 higher output is not guaranteed to lead to higher inferences about types. Depending on the equilibrium distribution of effort, the principal may instead attribute a positive output shock to high realized effort. See Rodina (2017) for further discussion. We additionally impose a refinement on out-of-equilibrium beliefs. Since agents choose participation and effort simultaneously in our model, Nash equilibrium puts no restrictions on the principal’s inference about effort in the event that an agent unexpectedly enters. We require that if an agent unilaterally deviates to entry, the principal expects that the agent will exert the equilibrium effort choice from a single-agent game with exogenous entry. This refinement mimics sequential rationality in a modified model in which agents make entry and effort decisions sequentially rather than simultaneously. In what follows, we will use the term equilibrium without qualification to refer to symmetric equilibria in pure effort strategies satisfying this refinement. 2.6 Distributional Assumptions We impose several regularity conditions on the distributions $F_{\overline{\theta}},$ $F_{\theta^{\bot}},$ $F_{\overline{\varepsilon}},$ and $F_{\varepsilon^{\bot}}$, which are maintained throughout the paper. Assumptions 2.1 through 2.4 are purely technical, and ensure that all distributions have full support and are smooth enough for appropriate derivatives of conditional expectations to exist. Assumptions 2.5 and 2.6 are substantive, and ensure monotonicity of inferences about latent variables in outcomes and sufficiency of the first-order approach for characterizing equilibrium effort. Assumption 2.1 (Regularity of densities). The distribution functions $F_{\overline{\theta}},$ $F_{\theta^{\bot}},$ $F_{\overline{\varepsilon}},$ $F_{\varepsilon^{\bot}}$ admit strictly positive, $C^{1}$ density functions $f_{\overline{\theta}},$ $f_{\theta^{\bot}},$ $f_{\overline{\varepsilon}},$ $f_{\varepsilon^{\bot}}$ with bounded first derivatives on $\mathbb{R}$. Assumption 2.2 (Invariance of marginal densities). In each model, the distribution functions $F_{\theta}$ and $F_{\varepsilon}$ have density functions $f_{\theta}$ and $f_{\varepsilon}$ satisfying $f_{\theta}=f_{\overline{\theta}}*f_{\theta^{\bot}}$ and $f_{\varepsilon}=f_{\overline{\varepsilon}}*f_{\varepsilon^{\bot}},$ where $*$ is the convolution operator. In each model one half of Assumption 2.2 is redundant, as in the quality linkage model $\theta_{i}=\overline{\theta}+\theta^{\bot}_{i}$ while in the circumstance linkage model $\varepsilon_{i}=\overline{\varepsilon}+\varepsilon^{\bot}_{i}.$ The remaining half of the assumption ensures that $\theta_{i}$ and $\varepsilon_{i}$ have the same marginal distributions across models.
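To see what Assumption 2.2 amounts to in a concrete case, consider the Gaussian specification presented at the end of this section (a worked illustration supplied here, not part of the formal development). Writing each density by its law, the convolution of two normal densities is normal with means and variances adding, $$f_{\overline{\theta}}*f_{\theta^{\bot}}=\mathcal{N}(\mu,\sigma_{\overline{\theta}}^{2})*\mathcal{N}(0,\sigma_{\theta^{\perp}}^{2})=\mathcal{N}(\mu,\sigma_{\overline{\theta}}^{2}+\sigma_{\theta^{\perp}}^{2}),$$ so matching the marginal distributions of $\theta_{i}$ and $\varepsilon_{i}$ across the two models reduces to the variance accounting $\sigma_{\theta}^{2}=\sigma_{\overline{\theta}}^{2}+\sigma_{\theta^{\perp}}^{2}$ and $\sigma_{\varepsilon}^{2}=\sigma_{\overline{\varepsilon}}^{2}+\sigma_{\varepsilon^{\perp}}^{2}$.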
The following corollary reflects the fact that convolutions of variables satisfying the properties of Assumption 2.1 inherit those properties. Corollary 2.1. $f_{\theta}$ and $f_{\varepsilon}$ are strictly positive, $C^{1}$, and have bounded first derivatives on $\mathbb{R}.$ The following assumption ensures that posterior expectations are smooth enough to compute first and second derivatives of an agent’s value function, and to compute the marginal impact of a change in one agent’s outcome on the forecast of another agent’s type. Let $\mathbf{S}=(S_{1},...,S_{N})$ be the vector of outcomes for all agents, with $\mathbf{a}=(a_{1},...,a_{N})$ the vector of actions for all agents. Assumption 2.3 (Regularity of posterior expectations). For each model, population size $N$, agent $i\in\{1,...,N\},$ and outcome-action profile $(\mathbf{S},\mathbf{a})$: • $\frac{\partial}{\partial S_{j}}\mathbb{E}[\theta_{i}\mid\mathbf{S};\mathbf{a}]$ exists and is continuous in $\mathbf{S}$ for every $j\in\{1,...,N\}$, • $\frac{\partial^{2}}{\partial S_{i}^{2}}\mathbb{E}[\theta_{i}\mid\mathbf{S};% \mathbf{a}]$ exists. The following assumption is a slight strengthening of the requirement that the Fisher information of $S_{i}$ about its common component ($\overline{\theta}$ in the quality linkage model or $\overline{\varepsilon}$ in the circumstance linkage model) be finite. Let $f_{\varepsilon+\theta^{\bot}}\equiv f_{\theta^{\bot}}*f_{\varepsilon}$ and $f_{\theta+\varepsilon^{\bot}}\equiv f_{\theta}*f_{\varepsilon^{\bot}}.$ Assumption 2.4 (Finite Fisher information). For each $f\in\{f_{\varepsilon+\theta^{\bot}},f_{\theta+\varepsilon^{\bot}}\},$ there exists a $\overline{\Delta}>0$ and a dominating function $J:\mathbb{R}\rightarrow\mathbb{R}_{+}$ such that $$\left(\frac{1}{\Delta}\frac{f(z-\Delta)-f(z)}{f(z)}\right)^{2}\leq J(z)$$ for all $z\in\mathbb{R}$ and $\Delta\in(0,\overline{\Delta})$ and $$\int J(z)f(z)\,dz<\infty.$$ Roughly, this assumption ensures that finite-difference approximations to the Fisher information are also finite and uniformly bounded as the approximation becomes more precise.151515 A sufficient condition for Assumption 2.4 is that $f_{\varepsilon+\theta^{\bot}}$ and $f_{\theta+\varepsilon^{\bot}}$ don’t vanish at the tails “much faster” than their derivatives: specifically, for each $f\in\{f_{\varepsilon+\theta^{\bot}},f_{\theta+\varepsilon^{\bot}}\}$ there should exist a $K>0$ and $\overline{\Delta}>0$ such that: $\max_{\varepsilon\in\mathbb{R},\Delta\in[0,\overline{\Delta}]}\left|\frac{f^{% \prime}(\varepsilon-\Delta)}{f(\varepsilon)}\right|\leq K.$ This sufficient condition is satisfied, for example, by the $t$-distribution and the logistic distribution. It is not satisfied by the normal distribution, although we show in Appendix O.2.1 using other methods that the normal distribution does satisfy Assumption 2.4. We are not aware of any commonly-used distributions which violate Assumption 2.4. The following assumption imposes enough structure on the distributions of the components of each agent’s outcome to ensure that higher outcome realizations imply monotonically higher forecasts of the components of the outcome. Assumption 2.5 (Monotone forecasts). The density functions $f_{\overline{\theta}},$ $f_{\theta^{\bot}}$, $f_{\overline{\varepsilon}},$ and $f_{\varepsilon^{\bot}}$ are strictly log-concave.161616A function $g>0$ is strictly log-concave if $\log g$ is strictly concave. 
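As a quick check of Assumption 2.5 in two familiar cases (a verification supplied for the reader, not drawn from the paper's appendices): a normal density with mean $m$ and variance $\sigma^{2}$ has $$\log f(x)=-\tfrac{1}{2}\log(2\pi\sigma^{2})-\frac{(x-m)^{2}}{2\sigma^{2}},\qquad(\log f)^{\prime\prime}(x)=-\frac{1}{\sigma^{2}}<0,$$ so it is strictly log-concave; likewise the logistic density $f(x)=e^{-x}/(1+e^{-x})^{2}$ satisfies $(\log f)^{\prime\prime}(x)=-2e^{-x}/(1+e^{-x})^{2}<0$.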
One basic property of strictly log-concave functions is that the convolution of two strictly log-concave functions is also strictly log-concave. Thus an immediate corollary of Assumption 2.5 is the following: Corollary 2.2. $f_{\theta}$ and $f_{\varepsilon}$ are strictly log-concave. Assumption 2.5 implies monotonicity of forecasts for the following reason. In general, given three random variables $X,Y,Z$ such that $X=Y+Z$ and $Y$ and $Z$ are independent, strict log-concavity of the density function of $Z$ is both necessary and sufficient for the distribution of $X$ to satisfy a strict monotone likelihood-ratio property in $Y$ (Saumard and Wellner, 2014): $$\frac{f_{X\mid Y}(x^{\prime}\mid y^{\prime})}{f_{X\mid Y}(x\mid y^{\prime})}>\frac{f_{X\mid Y}(x^{\prime}\mid y)}{f_{X\mid Y}(x\mid y)}\quad\mbox{whenever}\quad x^{\prime}>x,\ y^{\prime}>y.$$ This monotone likelihood-ratio property is the canonical sufficient condition ensuring monotonicity of the conditional expectation of $Y$ in the observed value of $X$ (Milgrom, 1981). Assumption 2.5 guarantees that the appropriate monotone likelihood-ratio properties are satisfied in our model; see Appendix B.1 for details. Finally, we assume the cost function is “sufficiently convex” that effort choices satisfying a first-order condition are globally optimal. The assumption is a joint condition on the cost function and the distribution of the outcome, since the required amount of convexity depends on how sensitive the posterior expectation is to the realization of individual outcomes. Assumption 2.6 (Sufficient convexity). There exists a $K\in\mathbb{R}$ such that $C^{\prime\prime}(x)>K$ for every $x\in\mathbb{R}_{+}$, and for every population size $N$ and agent $i\in\{1,...,N\}$, $\frac{\partial^{2}}{\partial S_{i}^{2}}\mathbb{E}[\theta_{i}\mid\mathbf{S};\mathbf{a}]\leq K$ for every $(\mathbf{S},\mathbf{a}).$ One important set of models satisfying these regularity conditions is Gaussian uncertainty.17 The Gaussian versions of our quality and circumstance linkage models represent special cases of the information environment considered in Meyer and Vickers (1997) and Bergemann et al. (2019), both of whom allow for correlation between both types and shocks. The Gaussian version of our quality linkage model also corresponds to a symmetric version of the environment considered in Acemoglu et al. (2019). Example (Gaussian). For each agent $i$, $$\left(\begin{array}{c}\overline{\theta}\\ \theta^{\perp}_{i}\\ \overline{\varepsilon}\\ \varepsilon^{\perp}_{i}\end{array}\right)\sim\mathcal{N}\left(\left(\begin{array}{c}\mu\\ 0\\ 0\\ 0\end{array}\right),\left(\begin{array}{cccc}\sigma_{\overline{\theta}}^{2}&0&0&0\\ 0&\sigma_{\theta^{\perp}}^{2}&0&0\\ 0&0&\sigma_{\overline{\varepsilon}}^{2}&0\\ 0&0&0&\sigma_{\varepsilon^{\perp}}^{2}\end{array}\right)\right).$$ We verify in Appendix O.2.1 that Assumptions 2.1 through 2.5 are all met in this case, and Assumption 2.6 is satisfied by any strictly convex cost function. 3 Preliminary Results: Exogenous Entry We begin our analysis by studying effort choices in a restricted model in which the set of agents who opt-in is exogenously specified. Without loss, we suppose that all $N$ agents participate. 3.1 Marginal Value of Effort In equilibrium, agents choose effort such that the marginal impact of effort on the principal’s forecast in the second period, which we will refer to as the marginal value of effort, equals its marginal cost.
Here we define the marginal value of effort and explore its properties. Fix an equilibrium effort profile $(a^{\ast}_{1},...,a^{\ast}_{N})$. The principal believes that each outcome is distributed $S_{i}=\theta_{i}+a_{i}^{*}+\varepsilon_{i}$, and any agent $i$ who chooses the equilibrium effort level $a_{i}^{*}$ believes the same. But if some agent $i$ deviates to a non-equilibrium action $a_{i}\neq a_{i}^{*}$, then he knows that his outcome is distributed $S_{i}=\theta_{i}+a_{i}+\varepsilon_{i}$. This means that the agent’s expected period-2 reward (i.e. the agent’s expectation of the principal’s forecast of his type) is an iterated expectation with respect to two different probability measures over the space of types and outcomes. Formally, let $\mathbb{E}^{\Delta}$ denote expectations when agent $i$ chooses effort level $a_{i}^{*}+\Delta$. For any profile of realized outcomes $(S_{1},\dots,S_{N})$, the principal’s expectation of agent $i$’s type is $$\mathbb{E}^{0}[\theta_{i}\mid S_{1},\dots,S_{N}].$$ If agent $i$ exerts effort $a_{i}=a^{\ast}_{i}+\Delta$, then his ex-ante expectation of the principal’s forecast is $$\mu_{N}(\Delta)\equiv\mathbb{E}^{\Delta}[\mathbb{E}^{0}[\theta_{i}\mid S_{1},\dots,S_{N}]].$$ Note that if the agent does not distort his effort away from the equilibrium level, then $\mu_{N}(0)=\mu$, reflecting the usual martingale property of posterior expectations. When $\Delta\neq 0,$ posterior expectations under the principal’s beliefs are not a martingale from agent $i$’s perspective: As we show in Appendix C, $\mu_{N}(\Delta)$ is strictly increasing in $\Delta.$ Thus, increasing effort beyond the expected effort level always leads to a higher expected value of the principal’s expectation.18 Kartik et al. (2019) showed that if two agents with differing priors update beliefs in response to signals about an unknown state, the more optimistic agent expects the other’s expectation of the state to increase. Our Lemma C.1 complements this result, finding an analogous effect when two agents share a common prior but disagree about the correlation between the state and the signal. The agent’s incentives to distort effort away from its equilibrium level are characterized by the marginal value of effort $MV(N)$, which is defined as $$MV(N)\equiv\mu_{N}^{\prime}(0).$$ Our notation reflects the fact that $\mu^{\prime}_{N}(0)$, and thus also $MV(N),$ is independent of the equilibrium effort levels $a^{\ast}_{1},...,a^{\ast}_{N},$ due to the additive dependence of outcomes on effort. Example. In the Gaussian model described in Section 2.6, an agent who exerts effort $a=a^{*}+\Delta$ expects the principal’s forecast of his type to be $$\mu_{N}(\Delta)=\mu+\beta(N)\cdot\Delta$$ for a function $\beta(N)$ that is independent of $\Delta$ and $a$. See Online Appendix O.2.2 for the closed-form expression for $\beta(N)$ (which differs depending on whether we assume a quality linkage or circumstance linkage). The existence of closed-form expressions, as well as linearity of $\mu_{N}(\Delta)$, are particular to Gaussian uncertainty, although independence with respect to the equilibrium effort level is general. The marginal value of effort $MV(N)=\mu_{N}^{\prime}(0)$ is then simply the constant slope $\beta(N)$ in this Gaussian setting. Throughout, we use $MV_{Q}(N)$ and $MV_{C}(N)$ to denote the marginal value functions in the quality linkage and circumstance linkage models, dropping the subscript when a statement holds in both models.
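In the Gaussian example, $\beta(N)$ can be recovered directly as the weight that the linear forecast $\mathbb{E}^{0}[\theta_{i}\mid S_{1},\dots,S_{N}]$ places on the agent's own outcome. The sketch below builds the outcome covariance matrix for each model, solves the normal equations, and reads off that weight; the variances are illustrative and the closed forms of Online Appendix O.2.2 are not reproduced here.

```python
import numpy as np

def mv(N, model, s_common=0.8, s_idio=0.6, s_other=1.0):
    """Marginal value of effort MV(N) = beta(N) in the Gaussian example.

    s_common, s_idio: std devs of the shared and idiosyncratic parts of the linked
    variable (the type under a quality linkage, the shock under a circumstance linkage);
    s_other: std dev of the remaining, unlinked variable.
    """
    var_own = s_common**2 + s_idio**2 + s_other**2        # Var(S_i), identical across models
    cov_pair = s_common**2                                # Cov(S_i, S_j) for i != j
    Sigma = np.full((N, N), cov_pair) + np.eye(N) * (var_own - cov_pair)
    c = np.zeros(N)                                       # Cov(theta_1, S_j)
    if model == "quality":
        c[:] = s_common**2
        c[0] = s_common**2 + s_idio**2
    else:  # circumstance linkage: only the agent's own outcome covaries with theta_1
        c[0] = s_other**2
    w = np.linalg.solve(Sigma, c)     # E[theta_1 | S] = mu + w @ (S - E[S])
    return w[0]                       # weight on the agent's own outcome = MV(N)

for N in (1, 2, 5, 20, 100):
    print(N, round(mv(N, "quality"), 3), round(mv(N, "circumstance"), 3))
```

Both weights stay strictly below one, and they move in opposite directions as $N$ grows, previewing Lemma 3.1 below.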
3.2 Equilibrium Effort Since agents are symmetric, they share the same marginal value and marginal cost of effort. There is therefore a unique effort level $a^{\ast}(N)$ satisfying each agent’s equilibrium first-order condition $$MV(N)=C^{\prime}(a^{*}(N))$$ (3.1) equating the marginal value of effort $MV(N)$ with its equilibrium marginal cost $C^{\prime}(a^{\ast}(N))$. This condition is both necessary and sufficient to ensure that—when the principal expects all agents to exert effort $a^{*}(N)$—each agent’s optimal effort choice is indeed $a^{*}(N)$. The unique equilibrium of the exogenous-entry model then entails choice of $$a^{\ast}(N)=C^{\prime-1}(MV(N))$$ (3.2) by every agent. When we wish to denote equilibrium effort in the quality linkage or the circumstance linkage model specifically, we will write $a^{*}_{Q}(N)$ or $a^{*}_{C}(N)$ respectively. Note that $a_{Q}^{*}(1)=a_{C}^{*}(1)$; that is, the equilibrium action is the same in the single-agent version of both models. 3.3 Key Lemma: Population Size and the Marginal Value of Effort We now characterize how the number of participating agents impacts each agent’s incentives to exert effort. This comparative static plays a key role in characterizing equilibrium in the full model. Lemma 3.1. The marginal value of effort exhibits the following comparative static in population size: (a) $MV_{Q}(N)$ is strictly decreasing in $N$ and $\lim_{N\rightarrow\infty}MV_{Q}(N)>0$. (b) $MV_{C}(N)$ is strictly increasing in $N$ and $\lim_{N\rightarrow\infty}MV_{C}(N)<1.$ That is, the marginal value of effort declines in the number of agents in the quality linkage model, and increases in the circumstance linkage model. Since $C^{\prime}$ is strictly increasing, it is immediate from this lemma and (3.2) that the equilibrium actions $a^{*}(N)$ display the same comparative statics.191919Meyer and Vickers (1997) establish the same comparative static in a Gaussian setting with up to two agents; see their Proposition 1. Proposition 3.1. Equilibrium effort in the exogenous entry model exhibits the following comparative static in population size: (a) $a_{Q}^{*}(N)$ is strictly decreasing in $N$ and $\lim_{N\rightarrow\infty}a^{*}_{Q}(N)>0$. (b) $a_{C}^{*}(N)$ is strictly increasing in $N$ and $\lim_{N\rightarrow\infty}a_{C}^{*}(N)<\infty$. The key to this result is understanding how the number of observations $N$ impacts the sensitivity of the principal’s forecast of $\theta_{i}$ to the realization of $S_{i}$. All else equal, the stronger the dependence of this forecast on $i$’s outcome, the stronger the incentive to manipulate its distribution. In the circumstance linkage model, other agents’ data (which are informative about the common component of the noise term $\overline{\varepsilon}$) complements agent $i$’s outcome, improving its marginal informativeness. Thus, the larger $N$ is, the more weight the principal puts on $i$’s outcome in its forecast of $\theta_{i}$. This force incentivizes effort. In the limit as $N\rightarrow\infty$, the principal learns $\overline{\varepsilon}$ perfectly and can de-bias the outcomes accordingly, so the incentives for agent $i$ to exert effort are the same as in a single-agent model with $S_{i}=\theta_{i}+\varepsilon^{\perp}_{i}$. 
By contrast, in the quality linkage model other agents’ data (which are informative about the common part of the type $\overline{\theta}$) substitutes for $i$’s signal; thus, the larger $N$ is, the less weight the principal puts on the realization of $i$’s outcome in its forecast of $\theta_{i}$. This force de-incentivizes effort. In the limit as $N\rightarrow\infty$, the principal can extract $\overline{\theta}$ perfectly from the outcomes of other agents but retains uncertainty about $\theta_{i}^{\perp}$, so manipulation of $S_{i}$ is still valuable. Specifically, the marginal value of effort is the same as in a single-agent model with $S_{i}=\theta^{\perp}_{i}+\varepsilon_{i}$. Although this intuition is straightforward, we do not in general have access to the distribution of the principal’s posterior expectation in closed form, so we cannot directly quantify the “strength” of the posterior expectation’s dependence on the outcome $S_{i}$. Moreover, although it is straightforward to show that the sequence of functions $\mu_{N}(\Delta)$ converge pointwise to a limiting function $\mu_{\infty}(\Delta)$, the rates of this convergence may vary across $\Delta$. Since we are interested in the limiting marginal value $\lim_{N\rightarrow\infty}MV(N)=\lim_{N\rightarrow\infty}\mu_{N}^{\prime}(0)$, we need the stronger property of uniform convergence of $\mu_{N}(\Delta)$ around $\Delta=0$. In Appendix C.2.2, we show that the expected impact of increasing effort by $\Delta$, i.e. $\mu_{N}(\Delta)-\mu_{N}(0)$, can be bounded by an expression that shrinks (for Part (a)) or grows (for Part (b)) in $N$ uniformly in $\Delta$.202020An implication of Lemma 3.1 is that as $N\rightarrow\infty$, the agent’s expectation of the principal’s forecast converges to the agent’s own expectation of his type; that is, $\mu$. This implication has the flavor of the classic Blackwell and Dubins (1962) result on merging of opinions, which says that if two agents have different prior beliefs which are absolutely continuous with respect to one another, then given sufficient information, their posterior beliefs must converge. The difference is that the Blackwell and Dubins (1962) result demonstrates almost-sure convergence, while we are interested in $l_{1}$-convergence under a shifted measure—that is, whether the agent’s expectation of the principal’s expectation converges to the agent’s own expectation given sufficient data, where the agent and principal use different priors. Neither of these two notions of convergence directly imply the other. This establishes that the marginal value of deviating from equilibrium effort at finite $N$, $\mu_{N}^{\prime}(0)$, indeed converges to the marginal value of effort in the limiting model, $\mu_{\infty}^{\prime}(0)$, which we can separately characterize. 4 Main Results We now return to the main model, where the agents who participate (and thus the segment size $N$ from the previous section) are endogenously determined. In this section we assume that $R$ is exogenously fixed, and does not change in response to introduction of a linkage. In Sections 5 and 6 we explore extensions of the model in which $R$ is chosen endogenously. 4.1 Equilibrium In equilibrium, the principal correctly de-biases the impact of effort on observed outcomes. The agent’s expected payoff in the second period is thus the prior mean $\mu$, no matter the equilibrium effort level. 
Therefore opt-in is (weakly) optimal as part of an equilibrium strategy if and only if the agent’s equilibrium action $a^{*}$ satisfies $$R+\mu-C(a^{*})\geq 0.$$ We impose the following lower bound on $R$, which guarantees that agents would find it optimal to opt-in when no other agents are present in the segment. This restricts attention to settings in which a functioning market existed prior to identification of linkages across consumers. Assumption 4.1 (Individual Entry). $R\geq C(a^{*}(1))-\mu$, where $a^{*}(1)$ is the equilibrium effort in the exogenous-entry game with a single agent (as defined in (3.1) with $N=1$). In light of Assumption 4.1, there exists no equilibrium (respecting the refinement introduced in Section 2.5) featuring no entry. This is because in any no-entry equilibrium, an agent deviating to entry and choosing effort $a^{\ast}(1)$ would receive a payoff of $R+\mu-C(a^{\ast}(1))>0$ given that the principal expects the agent to exert effort $a^{\ast}(1)$ following such a deviation. Our main results characterize how the equilibrium implications of quality and circumstance linkages differ: Theorem 4.1. In the quality linkage model, there is a unique equilibrium for all population sizes $N$. In this equilibrium, each agent opts-in and chooses effort $a^{\ast}_{Q}(N).$ Theorem 4.2. In the circumstance linkage model, there is a unique equilibrium for all population sizes $N$. There exists an $N^{*}\in\{1,2,...\}\cup\{\infty\}$ such that: • If $N\leq N^{*}$, each agent opts-in and chooses effort $a_{C}^{\ast}(N),$ • If $N>N^{*}$, each agent opts-in with probability $p(N)\in(0,1)$ and chooses effort $a^{**}\in[a_{C}^{\ast}(N^{\ast}),a_{C}^{\ast}(N^{\ast}+1)).$ The effort level $a^{**}$ is independent of $N$, while the opt-in probability $p(N)$ is strictly decreasing in $N$ and satisfies $\lim_{N\rightarrow\infty}p(N)=0$. The threshold $N^{\ast}$ is increasing in $R,$ and is finite for all $R$ sufficiently small. The equilibrium actions characterized in Theorems 4.1 and 4.2 are depicted in Figure 2. When the segment size is small, Assumption 4.1 ensures that opting-in is strictly profitable for all agents in each model, and so the equilibrium effort levels $a_{Q}^{*}(N)$ and $a_{C}^{*}(N)$ are the same as in the previous section. Thus, the equilibrium effort levels inherit the properties described in Proposition 3.1. As the population size grows, opting-in becomes increasingly attractive in the quality linkage model, since equilibrium effort $a^{*}_{Q}(N)$ decreases in $N$. As a result, all agents participate no matter how large the population. But in the circumstance linkage model, effort $a^{*}_{C}(N)$ increases in $N$ and so participation becomes less attractive as the population of entering agents grows. If $N$ is large enough that the total cost of participation $C[a^{\ast}(N)]$ exceeds the expected reward $R+\mu,$ then full participation cannot be an equilibrium. We let $N^{\ast}$ denote the largest $N$ for which $R+\mu\geq C[a^{\ast}(N)].$ Then for any $N>N^{\ast},$ agents randomize over entry in equilibrium.212121If the opt-in reward $R$ is large enough, it may be that $N^{\ast}=\infty$ and all agents enter no matter how large the population, as even the limiting effort level for very large populations is worth incurring for the large entry reward. The value $N^{\ast}$ is finite whenever $R$ is not too large. In this mixed equilibrium, agents must enter at a rate $p(N)<1$ and exert an effort level $a^{\ast\ast}$ so as to satisfy two conditions: 1. 
Agents are indifferent over entry: $$R+\mu=C(a^{\ast\ast}),$$ 2. The marginal value of distortion equals its marginal cost: $$\mathbb{E}\left[MV(1+\widetilde{N})\ \middle|\ \widetilde{N}\sim\text{Bin}(N-1,p(N))\right]=C^{\prime}(a^{**}).$$ The entry condition pins down the action level $a^{\ast\ast},$ which is independent of the population size. The entry rate $p(N)$ is then pinned down by the requirement that the expected marginal value of effort must equal the marginal cost when agents who enter take action level $a^{\ast\ast}.$ Since the expected marginal value of effort rises with the number of entering agents, $p(N)$ must drop with $N$ to equilibrate marginal values and costs.22 In general, this probability $p(N)$ is not the same as the probability $p^{*}(N)$ satisfying $MV\left(1+p^{*}(N)\cdot(N-1)\right)=C^{\prime}(a^{**}),$ i.e. the opt-in probability such that equilibrium effort is $a^{**}$ given deterministic entry of $p^{*}(N)\cdot(N-1)$ other agents. In the Gaussian setting (and we suspect more generally) $MV(N)$ is a concave function of $N$, implying that uncertainty about the number of entrants increases the equilibrium rate of entry. 4.2 Welfare Implications We now analyze the welfare implications of the equilibrium outcomes derived in Section 4.1. Following Holmström (1999), we consider outcomes to represent socially valuable surplus generated by service provision, while effort is socially costly. In addition, we consider the forecast $\mathbb{E}[\theta_{i}\mid S_{j},j\in\mathscr{I}_{\text{opt-in}}]$ to reflect surplus that the agent receives, e.g. through future service. These factors contribute to social surplus only for participating agents, since surplus is not generated by agents who opt-out. Meanwhile, we take the reward $R$ to represent a monetary transfer, which affects the split of surplus but not the amount generated.23 In Section 7.3 we consider how results change if improved prediction also contributes to social welfare. For any symmetric strategy profile $(p,a)$ chosen by a population of $N$ agents, where $p$ is the opt-in probability and $a$ is an action choice, we define total expected welfare to be $$W(p,a,N)=\mathbb{E}\left[\sum_{i=1}^{N}\mathbbm{1}(\text{opt-in})\times\left[S_{i}+\mathbb{E}\left(\theta_{i}\mid S_{j},j\in\mathscr{I}_{\text{opt-in}}\right)-C(a)\right]\right]=pN\cdot(a+2\mu-C(a)).$$ (4.1) Total welfare is divided between the principal and agents as follows: the principal receives the outcome $S_{i}$ and pays a reward $R$ to every participating agent $i,$ yielding expected profits $$\Pi(p,a,N)=pN\cdot(a+\mu-R).$$ Meanwhile every participating agent receives reward $R$ and the reputational payoff $\mathbb{E}[\theta_{i}\mid S_{j},j\in\mathscr{I}_{\text{opt-in}}]$, and incurs effort cost $C(a)$. Total consumer welfare is therefore $$CS(p,a,N)=pN\cdot(R+\mu-C(a)).$$ Note that $W(p,a,N)=\Pi(p,a,N)+CS(p,a,N),$ so all surplus goes to either the principal or one of the agents. We consider how each of these welfare measures compares to a “no data linkages” benchmark in which the principal does not observe the linkage across agents, and uses only agent $i$’s outcome $S_{i}$ to predict their type $\theta_{i}$. That is, the principal’s forecast is $\mathbb{E}(\theta_{i}\mid S_{i}).$ In equilibrium in this benchmark, each agent opts-in (by Assumption 4.1), and chooses effort level $$a_{NDL}\equiv a^{*}(1)$$ (4.2) i.e. the action that would be taken for a population of size 1.
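The two equilibrium conditions above can be combined with the welfare expressions just defined in a small numerical sketch. The parameterization below is hypothetical: a quadratic cost $C(a)=\kappa a^{2}/2$, the Gaussian circumstance linkage with the same illustrative variances as in the earlier sketch, and a negative transfer chosen so that the mixed-entry regime $N>N^{*}$ is reached. It illustrates the mechanics only and is not the paper's computation.

```python
import numpy as np
from math import comb, sqrt

kappa, mu, R = 2.0, 1.0, -0.9       # R < 0: consumers pay to participate; Assumption 4.1 still holds here
C = lambda a: kappa * a**2 / 2
Cp = lambda a: kappa * a

def mv_circ(n, s_eps_bar=0.8, s_eps_perp=0.6, s_theta=1.0):
    """MV(n) for the Gaussian circumstance linkage: weight on S_1 in E[theta_1 | S_1,...,S_n]."""
    var_own = s_theta**2 + s_eps_bar**2 + s_eps_perp**2
    Sigma = np.full((n, n), s_eps_bar**2) + np.eye(n) * (var_own - s_eps_bar**2)
    c = np.zeros(n); c[0] = s_theta**2
    return np.linalg.solve(Sigma, c)[0]

def expected_mv(N, p):
    """E[MV(1 + Ntilde)] with Ntilde ~ Bin(N-1, p): an entrant's expected marginal value of effort."""
    return sum(comb(N - 1, k) * p**k * (1 - p)**(N - 1 - k) * mv_circ(1 + k) for k in range(N))

N = 40
a_full = mv_circ(N) / kappa                      # candidate full-entry effort a*_C(N)
if R + mu >= C(a_full):                          # N <= N*: full entry
    p, a = 1.0, a_full
else:                                            # N > N*: mixed entry
    a = sqrt(2 * (R + mu) / kappa)               # indifference condition R + mu = C(a**)
    lo, hi = 0.0, 1.0                            # bisection on p; expected_mv is increasing in p
    for _ in range(60):
        p = (lo + hi) / 2
        lo, hi = (p, hi) if expected_mv(N, p) < Cp(a) else (lo, p)

W  = p * N * (a + 2 * mu - C(a))                 # social welfare, eq. (4.1)
Pi = p * N * (a + mu - R)                        # principal's profit
CS = p * N * (R + mu - C(a))                     # consumer welfare (zero whenever agents mix)
print(f"p={p:.3f}, a={a:.3f}, W={W:.2f}, Pi={Pi:.2f}, CS={CS:.2f}, W-(Pi+CS)={W - Pi - CS:.1e}")
```

When agents mix, $R+\mu=C(a^{**})$ holds exactly, so consumer welfare is zero and all surplus accrues to the principal; the last printed term checks the accounting identity $W=\Pi+CS$.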
(Recall that this benchmark action $a_{NDL}=a^{*}(1)$ is the same for both linkage models.) In a similar spirit to Assumption 4.1, we assume that serving agents is profitable absent a linkage: Assumption 4.2 (Profitable market). $a^{\ast}(1)+\mu>R.$ This assumption ensures that a functioning market existed prior to linkages becoming available, and that the principal would not prefer to drop out rather than serve the market. 4.2.1 Consumer welfare Consumer welfare depends only on the action each agent is induced to take upon entry, and not on equilibrium entry rates. This is because agents randomize over entry only when opting-in and -out yield the same payoff. So consumer welfare can be computed as if every agent entered and exerted the equilibrium effort level, and this welfare is declining in effort. Therefore consumer welfare rises under any quality linkage and drops under any circumstance linkage, no matter the population size. 4.2.2 Principal profits Principal profits are rising in effort, and also in the participation rate whenever per-agent profits are positive. When agents within a segment have correlated quality, Theorem 4.1 indicates that use of the linkage for prediction (increasing the effective population size from 1 to $N$) will lead to depressed effort by agents without affecting participation, thus reducing firm profits relative to the no-linkage benchmark. Firms may therefore prefer to commit not to use big data analytics for forecasting outcomes based on such linkages. On the other hand, when agents experience shared circumstances (that affect current-period outcomes but are not reflective of underlying quality), Theorem 4.2 shows that use of the linkage will boost agent effort but may reduce participation. For small segments, firms benefit from the effort boost, and the linkage is profitable. However, for sufficiently large segments the effect of dampened participation outweighs this benefit (since $p(N)\rightarrow 0$ as $N\rightarrow\infty$ but effort levels are bounded), and the linkage becomes unprofitable. 4.2.3 Social surplus While firm profits are always increasing in effort and consumer welfare is always decreasing, social welfare is non-monotone in effort. Each participating agent generates a surplus of $$a+2\mu-C(a),$$ which is maximized at the unique effort level $a_{FB}$ satisfying $C^{\prime}(a_{FB})=1.$ Since $\mu>0,$ surplus is strictly positive at this effort level, and so aggregate surplus is maximized when all agents enter and exert effort $a_{FB}.$ We first show that equilibrium actions are below the first-best action in both models no matter how many agents participate. This result implies that, fixing the level of participation, linkages which boost effort improve social welfare. Lemma 4.1. For every population size $N$, equilibrium effort is inefficiently low in both models: $$a^{*}(N)<a_{FB}.$$ As $N$ increases: • Effort in the circumstance linkage model $a_{C}^{*}(N)$ becomes more efficient but is bounded below the efficient level: $\lim_{N\rightarrow\infty}a_{C}^{\ast}(N)<a_{FB}$. • Effort in the quality linkage model $a_{Q}^{*}(N)$ becomes less efficient. Recall that the equilibrium action $a^{*}$ satisfies $C^{\prime}(a^{*})=MV(N)$ while the first-best action $a_{FB}$ satisfies $C^{\prime}(a_{FB})=1$. The lemma is proved by demonstrating that $MV(N)<1$ in both models for all $N$. Intuitively, some effort is always dissipated, since the realization of the outcome is noisy, so the principal’s forecast of $\theta_{i}$ moves less than 1-to-1 with the outcome.
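For a concrete sense of Lemma 4.1, take the quadratic cost $C(a)=\kappa a^{2}/2$ with $\kappa>0$ (an illustrative functional form, not one imposed by the model). Then $$a^{*}(N)=C^{\prime-1}(MV(N))=\frac{MV(N)}{\kappa}<\frac{1}{\kappa}=a_{FB},$$ since $MV(N)<1$ in both models, and per-agent surplus $a+2\mu-\kappa a^{2}/2$ is strictly increasing on $[0,1/\kappa]$, so, holding participation fixed, any movement of equilibrium effort toward $a_{FB}$ raises social welfare.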
This result generalizes a classic result from Holmström (1999), which demonstrated that $a^{\ast}(1)<a_{FB}$ in the case of Gaussian random variables. The following proposition builds on the previous result and compares $W_{NDL}(N)$, $W_{Q}(N)$, and $W_{C}(N)$, which respectively denote social welfare under the no-linkage benchmark, a quality linkage, and a circumstance linkage. Proposition 4.1. For every $N>1,$ $$W_{Q}(N)<W_{NDL}(N).$$ There exists a population threshold $\overline{N}$ such that $$W_{NDL}(N)<W_{C}(N)$$ for all $1<N<\overline{N}$ while $$W_{C}(N)<W_{NDL}(N)$$ for all $N>\overline{N}$. For all populations with $N\geq 2$ agents, quality linkages lead to a reduction in social welfare. This follows directly from Lemma 4.1: Since there is full entry in the no-data linkages benchmark as well as in the quality linkage equilibrium, the welfare comparison is completely determined by the relative sizes of the equilibrium actions, which are ranked $a_{NDL}=a_{Q}^{*}(1)>a_{Q}^{*}(N)$. In contrast, under a circumstance linkage, the comparison depends on the population size $N$. In small populations, all agents opt-in, so again the action comparison completely determines welfare. Since $a_{NDL}=a_{C}^{*}(1)<a_{C}^{*}(N)$, the data linkage leads to an improvement in social welfare. In large populations, depressed entry dominates and results in lower social welfare despite increased effort levels from participating agents. (Both regimes exist whenever the population threshold $N^{\ast}$ above which agents randomize over entry is finite and larger than 1.) These results suggest that a social planner should restrict use of big data to identify linkages over quality while encouraging use of big data to identify linkages over circumstances that are shared by small populations. 5 A Monopolist Principal In the previous section we considered the impact of a data linkage in a setting in which the principal’s transfer $R$ to agents was held fixed. We now consider the implications of allowing the principal to adjust $R$ freely to maximize profits subject to each agent’s participation constraint. Formally, we augment our baseline setup with an initial stage in which the principal chooses $R,$ following which agents play the game described in Section 2. We continue to restrict attention to equilibria in which players choose symmetric participation strategies. Further, whenever multiple equilibria exist in the game among agents, we select the principal’s optimal equilibrium.24 In Section 4.1 we established equilibrium uniqueness whenever the inequality $\mu+R\geq C(a^{\ast}(1))$ is satisfied. When this inequality is violated, there can exist multiple equilibria in the quality linkage model, and the principal’s profit-maximizing choice of $R$ depends on the equilibrium selection. Our first result is that social surplus increases under a circumstance linkage and decreases under a quality linkage, no matter the population size. Meanwhile consumers are indifferent to introduction of a linkage, as the principal extracts all consumer surplus in either case. To state this result formally, let $W^{\dagger}_{NDL}(N),W^{\dagger}_{Q}(N),$ and $W^{\dagger}_{C}(N)$ denote total social surplus in a population of $N$ agents under monopoly pricing given no data linkage, a quality linkage, and a circumstance linkage. (Social surplus is defined as in Section 4.2.) Lemma 5.1. Suppose that $R$ is chosen optimally by the principal.
Then for every $N>1$, $$W^{\dagger}_{Q}(N)<W^{\dagger}_{NDL}(N)<W^{\dagger}_{C}(N).$$ Total consumer welfare is zero with or without a data linkage. The intuition for this result follows from the fact that agents have no private information about their willingness to pay, and so the principal can always extract all surplus from an interaction through the transfer $R,$ which may be negative (in which case it represents a price consumers must pay to participate). Given this fact, the principal’s choice of $R$ seeks to maximize the surplus it can appropriate. Since higher effort is achievable under a circumstance linkage than without one, and since higher effort is efficiency-enhancing, total welfare increases under a circumstance linkage. On the other hand under a quality linkage it is never possible to induce full entry at effort level $a^{\ast}(1)$ no matter the choice of $R,$ and so surplus falls under a quality linkage. Our second result describes how equilibrium patterns of effort and participation change under a linkage. Note that without a linkage, all agents participate and exert effort $a^{\ast}(1)$ under an optimal choice of $R$. Lemma 5.2. Under a quality linkage, agents exert effort $a^{\dagger}_{Q}(N)\in[a^{\ast}_{Q}(N),a^{\ast}(1))$ and enter with probability $p^{\dagger}_{Q}(N)\in(0,1].$ Under a circumstance linkage, agents exert effort $a^{\ast}_{C}(N)$ and enter with probability 1. In contrast to the baseline model, when the principal optimally chooses $R$, participation may fall under a quality linkage but not under a circumstance linkage. The result for the circumstance linkage is straightforward: Increasing the number of participating agents boosts surplus both through the value of the transaction and via increased effort by all participating agents. Thus, the principal optimally sets $R$ so that all agents enter. The quality linkage result is more subtle, because in that setting increased participation has countervailing effects, increasing the surplus generated by the transaction, but reducing effort by participating agents. Depending on model parameters, the latter effect may dominate the former, leading the principal to suppress entry and extract more effort from agents who do participate. To establish the result for the quality-linkage model we demonstrate that it is in fact possible for the principal to coordinate agents on a partial-entry equilibrium. This result may be surprising in light of our result from the baseline model that all agents participate in the unique equilibrium. The key to that result was Assumption 4.1, which ensured $R$ was large enough for participation to be profitable in a single-agent model. However, when $R$ is flexible, the principal may contemplate choosing $R$ low enough that a single agent wouldn’t enter if he expected to exert effort $a^{\ast}(1).$ So long as $R$ is sufficiently large that entry is profitable at effort level $a^{\ast}_{Q}(N),$ there still exists a full-entry equilibrium. However, there also exist two additional equilibria due to the strategic complementarity of agents’ entry choices—a no-entry equilibrium (which the principal never prefers) and a partial-entry equilibrium. The principal must then choose between the full-entry equilibrium induced by the reward $R=C(a^{\ast}_{Q}(N))-\mu,$ and the range of partial-entry equilibria induced by rewards $R\in(C(a^{\ast}_{Q}(N))-\mu,C(a^{\ast}(1))-\mu].$ In the proof, we show that the latter may be optimal, depending on model parameters. 
In particular, if the curvature of the effort cost function is low, then small changes in participation induce large changes in effort, making partial-entry equilibria especially profitable. 6 Data Sharing, Markets, and Consumer Welfare So far we have considered the implications of data linkages for a single firm which uses data to inform predictions about consumer behavior. This focus allowed us to isolate the direct effect of data linkages on consumer effort and participation. When multiple firms compete for consumers, additional important questions regarding behavior and welfare arise which we can leverage our model to answer. In this section we address a recent policy debate regarding data sharing. In many markets, a consumer’s business brings with it data on the consumer’s behavior, which by default is privately owned by the organization with which the consumer interacts. Recently, proposals have been made to form so-called ‘‘data commons’’ to make this data freely accessible to all organizations in the market. For example, the European Commission has begun exploring legislative action that would support ‘‘business-to-business data sharing,” and new platforms for data sharing, such as Data Republic, permit organizations to share anonymised data with one another.252525See https://www.zdnet.com/article/data-republic-facilitates-diplomatic-data-sharing-on-aws/. We study here the impact of such data sharing on effort provision and consumer welfare.262626Our focus on consumer welfare mirrors recent policy discussions regarding data collection and sharing, which have been mostly concerned with the impact of these activities on consumers. Our main findings would be similar if we instead analyzed total social surplus. In particular, an analog of Proposition 6.1 holds when considering the impact of data sharing on social surplus. To do this, we extend our model to $K\geq 2$ firms who compete over $N$ consumers according to the following timeline: $t=-1:\,\,$ Each firm $k$ simultaneously chooses a reward $R_{k}$. These transfers are publicly observed. $t=0:\,\,$ Each consumer chooses a firm to participate with (if any). $t=1:\,\,$ Participating consumers choose what level of effort to exert, without observing the participation decisions of other consumers. $t=2:\,\,$ Participating consumers receive their firm’s forecast of their type. Payoffs and consumer welfare are as in the single-principal model. We contrast a proprietary data regime, under which each firm observes only the outcomes of the consumers who interact with them, with a data sharing regime, under which the outcomes of all participating agents are shared across firms. These settings differ only in the information that firms have access to when making their forecasts at time $t=2$. We assume that whether data is proprietary or shared is common knowledge. As our solution concept, we use subgame-perfect Nash equilibria in pure strategies (which we henceforth refer to simply as an equilibrium).272727This restriction differs slightly from the one we used in the single-principal model: we require that agents not mix over participation, but we allow agents to make asymmetric participation decisions. Imposing these restrictions in the single-principal model would not substantively impact the analysis. In particular, equilibria would be identical except in the circumstance linkage model with $N>N^{*}$. 
In that regime there exist pure-strategy equilibria with asymmetric entry decisions, which exhibit the same comparative statics in effort and participation rates as the symmetric mixed equilibrium. Throughout, we maintain a restriction on out-of-equilibrium beliefs analogous to the refinement imposed in the single-principal model: at any information set in which agent $i$ participates with principal $k$, principal $k$ expects agent $i$ to choose the action $a_{i}$ satisfying $$MV\left(1+N^{k}_{-i}\right)=C^{\prime}(a_{i}),$$ where $N^{k}_{-i}$ is the number of agents $j\neq i$ who participate with principal $k$ under their equilibrium strategies. This refinement ensures that each principal expects every participating agent $i$ to choose the equilibrium action from a game with exogenous participation of $1+N^{k}_{-i}$ agents, even when participation by agent $i$ is out-of-equilibrium. We do not provide a full characterization of the equilibrium set, as there exists a large set of equilibria under proprietary data.28 For a given set of transfers, strategic substitutability or complementarity between consumer participation decisions allows for the existence of a multiplicity of participation patterns. The selection of participation patterns across subgames can then support a variety of equilibrium rewards by firms. Despite this fact, we can show that the shift from proprietary data to data sharing improves consumer welfare, no matter the equilibrium selection or the nature of linkages between consumers. Proposition 6.1. In both the quality linkage and circumstance linkage models, consumer welfare is higher under data sharing than under proprietary data. This result arises from the interplay of two forces: how data sharing impacts the total surplus generated from the market via participation and effort, and how it changes the split of this surplus between consumers and firms. Under data sharing, firms are identical from the consumers’ point of view, since all firms have access to the same outcomes regardless of the pattern of participation. This forces firm profits to zero and transfers all surplus to consumers. On the other hand, data sharing has a potentially ambiguous impact on total surplus. Total surplus is rising in effort, and effort is rising in the number of participating agents under circumstance linkages, but falling under quality linkages (Proposition 3.1).29 More precisely, total surplus is rising in effort on the interval $[0,a_{FB}]$, where $a_{FB}$ is the first-best action satisfying $C^{\prime}(a_{FB})=1$. We showed in Lemma 4.1 that equilibrium actions are bounded below first-best. Thus, on the relevant domain, total surplus is rising in effort. So while consumer welfare clearly rises in the circumstance linkage model, the result under quality linkages is more subtle. We establish the result for the quality linkage model by proving that under proprietary data, in every equilibrium agents endogenously choose to interact with a single firm (Lemma E.2). This means that data sharing does not increase the effective population size, and aggregate surplus is the same with or without data sharing. The impact of data sharing on consumer welfare is then completely determined by the split of surplus, which we already observed is maximized for consumers under data sharing. So consumer welfare must be at least as large under this regime. Proposition 6.1 indicates that under either kind of linkage across consumer outcomes, the introduction of data sharing is welfare-improving for consumers.
This result does not imply that under data sharing, the identification of linkages always increases consumer welfare. As noted in Section 4.2, introduction of a quality linkage increases consumer welfare, but introduction of a circumstance linkage diminishes it. Thus, data sharing (the pooling of information across competitive firms) and data linkages (the identification of relationships among consumers that make one consumer’s outcomes predictive of another’s), while related, play very different roles: Data linkages determine how the size of a firm’s consumer base impacts the effort that each consumer exerts; while data sharing determines the pattern of participation across the firms and how surplus is divided between consumers and firms. The results of this section reveal that data linkages and data sharing interact in important ways. 7 Extensions 7.1 Robustness to Uncertainty We have so far supposed that consumers know the total population size $N$ and the structure of correlation across the outcomes $S_{i}$. In practice, consumers may not have this kind of detailed knowledge about their segment. We show next that our qualitative findings remain unchanged when agents have uncertainty about the strength of correlation across outcomes and about the population size, so long as agents know whether consumers in their segment are related by quality or circumstance. Formally, suppose that in the quality linkage model agents may be grouped into any of $K$ “quality linkage” segments, each of which corresponds to a different correlation structure across types; that is, $\overline{\theta}\sim F^{k}_{\overline{\theta}},$ $\theta^{\bot}_{i}\sim F^{k}_{\theta^{\bot}},$ and $\varepsilon_{i}\sim F^{k}_{\varepsilon}$ for segment $k=1,...,K.$ All agents share a common belief about the probability that they are in each segment. (The case of $K$ “circumstance linkage” segments may be similarly defined.) At the same time, suppose that the number of agents $N$ is a random variable, potentially dependent on the segment, with distribution $N\sim G^{k}_{\gamma}$, where $\gamma$ is a scale factor known to all agents such that for each segment $k,$ $G^{k}_{\gamma}$ first-order stochastically dominates $G^{k}_{\gamma^{\prime}}$ whenever $\gamma>\gamma^{\prime}.$ Under this specification, the first-order condition characterizing optimal effort when agents enter with probability $p$ may be written $$\mathbb{E}\left[MV(1+\widetilde{N},k)\right]=C^{\prime}(a^{\ast}),$$ where $MV(N^{\prime},k)$ is the marginal value of distortion when $N^{\prime}$ agents enter and the consumer is part of segment $k,$ $\widetilde{N}\sim\text{Bin}(N-1,p),$ and $N$ and $k$ are both random variables. Note that for each segment $k,$ $MV(N,k)$ changes with $N$ just as in Lemma 3.1. Then conditional on the segment $k,$ $\mathbb{E}[MV(1+\widetilde{N},k)\mid k]$ decreases with $p$ and $\gamma$ in the quality linkage model, and increases with $p$ and $\gamma$ in the circumstance linkage model. Since this property holds for every segment $k,$ it must also hold for the unconditional expected marginal value $\mathbb{E}\left[MV(1+\widetilde{N},k)\right]$. The reasoning of the previous paragraph yields the conclusion that the expected marginal value of distortion moves with the population scale factor $\gamma$ and the entry rate $p$ just as it does with respect to $N$ and $p$ in the baseline model. So the following corollary holds: Corollary. 
In the model with uncertainty over segment and population size, equilibrium effort and participation rates exhibit the same comparative statics in $\gamma$ as with respect to $N$ in Theorems 4.1 and 4.2. That is, an increase in $\gamma$—which shifts up the distribution for the number of participants no matter the realized segment---leads to higher effort under circumstance linkages and lower effort under quality linkage.303030The threshold $N^{*}$ at which participation rates begin to drop in the “circumstance linkage” case would, however, depend on details of their beliefs about the segment. 7.2 Multiple Linkages So far we have conducted our analysis supposing that each consumer is identified as part of a single segment. In practice a consumer may belong to several demographic and lifestyle segments, each of which may be used by an organization to improve predictions of the consumer’s type. We now show that aggregation of outcomes from multiple segments for prediction creates a natural amplification of the effort effect identified in Proposition 3.1: as the number of identifiable quality linkages for a consumer increases (e.g. because the organization has purchased data about additional covariates), his effort declines; and as the number of identifiable circumstance linkages for a consumer increases, his effort rises. To formally model variation in the number of segments, we focus on the effort exerted by a single agent, who we refer to as agent 0. We decompose the agent’s outcome $S_{0}$ as the sum of a number of components, some common and some idiosyncratic. In the quality linkage context, we write $$S_{0}=a_{0}+\sum_{j=1}^{J}\overline{\theta}^{j}+\theta^{\bot}_{0}+\varepsilon_% {0},$$ where $\theta^{\bot}_{0}$ and $\varepsilon_{0}$ are idiosyncratic persistent and transient components of the outcome. Each $\overline{\theta}^{j}$ is a persistent component of the outcome which is held in common with a segment $j$ consisting of $N_{j}$ agents. The outcomes of agents in segment $j$ are observed by the principal, and each agent $i$ in this segment has an outcome distributed as $$S^{j}_{i}=a^{j}_{i}+\overline{\theta}^{j}+\theta^{\bot,j}_{i}+\varepsilon^{j}_% {i}$$ where $\theta^{\bot,j}_{i}$ and $\varepsilon^{j}_{i}$ are idiosyncratic.313131For simplicity, we do not model agents in other groups as having multiple linkages. Extending the model to allow such linkages would not impact results in any way so long as no group $j$ is linked to another group $j^{\prime}$ also linked to agent 0. As usual, the principal wishes to predict $\theta_{0}=\sum_{j=1}^{J}\overline{\theta}^{j}+\theta^{\bot}_{i}.$ Analogously, in the circumstance linkage model we decompose the agent’s outcome as $$S_{0}=a_{0}+\theta_{0}+\sum_{j=1}^{J}\overline{\varepsilon}^{j}+\varepsilon^{% \bot}_{0},$$ where each agent $i$ from group $j$ has an outcome distributed as $$S^{j}_{i}=a^{j}_{i}+\theta^{j}_{i}+\overline{\varepsilon}^{j}+\varepsilon^{% \bot,j}_{i}.$$ As in the baseline model, all type and shock terms are mutually independent. In each model we impose analogs of the assumptions in Section 2.6 on the relevant densities and posterior means. Participation of all agents is exogenously given. Proposition 7.1 below demonstrates a comparative static in the number of linkages observed by the principal. 
A principal who observes $m$ linkages understands the correlation structure of each $\overline{\theta}^{j}$ (or $\overline{\varepsilon}^{j}$) with the segment-$j$ outcomes $(S^{j}_{1},...,S^{j}_{N_{j}})$ for $j=1,...,m,$ but believes that for $j=m+1,...,J$ each $\overline{\theta}^{j}$ (or $\overline{\varepsilon}^{j}$) term is idiosyncratic. This could, for example, correspond to the principal knowing which of their consumers are charitable givers, but not knowing which consumers are single parents. Let $a^{\dagger}_{Q}(m)$ be agent 0’s equilibrium action when the principal observes $m$ linkages in the quality linkage model, with $a^{\dagger}_{C}(m)$ similarly defined for the circumstance linkage model. The following result characterizes how agent 0’s equilibrium action changes with $m.$ Proposition 7.1. $a^{\dagger}_{Q}(m)$ is strictly decreasing in $m,$ while $a^{\dagger}_{C}(m)$ is strictly increasing in $m.$ For simplicity we have restricted attention to multiple linkages of the same type. However, the basic logic of Proposition 7.1 holds even when the agent may be linked to other segments via both quality and circumstance linkages. Given any initial set of linkages (each of which may be either a quality or circumstance linkage), identification of an additional quality linkage decreases equilibrium effort, while identification of an additional circumstance linkage increases equilibrium effort. (We omit the proof, which follows straightforwardly along the lines of the proof of Proposition 7.1.) 7.3 Forecast Prediction and Welfare So far we have considered prediction of an agent’s type as relevant for social welfare only insofar as it generates incentives for the agent to exert effort to influence the prediction. However, in some applications, better tailoring of a service level to fit the agent’s type may involve changes in allocation which improve welfare. For instance, a bank extending loans to small businesses may increase total output if it is able to more accurately match loan amounts to the profitability of each business. When better prediction improves welfare, the social welfare results of Proposition 4.1 are qualitatively the same for circumstance linkages, but may change under a quality linkage. Identification of a circumstance linkage now exerts two positive forces on per-agent welfare, improving both the effort exerted and the forecast precision of each participating agent’s type (given a fixed entry rate). Since the participation rate still drops to zero when the population size becomes large, circumstance linkages improve welfare for small populations but decrease it for large populations, identical to the baseline model. Under a quality linkage, the impacts of the linkage on effort and on prediction accuracy have countervailing effects on welfare. For large populations the total effect is determined by the comparison between the drop in effort from $a^{\ast}(1)$ to $\lim_{N\rightarrow\infty}a^{\ast}(N)$ and the gains from accurate prediction of $\overline{\theta}.$ When the value of improved prediction is small, quality linkages decrease welfare for large populations (as in our baseline model), while the opposite is true when the value of improved prediction is large. 8 Conclusion As firms and governments move towards collecting large datasets of consumer transactions and behavior as inputs to decision-making, the question of whether and how to regulate the usage of consumer data has emerged as an important policy issue.
Recent regulations, such as the European Union’s General Data Protection Regulation (GDPR), have focused on protecting consumers’ privacy and improving transparency regarding what kind of data is being collected. An important complementary consideration when designing regulations is how data impacts social and economic behaviors. In the present paper, we analyze one such impact—the effect that consumer segmentations identified by novel datasets have on consumer incentives for socially valuable effort. We find that the behavioral and welfare consequences depend crucially on how consumers in a segment are linked. These results suggest that regulations should take into account not just whether individual data is informative about other consumers, but whether that data is primarily useful for inferring quality or denoising observations. In practice, the usage of a particular dataset is likely to differ across domains, and may have as much to do with the underlying correlation structure of the data as it does with the algorithms used to aggregate that data. We hope that even the reduced-form models of data aggregation that we have considered here make clear that regulation of the “amount” of data is too crude for many objectives—the structure of that data, and how it is used for prediction, can have important consequences. Finally, our analysis in Section 6 of the interaction between market forces and data linkages points to another interesting avenue for subsequent work. Since participation is a strategic complement under quality linkages but a strategic substitute under circumstance linkages, the former encourages the emergence of a single firm that serves all consumers, while the latter discourages it. This suggests that identification of linkages across consumers affects not just those consumers and their behavior, but can also have important implications for market structure and antitrust policy. References Acemoglu et al. (2019) Acemoglu, D., A. Makhdoumi, A. Malekian, and A. Ozdaglar (2019): “Too Much Data: Prices and Inefficiencies in Data Markets,” Working Paper. Acquisti et al. (2015) Acquisti, A., L. Brandimarte, and G. Loewenstein (2015): “Privacy and Human Behavior in the Age of Information,” Science, 347, 509–514. Agarwal et al. (2019) Agarwal, A., M. Dahleh, and T. Sarkar (2019): “A Marketplace for Data: An Algorithmic Solution,” in Proceedings of the 2019 ACM Conference on Economics and Computation, 701–726. Auriol et al. (2002) Auriol, E., G. Friebel, and L. Pechlivanos (2002): “Career Concerns in Teams,” Journal of Labor Economics, 20, 289–307. Ball (2019) Ball, I. (2019): “Scoring Strategic Agents,” Working Paper. Bergemann et al. (2019) Bergemann, D., A. Bonatti, and T. Gan (2019): “The Economics of Social Data,” Working Paper. Bergemann et al. (2018) Bergemann, D., A. Bonatti, and A. Smolin (2018): “The Design and Price of Information,” American Economic Review, 108, 1–48. Blackwell and Dubins (1962) Blackwell, D. and L. Dubins (1962): “Merging of Opinions with Increasing Information,” The Annals of Mathematical Statistics, 3, 882–886. Bonatti and Cisternas (2019) Bonatti, A. and G. Cisternas (2019): “Consumer Scores and Price Discrimination,” Review of Economic Studies, forthcoming. Chouldechova (2017) Chouldechova, A. (2017): ‘‘Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments,” Big Data, 5, 153–163. Dewatripont et al. (1999) Dewatripont, M., I. Jewitt, and J. 
Tirole (1999): “The Economics of Career Concerns, Part I: Comparing Information Structures,” Review of Economic Studies, 66, 183–198. Dwork and Roth (2014) Dwork, C. and A. Roth (2014): “The Algorithmic Foundations of Differential Privacy,” Found. Trends Theor. Comput. Sci., 9, 211–407. Eilat et al. (2019) Eilat, R., K. Eliaz, and X. Mu (2019): “Optimal Privacy-Constrained Mechanisms,” Working Paper. Eliaz and Spiegler (2018) Eliaz, K. and R. Spiegler (2018): “Incentive-Compatible Estimators,” Working Paper. Elliott and Galeotti (2019) Elliott, M. and A. Galeotti (2019): “Market Segmentation through Information,” Working Paper. European Commission (2020) European Commission (2020): “A European strategy for data.” Fainmesser et al. (2019) Fainmesser, I. P., A. Galeotti, and R. Momot (2019): “Digital Privacy,” Working Paper. Federal Trade Commission (2014) Federal Trade Commission (2014): “Data Brokers: A Call for Transparency and Accountability.” Frankel and Kartik (2019) Frankel, A. and N. Kartik (2019): “Muddled Information,” Journal of Political Economy, 127, 1739–1776. Frankel and Kartik (2020) ——— (2020): “Improving Information from Manipulable Data,” Working Paper. Georgiadis and Powell (2019) Georgiadis, G. and M. Powell (2019): “Optimal Incentives under Moral Hazard: From Theory to Practice,” Working Paper. Gomes and Pavan (2019) Gomes, R. and A. Pavan (2019): “Price Customization and Targeting in Matching Markets,” Working Paper. Green and Stokey (1983) Green, J. and N. Stokey (1983): “A Comparison of Tournaments and Contracts,” Journal of Political Economy, 91, 349–364. Hidir and Vellodi (2019) Hidir, S. and N. Vellodi (2019): “Personalization, Discrimination and Information Disclosure,” Working Paper. Holmström (1982) Holmström, B. (1982): “Moral Hazard in Teams,” Bell Journal of Economics, 13, 324–340. Holmström (1999) ——— (1999): “Managerial Incentive Problems: A Dynamic Perspective,” Review of Economic Studies, 66, 169–182. Hu et al. (2019) Hu, L., N. Immorlica, and J. W. Vaughan (2019): “The Disparate Effects of Strategic Manipulation,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, 259–268. Ichihashi (2019) Ichihashi, S. (2019): “Online Privacy and Information Disclosure by Consumers,” American Economic Review, 110, 569–595. Jullien et al. (2018) Jullien, B., Y. Lefouili, and M. H. Riordan (2018): “Privacy Protection and Consumer Retention,” Working Paper. Kartik et al. (2019) Kartik, N., F. X. Lee, and W. Suen (2019): “A Theorem on Bayesian Updating and Applications to Communication Games,” Working Paper. Kleinberg et al. (2017) Kleinberg, J., S. Mullainathan, and M. Raghavan (2017): “Inherent Trade-Offs in the Fair Determination of Risk Scores,” in 8th Innovations in Theoretical Computer Science Conference (ITCS 2017), vol. 67, 43:1–43:23. Lazear and Rosen (1981) Lazear, E. and S. Rosen (1981): “Rank-Order Tournaments as Optimum Labor Contracts,” Journal of Political Economy, 89, 841–864. Meyer and Vickers (1997) Meyer, M. A. and J. Vickers (1997): “Performance Comparisons and Dynamic Incentives,” Journal of Political Economy, 105, 547–581. Milgrom (1981) Milgrom, P. (1981): “Good News and Bad News: Representation Theorems and Applications,” The Bell Journal of Economics, 12, 380–391. Olea et al. (2018) Olea, J. L. M., P. Ortoleva, M. M. Pai, and A. Prat (2018): “Competing Models,” Working Paper. Rodina (2017) Rodina, D. (2017): “Information Design and Career Concerns,” Working Paper. Saumard and Wellner (2014) Saumard, A. and J. A.
Wellner (2014): “Log-Concavity and Strong Log-Concavity: A Review,” Statistics Surveys, 8, 45–114. Senate Committee on Commerce, Science, and Transportation (2013) Senate Committee on Commerce, Science, and Transportation (2013): “A Review of the Data Broker Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes.” Shleifer (1985) Shleifer, A. (1985): “A Theory of Yardstick Competition,” RAND Journal of Economics, 16, 319–327. Yang (2019) Yang, K. H. (2019): “Selling Consumer Data for Profit: Optimal Market-Segmentation Design and its Consequences,” Working Paper. Appendix The appendices are structured as follows: Appendix A reports a list of actual consumer data segmentations sold by data brokers. Appendix B establishes technical results used in the proofs of the results in the body of the paper. The remaining appendices present proofs of all results in the body of the paper. Appendix A Consumer Segments Provided by Data Brokers In this appendix we produce a list of examples of actual consumer segmentations produced by data brokers, as reported in Federal Trade Commission (2014) and Senate Committee on Commerce, Science, and Transportation (2013). We have informally categorized segments according to whether they might represent a quality linkage or a circumstance linkage; in practice, this categorization would depend also on the time frame for forecasting. For example, a segment of “consumers with children in college” during a particular observation cycle is a quality linkage segment while the children remain in college, but a circumstance linkage segment once the children have graduated. Besides these named categories, data brokers also provide segmentation based on numerous demographic, health, interest, financial, and social media indicators, including: miles traveled in the last 4 weeks, number of whiskey drinks consumed in the past 30 days, whether the individual or household is a pet owner, whether the individual donates to charitable causes, whether the individual enjoys reading romance novels, whether the individual participates in sweepstakes or contests, whether the individual suffers from allergies, whether the individual is a member of five or more social networks, whether the individual is a heavy Twitter user, among countless others. Appendix B Preliminary Results In this section we establish a number of first-order stochastic dominance and monotonicity results used in proofs of results in the body of the paper. Throughout this appendix, fix a segment size $N$ and assume that all agents opt in. (All results extend immediately to any set of agents $I\subset\{1,...,N^{\prime}\}$ of size $N$ entering from a segment of size $N^{\prime}>N.$) Let $G^{M}_{i}$ denote the distribution function of agent $i$’s outcome in model $M\in\{Q,C\}$, with $M=Q$ the quality linkage model and $M=C$ the circumstance linkage model. We will write $g^{M}_{i}$ for the density function associated with $G^{M}_{i}.$ For the joint distribution of the outcomes of agents $i$ through $j$, we will write $G^{M}_{i:j}.$ B.1 Smooth MLRP A classic result of Milgrom (1981) demonstrates that if a signal satisfies the monotone likelihood ratio property (MLRP), then posterior beliefs can be ordered by first-order stochastic dominance. For our results we desire not just that the posterior distribution be strictly decreasing in the conditioning variable, but that it be differentiable and that the derivative be strictly negative.
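As a point of reference before the formal machinery, the following sketch illustrates the desired property in the simplest setting we could write down. It is purely for illustration and assumes a Gaussian prior and Gaussian noise (an assumption made nowhere in the paper, whose conditions require only smooth, strictly log-concave densities); in this case the posterior CDF of the latent variable is available in closed form, is smooth in the conditioning signal, and has a strictly negative derivative, which is exactly the strengthening of Milgrom’s FOSD ordering that the smooth MLRP defined next is designed to deliver in general.

```python
# Illustrative sketch (assumed Gaussian example, not part of the paper's model):
# with theta ~ N(0, s_theta^2), S = theta + eps, eps ~ N(0, s_eps^2), the posterior
# theta | S is N(k*S, (1 - k)*s_theta^2) with k = s_theta^2 / (s_theta^2 + s_eps^2),
# so H(t, S) = Pr(theta <= t | S) is smooth and strictly decreasing in S.
import numpy as np
from scipy.stats import norm

s_theta, s_eps = 1.0, 1.0
k = s_theta**2 / (s_theta**2 + s_eps**2)      # posterior-mean weight on the signal
s_post = np.sqrt((1.0 - k) * s_theta**2)      # posterior standard deviation

def posterior_cdf(t, S):
    """H(t, S) = Pr(theta <= t | S) for the Gaussian example."""
    return norm.cdf((t - k * S) / s_post)

t_grid = np.linspace(-2.0, 2.0, 9)
low, high = posterior_cdf(t_grid, S=0.0), posterior_cdf(t_grid, S=1.0)
assert np.all(high < low)  # pointwise downward shift of the posterior CDF as S rises
# dH/dS = -(k / s_post) * pdf((t - k*S) / s_post) < 0 everywhere: smooth, strictly negative.
for t, a, b in zip(t_grid, low, high):
    print(f"t={t:+.1f}:  H(t | S=0)={a:.3f}  H(t | S=1)={b:.3f}")
```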
We define a smooth form of the MLRP sufficient to achieve this result. Definition B.1 (Smooth MLRP). A family of conditional density functions $\{f(x\mid y)\}_{y\in Y}$ on $\mathbb{R}$ for some $Y\subset\mathbb{R}$ satisfies the smooth monotone likelihood ratio property (SMLRP) in $y$ if: • $f(x\mid y)$ is a strictly positive, $C^{1,0}$ function323232A function $f:\mathbb{R}^{2}\rightarrow\mathbb{R}$ lies in the class $C^{1,0}$ if it is continuous everywhere and $\frac{\partial f}{\partial x}(x,y)$ exists and is continuous everywhere. of $(x,y),$ • $f(x\mid y)$ and $\frac{\partial}{\partial x}f(x\mid y)$ are both uniformly bounded for all $(x,y),$ • The likelihood ratio function $$\ell(x;y,y^{\prime})\equiv\frac{f(x\mid y)}{f(x\mid y^{\prime})}$$ satisfies $\frac{\partial\ell}{\partial x}(x;y,y^{\prime})>0$ for every $x$ and $y>y^{\prime}.$ This definition is a strengthening of the MLRP definition of Milgrom (1981). It requires not only that the likelihood ratio function be everywhere strictly increasing, but that it be differentiable with the derivative strictly positive. It also imposes regularity conditions on the likelihood and its derivative which will be necessary for the desired FOSD result to hold. One useful identity involving the likelihood ratio function is $$\frac{\partial\ell}{\partial x}(x;y,y^{\prime})=\frac{f(x|y)}{f(x|y^{\prime})}% \left(\frac{\partial}{\partial x}\log f(x|y)-\frac{\partial}{\partial x}\log f% (x|y^{\prime})\right).$$ Thus the condition on the likelihood ratio function imposed by SMLRP is equivalent to the condition that $\frac{\partial}{\partial x}\log f(x|y)$ be a strictly increasing function of $y$ for every $x.$ The following lemma establishes a very important class of random variables satisfying SMLRP. Lemma B.1. Let $X$ and $Y$ be two independent random variables with density functions $f_{X}$ and $f_{Y}$ which are each $C^{1}$, strictly positive, strictly log-concave functions, and which each have bounded first derivative. Let $Z=k+X+Y$ for a constant $k$. Then the conditional densities $f_{Z\mid X}(z\mid x)$ and $f_{Z\mid Y}(z\mid y)$ satisfy the SMLRP in $x$ and $y,$ respectively. Proof. First take $k=0.$ We prove the result for $f_{Z\mid X},$ with the result for $f_{Z\mid Y}$ following symmetrically. Note that $f_{Z\mid X}(z\mid x)=f_{Y}(z-x).$ By Lemma O.1, $f_{Y}$ is bounded. This result along with the additional assumptions on $f_{Y}$ ensure that $f_{Z\mid X}$ satisfies the first two conditions of SMLRP. As for the likelihood ratio condition, it is sufficient to establish that $\frac{\partial}{\partial z}\log f_{Z\mid X}(z\mid x)=\frac{\partial}{\partial z% }\log f_{Y}(z-x)$ is strictly increasing in $x$ for each $z.$ But since $f_{Y}$ is strictly log-concave, $\frac{\partial}{\partial z}\log f_{Y}(z-x)>\frac{\partial}{\partial z}\log f_{% Y}(z-x^{\prime})$ whenever $z-x<z-x^{\prime},$ i.e. whenever $x>x^{\prime}.$ So the likelihood ratio condition is satisfied as well. Now suppose $k\neq 0.$ Then the result applied to the random variable $X+Y$ establishes that $f_{X+Y\mid X}(z\mid x)$ and $f_{X+Y\mid Y}(z\mid y)$ satisfy the SMLRP in $x$ and $y,$ respectively. As $f_{Z\mid X}(z\mid x)=f_{X+Y\mid X}(z-k\mid x)$ and $f_{Z\mid Y}(z\mid y)=f_{X+Y\mid Y}(z-k\mid y)$, and since each of the conditions of the SMLRP are invariant to shifts in the first argument, these densities satisfy the SMLRP as well. ∎ The following lemma is the main result of this appendix. 
It strengthens the FOSD result of Milgrom (1981) to ensure that the posterior distribution function is smooth and has a strictly negative derivative wrt the conditioning variable. The sufficient conditions are that the likelihood function satisfy SMLRP and that the density function of the unobserved variable be continuous. The proof here establishes the sign of the derivative, with the proof of smoothness relegated to Lemma O.2 in the Online Appendix. Lemma B.2 (Smooth FOSD). Let $X$ and $Y$ be two random variables for which the density $g(y)$ for $Y$ and the conditional densities $f(x\mid y)$ for $X\mid Y$ exist. Suppose that $f(x\mid y)$ satisfies the SMLRP in $y$ and $g(y)$ is continuous. Then $H(x,y)\equiv\Pr(Y\leq y\mid X=x)$ is a $C^{1}$ function of $(x,y)$ and $\frac{\partial H}{\partial x}(x,y)<0$ everywhere. Proof. Lemma O.2 establishes that $H$ is a $C^{1}$ function. To sign its derivative wrt $x,$ note that the derivative of $\widehat{H}(x,y)\equiv H(x,y)^{-1}-1$ may manipulated to obtain the form $$\displaystyle\frac{\partial\widehat{H}}{\partial x}(x,y)=$$ $$\displaystyle\left(\int_{-\infty}^{y}f(x\mid y^{\prime\prime})\,dG(y^{\prime% \prime})\right)^{-2}$$ $$\displaystyle\times\int_{y}^{\infty}dG(y^{\prime})\int_{-\infty}^{y}dG(y^{% \prime\prime})\,\left(f(x\mid y^{\prime\prime})\frac{\partial}{\partial x}f(x% \mid y^{\prime})-f(x\mid y^{\prime})\frac{\partial}{\partial x}f(x\mid y^{% \prime\prime})\right).$$ (See the proof of Lemma O.2 for a detailed derivation.) The integrand may be rewritten $$\displaystyle f(x\mid y^{\prime\prime})\frac{\partial}{\partial x}f(x\mid y^{% \prime})-f(x\mid y^{\prime})\frac{\partial}{\partial x}f(x\mid y^{\prime\prime})$$ $$\displaystyle=$$ $$\displaystyle f(x\mid y^{\prime\prime})^{2}\left(\frac{\frac{\partial}{% \partial x}f(x\mid y^{\prime})}{f(x\mid y^{\prime\prime})}-\frac{f(x\mid y^{% \prime})\frac{\partial}{\partial x}f(x\mid y^{\prime\prime})}{f(x\mid y^{% \prime\prime})^{2}}\right)$$ $$\displaystyle=$$ $$\displaystyle f(x\mid y^{\prime\prime})^{2}\frac{\partial}{\partial x}\ell(x;y% ^{\prime},y^{\prime\prime}).$$ Now, as $y^{\prime}>y>y^{\prime\prime}$ on the interior of the domain of integration, $\frac{\partial}{\partial x}\ell(x;y^{\prime},y^{\prime\prime})>0$ everywhere and so $\frac{\partial\widehat{H}}{\partial x}(x,y)>0.$ Therefore $$\frac{\partial H}{\partial x}(x,y)=-\frac{\frac{\partial\widehat{H}}{\partial x% }(x,y)}{(\widehat{H}(x,y)+1)^{2}}<0,$$ as desired. ∎ B.2 SFOSD of Posterior Distributions We now develop smooth first-order stochastic dominance results regarding posterior distributions of various latent variables as outcomes shift. These results rely heavily on the SFOSD result established in Lemma B.2. Application of that lemma requires checking smoothness and boundedness conditions of the underlying likelihood functions, which are straightforward but tedious in our environment. We relegate proofs of these regularity conditions to Online Appendix O.1. The following result establishes that as an agent’s outcome increases, inferences about the common component of the outcome increase as well. Lemma B.3. 
For agent $i\in\{1,...,N\}$ and outcome-action profile $(\mathbf{S}_{-i},\mathbf{a}):$ • $F^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S};\mathbf{a})$ is a $C^{1}$ function of $(S_{i},\overline{\theta})$ satisfying $\frac{\partial}{\partial S_{i}}F^{Q}_{\overline{\theta}}(\overline{\theta}\mid% \mathbf{S};\mathbf{a})<0$ for all $(S_{i},\overline{\theta}),$ • $F^{C}_{\overline{\varepsilon}}(\overline{\varepsilon}\mid\mathbf{S};\mathbf{a})$ is a $C^{1}$ function of $(S_{i},\overline{\varepsilon})$ satisfying $\frac{\partial}{\partial S_{i}}F^{C}_{\overline{\varepsilon}}(\overline{% \varepsilon}\mid\mathbf{S};\mathbf{a})<0$ for all $(S_{i},\overline{\varepsilon})$, Proof. For convenience, we suppress the dependence of distributions on $\mathbf{a}$ in this proof. Fix $\mathbf{S}_{-i}.$ We will prove the first result, with the second following from nearly identical work by permuting the roles of $\theta$ and $\varepsilon$. The result follows from Lemma B.2 provided that 1) $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{-i})$ is continuous wrt $\overline{\theta},$ and 2) $g^{Q}_{i}(S_{i}\mid\overline{\theta},\mathbf{S}_{-i})$ satisfies SMLRP with respect to $\overline{\theta}$. As for the first condition, Bayes’ rule gives $$f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{-i})=\frac{f_{% \overline{\theta}}(\overline{\theta})\prod_{j\neq i}g_{j}(S_{j}\mid\overline{% \theta})}{g_{-i}(\overline{S}_{-i})}=\frac{f_{\overline{\theta}}(\overline{% \theta})\prod_{j\neq i}f_{\theta^{\bot}+\varepsilon}(S_{j}-\overline{\theta}-a% _{j})}{g_{-i}(\overline{S}_{-i})}.$$ Then as $f_{\overline{\theta}}$ and $f_{\theta^{\bot}+\varepsilon}$ are both continuous functions, $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{-i})$ is a continuous function of $\overline{\theta}.$ It therefore suffices to establish condition 2. Note that conditional on $\overline{\theta},$ $S_{i}$ is independent of $\mathbf{S}_{-i}$ in the quality linkage model; so $g^{Q}_{i}(S_{i}\mid\overline{\theta},\mathbf{S}_{-i})=g^{Q}_{i}(S_{i}\mid% \overline{\theta}).$ So it suffices to establish that $g^{Q}_{i}(S_{i}\mid\overline{\theta})$ satisfies SMLRP with respect to $\overline{\theta}$. Recall that in the quality linkage model, $S_{i}=a_{i}+\overline{\theta}+\theta^{\bot}_{i}+\varepsilon_{i},$ where by assumption $\overline{\theta},$ $\theta^{\bot}_{i}$ and $\varepsilon_{i}$ all have $C^{1},$ strictly positive, strictly log-concave density functions with bounded derivatives. Lemma O.1 ensures that these densities are additionally bounded. These properties are all inherited by the density function of the sum $\theta^{\bot}_{i}+\varepsilon_{i},$ which is just the convolution of the density functions for $\theta^{\bot}_{i}$ and $\varepsilon_{i}$. Lemma B.1 then implies that $g^{Q}_{i}(S_{i}\mid\overline{\theta})$ satisfies SMLRP with respect to $\overline{\theta},$ as desired. ∎ The following lemma establishes smooth stochastic dominance of a posterior distribution arising in analysis of the quality linkage model. While the property is the same one established by Lemma B.2, the boundedness conditions of that lemma cannot be guaranteed and so slightly different techniques are required to reach the result. Lemma B.4. 
For every outcome-action profile $(S_{1},a_{1})$ and type $\theta_{1},$ the function $F^{Q}_{\theta_{1}}(\theta_{1}\mid S_{1},\overline{\theta};a_{1})$ is continuously differentiable wrt $\overline{\theta}$ everywhere, and $\frac{\partial}{\partial\overline{\theta}}F^{Q}_{\theta_{1}}(\theta_{1}\mid S_% {1},\overline{\theta};a_{1})<0.$ Proof. For convenience, we suppress the dependence of distributions on $a_{1}$ in this proof. By Bayes’ rule, $$F^{Q}_{\theta_{1}}(t\mid S_{1},\overline{\theta})=\frac{\int_{-\infty}^{t}f_{% \overline{\theta}}(\overline{\theta}\mid\theta_{1}=t^{\prime},S_{1})f_{\theta_% {1}}(t^{\prime}\mid S_{1})\,dt^{\prime}}{\int_{-\infty}^{\infty}f_{\overline{% \theta}}(\overline{\theta}\mid\theta_{1}=t^{\prime},S_{1})f_{\theta_{1}}(t^{% \prime}\mid S_{1})\,dt^{\prime}}.$$ Note that $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\theta_{1},S_{1})$ is independent of $S_{1},$ as $(\theta_{1},S_{1})$ contains the same information as $(\theta_{1},\varepsilon_{1})$ and $\overline{\theta}$ is independent of $\varepsilon_{1}.$ So $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\theta_{1},S_{1})=f^{Q}_{% \overline{\theta}}(\overline{\theta}\mid\theta_{1}).$ Another application of Bayes’ rule reveals that $$f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\theta_{1})=\frac{f^{Q}_{\theta% _{1}}(\theta_{1}\mid\overline{\theta})f_{\overline{\theta}}(\overline{\theta})% }{f_{\theta}(\theta_{1})}=\frac{f_{\theta^{\bot}}(\theta_{1}-\overline{\theta}% )f_{\overline{\theta}}(\overline{\theta})}{f_{\theta}(\theta_{1})},$$ while $$f_{\theta_{1}}(\theta_{1}\mid S_{1})=\frac{g_{1}(S_{1}\mid\theta_{1})f_{\theta% }(\theta_{1})}{g(S_{1})}=\frac{f_{\varepsilon}(S_{1}-\theta_{1}-a_{1})f_{% \theta}(\theta_{1})}{g(S_{1})}.$$ Inserting back into the previous expression for $F^{Q}_{\theta_{1}=t}(\theta_{1}\mid S_{1},\overline{\theta})$ yields $$F^{Q}_{\theta_{1}}(t\mid S_{1},\overline{\theta})=\frac{\int_{-\infty}^{t}f_{% \theta^{\bot}}(t^{\prime}-\overline{\theta})f_{\varepsilon}(S_{1}-t^{\prime}-a% _{1})\,dt^{\prime}}{\int_{-\infty}^{\infty}f_{\theta^{\bot}}(t^{\prime}-% \overline{\theta})f_{\varepsilon}(S_{1}-t^{\prime}-a_{1})\,dt^{\prime}}.$$ Using the change of variables $t^{\prime\prime}=S_{1}-t^{\prime}-a_{1}$ yields $$F^{Q}_{\theta_{1}}(t\mid S_{1},\overline{\theta})=\frac{\int_{S_{1}-t-a_{1}}^{% \infty}f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime\prime})dF_{% \varepsilon}(t^{\prime\prime})}{\int_{-\infty}^{\infty}f_{\theta^{\bot}}(S_{1}% -a_{1}-\overline{\theta}-t^{\prime\prime})dF_{\varepsilon}(t^{\prime\prime})}.$$ Now, as $f^{\prime}_{\theta^{\bot}}$ exists and is bounded, the Leibniz integral rule ensures that derivatives of the numerator and denominator wrt $\overline{\theta}$ may be moved inside the integral sign. So $F^{Q}_{\theta_{1}}(t\mid S_{1},\overline{\theta})$ is differentiable wrt $\overline{\theta}.$ And as $f^{\prime}_{\theta^{\bot}}$ is additionally continuous, the dominated convergence theorem ensures that these derivatives are continuous. Meanwhile the numerator and denominator themselves are each continuous in $\overline{\theta}$ given that $f_{\theta^{\bot}}$ is continuous and bounded. 
Thus $F^{Q}_{\theta_{1}}(\theta_{1}\mid S_{1},\overline{\theta})$ is continuously differentiable wrt $\overline{\theta}.$ To sign the derivative, we may equivalently sign $$H(\overline{\theta})\equiv F^{Q}_{\theta_{1}}(t\mid S_{1},\overline{\theta})^{% -1}-1=\frac{\int_{-\infty}^{S_{1}-t-a_{1}}f_{\theta^{\bot}}(S_{1}-a_{1}-% \overline{\theta}-t^{\prime})dF_{\varepsilon}(t^{\prime})}{\int_{S_{1}-t-a_{1}% }^{\infty}f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime\prime})dF_% {\varepsilon}(t^{\prime\prime})}.$$ Differentiating and re-arranging yields $$\displaystyle H^{\prime}(\overline{\theta})=$$ $$\displaystyle\left(\int_{S_{1}-t-a_{1}}^{\infty}f_{\theta^{\bot}}(S_{1}-a_{1}-% \overline{\theta}-t^{\prime\prime})dF_{\varepsilon}(t^{\prime\prime})\right)^{% -2}$$ $$\displaystyle\times\int_{-\infty}^{S_{1}-t-a_{1}}dF_{\varepsilon}(t^{\prime})% \int_{S_{1}-t-a_{1}}^{\infty}dF_{\varepsilon}(t^{\prime\prime})$$ $$\displaystyle\quad\quad\quad\quad\times\left(-f_{\theta^{\bot}}(S_{1}-a_{1}-% \overline{\theta}-t^{\prime\prime})f^{\prime}_{\theta^{\bot}}(S_{1}-a_{1}-% \overline{\theta}-t^{\prime})\right.$$ $$\displaystyle\quad\quad\quad\quad\quad\quad\left.+f_{\theta^{\bot}}(S_{1}-a_{1% }-\overline{\theta}-t^{\prime})f^{\prime}_{\theta^{\bot}}(S_{1}-a_{1}-% \overline{\theta}-t^{\prime\prime})\right).$$ The integrand may be rewritten $$\displaystyle-f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime\prime}% )f^{\prime}_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime})$$ $$\displaystyle+f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime})f^{% \prime}_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime\prime})$$ $$\displaystyle=$$ $$\displaystyle f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime\prime}% )f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime})$$ $$\displaystyle\times\left(-\frac{f^{\prime}_{\theta^{\bot}}(S_{1}-a_{1}-% \overline{\theta}-t^{\prime})}{f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}% -t^{\prime})}+\frac{f^{\prime}_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t% ^{\prime\prime})}{f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime% \prime})}\right).$$ Note that everywhere on the domain of integration $t^{\prime\prime}>t^{\prime},$ and so because $f_{\theta^{\bot}}$ is strictly log-concave, $$\frac{f^{\prime}_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime\prime% })}{f_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime\prime})}>\frac{f% ^{\prime}_{\theta^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime})}{f_{\theta% ^{\bot}}(S_{1}-a_{1}-\overline{\theta}-t^{\prime})}.$$ Thus the integrand is strictly positive everywhere, meaning $H^{\prime}(\overline{\theta})>0$. In other words, $$\frac{\partial}{\partial\overline{\theta}}F^{Q}_{\theta_{1}}(\theta_{1}\mid S_% {1},\overline{\theta})=-\frac{H^{\prime}(\overline{\theta})}{(H(\overline{% \theta})+1)^{2}}<0,$$ as desired. ∎ The following lemma establishes how inferences about one agent’s quality change as another agent’s outcome changes. Note that the result depends critically on the model. For simplicity, the result is stated in terms of inferences about agent 1’s type as agent $N$’s outcome shifts. By symmetry analogous results hold for any other pair of agents. Lemma B.5. 
For every outcome-action profile $(\mathbf{S}_{-N},\mathbf{a}),$ $$\frac{\partial}{\partial S_{N}}F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S};% \mathbf{a})<0$$ and $$\frac{\partial}{\partial S_{N}}F^{C}_{\theta_{1}}(\theta_{1}\mid\mathbf{S};% \mathbf{a})>0$$ for every $(\theta_{1},S_{N}).$ Proof. For convenience, we suppress the dependence of distributions on $\mathbf{a}$ in this proof. Fix $\mathbf{S}_{-N}.$ Recall that Lemma O.4 established that $F^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})$ is a $C^{1}$ function of $(S_{N},\theta_{1})$ for each model $M\in\{Q,C\}.$ Consider first the quality linkage model. Then $$F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})=\int_{-\infty}^{\infty}F^{Q}_{% \theta_{1}}(\theta_{1}\mid\mathbf{S},\overline{\theta})\,dF^{Q}_{\overline{% \theta}}(\overline{\theta}\mid\mathbf{S}).$$ Conditional on $\overline{\theta},$ $\theta_{1}$ depends on $\mathbf{S}$ only through $S_{1},$ so this can be written $$F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})=\int_{-\infty}^{\infty}F^{Q}_{% \theta_{1}}(\theta_{1}\mid S_{1},\overline{\theta})\,dF^{Q}_{\overline{\theta}% }(\overline{\theta}\mid\mathbf{S}).$$ Lemma B.3 establishes that $F^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})$ is a $C^{1}$ function of $(S_{N},\overline{\theta})$ satisfying $\frac{\partial}{\partial S_{N}}F^{Q}_{\overline{\theta}}(\overline{\theta}\mid% \mathbf{S})<0$ everywhere. Then the function $F^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})-q$ is a $C^{1}$ function of $(S_{N},\overline{\theta},q)$, with Jacobian $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})$ wrt $\overline{\theta}$. By Bayes’ rule, $$f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})=\frac{f_{\overline{% \theta}}(\overline{\theta})\prod_{i=1}^{N}g_{i}(S_{i}\mid\overline{\theta})}{% \int d\overline{\theta}^{\prime}\,f_{\overline{\theta}}(\overline{\theta}^{% \prime})\prod_{i=1}^{N}g_{i}(S_{i}\mid\overline{\theta}^{\prime})}.$$ As $g_{i}(S_{i}\mid\overline{\theta})=f_{\theta^{\perp}+\varepsilon}(S_{i}-% \overline{\theta}-a_{i})$ and $f_{\overline{\theta}}$ and $f_{\theta^{\perp}+\varepsilon}$ are both strictly positive, $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})>0$ everywhere. 
Therefore by the implicit function theorem there exists a $C^{1}$ function $\phi(q,S_{N})$ such that $F^{Q}_{\overline{\theta}}(\phi(q,S_{N})\mid\mathbf{S})=q$ for all $(q,S_{N}),$ and further that $$\frac{\partial\phi}{\partial S_{N}}(q,S_{N})=-\left[\frac{1}{f^{Q}_{\overline{% \theta}}(t\mid\mathbf{S})}\frac{\partial}{\partial S_{N}}F^{Q}_{\overline{% \theta}}(t\mid\mathbf{S})\right]_{t=\phi(q,S_{N})}>0.$$ A change of variables allows $F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})$ to be integrated with respect to quantiles of $\overline{\theta}$ using the quantile function $\phi$, yielding $$F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})=\int_{0}^{1}F^{Q}_{\theta_{1}}(% \theta_{1}\mid S_{1},\overline{\theta}=\phi(q,S_{N}))\,dq.$$ Then for any $\Delta>0,$ $$\displaystyle-\frac{1}{\Delta}\left(F^{Q}_{\theta_{1}}(\theta_{1}\mid S_{N}=s_% {N}+\Delta,\mathbf{S}_{-N})-F^{Q}_{\theta_{1}}(\theta_{1}\mid S_{N}=s_{N},% \mathbf{S}_{-1})\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}-\frac{1}{\Delta}\left(F^{Q}_{\theta_{1}}(\theta_{1}% \mid S_{1},\overline{\theta}=\phi(q,s_{N}+\Delta))-F^{Q}_{\theta_{1}}(\theta_{% 1}\mid S_{1},\overline{\theta}=\phi(q,s_{N}))\right)\,dq.$$ Since $F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})$ is differentiable wrt $S_{N},$ the limit of both sides as $\Delta\downarrow 0$ must be well-defined. Lemma B.4 establishes that $\frac{\partial}{\partial\overline{\theta}}F^{Q}_{\theta_{1}}(\theta_{1}\mid S_% {1},\overline{\theta})$ exists, is continuous in $\overline{\theta},$ and is strictly negative everywhere. Meanwhile we showed above that $\phi(q,S_{N})$ is strictly increasing in $S_{N}.$ This means that the interior of the integrand is strictly positive for every $q$ and $\Delta>0,$ implying by Fatou’s lemma and the chain rule that $$-\frac{\partial}{\partial S_{N}}F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})% \geq-\int_{0}^{1}\left.\frac{\partial}{\partial\overline{\theta}}F^{Q}_{\theta% _{1}}(\theta_{1}\mid S_{1},\overline{\theta})\right|_{\overline{\theta}=\phi(q% ,S_{N})}\frac{\partial\phi}{\partial S_{N}}(q,S_{N})\,dq.$$ As the first term in the integrand is strictly negative while the second is strictly positive, this inequality in turn implies $$\frac{\partial}{\partial S_{N}}F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})<0.$$ Now consider the circumstance linkage model. Virtually all of the work for the quality linkage model goes through with $\overline{\varepsilon}$ exchanged for $\overline{\theta},$ with the key exception that the existence, continuity, and sign of $\frac{\partial}{\partial\overline{\varepsilon}}F^{C}_{\theta_{1}}(\theta_{1}% \mid S_{1},\overline{\varepsilon})$ must be established separately. (Lemma B.4 applies only to the quality linkage model.) 
Note that $F^{C}_{\theta_{1}}(\theta_{1}\mid S_{1}=s,\overline{\varepsilon}=t)=F^{C}_{% \theta_{1}}(\theta_{1}\mid\widetilde{S}_{1}=s-t)$, where $\widetilde{S}_{1}\equiv a_{1}+\theta_{1}+\varepsilon^{\bot}_{1}.$ It is therefore sufficient to analyze $\frac{\partial}{\partial\widetilde{S}_{1}}F^{C}_{\theta_{1}}(\theta_{1}\mid% \widetilde{S}_{1}).$ Let $\widetilde{g}_{1}(\widetilde{S}_{1}\mid\theta_{1})$ be the density function of $\widetilde{S}_{1}$ conditional on $\theta_{1}.$ We invoke Lemma B.1 to conclude that $\widetilde{g}_{1}(\widetilde{S}_{1}\mid\theta_{1})$ satisfies SMLRP in $\theta_{1}.$ As additionally $f_{\theta}(\theta_{1})$ is continuous by assumption, Lemma B.2 ensures that $\frac{\partial}{\partial\widetilde{S}_{1}}F^{C}_{\theta_{1}}(\theta_{1}\mid% \widetilde{S}_{1})$ exists, is continuous, and is strictly negative everywhere. Thus $\frac{\partial}{\partial\overline{\varepsilon}}F^{C}_{\theta_{1}}(\theta_{1}% \mid S_{1},\overline{\varepsilon})$ exists, is continuous, and is strictly positive everywhere. In light of this result, the final steps of the proof from the quality linkage case adapted to the circumstance linkages model show that $$\displaystyle\frac{1}{\Delta}\left(F^{C}_{\theta_{1}}(\theta_{1}\mid S_{N}=s_{% N}+\Delta,\mathbf{S}_{-N})-F^{C}_{\theta_{1}}(\theta_{1}\mid S_{N}=s_{N},% \mathbf{S}_{-1})\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}\frac{1}{\Delta}\left(F^{C}_{\theta_{1}}(\theta_{1}% \mid S_{1},\overline{\varepsilon}=\phi(q,s_{N}+\Delta))-F^{C}_{\theta_{1}}(% \theta_{1}\mid S_{1},\overline{\varepsilon}=\phi(q,s_{N}))\right)\,dq,$$ where the interior of the right-hand side is strictly positive for all $\Delta>0$. Then by Fatou’s lemma and the chain rule $$\frac{\partial}{\partial S_{N}}F^{C}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})% \geq\int_{0}^{1}\left.\frac{\partial}{\partial\overline{\varepsilon}}F^{Q}_{% \theta_{1}}(\theta_{1}\mid S_{1},\overline{\varepsilon})\right|_{\overline{% \varepsilon}=\phi(q,S_{N})}\frac{\partial\phi}{\partial S_{N}}(q,S_{N})\,dq>0.$$ ∎ B.3 Monotonicity of Posterior Expectations This appendix establishes a series of monotonicity results about how posterior expectations of various latent variables change as some agent’s outcome shifts. These results are consequences of the SFOSD results derived in Appendix B.2. Several of the results require smoothness or positivity conditions on underlying distribution and density functions, which are straightforward but tedious to check in our environment. We relegate proofs of these properties to Online Appendix O.1. We first establish that the posterior expectation of an agent’s type increases in his own signal, and that the rate of increase is bounded strictly between 0 and 1. Lemma B.6 (Forecast sensitivity). For each agent $i\in\{1,...,N\}$ and outcome-action profile $(\mathbf{S},\mathbf{a}),$ $$0<\frac{\partial}{\partial S_{i}}\mathbb{E}[\theta_{i}\mid\mathbf{S};\mathbf{a% }]<1.$$ Proof. For convenience, we suppress the dependence of distributions on $\mathbf{a}$ throughout this proof. Also wlog consider agent $i=1.$ We establish the result for the quality linkage model, with the result for the circumstance linkage model following by nearly identical work. 
Fix a vector of signal realizations $\mathbf{S}_{-1}.$ First note that $g^{Q}_{1}(S_{1}\mid\theta_{1},\mathbf{S}_{-1})=g^{Q}_{1}(S_{1}\mid\theta_{1}),$ and $S_{1}$ is the sum of a constant plus the independent random variables $\theta_{1}$ and $\varepsilon_{1},$ each of which has a $C^{1},$ strictly positive, strictly log-concave density function with bounded derivative. Thus by Lemma B.1 $g^{Q}_{1}(S_{1}\mid\theta_{1},\mathbf{S}_{-1})$ satisfies SMLRP with respect to $\theta_{1}.$ Further, $f^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}_{-1})$ is continuous in $\theta_{1}$ by Lemma O.3. Lemma B.2 then ensures that $F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})$ is a $C^{1}$ function of $(\theta_{1},S_{1})$ and $\frac{\partial}{\partial S_{1}}F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})<0$ everywhere. Meanwhile conditional on $\mathbf{S}_{-1}$, $S_{1}$ can be written $$S_{1}=a_{1}+\widetilde{\theta}+\theta^{\bot}_{1}+\varepsilon_{1},$$ where $\widetilde{\theta}$ is independent of $\theta^{\bot}_{1}$ and $\varepsilon_{1}$ and has density function $f_{\widetilde{\theta}}$ defined by $f_{\widetilde{\theta}}(t)\equiv f_{\overline{\theta}}^{Q}(\overline{\theta}=t% \mid\mathbf{S}_{-1}).$ We first show that $f_{\widetilde{\theta}}$ is a $C^{1},$ strictly positive, strictly log-concave function with bounded derivative. By Bayes’ rule, $$f_{\widetilde{\theta}}(t)=\frac{f_{\overline{\theta}}(t)\prod_{i>1}g^{Q}_{i}(S% _{i}\mid\overline{\theta}=t)}{g^{Q}(\mathbf{S}_{-1})}=\frac{f_{\overline{% \theta}}(t)\prod_{i>1}f_{\varepsilon+\theta^{\bot}}(S_{i}-t-a_{i})}{g^{Q}(% \mathbf{S}_{-1})},$$ where $f_{\varepsilon+\theta^{\bot}}$ is the convolution of $f_{\theta^{\bot}}$ and $f_{\varepsilon}.$ Since $f_{\theta^{\bot}}$ and $f_{\varepsilon}$ are both $C^{1},$ strictly positive, strictly log-concave functions with bounded derivatives, so is $f_{\varepsilon+\theta^{\bot}}.$ It follows immediately that $f_{\widetilde{\theta}}$ is a strictly positive, $C^{1}$ function with bounded derivative. Further, taking logarithms yields $$\log f_{\widetilde{\theta}}(t)=\log f_{\overline{\theta}}(t)-\log g^{Q}(% \mathbf{S}_{-1})+\sum_{i>1}\log f_{\varepsilon+\theta^{\bot}}(S_{i}-t-a_{i}).$$ Hence $\log f_{\widetilde{\theta}}$ is a sum of constant and strictly concave functions, meaning it is strictly concave. Thus $f_{\widetilde{\theta}}$ is strictly log-concave. This means that conditional on $\mathbf{S}_{-1},$ $S_{1}$ is the sum of a constant plus the independent random variables $\varepsilon_{1}$ and $\widetilde{\theta}+\theta^{\bot}_{1},$ each of which has a $C^{1},$ strictly positive, strictly log-concave density function with bounded derivative. So by Lemma B.1, $g^{Q}_{1}(S_{1}\mid\varepsilon_{1},\mathbf{S}_{-1})$ satisfies SMLRP with respect to $\varepsilon_{1}.$ Further, $f^{Q}_{\theta_{1}}(\varepsilon_{1}\mid\mathbf{S}_{-1})=f_{\varepsilon}(% \varepsilon_{1})$ is continuous in $\varepsilon_{1}$ by assumption. Lemma B.2 then ensures that $F^{Q}_{\varepsilon_{1}}(\varepsilon_{1}\mid\mathbf{S})$ is a $C^{1}$ function of $(\varepsilon_{1},S_{1})$ and $\frac{\partial}{\partial S_{1}}F^{Q}_{\varepsilon_{1}}(\varepsilon_{1}\mid% \mathbf{S})<0$ everywhere. By definition, $\mathbb{E}[\theta_{1}\mid\mathbf{S}]$ is equal to $$\mathbb{E}[\theta_{1}\mid\mathbf{S}]=\int_{-\infty}^{\infty}\theta_{1}\,dF^{Q}% _{\theta_{1}}(\theta_{1}\mid\mathbf{S}).$$ We will perform a change of measure to expect over quantiles of $\theta_{1}$ rather than $\theta_{1}$ itself. 
Fix $\mathbf{S}_{-1}.$ The previous paragraphs ensure that $F^{Q}_{\theta_{1}}(t\mid\mathbf{S})-q$ is a $C^{1}$ function of $(t,S_{1},q)$ everywhere, while Lemma O.3 ensures that the Jacobian of this function wrt to $t$ is $f^{Q}_{\theta_{1}}(t\mid\mathbf{S})>0$. Then by the implicit function theorem there exists a continuously differentiable quantile function $\phi(q,S_{1})$ such that $F^{Q}_{\theta_{1}}(\phi(q,S_{1})\mid\mathbf{S})=q$ and $$\frac{\partial\phi}{\partial S_{1}}(q,S_{1})=-\left[\frac{1}{f^{Q}_{\theta_{1}% }(t\mid\mathbf{S})}\frac{\partial}{\partial S_{1}}F^{Q}_{\theta_{1}}(t\mid% \mathbf{S})\right]_{t=\phi(q,S_{1})}>0$$ for every $q\in(0,1)$ and $S_{1}\in\mathbb{R}$. Changing measure, $\mathbb{E}[\theta_{1}\mid\mathbf{S}]$ may be expressed as an expectation over quantiles of $\theta_{1}$, yielding $$\mathbb{E}[\theta_{1}\mid\mathbf{S}]=\int_{0}^{1}\phi(q,S_{1})\,dq.$$ Then for any $\Delta>0,$ $$\frac{1}{\Delta}\mathbb{E}[\theta_{1}\mid S_{1}+\Delta,\mathbf{S}_{-1}]-% \mathbb{E}[\theta_{1}\mid\mathbf{S}]=\int_{0}^{1}\frac{1}{\Delta}(\phi(q,S_{1}% +\Delta)-\phi(q,S_{1}))\,dq.$$ By Assumption 2.3, $\mathbb{E}[\theta_{1}\mid\mathbf{S}]$ is differentiable wrt $S_{1}$ everywhere. So the limit of each side is well-defined as $\Delta\downarrow 0.$ Further, as $\phi(q,S_{1})$ is strictly increasing in $S_{1}$ for each $q,$ the interior of the integrand is everywhere positive. Then by Fatou’s lemma $$\frac{\partial}{\partial S_{1}}\mathbb{E}[\theta_{1}\mid\mathbf{S}]\geq\int_{0% }^{1}\frac{\partial\phi}{\partial S_{1}}(q,S_{1})\,dq>0.$$ Now, recall that $$S_{1}=a_{1}+\theta_{1}+\varepsilon_{1},$$ so that $$S_{1}=\mathbb{E}[S_{1}\mid\mathbf{S}]=a_{1}+\mathbb{E}[\theta_{1}\mid\mathbf{S% }]+\mathbb{E}[\varepsilon_{1}\mid\mathbf{S}].$$ Then in particular $\mathbb{E}[\varepsilon_{1}\mid\mathbf{S}]$ must be differentiable wrt $S_{1}$ given that the remaining terms in the identity are. Very similar work to the previous paragraph then implies that $$\frac{\partial}{\partial S_{1}}\mathbb{E}[\varepsilon_{1}\mid\mathbf{S}]>0.$$ Finally, differentiate the identity relating $\mathbb{E}[\theta_{1}\mid\mathbf{S}]$ and $\mathbb{E}[\varepsilon_{1}\mid\mathbf{S}]$ to obtain $$1=\frac{\partial}{\partial S_{1}}\mathbb{E}[\theta_{1}\mid\mathbf{S}]+\frac{% \partial}{\partial S_{1}}\mathbb{E}[\varepsilon_{1}\mid\mathbf{S}].$$ Since each term on the right-hand side is strictly positive, each much also be strictly less than 1. ∎ We next establish that the posterior expectation of the common component of the outcome in each model is increasing in each agent’s outcome, with the rate of increase bounded strictly above 0. Lemma B.7. For each agent $i\in\{1,...,N\}$ and outcome-action profile $(\mathbf{S},\mathbf{a}):$ • In the quality linkage model, $\frac{\partial}{\partial S_{i}}\mathbb{E}[\overline{\theta}\mid\mathbf{S}]>0$, • In the circumstance linkage model, $\frac{\partial}{\partial S_{i}}\mathbb{E}[\overline{\varepsilon}\mid\mathbf{S}% ]>0$. Proof. For convenience, we suppress the dependence of distributions on $\mathbf{a}$ in this proof. We establish the result for the quality linkage model, with the proof for the circumstance linkage model following by nearly identical work. 
By definition of $\mathbb{E}[\overline{\theta}\mid\mathbf{S}],$ $$\mathbb{E}[\overline{\theta}\mid\mathbf{S}]=\int\overline{\theta}\,dF^{Q}_{% \overline{\theta}}(\overline{\theta}\mid\mathbf{S}).$$ Now, Lemma B.3 established that $F^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})$ is a $C^{1}$ function of $(\overline{\theta},S_{i}),$ and $\frac{\partial}{\partial S_{i}}F^{Q}_{\overline{\theta}}(\overline{\theta}\mid% \mathbf{S})<0$ everywhere. Then the function $F^{G}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})-q$ is a $C^{1}$ function of $(q,\overline{\theta},S_{i})$ with Jacobian $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})$ wrt $Q.$ By Bayes’ rule $$f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})=\frac{f_{\overline{% \theta}}(\overline{\theta})\prod_{i=1}^{N}g_{i}(S_{i}\mid\overline{\theta})}{% \int d\overline{\theta}^{\prime}f_{\overline{\theta}}(\overline{\theta}^{% \prime})\prod_{i=1}^{N}g_{i}(S_{i}\mid\overline{\theta}^{\prime})},$$ and as $f_{\overline{\theta}}(\overline{\theta})$ and $g_{i}(S_{i}\mid\overline{\theta})=f_{\theta^{\bot}+\varepsilon}(S_{i}-% \overline{\theta}-a_{i})$ are all strictly positive by assumption, $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})>0$ everywhere. So fix $\mathbf{S}_{-i}.$ Then by the implicit function theorem there exists a $C^{1}$ quantile function $\phi(q,S_{i})$ such that $F^{Q}_{\overline{\theta}}(\phi(q,S_{i})\mid\mathbf{S})=q$ everywhere, and $$\frac{\partial\phi}{\partial S_{i}}(q,S_{i})=-\left[\frac{1}{f^{Q}_{\overline{% \theta}}(\overline{\theta}\mid\mathbf{S})}\frac{\partial}{\partial S_{i}}F^{Q}% _{\overline{\theta}}(\overline{\theta}\mid\mathbf{S})\right]_{\overline{\theta% }=\phi(q,S_{i})}>0.$$ By a change of measure, $\mathbb{E}[\overline{\theta}\mid\mathbf{S}]$ may be expressed as an integral with respect to quantiles of $\overline{\theta}$ as $$\mathbb{E}[\overline{\theta}\mid\mathbf{S}]=\int_{0}^{1}\phi(q,S_{i})\,dq.$$ Then for every $\Delta>0,$ $$\displaystyle\frac{1}{\Delta}\left(\mathbb{E}[\overline{\theta}\mid\mathbf{S}_% {-i},S_{i}=s_{i}+\Delta]-\mathbb{E}[\overline{\theta}\mid\mathbf{S}_{-i},S_{i}% =s_{i}]\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}\frac{1}{\Delta}\left(\phi(q,s_{i}+\Delta)-\phi(q,s_{% i})\right)\,dq.$$ Assumption 2.3 guarantees that $\frac{\partial}{\partial S_{i}}\mathbb{E}[\overline{\theta}\mid\mathbf{S}]$ exists. So the limit of each side as $\Delta\downarrow 0$ is well-defined. Further, since $\phi(q,S_{i})$ is strictly increasing, the integrand on the rhs is well-defined. Then by Fatou’s lemma, $$\frac{\partial}{\partial S_{i}}\mathbb{E}[\overline{\theta}\mid\mathbf{S}]\geq% \int_{0}^{1}\frac{\partial\phi}{\partial S_{i}}(q,S_{i})\,dq>0.$$ ∎ We next establish how the posterior expectation of each agent’s type changes as some other agent’s outcome shifts. Note that the result depends critically on the model. For simplicity we consider how agent 1’s variables shift as agent $N$’s outcome changes. By symmetry an analogous result holds for any pair of agents. Lemma B.8. For every outcome-action profile $(\mathbf{S},\mathbf{a})$, $$\frac{\partial}{\partial S_{N}}\mathbb{E}[\theta_{1}\mid\mathbf{S};\mathbf{a}]>0$$ in the quality linkage model, while $$\frac{\partial}{\partial S_{N}}\mathbb{E}[\theta_{1}\mid\mathbf{S};\mathbf{a}]<0$$ in the circumstance linkage model. Proof. For convenience, we suppress the dependence of distributions on $\mathbf{a}$ in this proof. 
By definition $\mathbb{E}[\theta_{1}\mid\mathbf{S}]$ is given by $$\mathbb{E}[\theta_{1}\mid\mathbf{S}]=\int_{-\infty}^{\infty}\theta_{1}\,dF^{M}% _{\theta_{1}}(\theta_{1}\mid\mathbf{S}).$$ Fix $\mathbf{S}_{-N}.$ By Lemma O.4, $F^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})$ is a $C^{1}$ function of $(S_{N},\theta_{1}),$ and so $F^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})-q$ is a $C^{1}$ function of $(S_{N},\theta_{1},q)$ with Jacobian $f^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})$ wrt $\theta_{1}.$ By Lemma O.3 the Jacobian is strictly positive everywhere, hence by the implicit function theorem there exists a $C^{1}$ quantile function $\phi(q,S_{N})$ satisfying $F^{M}_{\theta_{1}}(\phi(q,S_{1})\mid\mathbf{S})=q$ everywhere, with derivative $$\frac{\partial\phi}{\partial S_{N}}(q,S_{N})=-\left[\frac{1}{f^{M}_{\theta_{1}% }(\theta_{1}\mid\mathbf{S})}\frac{\partial}{\partial S_{N}}F^{M}_{\theta_{1}}(% \theta_{1}\mid\mathbf{S})\right]_{\theta_{1}=\phi(q,S_{N})}.$$ By Lemma B.5, $\frac{\partial}{\partial S_{N}}F^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})<0$ everywhere while $\frac{\partial}{\partial S_{N}}F^{C}_{\theta_{1}}(\theta_{1}\mid\mathbf{S})>0$ everywhere. Hence $\frac{\partial\phi}{\partial S_{N}}(q,S_{N})>0$ everywhere in the quality linkage model, while $\frac{\partial\phi}{\partial S_{N}}(q,S_{N})<0$ everywhere in the circumstance linkage model. By a change of variables, $\mathbb{E}[\theta_{1}\mid\mathbf{S}]$ may be expressed as an integral over quantiles of $\theta_{1}$ as $$\mathbb{E}[\theta_{1}\mid\mathbf{S}]=\int_{0}^{1}\phi(q,S_{N})\,dq.$$ Consider first the quality linkage model. For every $\Delta>0$ we have $$\displaystyle\frac{1}{\Delta}\left(\mathbb{E}[\theta_{1}\mid\mathbf{S}_{-N},S_% {N}=s_{N}+\Delta]-\mathbb{E}[\theta_{1}\mid\mathbf{S}_{-N},S_{N}=s_{N}]\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}\frac{1}{\Delta}(\phi(q,s_{N}+\Delta)-\phi(s_{N}))\,dq,$$ where the integrand is strictly positive for every $\Delta>0$ given that $\frac{\partial\phi}{\partial S_{N}}(q,S_{N})>0$ everywhere. By Assumption 2.3, $\mathbb{E}[\theta_{1}\mid\mathbf{S}]$ is differentiable wrt $S_{N}$ everywhere, so the limits of both sides must exist as $\Delta\downarrow 0.$ Then by Fatou’s lemma, $$\frac{\partial}{\partial S_{N}}\mathbb{E}[\theta_{1}\mid\mathbf{S}]\geq\int_{0% }^{1}\frac{\partial\phi}{\partial S_{N}}(q,S_{N})\,dq>0.$$ Analogously, in the circumstance linkage model $$\displaystyle-\frac{1}{\Delta}\left(\mathbb{E}[\theta_{1}\mid\mathbf{S}_{-N},S% _{N}=s_{N}+\Delta]-\mathbb{E}[\theta_{1}\mid\mathbf{S}_{-N},S_{N}=s_{N}]\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}-\frac{1}{\Delta}(\phi(q,s_{N}+\Delta)-\phi(s_{N}))\,dq,$$ where the integrand is again positive and the limits of both sides exist. Then by Fatou’s lemma $$-\frac{\partial}{\partial S_{N}}\mathbb{E}[\theta_{1}\mid\mathbf{S}]\geq-\int_% {0}^{1}\frac{\partial\phi}{\partial S_{N}}(q,S_{N})\,dq>0,$$ or equivalently $$\frac{\partial}{\partial S_{N}}\mathbb{E}[\theta_{1}\mid\mathbf{S}]<0.$$ ∎ Appendix C Proofs for Section 3 (Exogenous Entry) C.1 Equilibrium Characterization In this section we establish that there exists a unique equilibrium to the exogenous-entry model, which is characterized by the first-order condition described in the body of the paper. Fix a population size $N,$ and assume all agents in the segment enter in the first period. 
For every $\mathbf{\alpha}\in\mathbb{R}^{N}_{+}$ and $\Delta\geq-\alpha_{1},$ define $$\mu(\Delta;\mathbf{\alpha})\equiv\mathbb{E}[\mathbb{E}[\theta_{1}\mid\mathbf{S};\mathbf{a}=\mathbf{\alpha}]\mid\mathbf{a}=(\alpha_{1}+\Delta,\mathbf{\alpha}_{-1})]$$ to be agent 1’s expected second-period payoff from exerting effort $\alpha_{1}+\Delta$ when the principal expects each agent $i\in\{1,...,N\}$ to exert effort $\alpha_{i}.$ Lemma C.1. The value function $\mu(\Delta;\mathbf{\alpha})$ and its derivatives satisfy the following properties: (a) $\mu(\Delta;\mathbf{\alpha})$ is independent of $\mathbf{\alpha}$ and is continuous and strictly increasing in $\Delta.$ (b) $\mu^{\prime}(\Delta;\mathbf{\alpha})$ exists, is continuous in $\Delta,$ and satisfies $0<\mu^{\prime}(\Delta;\mathbf{\alpha})<1$ for every $\Delta$. (c) $D^{+}\mu^{\prime}(\Delta;\mathbf{\alpha})\leq K$ for every $\Delta$. (Given a function $f:\mathbb{R}\rightarrow\mathbb{R},$ the Dini derivative $D^{+}$ is a generalization of the derivative that exists for arbitrary functions and is defined by $D^{+}f(x)=\limsup_{h\downarrow 0}(f(x+h)-f(x))/h$; when $f$ is differentiable at a point $x,$ $D^{+}f(x)=f^{\prime}(x).$) Proof. Fix a model $M\in\{Q,C\}$. The quantity $\mu(\Delta;\mathbf{\alpha})$ can be written explicitly as $$\mu(\Delta;\mathbf{\alpha})=\int dG^{M}(\mathbf{S}=\mathbf{s}\mid\mathbf{a}=(\alpha_{1}+\Delta,\mathbf{\alpha}_{-1}))\,\mathbb{E}[\theta_{1}\mid\mathbf{S}=\mathbf{s};\mathbf{a}=\mathbf{\alpha}].$$ Further, $$\mathbb{E}[\theta_{1}\mid\mathbf{S}=\mathbf{s};\mathbf{a}=\mathbf{\alpha}]=\int\theta_{1}\,dF^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}=\mathbf{s};\mathbf{a}=\mathbf{\alpha}),$$ and by Bayes’ rule $$f^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}=\mathbf{s};\mathbf{a}=\mathbf{\alpha})=\frac{g^{M}(\mathbf{S}=\mathbf{s}\mid\theta_{1};\mathbf{a}=\mathbf{\alpha})f_{\theta}(\theta_{1})}{g^{M}(\mathbf{S}=\mathbf{s}\mid\mathbf{a}=\mathbf{\alpha})}.$$ Since effort affects the outcome as an additive shift, $g^{M}(\mathbf{S}=\mathbf{s}\mid\mathbf{a}=\mathbf{\alpha})=g^{M}(\mathbf{S}=\mathbf{s}-\mathbf{\alpha}\mid\mathbf{a}=\mathbf{0})$ and $g^{M}(\mathbf{S}=\mathbf{s}\mid\theta_{1};\mathbf{a}=\mathbf{\alpha})=g^{M}(\mathbf{S}=\mathbf{s}-\mathbf{\alpha}\mid\theta_{1};\mathbf{a}=\mathbf{0}).$ So $$\displaystyle f^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}=\mathbf{s};\mathbf{a}=\mathbf{\alpha})=$$ $$\displaystyle\frac{g^{M}(\mathbf{S}=\mathbf{s}-\mathbf{\alpha}\mid\theta_{1};\mathbf{a}=\mathbf{0})f_{\theta}(\theta_{1})}{g^{M}(\mathbf{S}=\mathbf{s}-\mathbf{\alpha}\mid\mathbf{a}=\mathbf{0})}$$ $$\displaystyle=$$ $$\displaystyle f^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}=\mathbf{s}-\mathbf{\alpha};\mathbf{a}=\mathbf{0}).$$ Thus $$\mathbb{E}[\theta_{1}\mid\mathbf{S}=\mathbf{s};\mathbf{a}=\mathbf{\alpha}]=\int\theta_{1}\,dF^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}=\mathbf{s}-\mathbf{\alpha};\mathbf{a}=\mathbf{0})=\mathbb{E}[\theta_{1}\mid\mathbf{S}=\mathbf{s}-\mathbf{\alpha};\mathbf{a}=\mathbf{0}].$$ Then $\mu(\Delta;\mathbf{\alpha})$ may be equivalently written $$\mu(\Delta;\mathbf{\alpha})=\int dG^{M}(\mathbf{S}=\mathbf{s}-\mathbf{\alpha}\mid\mathbf{a}=(\Delta,\mathbf{0}))\,\mathbb{E}[\theta_{1}\mid\mathbf{S}=\mathbf{s}-\mathbf{\alpha};\mathbf{a}=\mathbf{0}].$$ Using the change of variables $\mathbf{s}^{\prime}=\mathbf{s}-\mathbf{\alpha}$ then reveals that $\mu(\Delta;\mathbf{\alpha})=\mu(\Delta;\mathbf{0}),$ so $\mu$ is indeed independent of $\mathbf{\alpha}.$ Now fix $\Delta$ and $\Delta^{\prime}<\Delta.$ Since effort
affects the outcome as an additive shift, $G^{M}(\mathbf{S}=\mathbf{s}\mid\mathbf{a}=(\alpha_{1}+\Delta,\mathbf{\alpha}_{% -1}))=G^{M}(\mathbf{S}=(s_{1}-(\Delta-\Delta^{\prime}),\mathbf{s}_{-1})\mid% \mathbf{a}=(\alpha_{1}+\Delta,\mathbf{\alpha}_{-1}))$ for every $s_{1}.$ Then defining a change of variables via $s^{\prime}_{1}=s_{1}-(\Delta-\Delta^{\prime})$ and $\mathbf{s}^{\prime}_{-i}=\mathbf{s}_{-i},$ the previous integral expression for $\mu(\Delta;\mathbf{\alpha})$ may be equivalently written $$\mu(\Delta;\mathbf{\alpha})=\int dG^{M}(\mathbf{S}=\mathbf{s}^{\prime}\mid% \mathbf{a}=(\alpha_{1}+\Delta^{\prime},\mathbf{\alpha}_{-1}))\,\mathbb{E}[% \theta_{1}\mid\mathbf{S}=(s^{\prime}_{1}+(\Delta-\Delta^{\prime}),\mathbf{s}^{% \prime}_{-1});\mathbf{a}=\mathbf{\alpha}].$$ Now, by Assumption 2.3 $\frac{\partial}{\partial S_{1}}\mathbb{E}[\theta_{1}\mid\mathbf{S};\mathbf{a}]$ exists and is continuous everywhere, and Lemma B.6 established that $0<\frac{\partial}{\partial S_{1}}\mathbb{E}[\theta_{1}\mid\mathbf{S};\mathbf{a% }]<1$ everywhere. Hence by the Leibniz integral rule $\mu^{\prime}(\Delta;\mathbf{\alpha})$ exists and $$\mu^{\prime}(\Delta;\mathbf{\alpha})=\int dG^{M}(\mathbf{S}=\mathbf{s}^{\prime% }\mid\mathbf{a}=\mathbf{\alpha})\,\frac{\partial}{\partial\Delta}\mathbb{E}[% \theta_{1}\mid\mathbf{S}=(s^{\prime}_{1}+\Delta,\mathbf{s}^{\prime}_{-1});% \mathbf{a}=\mathbf{\alpha}],$$ and in particular $0<\mu^{\prime}(\Delta;\mathbf{\alpha})<1.$ An immediate corollary is that $\mu(\Delta;\mathbf{\alpha})$ is continuous and strictly increasing everywhere. Further, the dominated convergence theorem implies that $\mu^{\prime}(\Delta;\mathbf{\alpha})$ is continuous in $\Delta$ everywhere. Next, by Assumption 2.6 $$\frac{\partial^{2}}{\partial\Delta^{2}}\mathbb{E}[\theta_{1}\mid\mathbf{S}=(s^% {\prime}_{1}+\Delta,\mathbf{s}^{\prime}_{-1});\mathbf{a}=\mathbf{\alpha}]$$ exists and is bounded in the interval $(-\infty,K]$ everywhere. Then for each $\delta>0$ and $(\mathbf{s},\mathbf{a},\Delta),$ the mean value theorem implies that $$\displaystyle\frac{1}{\delta}\left(\frac{\partial}{\partial\Delta}\mathbb{E}[% \theta_{1}\mid\mathbf{S}=(s^{\prime}_{1}+\Delta+\delta,\mathbf{s}^{\prime}_{-1% });\mathbf{a}=\mathbf{\alpha}]-\frac{\partial}{\partial\Delta}\mathbb{E}[% \theta_{1}\mid\mathbf{S}=(s^{\prime}_{1}+\Delta,\mathbf{s}^{\prime}_{-1});% \mathbf{a}=\mathbf{\alpha}]\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{\partial^{2}}{\partial\Delta^{2}}\mathbb{E}[\theta_{1}\mid% \mathbf{S}=(s^{\prime}_{1}+\Delta+\delta^{\prime},\mathbf{s}^{\prime}_{-1});% \mathbf{a}=\mathbf{\alpha}]\leq K$$ for some $\delta^{\prime}\in[0,\delta].$ Reverse Fatou’s lemma then implies that $D^{+}\mu^{\prime}(\Delta;\mathbf{\alpha})\leq K$. ∎ Lemma C.2. $\mu(\Delta;\mathbf{\alpha})-C(\alpha_{1}+\Delta)$ is a strictly concave function of $\Delta$ for any $\mathbf{\alpha}.$ Proof. Fix an $\mathbf{\alpha},$ and define $\phi(\Delta)\equiv\mu(\Delta;\mathbf{\alpha})-C(\alpha_{1}+\Delta).$ By Lemma C.1, $\phi^{\prime}$ exists and is continuous everywhere. We establish the necessary and sufficient condition for strict concavity that $\phi^{\prime}$ is strictly decreasing. We invoke the basic monotonicity theorem from analysis that any function $f$ which is continuous and satisfies $D^{+}f\geq 0$ everywhere is nondecreasing everywhere. 
We apply this result to $-\mu^{\prime}(\Delta;\mathbf{\alpha})+K\Delta.$ Using basic properties of the Dini derivatives $D^{+}$ and $D_{+},$ we have $D^{+}(-\mu^{\prime}(\Delta;\mathbf{\alpha}))=-D_{+}\mu^{\prime}(\Delta;\mathbf% {\alpha})\geq-D^{+}\mu^{\prime}(\Delta;\mathbf{\alpha}).$ Then since $K\Delta$ is differentiable and $D^{+}\mu^{\prime}(\Delta;\mathbf{\alpha})\leq K$ from Lemma C.1, we have $D^{+}(-\mu^{\prime}(\Delta;\mathbf{\alpha})+K\Delta)=D^{+}(-\mu^{\prime}(% \Delta;\mathbf{\alpha}))+K\geq 0.$ So $\mu^{\prime}(\Delta;\mathbf{\alpha})-K\Delta$ is nonincreasing everywhere. So choose any $\Delta$ and $\Delta^{\prime}>\Delta.$ Then $$\phi^{\prime}(\Delta^{\prime})=\mu^{\prime}(\Delta^{\prime};\mathbf{\alpha})-K% \Delta^{\prime}+K\Delta^{\prime}-C^{\prime}(\alpha_{1}+\Delta^{\prime})\leq\mu% ^{\prime}(\Delta;\mathbf{\alpha})+K(\Delta^{\prime}-\Delta)-C^{\prime}(\alpha_% {1}+\Delta^{\prime}).$$ But also by Assumption 2.6, $C^{\prime\prime}(\alpha_{1}+\Delta^{\prime\prime})>K$ for every $\Delta^{\prime\prime}\in(\Delta,\Delta^{\prime}),$ so $C^{\prime}(\alpha_{1}+\Delta^{\prime})>C^{\prime}(\alpha_{1}+\Delta)+K(\Delta^% {\prime}-\Delta).$ Thus $$\phi^{\prime}(\Delta^{\prime})<\mu^{\prime}(\Delta;\mathbf{\alpha})-C^{\prime}% (\alpha_{1}+\Delta)=\phi^{\prime}(\Delta),$$ as desired. ∎ Proposition C.1. There exists a unique equilibrium action profile characterized by $a_{i}=a^{\ast}_{i}(N)$ for each player $i,$ where $a^{\ast}_{i}(N)$ is the unique solution to $$\mu^{\prime}(0;\mathbf{a}^{\ast}(N))=C^{\prime}(a^{\ast}(N)).$$ Proof. Lemma C.1 established that $\mu^{\prime}(0;\mathbf{a}^{\ast}(N))$ is well-defined, independent of $a^{\ast}(N),$ and bounded in the interval $[0,1].$ Then as $C^{\prime}$ is continuous, strictly increasing, and satisfies $C^{\prime}(0)=0$ and $C^{\prime}(\infty)=\infty,$ there exists a unique solution to the stated first-order condition. This solution constitutes an equilibrium so long as $\Delta=0$ maximizes the objective function $\mu(\Delta;\mathbf{a}^{\ast}(N))-C(a^{\ast}(N)+\Delta)$, which is guaranteed by the fact, established in Lemma C.2, that this function is strictly concave in $\Delta.$ ∎ Define $\mu_{N}(\Delta)\equiv\mu(\Delta;\mathbf{a}^{\ast}(N))$ and $MV(N)\equiv\mu^{\prime}_{N}(0)$ for each $N.$ When we wish to make the model clear, we will write $MV_{M}(N)$ for $M\in\{Q,C\}.$ An immediate implication of Lemma C.1 is that $0<MV(N)<1$ for all $N$. We conclude this appendix by establishing that these bounds also hold strictly in the limit as $N\rightarrow\infty.$ Lemma C.3. $0<\lim_{N\rightarrow\infty}MV(N)<1.$ Proof. Consider first the quality linkage model. The proof of Lemma 3.1 establishes that $\lim_{N\rightarrow\infty}MV_{Q}(N)=MV_{Q}(\infty),$ where $MV_{Q}(\infty)$ is the equilibrium marginal value of effort in a one-agent model where the common component $\overline{\theta}$ is observed by the principal. In this case the agent’s equilibrium expected value of distortion is $$\mu_{\infty}(\Delta;a^{\ast}(\infty))=\mathbb{E}[\overline{\theta}]+\mathbb{E}% [\mathbb{E}[\theta^{\bot}_{1}\mid\widetilde{S}_{1};a_{1}=a^{\ast}(\infty)]\mid a% _{1}=a^{\ast}(\infty)+\Delta],$$ where $\widetilde{S}_{1}\equiv a_{1}+\theta^{\bot}_{1}+\varepsilon_{1}.$ Since the contribution of $\overline{\theta}$ to the agent’s payoff is not influenced by effort, it has no incentive effect. 
The marginal value of effort in this setting is then just the marginal value of effort in a one-agent model where the agent’s type has density $f_{\theta^{\bot}}.$ As this distribution satisfies the same regularity conditions as $f_{\theta},$ the reasoning establishing that $0<MV_{Q}(1)<1$ immediately implies that $0<MV_{Q}(\infty)<1$ as well. Now consider the circumstance linkage model. In this model the proof of Lemma 3.1 establishes that $\lim_{N\rightarrow\infty}MV_{C}(N)=MV_{C}(\infty),$ where $MV_{C}(\infty)$ is the equilibrium marginal value of effort in a one-agent model where the common component $\overline{\varepsilon}$ is observed by the principal. In this case the agent’s equilibrium expected value of distortion is $$\mu_{\infty}(\Delta;a^{\ast}(\infty))=\mathbb{E}[\mathbb{E}[\theta_{1}\mid% \widetilde{S}_{1};a_{1}=a^{\ast}(\infty)]\mid a_{1}=a^{\ast}(\infty)+\Delta],$$ where $\widetilde{S}_{1}\equiv a_{1}+\theta_{1}+\varepsilon^{\bot}_{1}.$ The marginal value of effort in this setting is then just the marginal value of effort in a one-agent model where the noise distribution has density $f_{\varepsilon^{\bot}}.$ As this distribution satisfies the same regularity conditions as $f_{\varepsilon},$ the reasoning establishing that $0<MV_{C}(1)<1$ immediately implies that $0<MV_{C}(\infty)<1$ as well. ∎ C.2 Proof of Lemma 3.1 Throughout this proof, we will without loss of generality consider agent 1’s problem. To compare results across segments of differing sizes, we will consider there to be a single underlying vector $\mathbf{S}=(S_{1},S_{2},...)$ of outcomes for a countably infinite set of agents, with the $N$-agent model corresponding to observation of the outcomes of the first $N$ agents. We will write $\mathbf{a}^{\ast}(N)$ to indicate the $N$-vector with entries $a^{\ast}(N),$ and similarly $\mathbf{a}^{\ast}(N+1)$ to indicate the $N+1$-vector with entries $a^{\ast}(N+1).$ Given any finite or countably infinite vector $\mathbf{x}$ with at least $j$ elements, we will use $\mathbf{x}_{i:j}$ to indicate the subvector of $\mathbf{x}$ consisting of elements $i$ through $j.$ For the distribution function of the outcome vector $\mathbf{S}_{i:j},$ we will write $G^{M}_{i:j}.$ C.2.1 Monotonicity in $N$ We first establish the monotonicity claims of the lemma. Fix a model $M\in\{Q,C\}$ and a segment size $N.$ By definition, the expected value of distortion $\mu_{N}(\Delta)$ is $$\displaystyle\mu_{N}(\Delta)=$$ $$\displaystyle\int dG^{M}_{1:N}(\mathbf{S}_{1:N}=\mathbf{s}_{1:N}\mid\mathbf{a}% _{1:N}=\left(a^{\ast}(N)+\Delta,\mathbf{a}^{\ast}(N)_{2:N}\right))$$ $$\displaystyle\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N}=\mathbf{s}_{% 1:N};\mathbf{a}_{1:N}=\mathbf{a}^{\ast}(N)].$$ By Lemma C.1, the value of distortion is independent of the action vector expected by the principal, so we may equivalently write $$\displaystyle\mu_{N}(\Delta)=$$ $$\displaystyle\int dG^{M}_{1:N}(\mathbf{S}_{1:N}=\mathbf{s}_{1:N}\mid\mathbf{a}% _{1:N}=\left(a^{\ast}(N+1)+\Delta,\mathbf{a}^{\ast}(N+1)_{2:N}\right))$$ $$\displaystyle\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N}=\mathbf{s}_{% 1:N};\mathbf{a}_{1:N}=\mathbf{a}^{\ast}(N+1)_{1:N}]$$ (C.1) replacing $a^{*}(N)$ everywhere with $a^{*}(N+1)$. 
Further, the additive structure of the model implies that the distribution function $G^{M}_{1:N}$ satisfies the identity $$G^{M}_{1:N}(\mathbf{S}_{1:N}=\mathbf{s}_{1:N}\mid\mathbf{a}_{1:N})=G^{M}_{1:N}% (\mathbf{S}_{1:N}=\mathbf{s}_{1:N}+\mathbf{b}_{1:N}\mid\mathbf{a}_{1:N}+% \mathbf{b}_{1:N})$$ for any outcome realization $\mathbf{s}_{1:N}$, action vector $\mathbf{a}_{1:N}$, and shift vector $\mathbf{b}_{1:N}.$ Then taking $\mathbf{b}_{1:N}=(-\Delta,\mathbf{0}_{1:N-1}),$ the representation of $\mu_{N}(\Delta)$ in (C.1) may be rewritten $$\displaystyle\mu_{N}(\Delta)=$$ $$\displaystyle\int dG^{M}_{1:N}(\mathbf{S}_{1:N}=\mathbf{s}^{\prime}_{1:N}\mid% \mathbf{a}_{1:N}=\mathbf{a}^{\ast}(N+1)_{1:N})$$ $$\displaystyle\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N}=(s^{\prime}_% {1}+\Delta,\mathbf{s}^{\prime}_{2:N});\mathbf{a}_{1:N}=\mathbf{a}^{\ast}(N+1)_% {1:N}],$$ where we have changed variables to the integrator $\mathbf{s}^{\prime}_{1:N}=\mathbf{s}_{1:N}+\mathbf{b}_{1:N}$. Meanwhile, the value of distortion with $N+1$ agents is $$\displaystyle\mu_{N+1}(\Delta)=$$ $$\displaystyle\int dG^{M}_{1:N+1}(\mathbf{S}_{1:N+1}=\mathbf{s}_{1:N+1}\mid% \mathbf{a}_{1:N+1}=\left(a^{\ast}(N+1)+\Delta,\mathbf{a}^{\ast}(N+1)_{2:N+1}% \right))$$ $$\displaystyle\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}=\mathbf{s}% _{1:N+1};\mathbf{a}_{1:N+1}=\mathbf{a}^{\ast}(N+1)].$$ Using the same transformation as in the $N$-agent model, this expression may be equivalently written $$\displaystyle\mu_{N+1}(\Delta)=$$ $$\displaystyle\int dG^{M}_{1:N+1}(\mathbf{S}_{1:N+1}=\mathbf{s}^{\prime}_{1:N+1% }\mid\mathbf{a}_{1:N+1}=\mathbf{a}^{\ast}(N+1))$$ $$\displaystyle\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}=(s^{\prime% }_{1}+\Delta,\mathbf{s}^{\prime}_{2:N+1});\mathbf{a}_{1:N+1}=\mathbf{a}^{\ast}% (N+1)].$$ For the remainder of the proof, all distributions will be conditioned on the action profile $a_{1:N+1}=\mathbf{a}^{\ast}(N+1),$ so conditioning of distributions on actions will be suppressed. To compare the expressions for $\mu_{N}(\Delta)$ and $\mu_{N+1}(\Delta)$ just derived, we use the law of iterated expectations. 
In the $N$-agent model we have $$\displaystyle\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N}=(s_{1}+\Delta,\mathbf{s% }_{2:N})]=$$ $$\displaystyle\int dG^{M}_{N+1}(S_{N+1}=s_{N+1}\mid\mathbf{S}_{1:N}=(s_{1}+% \Delta,\mathbf{s}_{2:N}))$$ $$\displaystyle\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}=(s_{1}+% \Delta,\mathbf{s}_{2:N+1})].$$ So $$\displaystyle\mu_{N}(\Delta)=$$ $$\displaystyle\int dG^{M}_{1:N}(\mathbf{S}_{1:N}=\mathbf{s}_{1:N})$$ $$\displaystyle\quad\times\int dG^{M}_{N+1}(S_{N+1}=s_{N+1}\mid\mathbf{S}_{1:N}=% (s_{1}+\Delta,\mathbf{s}_{2:N}))$$ $$\displaystyle\quad\quad\quad\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:% N+1}=(s_{1}+\Delta,\mathbf{s}_{2:N+1})].$$ Meanwhile in the $N+1$-agent model the law of iterated expectations may be applied to the unconditional expectation over $\mathbf{S}_{1:N+1}$ to obtain $$\displaystyle\mu_{N+1}(\Delta)=$$ $$\displaystyle\int dG^{M}_{1:N+1}(\mathbf{S}_{1:N}=\mathbf{s}_{1:N})$$ $$\displaystyle\quad\times\int dG^{M}_{N+1}(S_{N+1}=s_{N+1}\mid\mathbf{S}_{1:N}=% \mathbf{s}_{1:N})$$ $$\displaystyle\quad\quad\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}=% (s_{1}+\Delta,\mathbf{s}_{2:N+1})].$$ So define a function $\psi$ by $$\displaystyle\psi(\delta_{1},\delta_{2},\mathbf{s}_{1:N})\equiv$$ $$\displaystyle\int dG^{M}_{N+1}(S_{N+1}=s_{N+1}\mid\mathbf{S}_{1:N}=(s_{1}+% \delta_{1},\mathbf{s}_{2:N}))$$ $$\displaystyle\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}=(s_{1}+% \delta_{2},\mathbf{s}_{2:N+1})].$$ Then the values of distortion with $N$ and $N+1$ agents may be written in the common form $$\mu_{N}(\Delta)=\int dG^{M}_{1:N}(\mathbf{S}_{1:N})\,\psi(\Delta,\Delta,% \mathbf{S}_{1:N})$$ while $$\mu_{N+1}(\Delta)=\int dG^{M}_{1:N}(\mathbf{S}_{1:N})\,\psi(0,\Delta,\mathbf{S% }_{1:N}).$$ Then for any $\Delta>0,$ $$\frac{1}{\Delta}(\mu_{N}(\Delta)-\mu_{N+1}(\Delta))=\int dG^{M}_{1:N}(\mathbf{% S}_{1:N})\,\frac{1}{\Delta}(\psi(\Delta,\Delta,\mathbf{S}_{1:N})-\psi(0,\Delta% ,\mathbf{S}_{1:N})).$$ Now, as $MV(N)=\mu^{\prime}_{N}(0)$ and $MV(N+1)=\mu^{\prime}_{N+1}(0)$ both exist and are finite by Lemma C.1, it follows that $$\displaystyle MV(N)-MV(N+1)=$$ $$\displaystyle\lim_{\Delta\downarrow 0}\frac{1}{\Delta}(\mu_{N}(\Delta)-\mu)-% \lim_{\Delta\downarrow 0}\frac{1}{\Delta}(\mu_{N+1}(\Delta)-\mu)$$ $$\displaystyle=$$ $$\displaystyle\lim_{\Delta\downarrow 0}\frac{1}{\Delta}(\mu_{N}(\Delta)-\mu_{N+% 1}(\Delta))$$ exists, so that $$MV(N)-MV(N+1)=\lim_{\Delta\downarrow 0}\int dG^{M}_{1:N}(\mathbf{S}_{1:N})\,% \frac{1}{\Delta}(\psi(\Delta,\Delta,\mathbf{S}_{1:N})-\psi(0,\Delta,\mathbf{S}% _{1:N})),$$ and in particular the limit on the rhs also exists. To bound the right-hand side and complete the proof, we analyze the behavior of $\frac{1}{\Delta}(\psi(\Delta,\Delta,\mathbf{S}_{1:N})-\psi(0,\Delta,\mathbf{S}% _{1:N}))$ as $\Delta$ tends to zero. Consider first the quality linkage model. 
Using the law of total probability, we may re-write $\psi(\delta_{1},\delta_{2},\mathbf{s}_{1:N})$ as $$\displaystyle\psi(\delta_{1},\delta_{2},\mathbf{s}_{1:N})=\int$$ $$\displaystyle dF^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{1:N}% =(s_{1}+\delta_{1},\mathbf{s}_{2:N}))$$ $$\displaystyle\times\int dG^{Q}_{N+1}(S_{N+1}=s_{N+1}\mid\overline{\theta},% \mathbf{S}_{1:N}=(s_{1}+\delta,\mathbf{s}_{2:N}))$$ $$\displaystyle\quad\quad\quad\times\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}=% (s_{1}+\delta_{2},\mathbf{s}_{2:N+1})].$$ As $S_{N+1}$ is independent of $\mathbf{S}_{1:N}$ conditional on $\overline{\theta},$ this is equivalently $$\displaystyle\psi(\delta_{1},\delta_{2},\mathbf{s}_{1:N})=\int$$ $$\displaystyle dF^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{1:N}% =(s_{1}+\delta_{1},\mathbf{s}_{2:N}))$$ $$\displaystyle\times\int dG^{Q}_{N+1}(S_{N+1}=s_{N+1}\mid\overline{\theta})\,% \mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}=(s_{1}+\delta_{2},\mathbf{s}_{2:N+% 1})].$$ Inverting $$q=G^{Q}_{N+1}(S_{N+1}=s_{N+1}\mid\overline{\theta})=F_{\theta^{\bot}+% \varepsilon}(s_{N+1}-\overline{\theta}-a^{\ast}(N+1))$$ yields the quantile function $s_{N+1}=F^{-1}_{\theta^{\bot}+\varepsilon}(q)+\overline{\theta}+a^{\ast}(N+1),$ so by a change of variables $\psi$ may be equivalently written $$\displaystyle\psi(\delta_{1},\delta_{2},\mathbf{s}_{1:N})=\int$$ $$\displaystyle dF^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{1:N}% =(s_{1}+\delta_{1},\mathbf{s}_{2:N}))$$ $$\displaystyle\times\int_{0}^{1}dq\,\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}% =(s_{1}+\delta_{2},\mathbf{s}_{2:N},F^{-1}_{\theta^{\bot}+\varepsilon}(q)+% \overline{\theta}+a^{\ast}(N+1))].$$ Now fix $\mathbf{s}_{1:N},$ and write the integrand of this representation as $$\zeta(\overline{\theta},\delta,q)\equiv\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:% N+1}=(s_{1}+\delta,\mathbf{s}_{2:N},F^{-1}_{\theta^{\bot}+\varepsilon}(q)+% \overline{\theta}+a^{\ast}(N+1))].$$ By Lemma B.5, $F^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{1:N}=(s_{1}+\delta,% \mathbf{s}_{2:N}))$ is a $C^{1}$ function of $(\overline{\theta},\delta)$ satisfying $\frac{\partial}{\partial\delta}F^{Q}_{\overline{\theta}}(\overline{\theta}\mid% \mathbf{S}_{1:N}=(s_{1}+\delta,\mathbf{s}_{2:N}))<0$ everywhere. Then $F^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{1:N}=(s_{1}+\delta,% \mathbf{s}_{2:N}))-q^{\prime}$ is a $C^{1}$ function of $(q^{\prime},\overline{\delta},\delta),$ with Jacobian $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{1:N}=(s_{1}+\delta,% \mathbf{s}_{2:N}))$ wrt $\overline{\theta}.$ By Lemma O.3 this Jacobian is strictly positive everywhere. 
Then by the implicit function theorem there exists a $C^{1}$ quantile function $\phi(q^{\prime},\delta)$ satisfying $F^{Q}_{\overline{\theta}}(\phi(q^{\prime},\delta)\mid\mathbf{S}_{1:N}=(s_{1}+% \delta,\mathbf{s}_{2:N}))=q^{\prime}$ for all $(q^{\prime},\delta_{1})$ and $$\frac{\partial\phi}{\partial\delta}(q^{\prime},\delta)=-\left[\frac{1}{f^{Q}_{% \overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{1:N}=(s_{1}+\delta,\mathbf% {s}_{2:N}))}\frac{\partial}{\partial\delta}F^{Q}_{\overline{\theta}}(\overline% {\theta}\mid\mathbf{S}_{1:N}=(s_{1}+\delta,\mathbf{s}_{2:N}))\right]_{% \overline{\theta}=\phi(q^{\prime},\delta)}>0.$$ Then by a further change of variables, $\psi(\delta_{1},\delta_{2},\mathbf{s}_{1:N})$ may be written $$\displaystyle\psi(\delta_{1},\delta_{2},\mathbf{s}_{1:N})=\int_{0}^{1}dq^{% \prime}\int_{0}^{1}dq\,\zeta(\phi(q^{\prime},\delta_{1}),\delta_{2},q).$$ By Lemma B.8 $\frac{\partial}{\partial S_{N+1}}\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}]>0$ everywhere. Since further $\partial\phi/\partial\delta>0,$ it follows that $$\zeta(\phi(q^{\prime},\Delta),\Delta,q)>\zeta(\phi(q^{\prime},0),\Delta,q)$$ for all $(q,q^{\prime})$ and every $\Delta>0.$ Hence $\frac{1}{\Delta}(\psi(\Delta,\Delta,\mathbf{s}_{1:N})-\psi(0,\Delta,\mathbf{s}% _{1:N}))>0$ for every $\Delta>0.$ This argument holds independent of the choice of $\mathbf{s}_{1:N}.$ Thus Fatou’s lemma implies $$MV(N)-MV(N+1)\geq\int dG^{M}_{1:N}(\mathbf{S}_{1:N})\,\liminf_{\Delta% \downarrow 0}\frac{1}{\Delta}(\psi(\Delta,\Delta,\mathbf{S}_{1:N})-\psi(0,% \Delta,\mathbf{S}_{1:N})).$$ A further application of Fatou’s lemma yields $$\displaystyle\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}(\psi(\Delta,\Delta,% \mathbf{S}_{1:N})-\psi(0,\Delta,\mathbf{S}_{1:N}))$$ $$\displaystyle\geq$$ $$\displaystyle\int_{0}^{1}dq^{\prime}\int_{0}^{1}dq\,\lim_{\Delta\downarrow 0}% \frac{1}{\Delta}(\zeta(\phi(q^{\prime},\Delta),\Delta,q)-\zeta(\phi(q^{\prime}% ,0),\Delta,q)).$$ Recall that by Assumption 2.3, $\frac{\partial}{\partial S_{i}}\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}]$ exists and is continuously differentiable in $\mathbf{S}_{1:N+1}$ for every $i.$ Thus $\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}]$ is totally differentiable wrt $\mathbf{S}_{1:N+1}$ everywhere. 
So write the integrand of the previous expression for $\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}(\psi(\Delta,\Delta,\mathbf{S}_{1:N})-\psi(0,\Delta,\mathbf{S}_{1:N}))$ as $$\displaystyle\frac{1}{\Delta}(\zeta(\phi(q^{\prime},\Delta),\Delta,q)-\zeta(\phi(q^{\prime},0),\Delta,q))$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\Delta}(\zeta(\phi(q^{\prime},\Delta),\Delta,q)-\zeta(\phi(q^{\prime},0),0,q))-\frac{1}{\Delta}(\zeta(\phi(q^{\prime},0),\Delta,q)-\zeta(\phi(q^{\prime},0),0,q)).$$ Taking $\Delta\downarrow 0$ and using the chain rule yields $$\displaystyle\lim_{\Delta\downarrow 0}\frac{1}{\Delta}(\zeta(\phi(q^{\prime},\Delta),\Delta,q)-\zeta(\phi(q^{\prime},0),\Delta,q))$$ $$\displaystyle=$$ $$\displaystyle\frac{\partial\zeta}{\partial\overline{\theta}}(\phi(q^{\prime},0),0,q)\frac{\partial\phi}{\partial\delta}(q^{\prime},0)+\frac{\partial\zeta}{\partial\delta}(\phi(q^{\prime},0),0,q)-\frac{\partial\zeta}{\partial\delta}(\phi(q^{\prime},0),0,q)$$ $$\displaystyle=$$ $$\displaystyle\frac{\partial\zeta}{\partial\overline{\theta}}(\phi(q^{\prime},0),0,q)\frac{\partial\phi}{\partial\delta}(q^{\prime},0).$$ The fact that $\frac{\partial}{\partial S_{N+1}}\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}]>0$ implies that $\frac{\partial\zeta}{\partial\overline{\theta}}(\phi(q^{\prime},0),0,q)>0$, and as previously noted $\partial\phi/\partial\delta>0$. It follows that this limit is strictly positive. Thus $$\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}(\psi(\Delta,\Delta,\mathbf{S}_{1:N})-\psi(0,\Delta,\mathbf{S}_{1:N}))>0$$ everywhere, meaning in turn $MV(N)>MV(N+1).$ The result for the circumstance linkage model follows from very similar work. The one difference in the analysis is that in the circumstance linkage model Lemma B.8 implies that $\frac{\partial}{\partial S_{N+1}}\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N+1}]<0$ everywhere, so that in this model $$\lim_{\Delta\downarrow 0}\frac{1}{\Delta}(\zeta(\phi(q^{\prime},0),\Delta,q)-\zeta(\phi(q^{\prime},\Delta),\Delta,q))>0$$ and $$\frac{1}{\Delta}(\psi(0,\Delta,\mathbf{s}_{1:N})-\psi(\Delta,\Delta,\mathbf{s}_{1:N}))>0$$ everywhere. Then by Fatou’s lemma $$\displaystyle MV(N+1)-MV(N)=$$ $$\displaystyle\lim_{\Delta\downarrow 0}\int dG^{M}_{1:N}(\mathbf{S}_{1:N})\,\frac{1}{\Delta}(\psi(0,\Delta,\mathbf{S}_{1:N})-\psi(\Delta,\Delta,\mathbf{S}_{1:N}))$$ $$\displaystyle\geq$$ $$\displaystyle\int dG^{M}_{1:N}(\mathbf{S}_{1:N})\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}(\psi(0,\Delta,\mathbf{S}_{1:N})-\psi(\Delta,\Delta,\mathbf{S}_{1:N}))>0,$$ or $MV(N)<MV(N+1).$ C.2.2 The $N\rightarrow\infty$ limit Consider a limiting model in which the principal observes a countably infinite vector of outcomes $\mathbf{S}=(S_{1},S_{2},...)$. By the law of large numbers, in the quality linkage model this means that the principal perfectly infers $\overline{\theta}$, while in the circumstance linkage model the principal perfectly infers $\overline{\varepsilon}.$ Define $\mu(\Delta;\mathbf{\alpha})$ analogously to the finite-population case. In each model, reasoning very similar to the proof of Lemma C.1 implies that $\mu^{\prime}(0;\mathbf{\alpha})$ exists, is independent of $\mathbf{\alpha},$ and lies in $[0,1].$ So there exists a unique, finite $a^{\ast}(\infty)$ satisfying $\mu^{\prime}(0;\mathbf{a}^{\ast}(\infty))=C^{\prime}(a^{\ast}(\infty)).$ Define $\mu_{\infty}(\Delta)\equiv\mu(\Delta;\mathbf{a}^{\ast}(\infty))$ and $MV(\infty)\equiv\mu^{\prime}_{\infty}(0)$ in each model.
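The monotonicity just established, and the limits derived below, can be checked concretely in a fully Gaussian specialization of the two models, where the posterior mean is linear in outcomes, so $\frac{\partial}{\partial S_{1}}\mathbb{E}[\theta_{1}\mid\mathbf{S}_{1:N}]$ is constant and coincides with $MV_{M}(N)$. The following minimal Python sketch is offered only as an illustrative numerical check under those Gaussian assumptions: the distributions, variance values, and helper names are assumptions made for the sketch, not part of the paper's general log-concave setting.

```python
import numpy as np

def mv_quality(N, v_common=1.0, v_idio=1.0, v_noise=2.0):
    """MV_Q(N) = d E[theta_1 | S_1..S_N] / d S_1 in a Gaussian quality-linkage model:
    S_i = a_i + theta_bar + theta_i_perp + eps_i, with theta_1 = theta_bar + theta_1_perp."""
    Sigma = np.full((N, N), v_common) + np.eye(N) * (v_idio + v_noise)  # Cov(S, S)
    c = np.full(N, v_common)            # Cov(theta_1, S_j) = v_common for j != 1
    c[0] += v_idio                      # theta_1 also loads on S_1 through theta_1_perp
    return np.linalg.solve(Sigma, c)[0]

def mv_circumstance(N, v_type=2.0, v_common=1.0, v_idio=1.0):
    """MV_C(N) in a Gaussian circumstance-linkage model:
    S_i = a_i + theta_i + eps_bar + eps_i_perp, with theta_i i.i.d."""
    Sigma = np.full((N, N), v_common) + np.eye(N) * (v_type + v_idio)   # Cov(S, S)
    c = np.zeros(N)
    c[0] = v_type                       # theta_1 enters only agent 1's outcome
    return np.linalg.solve(Sigma, c)[0]

# Variances chosen so the one-agent benchmark coincides across models: MV(1) = 0.5.
for N in (1, 2, 5, 20, 200):
    print(N, round(mv_quality(N), 4), round(mv_circumstance(N), 4))
# MV_Q(N) decreases from 0.5 toward v_idio / (v_idio + v_noise) = 1/3, while
# MV_C(N) increases from 0.5 toward v_type / (v_type + v_idio) = 2/3,
# matching the monotonicity of Lemma 3.1 and the limits discussed in Lemma C.3.
```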
Lemma C.3 establishes that $0<MV(\infty)<1.$ We will show that $\lim_{N\rightarrow\infty}MV(N)=MV(\infty).$ Lemma C.3 establishes that this result implies $0<\lim_{N\rightarrow\infty}MV(N)<1.$ To prove the result, we will need the ability to change measure between the distribution of outcomes at the equilibrium action profile, and one in which a single agent, without loss agent 1, deviates to a different action. For each model, define a reference probability space $(\Omega,\mathcal{F},\mathcal{P}^{\mathbf{a}}),$ containing all relevant random variables for arbitrary segment sizes. For the quality linkage model this space supports the latent types $\overline{\theta},\theta^{\bot}_{1},\theta^{\bot}_{2},...$ and shocks $\varepsilon_{1},\varepsilon_{2},...$ as well as the outcomes $S_{1},S_{2},...$ Similarly, in the circumstance linkage model the space supports the latent types $\theta_{1},\theta_{2},...$, shocks $\overline{\varepsilon},\varepsilon^{\bot}_{1},\varepsilon^{\bot}_{2},...,$ and outcomes $S_{1},S_{2},...$ In each model the probability measure $\mathcal{P}^{\mathbf{a}}$ depends on the vector of agent actions $\mathbf{a}=(a_{1},a_{2},...)$, as the distributions of the outcomes depend on the actions. We will use $\mathcal{F}^{\infty}$ to denote the $\sigma$-algebra generated by the full vector of outcomes $S_{1},S_{2},...$ Note that by the LLN all latent types may be taken to be measurable with respect to $\mathcal{F}^{\infty}.$ Finally, for each segment size $N,$ we will let $\mathcal{P}^{\ast N}$ denote the restriction of the measure $\mathcal{P}^{\mathbf{a}^{\ast}(N)}$ to $(\Omega,\mathcal{F}^{\infty}),$ and similarly let $\mathcal{P}^{\Delta,N}$ denote the restriction of the measure $\mathcal{P}^{(a^{\ast}(N)+\Delta,\mathbf{a}^{\ast}(N))}$ to $(\Omega,\mathcal{F}^{\infty}).$ These measures represent the distributions over outcomes induced when all agents take actions $a^{\ast}(N)$ and when agent 1 deviates to action $a^{\ast}(N)+\Delta,$ respectively. Lemma C.4. The Radon-Nikodym derivative for the change of measure from $(\Omega,\mathcal{F}^{\infty},\mathcal{P}^{\ast N})$ to $(\Omega,\mathcal{F}^{\infty},\mathcal{P}^{\Delta,N})$ is $$\frac{d\mathcal{P}^{\Delta,N}}{d\mathcal{P}^{\ast N}}=\frac{g^{Q}_{1}(S_{1}% \mid\overline{\theta};a_{1}=a^{\ast}(N)+\Delta)}{g^{Q}_{1}(S_{1}\mid\overline{% \theta};a_{1}=a^{\ast}(N))}$$ in the quality linkage model and $$\frac{d\mathcal{P}^{\Delta,N}}{d\mathcal{P}^{\ast N}}=\frac{g^{C}_{1}(S_{1}% \mid\overline{\varepsilon};a_{1}=a^{\ast}(N)+\Delta)}{g^{C}_{1}(S_{1}\mid% \overline{\varepsilon};a_{1}=a^{\ast}(N))}$$ in the circumstance linkage model. Proof. For convenience we suppress the dependence of distributions on all actions other than $a_{1}$ in this proof. We derive the derivative for the quality linkage model, with the expression for the circumstance linkage model following from nearly identical work. Fix any $\mathcal{F}^{\infty}$-measurable random variable $X$. Then there exists a measurable function $x:\mathbb{R}^{\infty}\rightarrow\mathbb{R}$ such that $X=x(\mathbf{S})$ a.s. 
Thus $$\displaystyle\mathbb{E}[X\mid a_{1}=a^{\ast}(N)+\Delta]$$ $$\displaystyle=$$ $$\displaystyle\int dF_{\overline{\theta}}(\overline{\theta})\,dG^{Q}_{1}(S_{1}% \mid\overline{\theta};a_{1}=a^{\ast}(N)+\Delta)\,dG^{Q}_{-1}(\mathbf{S}_{-1}% \mid\overline{\theta},S_{1};a_{1}=a^{\ast}(N)+\Delta)$$ $$\displaystyle\quad\quad\times x(\mathbf{S}).$$ As $\mathbf{S}_{-1}$ is independent of $S_{1}$ conditional on $\overline{\theta}$ in the quality linkage model, $G^{Q}_{-1}(\mathbf{S}_{-1}\mid\overline{\theta},S_{1};a_{1}=a^{\ast}(N)+\Delta% )=G^{Q}_{-1}(\mathbf{S}_{-1}\mid\overline{\theta})$. So this expression may be equivalently written $$\displaystyle\mathbb{E}[X\mid a_{1}=a^{\ast}(N)+\Delta]$$ $$\displaystyle=$$ $$\displaystyle\int dF_{\overline{\theta}}(\overline{\theta})\,dG^{Q}_{1}(S_{1}% \mid\overline{\theta};a_{1}=a^{\ast}(N)+\Delta)\,dG^{Q}_{-1}(\mathbf{S}_{-1}% \mid\overline{\theta})\,x(\mathbf{S})$$ $$\displaystyle=$$ $$\displaystyle\int dF_{\overline{\theta}}(\overline{\theta})\,dG^{Q}_{1}(S_{1}% \mid\overline{\theta};a_{1}=a^{\ast}(N))\,dG^{Q}_{-1}(\mathbf{S}_{-1}\mid% \overline{\theta})$$ $$\displaystyle\quad\quad\times\frac{g^{Q}_{1}(S_{1}\mid\overline{\theta};a_{1}=% a^{\ast}(N)+\Delta)}{g^{Q}_{1}(S_{1}\mid\overline{\theta};a_{1}=a^{\ast}(N))}x% (\mathbf{S})$$ $$\displaystyle=$$ $$\displaystyle\mathbb{E}\left[\frac{g^{Q}_{1}(S_{1}\mid\overline{\theta};a_{1}=% a^{\ast}(N)+\Delta)}{g^{Q}_{1}(S_{1}\mid\overline{\theta};a_{1}=a^{\ast}(N))}X% \mid a_{1}=a^{\ast}(N)\right].$$ As this argument holds for arbitrary $\mathcal{F}^{\infty}$-measurable $X,$ it must be that $$\frac{d\mathcal{P}^{\Delta,N}}{d\mathcal{P}^{\ast N}}=\frac{g^{Q}_{1}(S_{1}% \mid\overline{\theta};a_{1}=a^{\ast}(N)+\Delta)}{g^{Q}_{1}(S_{1}\mid\overline{% \theta};a_{1}=a^{\ast}(N))}.$$ ∎ To establish the desired limiting result, we will prove that for any $\Delta$ and $N,$ $$|\mu_{N}(\Delta)-\mu_{\infty}(\Delta)|\leq\kappa_{N}(\Delta)\frac{\beta}{\sqrt% {N}},$$ where $$\kappa_{N}(\Delta)\equiv\left(\mathbb{E}\left[\left(\frac{d\mathcal{P}^{\Delta% ,N}}{d\mathcal{P}^{\ast N}}-1\right)^{2}\ \middle|\ \mathbf{a}=\mathbf{a}^{% \ast}(N)\right]\right)^{1/2}$$ and $\beta$ is a finite constant independent of $N$ and $\Delta$ whose value depends on the model. The following lemma establishes several important properties of $\kappa_{N}.$ Lemma C.5. $\kappa_{N}(\Delta)$ is independent of $N,$ $\kappa_{N}(0)=0,$ $\overline{\kappa}^{\prime}_{N,+}(0)=\limsup_{\Delta\downarrow 0}\kappa_{N}(% \Delta)/\Delta<\infty$. Proof. We prove the theorem for the quality linkage model, with nearly identical work establishing the result for the circumstance linkage model. 
Note that when $\Delta=0,$ $d\mathcal{P}^{\Delta,N}/d\mathcal{P}^{\ast N}=1,$ and so trivially $\kappa_{N}(0)=0.$ To see that $\kappa_{N}(\Delta)$ is independent of $N,$ note that the distribution of each outcome satisfies the translation invariance property $G^{Q}_{i}(S_{i}=s_{i}\mid\overline{\theta};a_{i}=\alpha)=G^{Q}_{i}(S_{i}=s_{i}-\alpha\mid\overline{\theta};a_{i}=0)$ for any $s_{i}$ and $\alpha.$ So $\kappa_{N}(\Delta)$ may be written $$\displaystyle\kappa_{N}(\Delta)$$ $$\displaystyle=\left(\int dF_{\overline{\theta}}(\overline{\theta})\,dG^{Q}_{1}(S_{1}=s_{1}\mid\overline{\theta};a_{1}=a^{\ast}(N))\left(\frac{g^{Q}_{1}(S_{1}=s_{1}\mid\overline{\theta};a_{1}=a^{\ast}(N)+\Delta)}{g^{Q}_{1}(S_{1}=s_{1}\mid\overline{\theta};a_{1}=a^{\ast}(N))}-1\right)^{2}\right)^{1/2}$$ $$\displaystyle=$$ $$\displaystyle\left(\int dF_{\overline{\theta}}(\overline{\theta})\,dG^{Q}_{1}(S_{1}=s_{1}-a^{\ast}(N)\mid\overline{\theta};a_{1}=0)\left(\frac{g^{Q}_{1}(S_{1}=s_{1}-a^{\ast}(N)\mid\overline{\theta};a_{1}=\Delta)}{g^{Q}_{1}(S_{1}=s_{1}-a^{\ast}(N)\mid\overline{\theta};a_{1}=0)}-1\right)^{2}\right)^{1/2}.$$ So perform a change of variables to $s^{\prime}_{1}\equiv s_{1}-a^{\ast}(N)$ to obtain the representation $$\kappa_{N}(\Delta)=\left(\int dF_{\overline{\theta}}(\overline{\theta})\,dG^{Q}_{1}(S_{1}=s^{\prime}_{1}\mid\overline{\theta};a_{1}=0)\left(\frac{g^{Q}_{1}(S_{1}=s^{\prime}_{1}\mid\overline{\theta};a_{1}=\Delta)}{g^{Q}_{1}(S_{1}=s^{\prime}_{1}\mid\overline{\theta};a_{1}=0)}-1\right)^{2}\right)^{1/2},$$ which is independent of $N,$ as desired. Now, let $\xi\equiv\theta^{\bot}_{1}+\varepsilon_{1}.$ Let $f_{\xi}$ be the convolution of $f_{\theta^{\bot}}$ and $f_{\varepsilon}.$ Then for any $\Delta$, $g^{Q}_{1}(S_{1}\mid\overline{\theta};a_{1}=a^{\ast}(N)+\Delta)=f_{\xi}(S_{1}-\overline{\theta}-a^{\ast}(N)-\Delta)=f_{\xi}(\xi-\Delta)$ under the measure $\mathcal{P}^{\ast N}.$ Hence $$\displaystyle\kappa_{N}(\Delta)=$$ $$\displaystyle\left(\mathbb{E}\left[\left(\frac{d\mathcal{P}^{\Delta,N}}{d\mathcal{P}^{\ast N}}-1\right)^{2}\ \middle|\ \mathbf{a}=\mathbf{a}^{\ast}(N)\right]\right)^{1/2}$$ $$\displaystyle=$$ $$\displaystyle\left(\mathbb{E}\left[\left(\frac{f_{\xi}(\xi-\Delta)}{f_{\xi}(\xi)}-1\right)^{2}\ \middle|\ \mathbf{a}=\mathbf{a}^{\ast}(N)\right]\right)^{1/2}$$ $$\displaystyle=$$ $$\displaystyle\left(\int dF_{\xi}(\xi)\left(\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(\xi)}\right)^{2}\right)^{1/2}.$$ We must therefore show that the limit $$\displaystyle\limsup_{\Delta\downarrow 0}\frac{1}{\Delta}\kappa_{N}(\Delta)=$$ $$\displaystyle\limsup_{\Delta\downarrow 0}\frac{1}{\Delta}\left(\int dF_{\xi}(\xi)\,\left(\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(\xi)}\right)^{2}\right)^{1/2}$$ $$\displaystyle=$$ $$\displaystyle\left(\limsup_{\Delta\downarrow 0}\int dF_{\xi}(\xi)\,\frac{1}{\Delta^{2}}\left(\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(\xi)}\right)^{2}\right)^{1/2}$$ exists and is finite.
By Assumption 2.4, for $\Delta$ sufficiently close to 0 there exists a non-negative, integrable function $J(\cdot)$ such that $$\frac{1}{\Delta^{2}}\left(\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(\xi)}\right)^{2}\leq J(\xi)$$ for all $\xi.$ Then by reverse Fatou’s lemma, $$\displaystyle\limsup_{\Delta\downarrow 0}\int dF_{\xi}(\xi)\,\frac{1}{\Delta^{2}}\left(\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(\xi)}\right)^{2}$$ $$\displaystyle\leq\int dF_{\xi}(\xi)\,\limsup_{\Delta\downarrow 0}\frac{1}{\Delta^{2}}\left(\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(\xi)}\right)^{2}$$ $$\displaystyle\leq\int dF_{\xi}(\xi)\,J(\xi)<\infty,$$ as desired. ∎ The bound on $|\mu_{N}(\Delta)-\mu_{\infty}(\Delta)|$ just claimed implies the desired result because for $\Delta>0$ it may be rewritten (using $\mu_{N}(0)=\mu_{\infty}(0)=\mu$) as $$|(\mu_{N}(\Delta)-\mu)/\Delta-(\mu_{\infty}(\Delta)-\mu)/\Delta|\leq\frac{\kappa_{N}(\Delta)-\kappa_{N}(0)}{\Delta}\frac{\beta}{\sqrt{N}},$$ and thus by taking $\Delta\downarrow 0$ the inequality $$|\mu^{\prime}_{N}(0)-\mu^{\prime}_{\infty}(0)|\leq\overline{\kappa}^{\prime}_{N,+}(0)\frac{\beta}{\sqrt{N}}$$ must hold. Then as $\overline{\kappa}^{\prime}_{N,+}(0)$ is finite and independent of $N,$ $\mu^{\prime}_{N}(0)\rightarrow\mu^{\prime}_{\infty}(0)$ as $N\rightarrow\infty$, as desired. We now derive the claimed bound. To streamline notation, we will write $\mathbb{E}^{\ast N}$ to represent expectations conditioning on $\mathbf{a}=\mathbf{a}^{\ast}(N),$ and $\mathbb{E}^{\Delta,N}$ to represent expectations conditioning on $a_{1}=a^{\ast}(N)+\Delta$ and $\mathbf{a}_{2:N}=\mathbf{a}^{\ast}(N)_{1:N-1}.$ Note first that the expected value of the principal’s posterior estimate of $\theta_{1}$ is a function only of the size of agent 1’s distortion $\Delta,$ and not of the equilibrium action profile inferred by the principal.
Thus $$\displaystyle\mu_{\infty}(\Delta)=$$ $$\displaystyle\mathbb{E}[\mathbb{E}[\theta_{1}\mid\mathbf{S};\mathbf{a}=\mathbf% {a}^{\ast}(\infty)]\mid\mathbf{a}=(a^{\ast}(\infty)+\Delta,\mathbf{a}^{\ast}(% \infty))]$$ $$\displaystyle=$$ $$\displaystyle\mathbb{E}[\mathbb{E}[\theta_{1}\mid\mathbf{S};\mathbf{a}=\mathbf% {a}^{\ast}(N)]\mid\mathbf{a}=(a^{\ast}(N)+\Delta,\mathbf{a}^{\ast}(N))]=% \mathbb{E}^{\Delta,N}[\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]].$$ So we may write $$\mu_{N}(\Delta)-\mu_{\infty}(\Delta)=\mathbb{E}^{\Delta,N}[\mathbb{E}^{\ast N}% [\theta_{1}\mid\mathbf{S}_{1:N}]-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]].$$ Now, performing a change of measure, $$\displaystyle\mathbb{E}^{\Delta,N}[\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S% }_{1:N}]-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]]$$ $$\displaystyle=$$ $$\displaystyle\mathbb{E}^{\ast N}\left[\frac{d\mathcal{P}^{\Delta,N}}{d\mathcal% {P}^{\ast N}}\left(\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}_{1:N}]-\mathbb% {E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\right)\right]$$ $$\displaystyle=$$ $$\displaystyle\mathbb{E}^{\ast N}\left[\left(\frac{d\mathcal{P}^{\Delta,N}}{d% \mathcal{P}^{\ast N}}-1\right)\left(\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{% S}_{1:N}]-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\right)\right]$$ $$\displaystyle+\mathbb{E}^{\ast N}[\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}% _{1:N}]-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]]$$ $$\displaystyle=$$ $$\displaystyle\mathbb{E}^{\ast N}\left[\left(\frac{d\mathcal{P}^{\Delta,N}}{d% \mathcal{P}^{\ast N}}-1\right)\left(\mathbb{E}^{\ast N}[\theta\mid\mathbf{S}_{% 1:N}]-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\right)\right],$$ with the last line following by the law of iterated expectations. Then by an application of the Cauchy-Schwarz inequality, $$|\mu_{N}(\Delta)-\mu_{\infty}(\Delta)|\leq\kappa_{N}(\Delta)\left(\mathbb{E}^{% \ast N}\left[\left(\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}_{1:N}]-\mathbb% {E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\right)^{2}\right]\right)^{1/2}.$$ We will bound the right-hand side for the quality linkage model, with the result for the circumstance linkage model following by nearly identical work. 
Define the family of random variables $\widehat{\theta}_{N}(z)\equiv\mathbb{E}^{\ast N}[\theta_{1}\mid S_{1},\overline{\theta}=z]$ for $z\in\mathbb{R}.$ Note that $\widehat{\theta}_{N}(\overline{\theta})=\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}],$ as $\mathbf{S}$ allows the principal to perfectly infer $\overline{\theta},$ and $\theta_{1}$ is independent of the vector of outcomes $\mathbf{S}_{-1}$ conditional on $\overline{\theta}.$ Further, $\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}_{1:N}]=\mathbb{E}^{\ast N}[\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\mid\mathbf{S}_{1:N}]$ is the mean-square minimizing estimator of $\widehat{\theta}_{N}(\overline{\theta})$ conditional on the performance vector $\mathbf{S}_{1:N}.$ Another estimator of $\widehat{\theta}_{N}(\overline{\theta})$ is $\widehat{\theta}_{N}\left(\widetilde{\theta}_{N}\right),$ where $$\widetilde{\theta}_{N}\equiv\frac{1}{N}\sum_{i=1}^{N}(S_{i}-a^{\ast}(N)-\mu^{\bot}),$$ with $\mu^{\bot}=\mathbb{E}[\theta^{\bot}_{i}].$ So $$\mathbb{E}^{\ast N}\left[\left(\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}_{1:N}]-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\right)^{2}\right]\leq\mathbb{E}^{\ast N}\left[\left(\widehat{\theta}_{N}\left(\widetilde{\theta}_{N}\right)-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\right)^{2}\right].$$ Given that shifts in $\overline{\theta}$ affect the outcome $S_{i}$ additively, $\mathbb{E}^{\ast N}[\theta_{1}\mid S_{1}=s_{1},\overline{\theta}=z]=\mathbb{E}^{\ast N}[\theta_{1}\mid S_{1}=s_{1}-z,\overline{\theta}=0]$ for every $s_{1}$ and $z.$ The proof of Lemma C.3 establishes that $\mathbb{E}^{\ast N}[\theta_{1}\mid S_{1},\overline{\theta}]$ is differentiable with respect to $S_{1},$ with derivative uniformly bounded in $(0,1)$ everywhere. Hence $\widehat{\theta}_{N}(z)$ is differentiable and $\widehat{\theta}^{\prime}_{N}(z)\in(-1,0)$ for all $z$. Thus by the fundamental theorem of calculus, $$|\widehat{\theta}_{N}(\widetilde{\theta}_{N})-\widehat{\theta}_{N}(\overline{\theta})|=\left|\int_{\overline{\theta}}^{\widetilde{\theta}_{N}}\widehat{\theta}^{\prime}_{N}(z)\,dz\right|\leq\left|\int_{\overline{\theta}}^{\widetilde{\theta}_{N}}|\widehat{\theta}^{\prime}_{N}(z)|\,dz\right|\leq|\widetilde{\theta}_{N}-\overline{\theta}|.$$ Further note that $$\widetilde{\theta}_{N}-\overline{\theta}=\frac{1}{N}\sum_{i=1}^{N}(\theta^{\bot}_{i}-\mu^{\bot}+\varepsilon_{i}),$$ which has mean 0 and variance $(\sigma_{\theta^{\bot}}^{2}+\sigma_{\varepsilon}^{2})/N$ given that $\theta^{\bot}_{i}$ and $\varepsilon_{i}$ are independent. So $$\mathbb{E}^{\ast N}\left[\left(\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}_{1:N}]-\mathbb{E}^{\ast N}[\theta_{1}\mid\mathbf{S}]\right)^{2}\right]\leq\frac{\sigma_{\theta^{\bot}}^{2}+\sigma_{\varepsilon}^{2}}{N},$$ implying the desired bound with $\beta=\sqrt{\sigma_{\theta^{\bot}}^{2}+\sigma_{\varepsilon}^{2}}$. Appendix D Proofs for Section 4 (Main Results) D.1 Proofs of Theorems 4.1 and 4.2 Opt-In Equilibrium. In any pure-strategy equilibrium in which all agents opt-in, the equilibrium effort level $a^{*}$ must satisfy two conditions: $$MV(N)=C^{\prime}(a^{*})$$ (D.1) $$R+\mu-C(a^{*})\geq 0$$ (D.2) The expression in (D.1) guarantees that an agent who opts-in cannot strictly gain by deviating to a different effort choice. This is identical to the condition used in the exogenous entry model to solve for equilibrium. The expression in (D.2) guarantees that agents cannot profitably deviate to opting-out.
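To see how conditions (D.1) and (D.2) interact, the following sketch solves them in the same Gaussian circumstance-linkage specialization used in the Appendix C sketch, under an assumed quadratic cost $C(a)=a^{2}/2$ (so $C^{\prime}(a)=a$ and the opt-in effort is $a^{*}=MV_{C}(N)$), and reports the largest segment size for which all agents opting in remains consistent with (D.2). The cost function, the transfer $R$, the prior mean $\mu$, and the variance parameters are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def mv_circumstance(N, v_type=2.0, v_common=1.0, v_idio=1.0):
    # Gaussian circumstance-linkage marginal value of effort (see the Appendix C sketch).
    Sigma = np.full((N, N), v_common) + np.eye(N) * (v_type + v_idio)
    c = np.zeros(N)
    c[0] = v_type
    return np.linalg.solve(Sigma, c)[0]

R, mu = 0.15, 0.0                                  # assumed transfer and prior mean of theta_1
C = lambda a: 0.5 * a ** 2                         # assumed quadratic cost, so C'(a) = a
a_star = lambda N: mv_circumstance(N)              # (D.1): a* = C'^{-1}(MV_C(N)) = MV_C(N)
opt_in_ok = lambda N: R + mu - C(a_star(N)) >= 0   # (D.2): no gain from opting out

# R is chosen so Assumption 4.1 holds here: R + mu = 0.15 > C(a*(1)) = 0.125.
N_star = max(N for N in range(1, 200) if opt_in_ok(N)) if opt_in_ok(1) else 0
print("a*(1) =", round(a_star(1), 3), " N* =", N_star)
# For N <= N*, all agents opting in satisfies both conditions; for larger N the rising
# MV_C(N) pushes C(a*) above R + mu, and only the mixed equilibrium of (D.5)-(D.6) remains.
```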
The marginal value $MV(N)$ is independent of $a^{*}$, and $C^{\prime}$ is strictly monotone. Thus (D.1) pins down a unique effort level $a^{*}=C^{\prime-1}(MV(N))$. Since $C$ is everywhere increasing, the conditions in (D.1) and (D.2) can be simultaneously satisfied if and only if $0\leq C^{\prime-1}[MV(N)]\leq a^{**}\equiv C^{-1}(R+\mu)$, or equivalently, $$0=C^{\prime}(0)\leq MV(N)\leq C^{\prime}(a^{**}),$$ noting that $C^{\prime}$ and $C^{\prime-1}$ are everywhere increasing. By Assumption 4.1, $R+\mu>C(a^{*}(1))$. Since the cost function $C$ has positive first and second derivatives, $R+\mu>C(a^{*}(1))$ and $R+\mu=C(a^{**})$ imply that $a^{*}(1)<a^{**}$, which further implies $C^{\prime}(a^{*}(1))<C^{\prime}(a^{**})$. By Lemma 3.1, $MV(1)=MV_{Q}(1)\geq MV_{Q}(N)$. Thus $$MV_{Q}(N)\leq MV_{Q}(1)=C^{\prime}(a^{*}(1))\leq C^{\prime}(a^{**}),$$ and a symmetric all opt-in equilibrium exists in the quality linkage model. In contrast, in the circumstance linkage model, $$MV_{C}(N)\geq MV_{C}(1)=C^{\prime}(a^{*}(1))$$ (D.3) so the inequality $MV_{C}(N)\leq C^{\prime}(a^{**})$ is not guaranteed to hold. An opt-in equilibrium exists if and only if $N$ is sufficiently small; specifically, $N\leq N^{*}$ where $$N^{\ast}\equiv\sup\{N\ :\ MV_{C}(N)\leq C^{\prime}(a^{**})\}.$$ (It is possible that $N^{\ast}$ is infinite if $MV_{C}(N)\leq C^{\prime}(a^{\ast\ast})$ for all $N.$) Finally, for the parameters $N\leq N^{*}$ where an opt-in equilibrium exists in both models, it is possible to rank equilibrium effort levels as follows: Define $a_{C}^{*}$ and $a_{Q}^{*}$ to be the respective equilibrium effort levels. Then, since $MV_{C}(N)\geq MV(1)\geq MV_{Q}(N)$ for all $N$, $$a_{C}^{*}=C^{\prime-1}(MV_{C}(N))\geq C^{\prime-1}(MV_{Q}(N))=a_{Q}^{*},$$ so equilibrium effort is higher in the circumstance linkage model. Opt-Out Equilibrium. Under the imposed refinement on the principal’s off-equilibrium belief about the agent’s action, the optimal action conditional on entry is $a^{*}(1)$. Thus an all opt-out equilibrium requires $$R+\mu-C(a^{*}(1))<0,$$ (D.4) which violates Assumption 4.1. There are therefore no pure-strategy equilibria in either model in which all agents choose to opt-out. Mixed Equilibrium. For any probability $p\in[0,1]$ and $M\in\{Q,C\}$, let $$MV_{M}(p,N)=\mathbb{E}\left[MV_{M}(\widetilde{N}+1)\ \middle|\ \widetilde{N}\sim\text{Binomial}(N-1,p)\right]$$ be the expected marginal impact for agent $i$ of exerting additional effort beyond the principal’s expectation, when agent $i$ opts-in and all other agents opt-in with independent probability $p$. Note that because $MV_{C}(N)$ is increasing in $N,$ and increasing $p$ shifts up the distribution of $\widetilde{N}$ in the FOSD sense, $MV_{C}(p,N)$ is increasing in $p.$ Further, because increasing $p$ shifts $\Pr(\widetilde{N}\leq n)$ strictly downward for every $n<N-1,$ this monotonicity is strict whenever $MV_{C}(n)$ is not constant over the range $\{1,...,N\}$. For the same reasons, $MV_{C}(p,N)$ is increasing in $N$ for fixed $p,$ and strictly increasing whenever $p\in(0,1)$ and $MV_{C}(n)$ is not constant over $\{1,...,N\}.$ In a mixed equilibrium, the equilibrium effort level $a^{*}$ and probability $p$ assigned to opting-in must jointly satisfy $$R+\mu-C(a^{*})=0.$$ (D.5) $$MV(p,N)=C^{\prime}(a^{*}).$$ (D.6) The expression in (D.5) pins down the equilibrium action, which is identical to the action defined as $a^{**}$ above.
Moreover, $a^{**}$, and hence $C^{\prime}(a^{**})$, is independent of both the mixing probability $p$ and the fixed segment size $N$. Therefore a mixed equilibrium exists if and only if $MV(p,N)=C^{\prime}(a^{**})$ for some $p\in[0,1]$. But for all $p\in[0,1]$, $$MV_{Q}(p,N)\leq\max_{1\leq N^{\prime}\leq N}MV_{Q}(N^{\prime})=MV_{Q}(1)=C^{\prime}(a^{*}(1))<C^{\prime}(a^{**})$$ using that $MV_{Q}$ is a decreasing function of $N$ (Lemma 3.1). Thus the quality linkage model does not admit a strictly mixed equilibrium. Similarly if $MV_{C}(N)<C^{\prime}(a^{**})$, then $$MV_{C}(p,N)\leq\max_{1\leq N^{\prime}\leq N}MV_{C}(N^{\prime})=MV_{C}(N)<C^{\prime}(a^{**})$$ since $MV_{C}$ is a strictly increasing function of $N$ (Lemma 3.1). So there does not exist a strictly mixed equilibrium in the circumstance linkage model either. Indeed, this is exactly the range for $N$ that supports the symmetric all opt-in equilibrium in the circumstance linkage model. If however $MV_{C}(N)\geq C^{\prime}(a^{**})$, then $$MV_{C}(1)=MV_{C}(0,N)<C^{\prime}(a^{**})\leq MV_{C}(1,N)=MV_{C}(N).$$ This implies in particular that $MV_{C}$ is not constant over the range $\{1,...,N\},$ so that $MV_{C}(p,N)$ is strictly increasing in $p.$ Since $MV_{C}(p,N)$ is also continuous in $p$, the intermediate value theorem yields existence of a unique $p^{*}(N)\in(0,1]$ satisfying $MV_{C}(p^{*}(N),N)=C^{\prime}(a^{**})$. If $N\leq N^{\ast},$ so that $MV_{C}(N)=C^{\prime}(a^{\ast\ast}),$ then it must be that $p^{\ast}(N)=1.$ Thus in particular the opt-in equilibrium is unique whenever it exists. Otherwise $p^{\ast}(N)<1,$ in which case the fact that $MV_{C}(p,N)$ is strictly increasing in $N$ for fixed $p\in(0,1)$ further implies that $p^{\ast}(N)$ must be strictly decreasing in $N.$ Finally, the effort level $a^{**}$ chosen in this equilibrium weakly exceeds the effort level $a_{C}^{*}$ chosen in the symmetric opt-in equilibrium in the circumstance linkage model, since $R+\mu\geq C(a_{C}^{*})$ by (D.2), while $R+\mu=C(a^{**})$ by (D.5). D.2 Proof of Lemma 4.1 Comparisons between equilibrium actions correspond directly to comparisons of marginal values of effort. It is therefore sufficient to establish that $MV(N)<1$ for all $N,$ and that $MV_{Q}(N)$ is decreasing while $MV_{C}(N)$ is increasing in $N,$ with $\lim_{N\rightarrow\infty}MV_{C}(N)<1.$ These facts in particular imply that $MV_{Q}(N)\leq MV(1)\leq MV_{C}(N),$ with $MV(1)$ dictating equilibrium effort in the no-data linkages benchmark. Lemma 3.1 establishes the desired monotonicity of the marginal value of effort, while the upper bound on $MV$ and the limiting value of $MV_{C}$ are established in Appendix C. D.3 Proof of Proposition 4.1 Suppose all agents in a segment of size $N$ enter and choose action $a$. Social welfare $$W(1,a,N)=N\cdot(2\mu+a-C(a))$$ is strictly increasing on $a\in[0,a_{FB})$. Thus the comparison $a_{Q}^{*}(N)\leq a_{NDL}<a_{FB}$ immediately implies that for all $N$, welfare is ranked $$W_{Q}(N)\leq W_{NDL}(N),$$ where the inequality is strict for all $N>1$. For segment sizes $N<N^{*}$, the equilibrium action in the circumstance linkage model satisfies $a_{C}^{*}(N)\in[a_{NDL},a_{FB})$ (Theorem 4.2), so the same argument implies $$W_{NDL}(N)\leq W_{C}(N),$$ with the inequality strict for $N>1$.
When the segment size $N>N^{*}$, $$W_{C}(N)=N\cdot p(N)\cdot\left[a^{**}-C(a^{**})+2\mu\right].$$ Since $p(N)\rightarrow 0$ as $N\rightarrow\infty$, it follows that for $N$ sufficiently large, $$W_{C}(N),W_{Q}(N)<W_{NDL}(N).$$ Appendix E Proofs for Section 6 (Data Sharing, Markets, and Consumer Welfare) E.1 Proof of Proposition 6.1 Definition E.1. The competitive transfer for a segment of $n$ consumers is $$\overline{R}^{*}(n)=a^{*}(n)+\mu$$ (E.1) while the monopolist transfer is $$\underline{R}^{*}(n)=C[a^{*}(n)]-\mu$$ (E.2) We first show that in any equilibrium under data sharing, consumers must receive all of the generated surplus. Lemma E.1. Consider either the quality linkage or circumstance linkage model. In any equilibrium under data sharing, firms receive zero payoffs, and consumer welfare is $$N\times\left(2\mu+a^{*}(N)-C(a^{*}(N))\right).$$ Proof. Fix any subset of firms $F$ where $|F|\geq 2$. Suppose each firm $f\in F$ sets the competitive transfer $\overline{R}^{*}(N)$ (as defined in (E.1)), while each firm $f\notin F$ chooses a transfer weakly below $\overline{R}^{*}(N)$. Consumers opt-out if no firm offers a transfer above $\underline{R}^{*}(N)$. Otherwise, consumers participate with the firm offering the highest transfer, and exert effort $a^{*}(N)$. We now show that this is an equilibrium. By choosing the transfer $\overline{R}^{*}(N)$, firms $f\in F$ receive a payoff of $-\overline{R}^{*}(N)+\mu+a^{*}(N)=0$ per consumer. They cannot profitably deviate, since reducing their transfer would lose all of their consumers, while increasing their transfer would result in a negative payoff. Firms $f\notin F$ acquire consumers only by setting a transfer strictly above $\overline{R}^{*}(N)$, which leads to a negative payoff. So there are no profitable deviations for firms. Consumers also have no profitable deviations: participation with any firm in $F$ leads to the same (strictly positive) payoff, while participation with any firm $f\notin F$ involves the same equilibrium effort but a lower transfer. So the described strategies constitute an equilibrium. Moreover these equilibria are the only equilibria under data sharing. Suppose towards contradiction that some firm $f$ receiving consumers sets $R_{f}<\overline{R}^{*}(N)$. If another firm $f^{\prime}$ offers a transfer $R_{f^{\prime}}\in(R_{f},\overline{R}^{*}(N)]$, then consumers participating with firm $f$ can profitably deviate to participating with firm $f^{\prime}$. If no firms $f^{\prime}$ offer transfers in the interval $(R_{f},\overline{R}^{*}(N)]$, then firm $f$ can profitably deviate by raising its transfer. So transfers below $\overline{R}^{*}(N)$ are ruled out for firms receiving consumers. If any firm receiving consumers sets a transfer exceeding $\overline{R}^{*}(N)$ (which yields a negative payoff), then that firm can strictly profit by deviating to $\overline{R}^{*}(N)$ (which yields a payoff of zero). So transfers above $\overline{R}^{*}(N)$ are ruled out as well. In equilibrium, it must therefore be that all firms that receive consumers set transfer $\overline{R}^{*}(N)$. Firms not receiving consumers must set transfers weakly below $\overline{R}^{*}(N)$, or consumers could profitably deviate to one of these firms. Finally, since in equilibrium agents know the number of agents participating with their firm, uniqueness of agent effort follows from arguments given already in the proofs for exogenous transfers (see Section 4). 
∎ We show next that (E.1) is an upper bound on achievable consumer welfare under proprietary data in the circumstance linkage setting. Consider any equilibrium, and let $N_{f}$ be the number of agents participating with firm $f$ in that equilibrium. We can obtain an upper bound on consumer welfare by evaluating consumer payoffs supposing that all firms set the competitive transfer. Then, each agent interacting with firm $f$ achieves a payoff of $\overline{R}^{*}(N_{f})+\mu-C(a_{C}^{*}(N_{f}))$. But $$\displaystyle\overline{R}^{*}(N_{f})+\mu-C(a_{C}^{*}(N_{f}))$$ $$\displaystyle=a^{*}_{C}(N_{f})+2\mu-C[a_{C}^{*}(N_{f})]$$ $$\displaystyle\leq a^{*}_{C}(N)+2\mu-C(a_{C}^{*}(N)),$$ since the function $\xi(n)=a^{*}_{C}(n)-C(a^{*}_{C}(n))$ is increasing, and $N\geq N_{f}$. Thus consumer welfare is bounded above by $N\times(a^{*}_{C}(N)+2\mu-C(a_{C}^{*}(N)))$ as desired. Since this bound holds uniformly across all allocations of consumers to firms, welfare must be weakly higher under data sharing than in any equilibrium with proprietary data. Now consider the quality linkage model. We first show that in equilibrium, all consumers must be served by a single firm. Lemma E.2. In the quality linkage model under proprietary data, in every equilibrium exactly one firm receives consumers. Proof. Suppose towards contradiction that there is an equilibrium in which two firms $f=1,2$ set transfers $R_{f}$ and receive $N_{f}>0$ agents. Then, each agent interacting with firm $f$ must choose the effort level $a_{Q}^{*}(N_{f})$. Agents’ IC constraints are described as follows: First, $$R_{1}+\mu-C(a_{Q}^{*}(N_{1}))\geq R_{2}+\mu-C(a_{Q}^{*}(N_{2}+1))$$ or agents participating with firm 1 could profitably deviate to participating with firm 2. Likewise it must be that $$R_{2}+\mu-C(a_{Q}^{*}(N_{2}))\geq R_{1}+\mu-C(a_{Q}^{*}(N_{1}+1))$$ or agents participating with firm 2 could profitably deviate to participating with firm 1. These displays simplify to $$\displaystyle R_{1}-R_{2}$$ $$\displaystyle\geq C(a_{Q}^{*}(N_{1}))-C(a_{Q}^{*}(N_{2}+1))$$ $$\displaystyle R_{2}-R_{1}$$ $$\displaystyle\geq C(a_{Q}^{*}(N_{2}))-C(a_{Q}^{*}(N_{1}+1)).$$ Summing these inequalities, we have $$0\geq C(a_{Q}^{*}(N_{1}))+C(a_{Q}^{*}(N_{2}))-C(a_{Q}^{*}(N_{1}+1))-C(a_{Q}^{*% }(N_{2}+1))$$ But $C(a_{Q}^{*}(n))$ is strictly decreasing in $n$, so the right-hand side of the above display must be strictly positive, leading to a contradiction. Now suppose no firms receive consumers in equilibrium. If there exists a firm offering a transfer $R>\underline{R}^{*}(1)$, then it is strictly optimal for a consumer to deviate to interaction with that firm at effort $a^{*}(1)$. Otherwise, it is strictly optimal for a firm to deviate to any transfer $R\in(\underline{R}_{M}^{*}(1),\overline{R}^{*}(1))$ and receive consumers. ∎ The lemma says that only one firm receives a strictly positive number of consumers in equilibrium; without loss, let this be firm 1. Consumer welfare is maximized when firm 1 sets the competitive transfer $\overline{R}^{*}(N)$, in which case consumers receive (E.1), so consumer welfare under proprietary data must be weakly lower than under data sharing, completing our proof. Appendix O For Online Publication O.1 Distributional Regularity Results To establish our main results we rely heavily on boundedness and smoothness of various likelihood and posterior distribution functions. In this section we prove a number of technical lemmas ensuring sufficient smoothness of functions invoked in proofs elsewhere. 
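Before turning to the lemmas, the following small Python sketch illustrates the kind of object this section is concerned with. It computes a posterior distribution function $H(x,y)=\Pr(Y\leq y\mid X=x)$ by quadrature for an assumed Gaussian prior and likelihood, and checks numerically that $H$ varies smoothly in the signal $x$ (the conclusion of Lemma O.2 below) and, because a Gaussian location likelihood has the monotone likelihood ratio property, is decreasing in $x$. The distributional choices and the grid are assumptions made purely for illustration.

```python
import numpy as np

GRID = np.linspace(-8.0, 8.0, 4001)                          # quadrature grid for the latent y'
phi = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)   # standard normal density

def posterior_cdf(x, y):
    """H(x, y) = Pr(Y <= y | X = x) for Y ~ N(0, 1) and X | Y ~ N(Y, 1), via Bayes' rule."""
    weights = phi(x - GRID) * phi(GRID)                  # f(x | y') g(y') on the grid
    return weights[GRID <= y].sum() / weights.sum()      # grid spacing cancels in the ratio

# Check that H is smooth in x and, in this MLRP example, decreasing in x.
y0, h = 0.3, 1e-4
for x in (-1.0, 0.0, 1.0):
    dHdx = (posterior_cdf(x + h, y0) - posterior_cdf(x - h, y0)) / (2 * h)
    print(x, round(posterior_cdf(x, y0), 4), round(dHdx, 4))
# Closed form for comparison: H(x, y) = Phi((y - x/2) / sqrt(1/2)), a C^1 function of
# (x, y) that is strictly decreasing in x, consistent with Lemma O.2 and the
# monotonicity properties used throughout the earlier appendices.
```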
We first prove a general result showing that log-concave density functions are necessarily bounded. Lemma O.1. Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be any strictly positive, strictly log-concave function satisfying $\int_{-\infty}^{\infty}f(x)\,dx<\infty$. (Since $\log f$ is strictly concave everywhere, it is continuous everywhere; then so is $f,$ meaning that $f$ is a measurable function.) Then $f$ is bounded. Proof. As $f$ is bounded below by 0, it suffices to show that it is bounded above. Since $\log f$ is strictly concave everywhere, it is either a strictly monotone function, or else has a global maximizer. Suppose that $\log f$ is strictly increasing everywhere. Then $f$ must be strictly increasing everywhere as well. But then as $f>0,$ $$\int_{-\infty}^{\infty}f(x)\,dx\geq\int_{0}^{\infty}f(x)\,dx\geq\int_{0}^{\infty}f(0)\,dx=\infty,$$ a contradiction of our assumption. So $\log f$ cannot be strictly increasing everywhere. Suppose instead that $\log f$ is strictly decreasing everywhere. Then $f$ must be strictly decreasing everywhere as well. But then as $f>0,$ $$\int_{-\infty}^{\infty}f(x)\,dx\geq\int_{-\infty}^{0}f(x)\,dx\geq\int_{-\infty}^{0}f(0)\,dx=\infty,$$ another contradiction. So $f$ must have a global maximizer, meaning that it is bounded above as desired. ∎ Corollary O.1. $f_{\theta},f_{\overline{\theta}},f_{\theta^{\bot}},f_{\varepsilon},f_{\overline{\varepsilon}},f_{\varepsilon^{\bot}}$ are each bounded. The following lemma establishes a set of regularity conditions on a likelihood function sufficient to ensure that its associated posterior distribution function is $C^{1}$ in both its arguments. Note that these conditions amount to the regularity conditions imposed in SMLRP, plus a continuity condition on the density of the unobserved variable. Lemma O.2. Let $X$ and $Y$ be two random variables for which the density $g(y)$ for $Y$ and the conditional densities $f(x\mid y)$ for $X\mid Y$ exist. Suppose that: • $f(x\mid y)$ is a $C^{1,0}$ function and $g(y)$ is continuous, • $f(x\mid y)$ and $\frac{\partial}{\partial x}f(x\mid y)$ are both uniformly bounded for all $(x,y)$. Then $H(x,y)\equiv\Pr(Y\leq y\mid X=x)$ is a $C^{1}$ function of $(x,y)$. Proof. Let $G$ be the distribution function for $y.$ By Bayes’ rule, $$H(x,y)=\frac{\int_{-\infty}^{y}f(x\mid y^{\prime})\,dG(y^{\prime})}{\int_{-\infty}^{\infty}f(x\mid y^{\prime\prime})\,dG(y^{\prime\prime})}.$$ We first establish continuity of this function. It is sufficient to establish continuity of the numerator and denominator separately. As for the denominator, $f(x\mid y^{\prime\prime})$ is continuous in $x$ and uniformly bounded for all $(x,y^{\prime\prime}),$ so by the dominated convergence theorem the denominator is continuous in $x,$ thus also in $(x,y)$ given its independence of $y.$ As for the numerator, write $$\int_{-\infty}^{y}f(x\mid y^{\prime})\,dG(y^{\prime})=\int_{-\infty}^{\infty}\mathbf{1}\{y^{\prime}\leq y\}f(x\mid y^{\prime})\,dG(y^{\prime}).$$ Consider any sequence converging to $(x_{0},y_{0})$. Given the continuity of $f(x\mid y),$ the integrand converges pointwise $G$-a.e. to $\mathbf{1}\{y^{\prime}\leq y_{0}\}f(x_{0}\mid y^{\prime})$. 
(The only point of potential nonconvergence is at $y^{\prime}=y_{0},$ but since $Y$ is a continuous distribution this point is assigned measure zero under $G.$) As the integrand is also uniformly bounded above for all $(x,y,y^{\prime}),$ the dominated convergence theorem ensures that the numerator is continuous in $(x,y).$ Next, note that $\partial H/\partial y$ exists and is given by $$\frac{\partial H}{\partial y}(x,y)=\frac{f(x\mid y)g(y)}{\int_{-\infty}^{% \infty}f(x\mid y^{\prime\prime})\,dG(y^{\prime\prime})},$$ which is continuous everywhere given that the denominator is continuous by the argument of the previous paragraph while $f(x\mid y)$ and $g(y)$ are continuous by assumption. Finally, consider $\partial H/\partial x.$ Let $\widehat{H}(x,y)\equiv H(x,y)^{-1}-1.$ Then $\frac{\partial H}{\partial x}(x,y)$ exists and satisfies $\frac{\partial H}{\partial x}(x,y)<0$ iff $\frac{\partial\widehat{H}}{\partial x}(x,y)$ exists and satisfies $\frac{\partial\widehat{H}}{\partial x}(x,y)>0$. Note that $\widehat{H}(x,y)$ may be written $$\widehat{H}(x,y)=\frac{\int_{y}^{\infty}f(x\mid y^{\prime})\,dG(y^{\prime})}{% \int_{-\infty}^{y}f(x\mid y^{\prime\prime})\,dG(y^{\prime\prime})}.$$ Because $\frac{\partial}{\partial x}f(x\mid y)$ exists and is uniformly bounded for all $x$ and $y$, the Leibniz integral rule ensures that this expression is differentiable with respect to $x$ with derivative $$\frac{\partial\widehat{H}}{\partial x}(x,y)=\frac{\int_{y}^{\infty}\frac{% \partial}{\partial x}f(x\mid y^{\prime})\,dG(y^{\prime})}{\int_{-\infty}^{y}f(% x\mid y^{\prime\prime})\,dG(y^{\prime\prime})}-\frac{\left(\int_{y}^{\infty}f(% x\mid y^{\prime})\,dG(y^{\prime})\right)\left(\int_{-\infty}^{y}\frac{\partial% }{\partial x}f(x\mid y^{\prime\prime})\,dG(y^{\prime\prime})\right)}{\left(% \int_{-\infty}^{y}f(x\mid y^{\prime\prime})\,dG(y^{\prime\prime})\right)^{2}}.$$ With some rearrangement, this may be equivalently written $$\displaystyle\frac{\partial\widehat{H}}{\partial x}(x,y)=$$ $$\displaystyle\left(\int_{-\infty}^{y}f(x\mid y^{\prime\prime})\,dG(y^{\prime% \prime})\right)^{-2}$$ $$\displaystyle\times\int_{y}^{\infty}dG(y^{\prime})\int_{-\infty}^{y}dG(y^{% \prime\prime})\,\left(f(x\mid y^{\prime\prime})\frac{\partial}{\partial x}f(x% \mid y^{\prime})-f(x\mid y^{\prime})\frac{\partial}{\partial x}f(x\mid y^{% \prime\prime})\right).$$ This function is continuous if both $$\int_{-\infty}^{\infty}\mathbf{1}\{y^{\prime\prime}\leq y\}f(x\mid y^{\prime% \prime})\,dG(y^{\prime\prime})$$ and $$\int_{-\infty}^{\infty}dG(y^{\prime})\int_{-\infty}^{\infty}dG(y^{\prime\prime% })\,\mathbf{1}\{y^{\prime}\geq y\}\mathbf{1}\{y^{\prime\prime}\leq y\}\left(f(% x\mid y^{\prime\prime})\frac{\partial}{\partial x}f(x\mid y^{\prime})-f(x\mid y% ^{\prime})\frac{\partial}{\partial x}f(x\mid y^{\prime\prime})\right)$$ are continuous. We have already seen that the former is continuous, so consider the latter expression. 
By assumption $f(x\mid y)$ and $\frac{\partial}{\partial x}f(x\mid y)$ are both continuous in $(x,y).$ Thus for any sequence converging to $(x_{0},y_{0}),$ the integrand converges to $$\mathbf{1}\{y^{\prime}\geq y_{0}\}\mathbf{1}\{y^{\prime\prime}\leq y_{0}\}% \left(f(x_{0}\mid y^{\prime\prime})\left.\frac{\partial}{\partial x}f(x\mid y^% {\prime})\right|_{x=x_{0}}-f(x_{0}\mid y^{\prime})\left.\frac{\partial}{% \partial x}f(x\mid y^{\prime\prime})\right|_{x=x_{0}}\right)$$ except possibly at points $(y^{\prime},y^{\prime\prime})$ such that $y^{\prime}=y_{0}$ or $y^{\prime\prime}=y_{0},$ a set which is assigned zero measure under $G\times G$ given the continuity of the distribution of $Y.$ Further, since $f(x\mid y)$ and $\frac{\partial}{\partial x}f(x\mid y)$ are both uniformly bounded for all $(x,y)$, so is $$f(x\mid y)\frac{\partial}{\partial x}f(x\mid y^{\prime})-f(x\mid y^{\prime})% \frac{\partial}{\partial x}f(x\mid y)$$ for all $x,y,y^{\prime}.$ Then the dominated convergence theorem ensures that the entire expression converges to its value at $(x_{0},y_{0}),$ as desired. ∎ The next lemma establishes that the density functions of $\theta_{i}$ and $\varepsilon_{i}$ remain continuous when conditioned on a set of outcomes. Lemma O.3. For each model $M\in\{Q,C\},$ agent $i\in\{1,...,N\}$, and outcome-action profile $(\mathbf{S},\mathbf{a})$: • The conditional densities $f^{M}_{\theta_{i}}(\theta_{i}\mid\mathbf{S};\mathbf{a})$ and $f^{M}_{\theta_{i}}(\theta_{i}\mid\mathbf{S}_{-j};\mathbf{a})$ for each $j\in\{1,...,N\}$ are strictly positive and continuous in $\theta_{i}$ everywhere, • The conditional densities $f^{M}_{\varepsilon_{i}}(\varepsilon_{i}\mid\mathbf{S};\mathbf{a})$ and $f^{M}_{\varepsilon_{i}}(\varepsilon_{i}\mid\mathbf{S}_{-j};\mathbf{a})$ for each $j\in\{1,...,N\}$ are strictly positive and continuous in $\varepsilon_{i}$ everywhere. Proof. Throughout the proof we suppress explicit dependence of distributions on the action profile $\mathbf{a}.$ We prove the result for the quality linkage model, with the circumstance linkage model following by permuting the roles of $\theta_{i}$ and $\varepsilon_{i}$. Consider first the density of $\theta_{i}$ conditional on $\mathbf{S}.$ By Bayes’ rule $$f^{Q}_{\theta_{i}}(\theta_{i}\mid\mathbf{S})=\frac{g^{Q}_{1:N}(\mathbf{S}\mid% \theta_{i})f_{\theta}(\theta_{i})}{g^{Q}_{1:N}(\mathbf{S})},$$ where $$g^{Q}_{1:N}(\mathbf{S}\mid\theta_{i})=g^{Q}_{i}(S_{i}\mid\theta_{i})\int dF^{Q% }_{\overline{\theta}}(\overline{\theta}\mid\theta_{i})\prod_{j\neq i}g^{Q}_{j}% (S_{j}\mid\overline{\theta})$$ and $$g^{Q}_{1:N}(\mathbf{S})=\int dF_{\overline{\theta}}(\overline{\theta})\prod_{j% =1}^{N}g^{Q}_{j}(S_{j}\mid\overline{\theta}).$$ As $g^{Q}_{1:N}(\mathbf{S}\mid\theta_{i}),$ $g^{Q}_{1:N}(\mathbf{S}),$ and $f_{\theta}(\theta_{i})$ are all strictly positive, so is $f^{Q}_{\theta_{i}}(\theta_{i}\mid\mathbf{S}).$ Further, $g^{Q}_{i}(S_{i}\mid\theta_{i})=f_{\varepsilon}(S_{i}-\theta_{i}-a_{i})$ is continuous in $\theta_{i}$ given the continuity of $f_{\varepsilon}.$ Then $f^{Q}_{\theta_{i}}(\theta_{i}\mid\mathbf{S})$ is continuous in $\theta_{i}$ so long as $$f_{\theta}(\theta_{i})\int dF^{Q}_{\overline{\theta}}(\overline{\theta}\mid% \theta_{i})\prod_{j\neq i}g^{Q}_{j}(S_{j}\mid\overline{\theta})=\int dF_{% \overline{\theta}}(\overline{\theta})f_{\theta^{\bot}}(\theta_{1}-\overline{% \theta})\prod_{j\neq i}g^{Q}_{j}(S_{j}\mid\overline{\theta})$$ is. 
As $f_{\theta^{\bot}}$ is bounded and continuous and $\int dF_{\overline{\theta}}(\overline{\theta})\prod_{j\neq i}g^{Q}_{j}(S_{j}% \mid\overline{\theta})=g_{1:N}(\mathbf{S})$ is finite, the dominated convergence theorem ensures that this final term is continuous, as desired. The result for the density of $\theta_{i}$ conditional on $\mathbf{S}_{-j}$ for any $j\neq i$ follows from nearly identical work. Next consider the density of $\theta_{i}$ conditional on $\mathbf{S}_{-i}.$ Now Bayes’ rule gives $$f^{Q}_{\theta_{i}}(\theta_{i}\mid\mathbf{S}_{-i})=\frac{g^{Q}_{-i}(\mathbf{S}_% {-i}\mid\theta_{i})f_{\theta}(\theta_{i})}{g^{Q}_{-i}(\mathbf{S}_{-i})},$$ where $$g^{Q}_{-i}(\mathbf{S}_{-i}\mid\theta_{i})=\int dF^{Q}_{\overline{\theta}}(% \overline{\theta}\mid\theta_{i})\prod_{j\neq i}g^{Q}_{j}(S_{j}\mid\overline{% \theta})$$ and $$g^{Q}_{-i}(\mathbf{S}_{-i})=\int dF_{\overline{\theta}}(\overline{\theta})% \prod_{j\neq i}g^{Q}_{j}(S_{j}\mid\overline{\theta}).$$ As each of these terms is strictly positive, so is $f^{Q}_{\theta_{i}}(\theta_{i}\mid\mathbf{S}_{-i}).$ Further, $g^{Q}_{-i}(\mathbf{S}_{-i}\mid\theta_{i})f_{\theta}(\theta_{i})$ was already shown to be continuous in the previous paragraph. So $f^{Q}_{\theta_{i}}(\theta_{i}\mid\mathbf{S}_{-i})$ is continuous in $\theta_{i},$ as desired. Next, consider the density of $\varepsilon_{i}$ conditional on $\mathbf{S}.$ Bayes’ rule gives $$f^{Q}_{\varepsilon_{i}}(\varepsilon_{i}\mid\mathbf{S})=\frac{g^{Q}_{1:N}(% \mathbf{S}\mid\varepsilon_{i})f_{\varepsilon}(\varepsilon_{i})}{g^{Q}_{1:N}(% \mathbf{S})},$$ where $$g^{Q}_{1:N}(\mathbf{S}\mid\varepsilon_{i})=g^{Q}_{i}(S_{i}\mid\varepsilon_{i})% \prod_{j\neq i}g^{Q}_{j}(S_{j}).$$ Then as $g^{Q}_{i}(S_{i}\mid\varepsilon_{i})=f_{\theta}(S_{i}-\varepsilon_{i}-a_{i})$ is continuous in $\varepsilon_{i}$ given the continuity of $f_{\theta},$ so is $g^{Q}_{1:N}(\mathbf{S}\mid\varepsilon_{i}).$ The result for the density of $\varepsilon_{i}$ conditional on $\mathbf{S}_{-j}$ for any $j\neq i$ follows by nearly identical work. Finally, consider the density of $\varepsilon_{i}$ conditional on $\mathbf{S}_{-i}.$ In the quality linkage model $\varepsilon_{i}$ is independent of $\mathbf{S}_{-i},$ so $g^{Q}_{\varepsilon_{i}}(\varepsilon_{i}\mid\mathbf{S}_{-i})=f_{\varepsilon}(% \varepsilon_{i}),$ which is strictly positive and continuous by assumption. ∎ The following pair of lemmas establishes that the posterior distribution functions of the agent’s type conditional on the vector of outcomes satisfies a smoothness condition. To economize on notation, the lemma is established with respect to agent 1’s latent variables, as the signal of agent $N$ moves. By symmetry an analogous result applies to all other pairs of agents. Lemma O.4. For each model $M\in\{Q,C\}$ and outcome-action profile $(\mathbf{S}_{-N},\mathbf{a}),$ $F^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S};\mathbf{a})$ is a $C^{1}$ function of $(S_{N},\theta_{1})$. Proof. For convenience, we suppress the dependence of distributions on $\mathbf{a}$ in this proof. Fix $\mathbf{S}_{-N}.$ The result follows from Lemma O.2 so long as 1) $f^{M}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}_{-N})$ is continuous in $\theta_{1}$, and 2) $g^{M}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{-N})$ is a $C^{1,0}$ function of $(S_{N},\theta_{1})$ and both it and its derivative wrt $S_{N}$ are uniformly bounded. Lemma O.3 ensures that condition 1 holds, so we need only establish condition 2. Consider first the quality linkage model. 
In this case $g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{-N})=g^{Q}_{N}(S_{N}\mid\theta_{1},% \mathbf{S}_{2:N-1}),$ as $S_{N}$ is independent of $S_{1}$ conditional on $\theta_{1}.$ And by the law of total probability, $$g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})=\int g^{Q}_{N}(S_{N}\mid% \overline{\theta},\theta_{1},\mathbf{S}_{2:N-1})\,dF^{Q}_{\overline{\theta}}(% \overline{\theta}\mid\theta_{1},\mathbf{S}_{2:N-1}).$$ As $S_{N}$ is independent of $(\theta_{1},\mathbf{S}_{2:N-1})$ conditional on $\overline{\theta},$ this is equivalently $$g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})=\int g^{Q}_{N}(S_{N}\mid% \overline{\theta})\,dF^{Q}_{\overline{\theta}}(\overline{\theta}\mid\theta_{1}% ,\mathbf{S}_{2:N-1}).$$ Since $g^{Q}_{N}(S_{N}\mid\overline{\theta})=f_{\theta^{\bot}+\varepsilon}(S_{N}-% \overline{\theta}-a_{N}),$ which is uniformly bounded by some $M$ for all $(S_{N},\overline{\theta}),$ $g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})$ is uniformly bounded by $M$ as well for all $(S_{N},\theta_{1}).$ Further, by Bayes’ rule $$f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\theta_{1},\mathbf{S}_{2:N-1})=% \frac{f^{Q}_{\theta_{1}}(\theta_{1}\mid\overline{\theta},\mathbf{S}_{2:N-1})f_% {\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{2:N-1})}{f^{Q}_{\theta_{1% }}(\theta_{1}\mid\mathbf{S}_{2:N-1})}.$$ Now, $\theta_{1}$ is independent of $\mathbf{S}_{2:N-1}$ conditional on $\overline{\theta},$ and so $f^{Q}_{\theta_{1}}(\theta_{1}\mid\overline{\theta},\mathbf{S}_{2:N-1})=f^{Q}_{% \theta_{1}}(\theta_{1}\mid\overline{\theta})=f_{\theta^{\bot}}(\theta_{1}-% \overline{\theta}).$ Then $f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\theta_{1},\mathbf{S}_{2:N-1})$ is equivalently $$f^{Q}_{\overline{\theta}}(\overline{\theta}\mid\theta_{1},\mathbf{S}_{2:N-1})=% \frac{f_{\theta^{\bot}}(\theta_{1}-\overline{\theta})f_{\overline{\theta}}(% \overline{\theta}\mid\mathbf{S}_{2:N-1})}{f^{Q}_{\theta_{1}}(\theta_{1}\mid% \mathbf{S}_{2:N-1})}.$$ Inserting this into the previous expression for $g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})$ yields $$g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})=\frac{1}{f^{Q}_{\theta_{1}}(% \theta_{1}\mid\mathbf{S}_{2:N-1})}\int f_{\theta^{\bot}+\varepsilon}(S_{N}-% \overline{\theta}-a_{N})f_{\theta^{\bot}}(\theta_{1}-\overline{\theta})\,dF^{Q% }_{\overline{\theta}}(\overline{\theta}\mid\mathbf{S}_{2:N-1}).$$ Applying Lemma O.3 to an $(N-1)$-agent model implies that $f^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}_{2:N-1})$ is continuous in $\theta_{1}$. 
Meanwhile by assumption $f_{\theta^{\bot}+\varepsilon}(S_{N}-\overline{\theta}-a_{N})$ and $f_{\theta^{\bot}}(\theta_{1}-\overline{\theta})$ are both continuous in $(S_{N},\theta_{1})$ for every $\overline{\theta}$, and are uniformly bounded above for every $(\theta_{1},S_{N},\overline{\theta}).$ Then by the dominated convergence theorem the integral is also continuous in $(S_{N},\theta_{1}),$ ensuring that $g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})$ is a continuous function of $(S_{N},\theta_{1}).$ Finally, consider differentiating wrt $S_{N}.$ As $f^{\prime}_{\theta^{\bot}+\varepsilon}$ exists and is uniformly bounded, and $f_{\theta^{\bot}}$ is also uniformly bounded, the Leibniz integral rule ensures that $$\frac{\partial}{\partial S_{N}}g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1% })=\frac{1}{f^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}_{2:N-1})}\int f^{% \prime}_{\theta^{\bot}+\varepsilon}(S_{N}-\overline{\theta}-a_{N})f_{\theta^{% \bot}}(\theta_{1}-\overline{\theta})\,dF^{Q}_{\overline{\theta}}(\overline{% \theta}\mid\mathbf{S}_{2:N-1}).$$ Since $f^{\prime}_{\theta^{\bot}+\varepsilon}$ is also continuous, this expression is continuous in $(S_{N},\theta_{1})$ following the same logic which ensured that $g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})$ is continuous. Finally, let $M$ be an upper bound on $|f^{\prime}_{\theta^{\bot}+\varepsilon}|.$ Then as $$f^{Q}_{\theta_{1}}(\theta_{1}\mid\mathbf{S}_{2:N-1})=\int f_{\theta^{\bot}}(% \theta_{1}-\overline{\theta})\,dF^{Q}_{\overline{\theta}}(\overline{\theta}% \mid\mathbf{S}_{2:N-1}),$$ it follows that $\left|\frac{\partial}{\partial S_{N}}g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_% {2:N-1})\right|$ is uniformly bounded above by $M$ as well. So $g^{Q}_{N}(S_{N}\mid\theta_{1},\mathbf{S}_{2:N-1})$ satisfies condition 2. Now consider the circumstance linkage model. In this model $$g^{C}_{N}(S_{N}\mid\theta_{1}=t,S_{1}=s,\mathbf{S}_{2:N-1})=g^{C}_{N}(S_{N}% \mid\varepsilon_{1}=s-t-a_{1},\mathbf{S}_{2:N-1}),$$ as $\varepsilon_{1}=S_{1}-\theta_{1}-a_{1}$ and $S_{N}$ is independent of $S_{1}$ conditional on $\varepsilon_{1}$. It is therefore enough to establish that $g^{C}_{N}(S_{N}\mid\varepsilon_{1},\mathbf{S}_{2:N-1})$ is a $C^{1,0}$ function of $(S_{N},\varepsilon_{1})$ with uniform bounds on it and its derivative wrt $S_{N}.$ This follows from work nearly identical to the previous paragraph, substituting $\varepsilon_{1}$ for $\theta_{1}$ and $\overline{\varepsilon}$ for $\overline{\theta}.$ ∎ O.2 Proofs for the Gaussian Setting O.2.1 Verification of Assumptions in 2.6 Here we verify that Gaussian uncertainty satisfies the stated assumptions. Assumptions 2.1, 2.3, and 2.5 are immediate. Assumption 2.6 is satisfied for any strictly convex cost function, since the second derivative of the posterior expectation in each signal realization is zero. Assumption 2.4 is verified in the following lemma: Lemma O.5. Suppose $\xi\sim\mathcal{N}(0,\sigma^{2}).$ Then for any $\overline{\Delta}>0,$ the function $$J^{\ast}(\xi)=\frac{1}{\overline{\Delta}^{2}}\left(\exp\left(\frac{\overline{% \Delta}^{2}}{2\sigma^{2}}\right)+\exp\left(\frac{\overline{\Delta}|\xi|}{% \sigma^{2}}\right)-2\right)^{2}$$ satisfies $|J(\xi,\Delta)|\leq J^{\ast}(\xi)$ for every $\xi\in\mathbb{R}$ and $\Delta\in[-\overline{\Delta},\overline{\Delta}],$ and $\mathbb{E}[J^{\ast}(\xi)]<\infty.$ Proof. 
Under the distributional assumption on $\xi,$ the density function $f_{\xi}$ has the form $$f_{\xi}(\xi)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\xi^{2}}{2\sigma^% {2}}\right).$$ Therefore $$\frac{1}{\Delta}\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(\xi)}=\frac{% \exp\left(\frac{1}{\sigma^{2}}\Delta(\xi-\Delta/2)\right)-1}{\Delta}.$$ Now, we may equivalently write $$\displaystyle\frac{1}{\Delta}\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{\xi}(% \xi)}=$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\Delta/2}^{\xi}\exp\left(\frac{1}{% \sigma^{2}}\Delta(\widetilde{\xi}-\Delta/2)\right)\,d\widetilde{\xi}$$ $$\displaystyle=$$ $$\displaystyle\frac{\exp\left(-\frac{\Delta^{2}}{2\sigma^{2}}\right)}{\sigma^{2% }}\int_{\Delta/2}^{\xi}\exp\left(\frac{\Delta\widetilde{\xi}}{\sigma^{2}}% \right)\,d\widetilde{\xi}.$$ Hence $$\displaystyle\left|\frac{1}{\Delta}\frac{f_{\xi}(\xi-\Delta)-f_{\xi}(\xi)}{f_{% \xi}(\xi)}\right|=$$ $$\displaystyle\frac{\exp\left(-\frac{\Delta^{2}}{2\sigma^{2}}\right)}{\sigma^{2% }}\int_{\min\{\Delta/2,\xi\}}^{\max\{\Delta/2,\xi\}}\exp\left(\frac{\Delta% \widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\min\{\Delta/2,\xi\}}^{\max\{\Delta/2,% \xi\}}\exp\left(\frac{\Delta\widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{% \xi}.$$ Let $$H(\xi,\Delta)\equiv\frac{1}{\sigma^{2}}\int_{\min\{\Delta/2,\xi\}}^{\max\{% \Delta/2,\xi\}}\exp\left(\frac{\Delta\widetilde{\xi}}{\sigma^{2}}\right)\,d% \widetilde{\xi}.$$ We will show that $H(\xi,\Delta)\leq\sqrt{J^{\ast}(\xi)}$ for all $\xi$ and $\Delta\in[-\overline{\Delta},\overline{\Delta}]$ in cases, depending on the signs of $\xi,\Delta,$ and $\xi-\Delta/2.$ Case 1: $\xi\geq\Delta/2\geq 0.$ Then $$\displaystyle H(\xi,\Delta)=$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\Delta/2}^{\xi}\exp\left(\frac{\Delta% \widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{0}^{\xi}\exp\left(\frac{\overline{% \Delta}\widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}=\frac{1}{% \overline{\Delta}}\left(\exp\left(\frac{\overline{\Delta}\xi}{\sigma^{2}}% \right)-1\right)\leq\sqrt{J^{\ast}(\xi)}.$$ Case 2: $\xi\geq 0>\Delta/2.$ Then $$\displaystyle H(\xi,\Delta)=$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\Delta/2}^{\xi}\exp\left(\frac{\Delta% \widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\sigma^{2}}\left(\int_{0}^{\xi}\exp\left(\frac{\overline% {\Delta}\widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}+\int_{-\overline% {\Delta}/2}^{0}\exp\left(-\frac{\overline{\Delta}\widetilde{\xi}}{\sigma^{2}}% \right)\,d\widetilde{\xi}\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\overline{\Delta}}\left(\exp\left(\frac{\overline{\Delta% }\xi}{\sigma^{2}}\right)+\exp\left(\frac{\overline{\Delta}^{2}}{2\sigma^{2}}% \right)-2\right)=\sqrt{J^{\ast}(\xi)}.$$ Case 3: $\Delta/2>\xi\geq 0.$ Then $$\displaystyle H(\xi,\Delta)=$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\xi}^{\Delta/2}\exp\left(\frac{\Delta% \widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{0}^{\overline{\Delta}/2}\exp\left(\frac% {\overline{\Delta}\widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}=\frac{% 1}{\overline{\Delta}}\left(\exp\left(\frac{\overline{\Delta}^{2}}{2\sigma^{2}}% \right)-1\right)\leq\sqrt{J^{\ast}(\xi)}.$$ Case 4: $\Delta/2>0>\xi.$ Then $$\displaystyle H(\xi,\Delta)=$$ 
$$\displaystyle\frac{1}{\sigma^{2}}\int_{\xi}^{\Delta/2}\exp\left(\frac{\Delta% \widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\sigma^{2}}\left(\int_{0}^{\overline{\Delta}/2}\exp\left% (\frac{\overline{\Delta}\widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}+% \int_{\xi}^{0}\exp\left(-\frac{\overline{\Delta}\widetilde{\xi}}{\sigma^{2}}% \right)\,d\widetilde{\xi}\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\overline{\Delta}}\left(\exp\left(\frac{\overline{\Delta% }^{2}}{2\sigma^{2}}\right)+\exp\left(\frac{\overline{\Delta}|\xi|}{\sigma^{2}}% \right)-2\right)=\sqrt{J^{\ast}(\xi)}.$$ Case 5: $0\geq\Delta/2>\xi.$ Then $$\displaystyle H(\xi,\Delta)=$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\xi}^{\Delta/2}\exp\left(\frac{\Delta% \widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\xi}^{0}\exp\left(-\frac{\overline{% \Delta}\widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}=\frac{1}{% \overline{\Delta}}\left(\exp\left(\frac{\overline{\Delta}|\xi|}{\sigma^{2}}% \right)-1\right)\leq\sqrt{J^{\ast}(\xi)}.$$ Case 6: $0>\xi\geq\Delta/2.$ Then $$\displaystyle H(\xi,\Delta)=$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{\Delta/2}^{\xi}\exp\left(\frac{\Delta% \widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\sigma^{2}}\int_{-\overline{\Delta}/2}^{0}\exp\left(-% \frac{\overline{\Delta}\widetilde{\xi}}{\sigma^{2}}\right)\,d\widetilde{\xi}=% \frac{1}{\overline{\Delta}}\left(\exp\left(\frac{\overline{\Delta}^{2}}{2% \sigma^{2}}\right)-1\right)\leq\sqrt{J^{\ast}(\xi)}.$$ This establishes that $|J(\xi,\Delta)|\leq H(\xi,\Delta)^{2}\leq J^{\ast}(\xi)$ for every $\xi$ and $\Delta\in[-\overline{\Delta},\overline{\Delta}],$ as desired. It remains only to show that $J^{\ast}$ is $\mathcal{P}^{0}$-integrable. This follows because $$\displaystyle J^{\ast}(\xi)\leq$$ $$\displaystyle\frac{1}{\overline{\Delta}^{2}}\left(\exp\left(\frac{\overline{% \Delta}^{2}}{2\sigma^{2}}\right)+\exp\left(\frac{\overline{\Delta}|\xi|}{% \sigma^{2}}\right)\right)^{2}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\overline{\Delta}^{2}}\left(\exp\left(\frac{\overline{% \Delta}^{2}}{\sigma^{2}}\right)+2\exp\left(\frac{\overline{\Delta}^{2}}{\sigma% ^{2}}\right)\exp\left(\frac{\overline{\Delta}|\xi|}{\sigma^{2}}\right)+\exp% \left(\frac{2\overline{\Delta}|\xi|}{\sigma^{2}}\right)\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\overline{\Delta}^{2}}\left(\exp\left(\frac{\overline{% \Delta}^{2}}{\sigma^{2}}\right)+2\exp\left(\frac{\overline{\Delta}^{2}}{\sigma% ^{2}}\right)\left(\exp\left(\frac{\overline{\Delta}\xi}{\sigma^{2}}\right)+% \exp\left(-\frac{\overline{\Delta}\xi}{\sigma^{2}}\right)\right)\right.$$ $$\displaystyle\quad\quad\quad\left.+\exp\left(\frac{2\overline{\Delta}\xi}{% \sigma^{2}}\right)+\exp\left(-\frac{2\overline{\Delta}\xi}{\sigma^{2}}\right)\right)$$ The first term is a constant, while each of the remaining terms is proportional to a lognormal random variable. Thus each term has finite mean, and hence so does $J^{\ast}(\xi).$ ∎ O.2.2 Marginal Value of Effort Consider the quality linkage model, and suppose that agent $i$ chooses effort $a_{i}=a^{*}+\Delta$ while all agents $j\neq i$ choose the equilibrium effort level $a^{*}$. The principal’s posterior belief about $\overline{\theta}+\theta_{i}^{\perp}$ is independent of $\mathbf{S}_{-i}$ conditional on $\overline{\theta}$. 
Thus, using standard formulas for updating to normal signals, we can first update the principal’s belief about $\overline{\theta}$ to $\overline{\theta}\mid\mathbf{S}_{-i}\sim\mathcal{N}\left(\hat{\mu}_{\overline{% \theta}},\hat{\sigma}_{\overline{\theta}^{2}}\right)$, where $$\displaystyle\hat{\mu}_{\overline{\theta}}\equiv\frac{(N-1)\sigma_{\overline{% \theta}}^{2}\cdot(\overline{S}_{-i}-a^{*})+(\sigma_{\theta^{\perp}}^{2}+\sigma% _{\varepsilon}^{2})\cdot\mu}{(N-1)\sigma_{\overline{\theta}}^{2}+\sigma_{% \theta^{\perp}}^{2}+\sigma_{\varepsilon}^{2}},\quad\hat{\sigma}_{\overline{% \theta}}^{2}\equiv\frac{\sigma_{\overline{\theta}}^{2}}{(N-1)\sigma_{\overline% {\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{\varepsilon}^{2}}.$$ and $\overline{S}_{-i}$ is the average outcome. The principal’s expectation of $\overline{\theta}+\theta_{i}^{\perp}$ after further updating to $S_{i}$ is $$\displaystyle\mathbb{E}(\overline{\theta}+\theta_{i}^{\perp}\mid\textbf{S})$$ $$\displaystyle=\frac{\sigma_{\varepsilon}^{2}}{\hat{\sigma}_{\overline{\theta}^% {2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{\varepsilon}^{2}}\cdot(\overline{S}_{% -i}-a^{*})+\frac{\hat{\sigma}_{\overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^% {2}}{\hat{\sigma}_{\overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{% \varepsilon}^{2}}\cdot(S_{i}-a^{*}).$$ Taking an expectation with respect to the agent’s prior belief, we have: $$\displaystyle\mu_{N}(\Delta)=\mathbb{E}\left[\mathbb{E}(\overline{\theta}+% \theta_{i}^{\perp}\mid S)\right]$$ $$\displaystyle=\frac{\sigma_{\varepsilon}^{2}}{\hat{\sigma}_{\overline{\theta}^% {2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{\varepsilon}^{2}}\cdot\mu+\frac{\hat{% \sigma}_{\overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}}{\hat{\sigma}_{% \overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{\varepsilon}^{2}}% \cdot(\mu+\Delta)$$ $$\displaystyle=\mu+\frac{\hat{\sigma}_{\overline{\theta}^{2}}+\sigma_{\theta^{% \perp}}^{2}}{\hat{\sigma}_{\overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}+% \sigma_{\varepsilon}^{2}}\cdot\Delta$$ and the marginal value of effort is $$\displaystyle\mu_{N}^{\prime}(\Delta)$$ $$\displaystyle=\frac{\hat{\sigma}_{\overline{\theta}^{2}}+\sigma_{\theta^{\perp% }}^{2}}{\hat{\sigma}_{\overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}+% \sigma_{\varepsilon}^{2}}$$ $$\displaystyle=\left(\frac{\sigma_{\overline{\theta}}^{2}}{(N-1)\sigma_{% \overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{\varepsilon}^{2}}+% \sigma_{\theta^{\perp}}^{2}\right)/\left(\frac{\sigma_{\overline{\theta}}^{2}}% {(N-1)\sigma_{\overline{\theta}^{2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{% \varepsilon}^{2}}+\sigma_{\theta^{\perp}}^{2}+\sigma_{\varepsilon}^{2}\right).$$ (O.1) It is straightforward to verify that this expression is independent of $\Delta$, decreasing in $N$, and converges to $\sigma_{\theta^{\perp}}^{2}/\left(\sigma_{\theta^{\perp}}^{2}+\sigma_{% \varepsilon}^{2}\right)$ as $N\rightarrow\infty$. Consider now the circumstance linkage model. 
Using parallel arguments to those above, the principal’s posterior belief about the common part of the noise shock $\overline{\varepsilon}$ after updating to $\mathbf{S}_{-i}$ is $$\overline{\varepsilon}\mid\mathbf{S}_{-i}\sim\mathcal{N}\left(\frac{(N-1)% \sigma_{\overline{\varepsilon}}^{2}}{(N-1)\sigma_{\overline{\varepsilon}}^{2}+% \sigma_{\theta}^{2}+\sigma_{\varepsilon^{\perp}}^{2}}\cdot\left(\overline{S}_{% -i}-a^{*}-\mu\right),\frac{\sigma_{\overline{\varepsilon}}^{2}(\sigma_{% \varepsilon^{\perp}}^{2}+\sigma_{\theta}^{2})}{(N-1)\sigma_{\overline{% \varepsilon}}^{2}+\sigma_{\varepsilon^{\perp}}^{2}+\sigma_{\theta}^{2}}\right)% \equiv\mathcal{N}(\eta,\hat{\sigma}_{\overline{\varepsilon}}^{2})$$ and the principal’s posterior expectation of $\theta_{i}$ after further updating to $S_{i}$ is $$\displaystyle\mathbb{E}(\theta_{i}\mid\mathbf{S})=\frac{\sigma_{\theta}^{2}}{% \sigma_{\theta}^{2}+\hat{\sigma}_{\overline{\varepsilon}}^{2}+\sigma_{% \varepsilon^{\perp}}^{2}}\cdot\left(S_{i}-\eta\right)+\frac{\hat{\sigma}_{% \overline{\varepsilon}}^{2}+\sigma_{\varepsilon^{\perp}}^{2}}{\sigma_{\theta}^% {2}+\hat{\sigma}_{\overline{\varepsilon}}^{2}+\sigma_{\varepsilon^{\perp}}^{2}% }\cdot\mu$$ Since in the agent’s prior, $\mathbb{E}(S_{i})=\mu+\Delta$ and $\mathbb{E}(\eta)=0$, the agent’s expectation of the principal’s forecast is $$\mu_{N}(\Delta)=\mathbb{E}(\theta_{i}\mid\mathbf{S})=\mu+\frac{\sigma_{\theta}% ^{2}}{\sigma_{\theta}^{2}+\hat{\sigma}_{\overline{\varepsilon}}^{2}+\sigma_{% \varepsilon^{\perp}}^{2}}\cdot\Delta$$ implying that the marginal value of effort is $$\displaystyle\mu^{\prime}_{N}(\Delta)$$ $$\displaystyle=\sigma_{\theta}^{2}/(\sigma_{\theta}^{2}+\hat{\sigma}_{\overline% {\varepsilon}}^{2}+\sigma_{\varepsilon^{\bot}}^{2})$$ $$\displaystyle=\sigma_{\theta}^{2}/\left(\sigma_{\theta}^{2}+\frac{\sigma_{% \overline{\varepsilon}}^{2}(\sigma_{\varepsilon^{\perp}}^{2}+\sigma_{\theta}^{% 2})}{(N-1)\sigma_{\overline{\varepsilon}}^{2}+\sigma_{\varepsilon^{\perp}}^{2}% +\sigma_{\theta}^{2}}+\sigma_{\varepsilon^{\bot}}^{2}\right)$$ (O.2) This expression is constant in $\Delta$, increasing in $N$, and converges to $\sigma_{\theta}^{2}/(\sigma_{\theta}^{2}+\sigma_{\varepsilon^{\bot}}^{2})$ as $N$ grows large. O.3 Proofs for Section 7 (Extensions) O.3.1 Proof of Proposition 7.1 Consider first the quality linkage model. Let $\mu_{m}(\Delta)$ be the agent’s value of distortion when $m\in\{0,...J\}$ linkages have been identified. As in the main model, this value is differentiable and independent of the action the principal expects the agent to take. (See the proof of Lemma C.1.) 
Agent 0’s equilibrium effort is then determined by $$\mu^{\prime}_{m}(0)=C^{\prime}(a_{0}).$$ We prove that $\mu^{\prime}_{m}(0)>\mu^{\prime}_{m+1}(0)$ for every $m.$ Let $\mathbf{S}^{j}=(S^{j}_{1},...,S^{j}_{N_{j}})$ be the vector of signal realizations for each segment $j,$ and $\mathbf{S}^{1:m}$ for the matrix of signal realizations for all signal realizations from segments 1 through $m.$ We will write $G^{j}$ for the distribution function of each $\mathbf{S}^{j},$ and $G^{0:m}$ for the distribution function of $(S^{0},\mathbf{S}^{1:m}).$ Dropping explicit conditioning on actions for convenience, a change of variables as in the proof of Lemma 3.1 allows us to write $\mu_{m}(\Delta)$ and $\mu_{m+1}(\Delta)$ as $$\mu_{m}(\Delta)=\int dG^{0:m}(S_{0}=s_{0},\mathbf{S}^{1:m})\,\mathbb{E}[\theta% _{0}\mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:m}]$$ and $$\mu_{m+1}(\Delta)=\int dG^{0:m+1}(S_{0}=s_{0},\mathbf{S}^{1:m+1})\,\mathbb{E}[% \theta_{0}\mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:m+1}]$$ for some common set of actions. The law of iterated expectations applied to $\mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:m}]$ allows the previous expression for $\mu_{m}(\Delta)$ to be expanded as $$\displaystyle\mu_{m}(\Delta)=\int$$ $$\displaystyle dG^{0:m}(S_{0}=s_{0},\mathbf{S}^{1:m})$$ $$\displaystyle\times\int dG^{m+1}(\mathbf{S}^{m+1}\mid S_{0}=s_{0}+\Delta,% \mathbf{S}^{0:m})\,\mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:% m+1}].$$ Meanwhile the law of iterated expectations applied to the outer expectation allows $\mu_{m+1}(\Delta)$ to be expanded as $$\displaystyle\mu_{m+1}(\Delta)=\int$$ $$\displaystyle dG^{0:m}(S_{0}=s_{0},\mathbf{S}^{1:m})$$ $$\displaystyle\times\int dG^{m+1}(\mathbf{S}^{m+1}\mid S_{0}=s_{0},\mathbf{S}^{% 0:m})\,\mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:m+1}].$$ Each of these inner integrals may be further expanded using the law of total probability, yielding $$\displaystyle\mu_{m}(\Delta)=\int$$ $$\displaystyle dG^{0:m}(S_{0}=s_{0},\mathbf{S}^{1:m})$$ $$\displaystyle\times\int dF_{\overline{\theta}^{m+1}}(\overline{\theta}^{m+1}% \mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:m})$$ $$\displaystyle\quad\quad\quad\times\int dG^{m+1}(\mathbf{S}^{m+1}\mid\overline{% \theta}^{m+1})\,\mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:m+1}]$$ and $$\displaystyle\mu_{m+1}(\Delta)=\int$$ $$\displaystyle dG^{0:m}(S_{0}=s_{0},\mathbf{S}^{1:m})$$ $$\displaystyle\times\int dF_{\overline{\theta}^{m+1}}(\overline{\theta}_{m+1}% \mid S_{0}=s_{0},\mathbf{S}^{1:m})$$ $$\displaystyle\quad\quad\quad\times\int dG^{m+1}(\mathbf{S}^{m+1}\mid\overline{% \theta}^{m+1})\,\mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\Delta,\mathbf{S}^{1:m+1}]$$ where we have used the fact that $\mathbf{S}^{m+1}$ is independent of $(S_{0},\mathbf{S}^{1:m})$ conditional on $\overline{\theta}_{m+1}$ to drop extraneous conditioning in the inner expectation. 
So define a function $\psi(\delta_{1},\delta_{2},s_{0},\mathbf{S}^{1:m})$ by $$\displaystyle\psi(\delta_{1},\delta_{2},s_{0},\mathbf{S}^{1:m})\equiv\int$$ $$\displaystyle dF_{\overline{\theta}^{m+1}}(\overline{\theta}^{m+1}\mid S_{0}=s% _{0}+\delta_{1},\mathbf{S}^{1:m})$$ $$\displaystyle\times\int dG^{m+1}(\mathbf{S}^{m+1}\mid\overline{\theta}^{m+1})% \,\mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\delta_{2},\mathbf{S}^{1:m+1}].$$ Then for every $\Delta>0$ we have $$\frac{1}{\Delta}\mu_{m}(\Delta)-\mu_{m+1}(\Delta)=\int dG^{0:m}(S_{0}=s_{0},% \mathbf{S}^{1:m})\,\frac{1}{\Delta}\left(\psi(\Delta,\Delta,s_{0},\mathbf{S}^{% 1:m})-\psi(\Delta,0,s_{0},\mathbf{S}^{1:m})\right).$$ Since $$\mu^{\prime}_{m}(0)-\mu^{\prime}_{m+1}(0)=\lim_{\Delta\downarrow 0}(\mu_{m}(% \Delta)-\mu)-\lim_{\Delta\downarrow 0}(\mu_{m+1}(\Delta)-\mu)=\lim_{\Delta% \downarrow 0}(\mu_{m}(\Delta)-\mu_{m+1}(\Delta),$$ It is therefore sufficient to determine the limiting behavior of $$\frac{1}{\Delta}\left(\psi(\Delta,\Delta,s_{0},\mathbf{S}^{1:m})-\psi(\Delta,0% ,s_{0},\mathbf{S}^{1:m})\right)$$ as $\Delta\downarrow 0.$ Note that $$S^{m+1}_{i}=\overline{\theta}^{m}+\theta^{\bot,m}_{i}+\varepsilon_{i},$$ where the densities of $\theta^{\bot,m}_{i}$ and $\varepsilon_{i}$ each exist and are bounded by assumption. Then there exists a differentiable distribution function $H$ with bounded derivative such that $G^{m+1}_{i}(S^{m+1}_{i}\mid\overline{\theta}^{m+1})=H(S^{m+1}_{i}-\overline{% \theta}^{m+1})$ for each agent $i$ in segment $m+1.$ Since the elements of $\mathbf{S}^{m+1}$ are independent conditional on $\overline{\theta}^{m+1},$ we may write $$G^{m+1}(\mathbf{S}^{m+1}\mid\mathbf{\theta}^{m+1})=\prod_{i=1}^{N_{m+1}}H(S^{m% +1}_{i}-\overline{\theta}^{m+1}).$$ A change of variables therefore yields $$\displaystyle\int dG^{m+1}(\mathbf{S}^{m+1}\mid\overline{\theta}^{m+1})\,% \mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\delta_{1},\mathbf{S}^{1:m+1}]$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}dq_{1}...\int_{0}^{1}dq_{N_{m+1}}\mathbb{E}[\theta_{0% }\mid S_{0}=s_{0}+\delta_{2},\mathbf{S}^{1:m},\mathbf{S}^{m+1}=(H^{-1}(q_{i})+% \overline{\theta}^{m+1})_{i=1...,N_{m}}].$$ Now fix $s_{0}$ and $\mathbf{S}^{1:m},$ and denote the integrand of this representation $$\zeta(z,\delta,\mathbf{q})\equiv\mathbb{E}[\theta_{0}\mid S_{0}=s_{0}+\delta,% \mathbf{S}^{1:m},\mathbf{S}^{m+1}=(H^{-1}(q_{i})+z)_{i=1...,N_{m}}],$$ where $\mathbf{q}\equiv(q_{1},...,q_{N_{m+1}}).$ Using techniques very similar to that used to prove Lemma B.3, it can be shown that there exists a $C^{1}$ quantile function $\phi(q,\delta)$ satisfying $\partial\phi/\partial\delta>0$ such that $F_{\overline{\theta}^{m+1}}(\phi(q,\delta)\mid S_{0}=s_{0}+\delta,\mathbf{S}^{% 1:m})=q$ for every $q_{0}$ and $\Delta.$ Then by a further change of variables, $\psi$ may be written $$\psi(\delta_{1},\delta_{2},s_{0},\mathbf{S}^{1:m})=\int_{0}^{1}dq_{0}...\int_{% 0}^{1}dq_{N_{m+1}}\,\zeta(\phi(q_{0},\delta_{1}),\delta_{2},\mathbf{q}).$$ By assumption, $\mathbb{E}[\theta_{0}\mid S_{0},\mathbf{S}^{1:m+1}]$ is differentiable wrt each $S^{m+1}_{i},$ and by arguments very similar to those used to prove Lemma B.8, it can be shown that $\frac{\partial}{\partial S^{m+1}_{i}}\mathbb{E}[\theta_{0}\mid S_{0},\mathbf{S% }^{1:m+1}]>0$ for every $i=1,...,m+1.$ Since additionally $\partial\phi/\partial\delta>0$ everywhere, it follows that $$\zeta(\phi(q_{0},\Delta),\Delta,\mathbf{q})>\zeta(\phi(q_{0},0),\Delta,\mathbf% {q})$$ for every $\Delta>0$ and $(q_{0},\mathbf{q}),$ and thus that 
$$\displaystyle\frac{1}{\Delta}\left(\psi(\Delta,\Delta,s_{0},\mathbf{S}^{1:m})-% \psi(\Delta,0,s_{0},\mathbf{S}^{1:m})\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}dq_{0}...\int_{0}^{1}dq_{N_{m+1}}\,\frac{1}{\Delta}(% \zeta(\phi(q_{0},\Delta),\Delta,\mathbf{q})-\zeta(\phi(q_{0},0),\Delta,\mathbf% {q}))$$ is strictly positive for every $\Delta>0.$ Since this result holds for every $(s_{0},\mathbf{S}^{1:N}),$ Fatou’s lemma therefore implies that $$\displaystyle\mu^{\prime}_{m}(0)-\mu^{\prime}_{m+1}(0)\geq\int dG^{0:m}(S_{0}=% s_{0},\mathbf{S}^{1:m})\,\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}\left(% \psi(\Delta,\Delta,s_{0},\mathbf{S}^{1:m})-\psi(\Delta,0,s_{0},\mathbf{S}^{1:m% })\right)$$ and $$\displaystyle\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}\left(\psi(\Delta,% \Delta,s_{0},\mathbf{S}^{1:m})-\psi(\Delta,0,s_{0},\mathbf{S}^{1:m})\right)$$ $$\displaystyle\geq$$ $$\displaystyle\int_{0}^{1}dq_{0}...\int_{0}^{1}dq_{N_{m+1}}\,\lim_{\Delta% \downarrow 0}\frac{1}{\Delta}(\zeta(\phi(q_{0},\Delta),\Delta,\mathbf{q})-% \zeta(\phi(q_{0},0),\Delta,\mathbf{q})).$$ Further, the integrand of the previous expression can be equivalently written $$\displaystyle\frac{1}{\Delta}(\zeta(\phi(q_{0},\Delta),\Delta,\mathbf{q})-% \zeta(\phi(q_{0},0),\Delta,\mathbf{q}))$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\Delta}(\zeta(\phi(q_{0},\Delta),\Delta,\mathbf{q})-% \zeta(\phi(q_{0},0),0,\mathbf{q}))-\frac{1}{\Delta}(\zeta(\phi(q_{0},0),\Delta% ,\mathbf{q})-\zeta(\phi(q_{0},0),0,\mathbf{q})).$$ Now, by assumption $\mathbb{E}[\theta_{0}\mid S_{0},\mathbf{S}^{1:m+1}]$ is differentiable wrt $S_{0}$ and each $S^{m+1}_{i},$ and each derivative is continuous in $(S_{0},\mathbf{S}^{m+1}).$ Hence $\mathbb{E}[\theta_{0}\mid S_{0},\mathbf{S}^{1:m+1}]$ is a totally differentiable function of $(S_{0},\mathbf{S}^{1:m+1})$ everywhere. Thus by the chain rule $$\displaystyle\lim_{\Delta\downarrow 0}\frac{1}{\Delta}(\zeta(\phi(q_{0},\Delta% ),\Delta,\mathbf{q})-\zeta(\phi(q_{0},0),\Delta,\mathbf{q}))$$ $$\displaystyle=$$ $$\displaystyle\frac{\partial}{\partial S_{0}}\mathbb{E}[\theta_{0}\mid S_{0}=s_% {0},\mathbf{S}^{1:m},\mathbf{S}^{m+1}=(H^{-1}(q_{i})+\phi(q_{0},0))_{i=1...,N_% {m}}]$$ $$\displaystyle+\sum_{i=1}^{N_{m}}\frac{\partial}{\partial S^{m+1}_{i}}\mathbb{E% }[\theta_{0}\mid S_{0}=s_{0},\mathbf{S}^{1:m},\mathbf{S}^{m+1}=(H^{-1}(q_{i})+% \phi(q_{0},0))_{i=1...,N_{m}}]\frac{\partial\phi}{\partial\Delta}(q_{0},0)$$ $$\displaystyle-\frac{\partial}{\partial S_{0}}\mathbb{E}[\theta_{0}\mid S_{0}=s% _{0},\mathbf{S}^{1:m},\mathbf{S}^{m+1}=(H^{-1}(q_{i})+\phi(q_{0},0))_{i=1...,N% _{m}}]$$ $$\displaystyle=$$ $$\displaystyle\sum_{i=1}^{N_{m}}\frac{\partial}{\partial S^{m+1}_{i}}\mathbb{E}% [\theta_{0}\mid S_{0}=s_{0},\mathbf{S}^{1:m},\mathbf{S}^{m+1}=(H^{-1}(q_{i})+% \phi(q_{0},0))_{i=1...,N_{m}}]\frac{\partial\phi}{\partial\Delta}(q_{0},0).$$ As noted earlier, each of these derivatives is strictly positive, and so it follows that the entire limit is strictly positive. Thus $$\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}\left(\psi(\Delta,\Delta,s_{0},% \mathbf{S}^{1:m})-\psi(\Delta,0,s_{0},\mathbf{S}^{1:m})\right)>0$$ everywhere, meaning in turn that $\mu^{\prime}_{m}(0)-\mu^{\prime}_{m+1}(0)>0.$ In other words, the marginal value of effort is declining in $m$ in the quality linkage model. 
The result for the circumstance linkage model proceeds nearly identically, with the key difference that now an analog of Lemma B.8 implies that $\frac{\partial}{\partial S^{m+1}_{i}}\mathbb{E}[\theta_{0}\mid S_{0},\mathbf{S% }^{1:m+1}]<0$ for every $i.$ Thus $$\displaystyle\lim_{\Delta\downarrow 0}\frac{1}{\Delta}(\zeta(\phi(q_{0},0),% \Delta,\mathbf{q})-\zeta(\phi(q_{0},\Delta),\Delta,\mathbf{q}))>0$$ everywhere, so that $$\liminf_{\Delta\downarrow 0}\frac{1}{\Delta}\left(\psi(\Delta,0,s_{0},\mathbf{% S}^{1:m})-\psi(\Delta,\Delta,s_{0},\mathbf{S}^{1:m})\right)>0$$ everywhere and hence $\mu^{\prime}_{m+1}(0)-\mu^{\prime}_{m}(0)>0.$ So the marginal value of effort is rising in $m$ in the circumstance linkage model.
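As a quick numerical sanity check of the closed-form Gaussian expressions (O.1) and (O.2) derived in Section O.2.2, the following sketch evaluates both marginal values of effort over a range of segment sizes $N$ and verifies the monotonicity and limiting claims stated there. The variance parameters are purely illustrative placeholders (not taken from the text), and the check concerns only the displayed formulas, not the underlying model.

```python
import numpy as np

# Illustrative (hypothetical) variance parameters; any strictly positive values work.
s2_tbar, s2_tperp, s2_eps = 1.0, 0.5, 2.0      # quality linkage: Var(theta_bar), Var(theta_perp), Var(eps)
s2_theta, s2_ebar, s2_eperp = 1.0, 0.5, 2.0    # circumstance linkage: Var(theta), Var(eps_bar), Var(eps_perp)

N = np.arange(2, 20001)

# Quality linkage, Eq. (O.1), with the posterior-variance term written out as in the text.
hat_q = s2_tbar / ((N - 1) * s2_tbar + s2_tperp + s2_eps)
mv_quality = (hat_q + s2_tperp) / (hat_q + s2_tperp + s2_eps)

# Circumstance linkage, Eq. (O.2).
hat_c = s2_ebar * (s2_eperp + s2_theta) / ((N - 1) * s2_ebar + s2_eperp + s2_theta)
mv_circumstance = s2_theta / (s2_theta + hat_c + s2_eperp)

assert np.all(np.diff(mv_quality) < 0)        # decreasing in N, as claimed for (O.1)
assert np.all(np.diff(mv_circumstance) > 0)   # increasing in N, as claimed for (O.2)
print(mv_quality[-1], s2_tperp / (s2_tperp + s2_eps))         # approaches the stated limit
print(mv_circumstance[-1], s2_theta / (s2_theta + s2_eperp))  # approaches the stated limit
```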
A linearly implicit structure-preserving scheme for the Camassa-Holm equation based on the multiple scalar auxiliary variables approach Chaolong Jiang${}^{1}$, Yuezheng Gong${}^{2}$, Wenjun Cai${}^{3}$, Yushun Wang${}^{3}$ (corresponding author; Email: wangyushun@njnu.edu.cn) ${}^{1}$ School of Statistics and Mathematics, Yunnan University of Finance and Economics, Kunming 650221, P.R. China ${}^{2}$ College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, P.R. China ${}^{3}$ Jiangsu Provincial Key Laboratory for NSLSCS, School of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, P.R. China Abstract In this paper, we present a linearly implicit energy-preserving scheme for the Camassa-Holm equation by using the multiple scalar auxiliary variables approach, which was originally developed to construct efficient and robust energy-stable schemes for gradient systems. The Camassa-Holm equation is first reformulated into an equivalent system by utilizing the multiple scalar auxiliary variables approach; this system inherits a modified energy. Then, the system is discretized in space by the standard Fourier pseudo-spectral method, and the resulting semi-discrete system is proven to preserve a semi-discrete modified energy. Subsequently, the linearized Crank-Nicolson method is applied to the semi-discrete system to arrive at a fully discrete scheme. The main feature of the new scheme is that it yields a linear system with a constant coefficient matrix at each time step and produces numerical solutions along which the modified energy is precisely conserved, as is the case for the analytical solution. Several numerical experiments are presented to confirm the accuracy and efficiency of the proposed scheme. AMS subject classification: 65M06, 65M70 Keywords: Multiple scalar auxiliary variables approach, linearly implicit scheme, energy-preserving scheme, Camassa-Holm equation. 1 Introduction In this paper, we consider the Camassa-Holm (CH) equation [2, 3] $$\displaystyle\left\{\begin{aligned} &\displaystyle u_{t}-u_{xxt}+3uu_{x}-2u_{x}u_{xx}-uu_{xxx}=0,\ a<x<b,\ 0<t\leq T,\\ &\displaystyle u(x,0)=u_{0}(x),\ a\leq x\leq b,\\ &\displaystyle u(x+L,t)=u(x,t),\ a\leq x\leq b,\ 0\leq t\leq T,\end{aligned}\right.$$ (1.1) where $t$ is time, $x$ is the spatial coordinate, $L=b-a$, and $u(x,t)$ represents the water’s free surface in non-dimensional variables. The CH equation models the unidirectional propagation of shallow water waves over a flat bottom and is completely integrable [2, 6]. Thus, it has infinitely many conservation laws. The first three are $$\displaystyle\frac{d}{dt}\mathcal{M}=0,\ \mathcal{M}=\int_{a}^{b}udx,$$ (1.2) $$\displaystyle\frac{d}{dt}\mathcal{I}=0,\ \mathcal{I}=\int_{a}^{b}(u^{2}+u_{x}^{2})dx,$$ (1.3) $$\displaystyle\frac{d}{dt}\mathcal{H}=0,\ \mathcal{H}=-\frac{1}{2}\int_{a}^{b}\Big{(}u^{3}+uu_{x}^{2}\Big{)}dx,$$ (1.4) where $\mathcal{M}$, $\mathcal{I}$ and $\mathcal{H}$ are the mass, momentum and energy of the CH equation (1.1), respectively. This paper is concerned with numerical methods that preserve the energy. Because the energy is the most important first integral of the CH equation, the design of energy-preserving methods has attracted a lot of interest. In Ref. [15], Matsuo et al. presented an energy-conserving Galerkin scheme for the CH equation. Further analysis of the energy-preserving $H^{1}$-Galerkin scheme was carried out in Ref. [16]. 
Later on, Cohen and Raynaud [5] derived a new energy-preserving scheme by the discrete gradient approach. Recently, Gong and Wang [9] proposed an energy-preserving wavelet collocation scheme for the CH equation (1.1). However, such energy-preserving schemes are fully implicit and typically need iterative solvers at each time step, which quickly becomes a computationally expensive procedure. To address this drawback and maintain the desired energy-preserving property, Eidnes et al. [8] constructed two linearly implicit energy-preserving schemes for the CH equation (1.1) using Kahan's method and polarised discrete gradient methods, respectively. In Ref. [13], we proposed a novel linearly implicit energy-preserving scheme for the CH equation (1.1) using the invariant energy quadratization (IEQ) approach [10, 22, 23]. At each time step, linearly implicit schemes only require the solution of a linear system, which leads to considerably lower costs than fully implicit ones [7]. However, these schemes lead to a linear system with complicated variable coefficients at each time step, which may be difficult or expensive to solve. More recently, inspired by the scalar auxiliary variable (SAV) approach [18, 19], Cai et al. developed a linearly implicit energy-conserving scheme for the sine-Gordon equation [1]. The resulting scheme leads to a linear system with constant coefficients that is easy to implement. The purpose of this paper is to apply the idea of the SAV approach to develop an efficient and energy-preserving scheme for the CH equation (1.1). However, the classical SAV approach cannot be directly applied to develop energy-preserving schemes for the CH equation. Following the classical SAV approach, we would need to introduce the auxiliary variable $$\displaystyle q=\sqrt{\int_{a}^{b}\Big{(}u^{3}+uu_{x}^{2}\Big{)}dx+C_{0}},$$ where $C_{0}$ is a constant large enough to make $q$ well-defined. The energy is then rewritten as $$\displaystyle\mathcal{H}=-\frac{1}{2}q^{2}+\frac{1}{2}C_{0}.$$ (1.5) According to the energy variational, the CH equation (1.1) can be reformulated into an equivalent system, as follows: $$\displaystyle\left\{\begin{aligned} &\displaystyle\partial_{t}u=\mathcal{D}\Bigg{(}-\frac{3u^{2}+u_{x}^{2}}{2\sqrt{(u^{3}+uu_{x}^{2},1)+C_{0}}}q+\frac{\partial_{x}\big{(}2quu_{x}\big{)}}{2\sqrt{(u^{3}+uu_{x}^{2},1)+C_{0}}}\Bigg{)},\\ &\displaystyle\partial_{t}q=0,\\ &\displaystyle u(x,0)=u_{0}(x),\ q(0)=\sqrt{\int_{a}^{b}\Big{(}u_{0}(x)^{3}+u_{0}(x)\partial_{x}u_{0}(x)^{2}\Big{)}dx+C_{0}},\\ &\displaystyle u(x+L,t)=u(x,t),\end{aligned}\right.$$ (1.6) where $\mathcal{D}=(1-\partial_{xx})^{-1}\partial_{x}$ is a skew-adjoint operator. This clearly demonstrates that the classical SAV approach is invalid for the CH equation. To meet this challenge, we first split the energy (1.4) into three parts, two of which are bounded from below while the remaining one is quadratic. Then, we utilize the multiple scalar auxiliary variables (MSAV) approach [20] to transform the original system into an equivalent form, which inherits a modified energy. Subsequently, a novel linearly implicit energy-preserving scheme is proposed by applying a linearly implicit structure-preserving method in time and the standard Fourier pseudo-spectral method in space to the equivalent system. We show that the proposed scheme exactly preserves the discrete modified energy and only requires the solution of a linear system with a constant coefficient matrix at each time step, which can be solved efficiently by FFT solvers. 
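Since $\mathcal{D}=(1-\partial_{xx})^{-1}\partial_{x}$ acts diagonally in Fourier space for periodic functions, the constant-coefficient linear algebra referred to above can indeed be handled entirely with FFTs. The following minimal sketch (our illustration, not part of the paper; the grid size and test function are arbitrary) applies $\mathcal{D}$ on a periodic grid and checks it against the exact result $\mathcal{D}\sin x=\tfrac{1}{2}\cos x$ on $[0,2\pi)$.

```python
import numpy as np

def apply_D(u, L):
    """Apply D = (1 - d_xx)^{-1} d_x to samples of an L-periodic function via FFT."""
    N = u.size
    k = 2.0 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)  # discrete wavenumbers
    u_hat = np.fft.fft(u)
    # In Fourier space, d_x -> i*k and (1 - d_xx) -> 1 + k^2, so D has symbol i*k/(1 + k^2).
    return np.real(np.fft.ifft(1j * k / (1.0 + k ** 2) * u_hat))

L = 2.0 * np.pi
x = np.linspace(0.0, L, 128, endpoint=False)
print(np.max(np.abs(apply_D(np.sin(x), L) - 0.5 * np.cos(x))))  # near machine precision
```

The same observation underlies the claim above that the constant-coefficient linear systems arising at each time step can be solved efficiently with FFT solvers.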
The MSAV approach was recently proposed by Shen et al. [20] to deal with free energies with multiple disparate terms in the phase-field vesicle membrane model, and it leads to robust energy-stable schemes which enjoy the same computational advantages as the classical SAV approach. To the best of our knowledge, there is no result concerning the MSAV approach for energy-conserving systems. Taking the CH equation (1.1) as an example, we first explore the feasibility of the MSAV approach and then devise a linearly implicit energy-preserving scheme. In addition, we give the first example of an energy-conserving system for which the classical SAV approach is invalid. The rest of this paper is organized as follows. In Section 2, based on the MSAV approach, the CH equation (1.1) is reformulated into an equivalent form. A semi-discrete system, which inherits the semi-discrete modified energy, is presented in Section 3. In Section 4, we concentrate on the construction of the linearly implicit energy-preserving scheme. Several numerical experiments are reported in Section 5. We draw some conclusions in Section 6. 2 Model reformulation using the MSAV approach In this section, we first reformulate the CH equation into an equivalent form with a quadratic energy functional using the idea of the MSAV approach. The resulting reformulation provides an elegant platform for developing linearly implicit energy-preserving schemes. The energy functional (1.4) can be split into the following three parts $$\displaystyle\mathcal{H}$$ $$\displaystyle=-\frac{1}{2}\int_{a}^{b}(u+\frac{1}{2})^{2}(u^{2}+u_{x}^{2})dx+\frac{1}{2}\int_{a}^{b}u^{2}(u^{2}+u_{x}^{2})dx+\frac{1}{8}\int_{a}^{b}(u^{2}+u_{x}^{2})dx$$ $$\displaystyle:=-\frac{1}{2}\int_{a}^{b}g(u,u_{x})dx+\frac{1}{2}\int_{a}^{b}h(u,u_{x})dx+\frac{1}{8}\int_{a}^{b}(u^{2}+u_{x}^{2})dx.$$ (2.1) Here $g(u,u_{x})=(u+\frac{1}{2})^{2}(u^{2}+u_{x}^{2})$ and $h(u,u_{x})=u^{2}(u^{2}+u_{x}^{2})$. Subsequently, following the idea of the MSAV approach, we introduce two scalar auxiliary variables, as follows: $$\displaystyle q_{1}=\sqrt{(g(u,u_{x}),1)},\ q_{2}=\sqrt{(h(u,u_{x}),1)},$$ where $(v,w)$ is the inner product defined by $(v,w)=\int_{a}^{b}vwdx$. Eq. 
(2) can then be rewritten as $$\displaystyle\mathcal{H}=-\frac{1}{2}q_{1}^{2}+\frac{1}{2}q_{2}^{2}+\frac{1}{8% }\int_{a}^{b}(u^{2}+u_{x}^{2})dx.$$ (2.2) According to the energy variational, the system (1.1) can be reformulated into the following equivalent form $$\displaystyle\left\{\begin{aligned} &\displaystyle\partial_{t}u=\mathcal{D}% \Bigg{(}\frac{-q_{1}}{2\sqrt{(g(u,u_{x}),1)}}\Bigg{(}\frac{\partial g}{% \partial u}(u,u_{x})-\partial_{x}\frac{\partial g}{\partial u_{x}}(u,u_{x})% \Bigg{)}\\ &\displaystyle~{}~{}~{}~{}~{}+\frac{q_{2}}{2\sqrt{(h(u,u_{x}),1)}}\Bigg{(}% \frac{\partial h}{\partial u}(u,u_{x})-\partial_{x}\frac{\partial h}{\partial u% _{x}}(u,u_{x})\Bigg{)}+\frac{1}{4}(u-u_{xx})\Bigg{)},\\ &\displaystyle\partial_{t}q_{1}=\Bigg{(}\frac{1}{2\sqrt{(g(u,u_{x}),1)}}\Big{(% }\frac{\partial g}{\partial u}(u,u_{x})-\partial_{x}\frac{\partial g}{\partial u% _{x}}(u,u_{x})\Big{)},u_{t}\Bigg{)},\\ &\displaystyle\partial_{t}q_{2}=\Bigg{(}\frac{1}{2\sqrt{(h(u,u_{x}),1)}}\Big{(% }\frac{\partial h}{\partial u}(u,u_{x})-\partial_{x}\frac{\partial h}{\partial u% _{x}}(u,u_{x})\Big{)},u_{t}\Bigg{)},\\ &\displaystyle u(x,0)=u_{0}(x),\ q_{1}(t)|_{t=0}=\sqrt{(g(u_{0}(x),\partial_{x% }u_{0}(x)),1)},\\ &\displaystyle q_{2}(t)|_{t=0}=\sqrt{(h(u_{0}(x),\partial_{x}u_{0}(x)),1)},\\ &\displaystyle u(x+L,t)=u(x,t),\end{aligned}\right.$$ (2.3) where $$\displaystyle\frac{\partial g}{\partial u}(u,u_{x})=2(u+\frac{1}{2})(2u^{2}+u_% {x}^{2}+\frac{1}{2}u),\ \frac{\partial g}{\partial u_{x}}=2u_{x}(u+\frac{1}{2}% )^{2},$$ $$\displaystyle\frac{\partial h}{\partial u}(u,u_{x})=4u^{3}+2uu_{x}^{2},\ \frac% {\partial h}{\partial u_{x}}=2u_{x}u^{2}.$$ Theorem 2.1. The system (2.3) possesses the following modified energy. $$\displaystyle\frac{d}{dt}\mathcal{H}=0,\ \mathcal{H}=-\frac{1}{2}q_{1}^{2}+% \frac{1}{2}q_{2}^{2}+\frac{1}{8}\int_{a}^{b}(u^{2}+u_{x}^{2})dx.$$ Proof. We can deduce from (2.3) that $$\displaystyle\frac{d}{dt}\mathcal{H}$$ $$\displaystyle=-q_{1}\frac{d}{dt}q_{1}+q_{2}\frac{d}{dt}q_{2}+\frac{1}{4}(u-u_{% xx},u_{t})-\frac{1}{4}u_{x}u_{t}|_{a}^{b}$$ $$\displaystyle=\Bigg{(}\frac{-q_{1}}{2\sqrt{(g(u,u_{x}),1)}}\Big{(}\frac{% \partial g}{\partial u}(u,u_{x})-\partial_{x}\frac{\partial g}{\partial u_{x}}% (u,u_{x})\Big{)}$$ $$\displaystyle+\frac{q_{2}}{2\sqrt{(h(u,u_{x}),1)}}\Big{(}\frac{\partial h}{% \partial u}(u,u_{x})-\partial_{x}\frac{\partial h}{\partial u_{x}}(u,u_{x})% \Big{)}+\frac{1}{4}(u-u_{xx}),u_{t}\Bigg{)}-\frac{1}{4}u_{x}u_{t}|_{a}^{b}$$ $$\displaystyle=0,$$ where the last equality follows from the first equality of (2.3), the periodic boundary condition and the skew-adjoint property of $\mathcal{D}$. ∎ Remark 2.1. We should note that the splitting strategy used in (2) is not unique. The comparisons between splitting strategies will be the subject of future investigations. 3 Structure-preserving spatial semi-discretization In this section, the standard Fourier pseudo-spectral method is employed to approximate spatial derivatives of the system (2.3) and we prove that the resulting semi-discrete system can exactly preserve the semi-discrete modified energy. 
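Before discretizing, we note that the splitting (2.1) can be checked symbolically: treating $u$ and $u_{x}$ as independent symbols, the three parts recombine exactly into the integrand of (1.4). A minimal SymPy sketch (our illustration, not part of the paper):

```python
import sympy as sp

u, ux = sp.symbols('u u_x', real=True)

H_orig = -sp.Rational(1, 2) * (u**3 + u * ux**2)   # integrand of (1.4)
g = (u + sp.Rational(1, 2))**2 * (u**2 + ux**2)    # g(u, u_x) in (2.1)
h = u**2 * (u**2 + ux**2)                          # h(u, u_x) in (2.1)
H_split = -sp.Rational(1, 2) * g + sp.Rational(1, 2) * h + sp.Rational(1, 8) * (u**2 + ux**2)

print(sp.simplify(H_split - H_orig))  # prints 0: the splitting is exact
```

Since $g$ and $h$ are nonnegative, $(g,1)$ and $(h,1)$ are bounded from below by zero, so the square roots defining $q_{1}$ and $q_{2}$ are well-defined without introducing an extra constant $C_{0}$.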
Choose the mesh size $h=L/N$ with $N$ an even positive integer, and denote the grid points by $x_{j}=jh$ for $j=0,1,2,\cdots,N$; let $U_{j}$ be the numerical approximation of $u(x_{j},t)$ for $j=0,1,\cdots,N$ and ${U}=(U_{0},U_{1},\cdots,U_{N-1})^{T}$ be the solution vector space, and define discrete inner product as $$\displaystyle\langle{U},{V}\rangle_{h}=h\sum_{j=0}^{N-1}U_{j}V_{j}.$$ Let $$\displaystyle S_{N}=\text{span}\{g_{j}(x),\ 0\leq j\leq N-1\}$$ be the interpolation space, where $g_{j}(x)$ is trigonometric polynomials of degree $N/2$ given by $$\displaystyle g_{j}(x)=\frac{1}{N}\sum_{l=-N/2}^{N/2}\frac{1}{a_{l}}e^{\text{i% }l\mu(x-x_{j})},$$ with $a_{l}=\left\{\begin{aligned} &\displaystyle 1,\ |l|<\frac{N}{2},\\ &\displaystyle 2,\ |l|=\frac{N}{2},\end{aligned}\right.,$ and $\mu=\frac{2\pi}{b-a}$. We define the interpolation operator $I_{N}:C(\Omega)\to S_{N}$, as follows: $$\displaystyle I_{N}u(x,t)=\sum_{j=0}^{N-1}u_{j}(t)g_{j}(x),$$ where $u_{j}(t)=u(x_{j},t)$. Taking the derivative with respect to $x$, and then evaluating the resulting expression at the collocation points $x_{j}$, we have $$\displaystyle\frac{\partial^{s}I_{N}u(x_{j},t)}{\partial x^{s}}$$ $$\displaystyle=\sum_{j_{1}=0}^{N-1}u_{j_{1}}\frac{d^{s}g_{j_{1}}(x_{j})}{dx^{s}% }=[{D}_{s}{u}]_{j},$$ where $u=(u_{0},u_{1},\cdots,u_{N-1})^{T}$ and ${D}_{s}$ is an $N\times N$ matrix, with elements given by $$\displaystyle({D}_{s})_{j_{1},j}=\frac{d^{s}g_{j}(x_{j_{1}})}{dx^{s}}.$$ In particular, the first and second order differential matrices read as [4] $$\displaystyle({D}_{1})_{j,l}=\left\{\begin{aligned} &\displaystyle\frac{1}{2}% \mu(-1)^{j+l}\cot(\mu\frac{x_{j}-x_{l}}{2}),&\displaystyle j\neq l,\\ &\displaystyle 0,&\displaystyle j=l,\end{aligned}\right.$$ $$\displaystyle({D}_{2})_{j,l}=\left\{\begin{aligned} &\displaystyle\frac{1}{2}% \mu^{2}(-1)^{j+l+1}\csc^{2}(\mu\frac{x_{j}-x_{l}}{2}),&\displaystyle j\neq l,% \\ &\displaystyle-\mu^{2}\frac{N^{2}+2}{12},&\displaystyle j=l.\end{aligned}\right.$$ Applying the standard Fourier pseudo-spectral method to the system (2.3) in space and we have $$\displaystyle\left\{\begin{aligned} &\displaystyle\frac{d}{dt}{U}={D}\Bigg{(}% \frac{-Q_{1}\Big{(}g_{1}({U},{D}_{1}{U})-{D}_{1}g_{2}({U},{D}_{1}{U})\Big{)}}{% 2\sqrt{\langle g({U},{D}_{1}{U}),{\bm{1}}\rangle_{h}}}\\ &\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}+\frac{Q_{2}\Big{(}h_{1}({U},{D}_{1}{% U})-{D}_{1}h_{2}({U},{D}_{1}{U})\Big{)}}{2\sqrt{\langle h({U},{D}_{1}{U}),{\bm% {1}}\rangle_{h}}}+\frac{1}{4}\big{(}{U}-{D}_{2}{U}\big{)}\Bigg{)},\\ &\displaystyle\frac{d}{dt}Q_{1}=\Bigg{\langle}\frac{\Big{(}g_{1}({U},{D}_{1}{U% })-{D}_{1}g_{2}({U},{D}_{1}{U})\Big{)}}{2\sqrt{\langle g({U},{D}_{1}{U}),{\bm{% 1}}\rangle_{h}}},\frac{d}{dt}{U}\Bigg{\rangle}_{h},\\ &\displaystyle\frac{d}{dt}Q_{2}=\Bigg{\langle}\frac{\Big{(}h_{1}({U},{D}_{1}{U% })-{D}_{1}h_{2}({U},{D}_{1}{U})\Big{)}}{2\sqrt{\langle h({U},{D}_{1}{U}),{\bm{% 1}}\rangle_{h}}},\frac{d}{dt}{U}\Bigg{\rangle}_{h},\\ \end{aligned}\right.$$ (3.1) where ${D}=({I}-{D}_{2})^{-1}{D}_{1}$, $g_{1}=\frac{\partial g}{\partial u},\ g_{2}=\frac{\partial g}{\partial u_{x}}$, $h_{1}=\frac{\partial h}{\partial u}$, and $h_{2}=\frac{\partial h}{\partial u_{x}}$. Theorem 3.1. The semi-discrete system (3.1) admits the semi-discrete modified energy, as follows: $$\displaystyle\frac{d}{dt}E_{h}=0,\ E_{h}=-\frac{1}{2}Q_{1}^{2}+\frac{1}{2}Q_{2% }^{2}+\frac{1}{8}\langle{U}-{D}_{2}{U},{U}\rangle_{h}.$$ Proof. 
It follows from the semi-discrete system (3.1) that $$\displaystyle\frac{d}{dt}E_{h}$$ $$\displaystyle=-Q_{1}\frac{d}{dt}Q_{1}+Q_{2}\frac{d}{dt}Q_{2}+\frac{1}{4}% \langle{U}-{D}_{2}{U},\frac{d}{dt}{U}\rangle_{h}$$ $$\displaystyle=\Big{\langle}\frac{-Q_{1}}{2\sqrt{\langle g({U},{D}_{1}{U}),{\bm% {1}}\rangle_{h}}}\Big{(}g_{1}({U},{D}_{1}{U})-{D}_{1}g_{2}({U},{D}_{1}{U})\Big% {)}$$ $$\displaystyle~{}~{}~{}~{}+\frac{Q_{2}}{2\sqrt{\langle h({U},{D}_{1}{U}),{\bm{1% }}\rangle_{h}}}\Big{(}h_{1}({U},{D}_{1}{U})-{D}_{1}h_{2}({U},{D}_{1}{U})\Big{)}$$ $$\displaystyle~{}~{}~{}~{}+\frac{1}{4}({U}-{D}_{2}{U}),\frac{d}{dt}{U}\Big{% \rangle}_{h}$$ $$\displaystyle=0,$$ where the last equality follows from the first equality (3.1) and the skew-symmetry of ${D}$. ∎ 4 Construction of the linearly implicit energy-preserving scheme In this section, we present a linearly implicit energy-preserving scheme by utilizing the linearized Crank-Nicolson method to the semi-discrete system (3.1) in time. Choose $\tau=T/M$ be the time step with $M$ a positive integer, and denote $t_{n}=n\tau$ for $n=0,1,2\cdots,M$; let $U_{j}^{n}$ be the numerical approximation of $u(x_{j},t_{n})$ for $j=0,1,\cdots,N$ and $n=0,1,2,\cdots,M$; denote $U^{n}$ as the solution vector at $t=t_{n}$ and define $$\displaystyle\delta_{t}{U}_{j}^{n}=\frac{{U}_{j}^{n+1}-{U}_{j}^{n}}{\tau},\ {U% }_{j}^{n+\frac{1}{2}}=\frac{{U}_{j}^{n+1}+{U}_{j}^{n}}{2},\ \hat{U}_{j}^{n+% \frac{1}{2}}=\frac{3{U}_{j}^{n}-{U}_{j}^{n-1}}{2},0\leq j\leq N-1.$$ Applying the linearized Crank-Nicolson method to the semi-discrete system (3.1) in time, and we obtain a fully discretized scheme, as follows: $$\displaystyle\left\{\begin{aligned} &\displaystyle\delta_{t}{U}^{n}={D}\Bigg{(% }\frac{-Q_{1}^{n+\frac{1}{2}}\Big{(}g_{1}(\hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{% U}^{n+\frac{1}{2}})-{D}_{1}g_{2}(\hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+% \frac{1}{2}})\Big{)}}{2\sqrt{\langle g(\hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^% {n+\frac{1}{2}}),{\bm{1}}\rangle_{h}}}\\ &\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}+\frac{Q_{2}^{n+\frac{1}{2}}\Big{(}h_% {1}(\hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})-{D}_{1}h_{2}(\hat{% U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})\Big{)}}{2\sqrt{\langle h(% \hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}}),{\bm{1}}\rangle_{h}}}% \\ &\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}+\frac{1}{4}\big{(}{U}^{n+\frac{1}{2}% }-{D}_{2}{U}^{n+\frac{1}{2}}\big{)}\Bigg{)},\\ &\displaystyle\delta_{t}Q_{1}^{n}=\Bigg{\langle}\frac{\Big{(}g_{1}(\hat{U}^{n+% \frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})-{D}_{1}g_{2}(\hat{U}^{n+\frac{1}{% 2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})\Big{)}}{2\sqrt{\langle g(\hat{U}^{n+\frac{% 1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}}),{\bm{1}}\rangle_{h}}},\delta_{t}{U}^{n}% \Bigg{\rangle}_{h},\\ &\displaystyle\delta_{t}Q_{2}^{n}=\Bigg{\langle}\frac{\Big{(}h_{1}(\hat{U}^{n+% \frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})-{D}_{1}h_{2}(\hat{U}^{n+\frac{1}{% 2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})\Big{)}}{2\sqrt{\langle h(\hat{U}^{n+\frac{% 1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}}),{\bm{1}}\rangle_{h}}},\delta_{t}{U}^{n}% \Bigg{\rangle}_{h},\\ \end{aligned}\right.$$ (4.1) for $n=1,\cdots,M-1$. The initial and boundary conditions in (2.3) are discretized as $$\displaystyle U_{j}^{0}=u_{0}(x_{j}),\ Q_{1}^{0}=\sqrt{\langle g(U^{0},D_{1}U^% {0}),{\bm{1}}\rangle_{h}},\ Q_{2}^{0}=\sqrt{\langle h(U^{0},D_{1}U^{0}),{\bm{1% }}\rangle_{h}},$$ (4.2) $$\displaystyle U_{j\pm N}^{n}=U_{j}^{n},\ j=0,1,2,\cdots,N.$$ (4.3) Remark 4.1. 
Note that the proposed scheme (4.1) is a three level and we obtain ${U}^{1},Q_{1}^{1}$ and $Q_{2}^{1}$ by using ${U}^{n}$ instead of $\hat{U}^{n+\frac{1}{2}}$ for the first step. Theorem 4.1. The proposed scheme (4.1) can preserve the following discrete modified energy $$\displaystyle E_{h}^{n+1}=E_{h}^{n},\ E_{h}^{n}=-\frac{1}{2}(Q_{1}^{n})^{2}+% \frac{1}{2}(Q_{2}^{n})^{2}+\frac{1}{8}\langle{U}^{n}-{D}_{2}{U}^{n},{U}^{n}% \rangle_{h},$$ (4.4) for $n=0,1,\cdots,M-1.$ Proof. It is readily to obtain from (4.1) that $$\displaystyle\delta_{t}E_{h}^{n}$$ $$\displaystyle=-Q_{1}^{n+\frac{1}{2}}\delta_{t}Q_{1}^{n}+Q_{2}^{n+\frac{1}{2}}% \delta_{t}Q_{2}^{n}+\frac{1}{4}\langle{U}^{n+\frac{1}{2}}-{D}_{2}{U}^{n+\frac{% 1}{2}},\delta_{t}{U}^{n}\rangle_{h}$$ $$\displaystyle=\Big{\langle}\frac{-Q_{1}^{n+\frac{1}{2}}}{2\sqrt{\langle g(\hat% {U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}}),{\bm{1}}\rangle_{h}}}\Big{% (}g_{1}(\hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})-{D}_{1}g_{2}(% \hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})\Big{)}$$ $$\displaystyle~{}~{}+\frac{Q_{2}^{n+\frac{1}{2}}}{2\sqrt{\langle h(\hat{U}^{n+% \frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}}),{\bm{1}}\rangle_{h}}}\Big{(}h_{1}% (\hat{U}^{n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})-{D}_{1}h_{2}(\hat{U}^% {n+\frac{1}{2}},{D}_{1}\hat{U}^{n+\frac{1}{2}})\Big{)}$$ $$\displaystyle~{}~{}+\frac{1}{4}({U}^{n+\frac{1}{2}}-{D}_{2}{U}^{n+\frac{1}{2}}% ),\delta_{t}{U}^{n}\Big{\rangle}_{h}$$ $$\displaystyle=0,$$ (4.5) which further implies $$\displaystyle E_{h}^{n+1}=E_{h}^{n},\ n=1,2,\cdots,M-1,$$ where the last equality of (4) follows from the first equality of (4.1) and the skew-symmetry of ${D}$. An argument similar to the first step used in (4) shows that $$\displaystyle E_{h}^{1}=E_{h}^{0}.$$ This completes the proof. ∎ Besides its energy-preserving property, a most remarkable thing about the above scheme is that it can be solved efficiently. Let $$\displaystyle{G}_{1}=\frac{1}{2\sqrt{\langle g(\hat{U}^{n+\frac{1}{2}},{D}_{1}% \hat{U}^{n+\frac{1}{2}}),{1}\rangle_{h}}}\Big{(}g_{1}(\hat{U}^{n+\frac{1}{2}},% {D}_{1}\hat{U}^{n+\frac{1}{2}})-{D}_{1}g_{2}(\hat{U}^{n+\frac{1}{2}},{D}_{1}% \hat{U}^{n+\frac{1}{2}})\Big{)},$$ $$\displaystyle{G}_{2}=\frac{1}{2\sqrt{\langle h(\hat{U}^{n+\frac{1}{2}},{D}_{1}% \hat{U}^{n+\frac{1}{2}}),{1}\rangle_{h}}}\Big{(}h_{1}(\hat{U}^{n+\frac{1}{2}},% {D}_{1}\hat{U}^{n+\frac{1}{2}})-{D}_{1}h_{2}(\hat{U}^{n+\frac{1}{2}},{D}_{1}% \hat{U}^{n+\frac{1}{2}})\Big{)}.$$ Eq. 
(4.1) can then rewritten as $$\displaystyle\left\{\begin{aligned} &\displaystyle{U}^{n+\frac{1}{2}}={U}^{n}+% \frac{\tau}{2}{D}\Bigg{(}-Q_{1}^{n+\frac{1}{2}}{G}_{1}+Q_{2}^{n+\frac{1}{2}}{G% }_{2}+\frac{1}{4}\big{(}{U}^{n+\frac{1}{2}}-{D}_{2}{U}^{n+\frac{1}{2}}\big{)}% \Bigg{)},\\ &\displaystyle Q_{1}^{n+\frac{1}{2}}=Q_{1}^{n}+\Big{\langle}{G}_{1},{U}^{n+% \frac{1}{2}}-{U}^{n}\Big{\rangle}_{h},\\ &\displaystyle Q_{2}^{n+\frac{1}{2}}=Q_{2}^{n}+\Big{\langle}{G}_{2},{U}^{n+% \frac{1}{2}}-{U}^{n}\Big{\rangle}_{h}.\\ \end{aligned}\right.$$ (4.6) Next, by eliminating $Q_{1}^{n+\frac{1}{2}}$ and $Q_{2}^{n+\frac{1}{2}}$ from (4.6), we have $$\displaystyle\Big{[}{I}-\frac{\tau}{8}{D}_{1}$$ $$\displaystyle\Big{]}{U}^{n+\frac{1}{2}}=-\frac{\tau}{2}{D}{G}_{1}\langle{G}_{1% },{U}^{n+\frac{1}{2}}\rangle_{h}+\frac{\tau}{2}{D}{G}_{2}\langle{G}_{2},{U}^{n% +\frac{1}{2}}\rangle_{h}+{r}^{n},$$ (4.7) where $$\displaystyle{r}^{n}$$ $$\displaystyle={U}^{n}-\frac{\tau}{2}{D}{G}_{1}Q_{1}^{n}+\frac{\tau}{2}{D}{G}_{% 2}Q_{2}^{n}+\frac{\tau}{2}{D}{G}_{1}\langle{G}_{1},{U}^{n}\rangle_{h}-\frac{% \tau}{2}{D}{G}_{2}\langle{G}_{2},{U}^{n}\rangle_{h}.$$ Denote ${A}^{-1}=({I}-\frac{\tau}{8}{D}_{1})^{-1}$ and $$\displaystyle{\gamma}_{1}^{n}=-\frac{\tau}{2}{A}^{-1}{D}{G}_{1},\ {\gamma}_{2}% ^{n}=\frac{\tau}{2}{A}^{-1}{D}{G}_{2},\ {b}^{n}={A}^{-1}{r}^{n},$$ the above equation is equivalent to $$\displaystyle{U}^{n+\frac{1}{2}}$$ $$\displaystyle={\gamma}_{1}^{n}\langle{G}_{1},{U}^{n+\frac{1}{2}}\rangle_{h}+{% \gamma}_{2}^{n}\langle{G}_{2},{U}^{n+\frac{1}{2}}\rangle_{h}+{b}^{n}.$$ (4.8) We take the inner product of (4.8) with ${G}_{1}$ and have $$\displaystyle\langle{G}_{1},{U}^{n+\frac{1}{2}}\rangle_{h}=\langle{G}_{1},{% \gamma}_{1}^{n}\rangle_{h}\langle{G}_{1},{U}^{n+\frac{1}{2}}\rangle_{h}+% \langle{G}_{1},{\gamma}_{2}^{n}\rangle_{h}\langle{G}_{2},{U}^{n+\frac{1}{2}}% \rangle_{h}+\langle{G}_{1},{b}^{n}\rangle_{h}.$$ (4.9) Taking the inner product of (4.8) with ${G}_{2}$, we then obtain $$\displaystyle\langle{G}_{2},{U}^{n+\frac{1}{2}}\rangle_{h}=\langle{G}_{2},{% \gamma}_{1}^{n}\rangle_{h}\langle{G}_{1},{U}^{n+\frac{1}{2}}\rangle_{h}+% \langle{G}_{2},{\gamma}_{2}^{n}\rangle_{h}\langle{G}_{2},{U}^{n+\frac{1}{2}}% \rangle_{h}+\langle{G}_{2},{b}^{n}\rangle_{h}.$$ (4.10) Eqs. (4.9) and (4.10) form a $2\times 2$ linear system for the unknowns $(\langle{G}_{1},{U}^{n+\frac{1}{2}}\rangle_{h},\langle{G}_{2},{U}^{n+\frac{1}{% 2}}\rangle_{h})^{T}$. Solving $(\langle{G}_{1},{U}^{n+\frac{1}{2}}\rangle_{h},\langle{G}_{2},{U}^{n+\frac{1}{% 2}}\rangle_{h})^{T}$ from the $2\times 2$ linear system (4.9) and (4.10) and ${U}^{n+\frac{1}{2}}$ is then updated from (4.8). Subsequently, $Q_{1}^{n+\frac{1}{2}}$ and $Q_{2}^{n+\frac{1}{2}}$ are obtained from the second and third equality of (4.6), respectively. Finally, we have ${U}^{n+1}=2{U}^{n+\frac{1}{2}}-{U}^{n}$, $Q_{1}^{n+1}=2Q_{1}^{n+\frac{1}{2}}-Q_{1}^{n}$ and $Q_{2}^{n+1}=2Q_{2}^{n+\frac{1}{2}}-Q_{2}^{n}$. Remark 4.2. We should remark that, compared with the scheme obtained by the classical SAV approach, the proposed scheme need to solve an additional $2\times 2$ linear system, however, the main computational cost still comes from (4.7). Thus, our scheme enjoys the same computational advantages as the ones obtained by the classical SAV approach. 
In addition, in our computation, ${U}^{n+\frac{1}{2}}$ can be efficiently obtained from (4.8) by the FFT, once one notes [17] that $$\displaystyle{D}_{1}={F}_{N}^{H}\Lambda{F}_{N},\ \Lambda=\text{i}\mu\big{(}0,1,\cdots,\frac{N}{2}-1,0,1-\frac{N}{2},\cdots,-1\big{)},$$ $$\displaystyle{D}_{2}={F}_{N}^{H}\Lambda{F}_{N},\ \Lambda=\big{[}\text{i}\mu\big{(}0,1,\cdots,\frac{N}{2}-1,\frac{N}{2},1-\frac{N}{2},\cdots,-1\big{)}\big{]}^{2},$$ where ${F}_{N}$ is the discrete Fourier transform matrix with elements $\big{(}{F}_{N}\big{)}_{j,k}=\frac{1}{\sqrt{N}}e^{-\text{\rm i}jk\frac{2\pi}{N}},$ and ${F}_{N}^{H}$ is the conjugate transpose matrix of ${F}_{N}$. Remark 4.3. We should note that the energy (2.2) is equivalent to the energy (1.4) in the continuous sense, but not in the discrete sense. This indicates that the scheme (4.1) cannot preserve the following discrete energy $$\displaystyle H^{n}=-\frac{h}{2}\sum_{j=0}^{N-1}\Big{(}(U_{j}^{n})^{3}+U_{j}^{n}\cdot({D}_{1}{U}^{n})_{j}^{2}\Big{)},\ 0\leq n\leq M.$$ (4.11) 5 Numerical examples In this section, we report the numerical performance, accuracy, CPU time and invariant-preserving properties of the proposed scheme (4.1) (denoted by MSAV-LCNS). In addition, the following structure-preserving schemes are chosen for comparison: • IEQ-LCNS: the linearly implicit energy-preserving scheme given in Ref. [13]; • EPFPS: the energy-preserving Fourier pseudo-spectral scheme; • MSFPS: the multi-symplectic Fourier pseudo-spectral scheme; • LICNS: the linearly implicit Crank-Nicolson scheme described in Ref. [12]; • LILFS: the leap-frog scheme stated in Ref. [12]. It is noted that EPFPS and MSFPS are obtained by using the Fourier pseudo-spectral method instead of the wavelet collocation method in Refs. [9, 24], respectively. As a summary, the properties of each scheme are listed in Tab. 1. In our computation, the FFT is also adopted as the solver of the linear systems given by MSAV-LCNS (see (4.7)), the standard fixed-point iteration is used for the fully implicit schemes, and the Jacobi iteration method is employed for the linear systems given by IEQ-LCNS, LICNS and LILFS. Here, the iteration terminates when the infinity norm of the error between two adjacent iterative steps is less than $10^{-14}$. In order to quantify the numerical solution, we use the $l^{2}$- and $l^{\infty}$-norms of the error between the numerical solution $U_{j}^{n}$ and the exact solution $u(x_{j},t_{n})$, respectively, defined as $$\displaystyle e_{h,2}^{2}(t_{n})=h\sum_{j=0}^{N-1}|U_{j}^{n}-u(x_{j},t_{n})|^{2},\ e_{h,\infty}(t_{n})=\max\limits_{0\leq j\leq N-1}|U_{j}^{n}-u(x_{j},t_{n})|,\ n\geq 0.$$ 5.1 Smooth periodic solution In Ref. [14], the authors showed that the solution of the CH equation can be described by three parameters $m,M,z\in\mathbb{R}$, where $z=c-M-m$. The equation has a smooth periodic travelling wave when the three parameters $m,M,z$ fulfill the relation $z<m<M<c$. By choosing $m=0.3,M=0.7$ and $c=1$, we study a smooth periodic travelling wave with period $L\approx 6.56$. The initial data is constructed by performing a spline interpolation to obtain $u$ as a function of $x$. For more details, please refer to Ref. [14]. We take the bounded computational domain to be the interval $(a,b)$ with $a=0$ and $b=L$, impose the periodic boundary condition, and obtain the exact solution by periodic extension of the initial function.
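Before reporting the results, we include a minimal Python sketch (not the code used for the experiments) of a single MSAV-LCNS step, organized exactly as in (4.6)–(4.8): assemble ${D}_{1}$, ${D}_{2}$ and ${G}_{1}$, ${G}_{2}$, solve the $2\times 2$ system for the two inner products, and recover ${U}^{n+\frac{1}{2}}$. For clarity it uses dense matrices and a direct solver instead of the FFT of Remark 4.2; the closed forms of $g$ and $h$ are inferred from their partial derivatives in Section 2, and the grid, time step and starting profile are illustrative only.
\begin{verbatim}
import numpy as np

# Minimal sketch (not the code used for the experiments) of one MSAV-LCNS step,
# solved exactly as in (4.6)-(4.8).  Dense matrices and a direct solver are used
# for clarity; in practice D and A^{-1} are applied via the FFT (Remark 4.2).
# The closed forms of g and h are inferred from their partial derivatives in
# Section 2; grid, time step and starting profile are illustrative only.
N, L, tau = 64, 25.0, 1.0e-3
mu = 2.0 * np.pi / L
x = L * np.arange(N) / N
hm = L / N                                        # mesh size h
ip = lambda u, v: hm * np.dot(u, v)               # discrete inner product <.,.>_h
one = np.ones(N)

jj, ll = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
off = jj != ll
sgn = (-1.0) ** (jj + ll)
arg = 0.5 * mu * (x[jj] - x[ll])
D1 = np.zeros((N, N))
D1[off] = 0.5 * mu * sgn[off] / np.tan(arg[off])              # first-order matrix
D2 = np.full((N, N), -mu ** 2 * (N ** 2 + 2) / 12.0)
D2[off] = -0.5 * mu ** 2 * sgn[off] / np.sin(arg[off]) ** 2   # second-order matrix
D = np.linalg.solve(np.eye(N) - D2, D1)                       # D = (I - D2)^{-1} D1

g  = lambda u, ux: (u + 0.5) ** 2 * (u ** 2 + ux ** 2)        # inferred g(u,u_x)
g1 = lambda u, ux: 2.0 * (u + 0.5) * (2.0 * u ** 2 + ux ** 2 + 0.5 * u)
g2 = lambda u, ux: 2.0 * ux * (u + 0.5) ** 2
hf = lambda u, ux: u ** 2 * (u ** 2 + ux ** 2)                # inferred h(u,u_x)
h1 = lambda u, ux: 4.0 * u ** 3 + 2.0 * u * ux ** 2
h2 = lambda u, ux: 2.0 * ux * u ** 2

Un = 0.5 + 0.2 * np.sin(mu * x)                   # previous level (illustrative)
Unm = Un.copy()                                   # level n-1 (first step: Unm = Un)
Q1n = np.sqrt(ip(g(Un, D1 @ Un), one))            # initial values (4.2)
Q2n = np.sqrt(ip(hf(Un, D1 @ Un), one))

Uh = 1.5 * Un - 0.5 * Unm                         # extrapolation \hat{U}^{n+1/2}
Uhx = D1 @ Uh
G1 = (g1(Uh, Uhx) - D1 @ g2(Uh, Uhx)) / (2.0 * np.sqrt(ip(g(Uh, Uhx), one)))
G2 = (h1(Uh, Uhx) - D1 @ h2(Uh, Uhx)) / (2.0 * np.sqrt(ip(hf(Uh, Uhx), one)))

A = np.eye(N) - tau / 8.0 * D1                    # left-hand side of (4.7)
rn = (Un - tau / 2.0 * D @ G1 * Q1n + tau / 2.0 * D @ G2 * Q2n
      + tau / 2.0 * D @ G1 * ip(G1, Un) - tau / 2.0 * D @ G2 * ip(G2, Un))
gam1 = np.linalg.solve(A, -tau / 2.0 * D @ G1)
gam2 = np.linalg.solve(A,  tau / 2.0 * D @ G2)
b = np.linalg.solve(A, rn)

M2 = np.array([[1.0 - ip(G1, gam1), -ip(G1, gam2)],
               [-ip(G2, gam1), 1.0 - ip(G2, gam2)]])
s1, s2 = np.linalg.solve(M2, np.array([ip(G1, b), ip(G2, b)]))  # system (4.9)-(4.10)
Uhalf = gam1 * s1 + gam2 * s2 + b                               # update (4.8)
Q1half = Q1n + ip(G1, Uhalf - Un)
Q2half = Q2n + ip(G2, Uhalf - Un)
Unew, Q1new, Q2new = 2.0 * Uhalf - Un, 2.0 * Q1half - Q1n, 2.0 * Q2half - Q2n

Eh = lambda U, Q1, Q2: -0.5 * Q1 ** 2 + 0.5 * Q2 ** 2 + 0.125 * ip(U - D2 @ U, U)
print(Eh(Un, Q1n, Q2n), Eh(Unew, Q1new, Q2new))   # equal to round-off (Theorem 4.1)
\end{verbatim}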
To test the temporal discretization errors of the different numerical schemes, we fix $h=\frac{L}{32}$ such that the spatial discretization errors are negligible. Tab. 2 shows the temporal errors and convergence rates of the different numerical schemes for different time steps at $t=6.56$. Fig. 1 shows the CPU times of the six schemes for the smooth solution with different numbers of grid points, up to $t=6.56$, with $\tau$=6.56e-04. From Tab. 2 and Fig. 1, we can draw the following observations: (i) all schemes have second-order accuracy in time; (ii) the error provided by MSAV-LCNS has the same order of magnitude as the ones provided by IEQ-LCNS and LICNS; (iii) EPFPS is the most expensive scheme while MSAV-LCNS is the cheapest. Fig. 2 shows the errors of the invariants under $N$=32 and $\tau$=0.0082 over the time interval $t\in[0,656]$. From Fig. 2 (a)-(c), we make the following observations: (i) EPFPS can exactly preserve the energy (see (4.11)), while the energy errors of the other schemes remain small; (ii) LICNS and LILFS can exactly preserve the momentum, whereas MSAV-LCNS, IEQ-LCNS, EPFPS and MSFPS preserve the momentum only approximately; (iii) MSAV-LCNS, IEQ-LCNS, EPFPS and MSFPS preserve the mass to round-off error while LICNS and LILFS admit large errors. From Fig. 2 (d), it is clearly demonstrated that the proposed scheme can exactly preserve the discrete modified energy. Similar observations on the errors of the invariants are made in the next three examples, so we omit these details for brevity. Here, we should note that the modified energy (4.4) and the energy (4.11) are approximate versions of the continuous energy (1.4), and their errors indicate the stability and long-term computational capability of the numerical scheme. 5.2 Two-peakon interaction We consider the two-peakon interaction of the CH equation (1.1) with the initial condition [21] $$\displaystyle u_{0}(x)=\phi_{1}(x)+\phi_{2}(x),\ 0\leq x\leq 25,$$ where $$\displaystyle\phi_{i}(x)=\left\{\begin{aligned} &\displaystyle\frac{c_{i}}{\cosh(L/2)}\cosh(x-x_{i}),\ |x-x_{i}|\leq L/2,\\ &\displaystyle\frac{c_{i}}{\cosh(L/2)}\cosh(L-(x-x_{i})),\ |x-x_{i}|>L/2,\end{aligned}i=1,2.\right.$$ The parameters are $c_{1}=3,c_{2}=1,x_{1}=-8,x_{2}=0,L=25$ and the periodic boundary condition is adopted. Fig. 3 shows the interaction of the two peakons at $t=0,2,4,6,8$ and $10$. We can see clearly that the taller wave overtakes the shorter one at time $t=4$ and afterwards both waves retain their original shapes and velocities. The errors of the invariants under $N$=1024 and $\tau$=0.0001 over the time interval $t\in[0,10]$ are plotted in Fig. 4, and they behave similarly to those in Fig. 2. 5.3 Three-peakon interaction Subsequently, we consider the three-peakon interaction of the CH equation (1.1) with the initial condition [21] $$\displaystyle u_{0}(x)=\phi_{1}(x)+\phi_{2}(x)+\phi_{3}(x),\ 0\leq x\leq 30,$$ where $$\displaystyle\phi_{i}(x)=\left\{\begin{aligned} &\displaystyle\frac{c_{i}}{\cosh(L/2)}\cosh(x-x_{i}),\ |x-x_{i}|\leq L/2,\\ &\displaystyle\frac{c_{i}}{\cosh(L/2)}\cosh(30-(x-x_{i})),\ |x-x_{i}|>L/2,\end{aligned}i=1,2,3.\right.$$ The parameters are $c_{1}=2,c_{2}=1,c_{3}=0.8,x_{1}=-5,x_{2}=-3,x_{3}=-1,L=30$ and the periodic boundary condition is chosen. Fig. 5 shows the interaction of the three peakons of the CH equation (1.1) under $N$=2048 and $\tau$=0.0001 at $t=0,1,2,3,4$ and $6$. We observe that the moving peak interaction is resolved very well.
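For completeness, here is a short sketch of how the peakon initial data of Sections 5.2 and 5.3 can be assembled from the formulas above; the sampling grids simply reuse the reported values $N$=1024 and $N$=2048.
\begin{verbatim}
import numpy as np

def peakon_sum(x, cs, xs, L):
    """Sum of the periodised peakon profiles phi_i defined above."""
    u0 = np.zeros_like(x)
    for c, xi in zip(cs, xs):
        d = x - xi
        u0 += np.where(np.abs(d) <= L / 2.0,
                       c / np.cosh(L / 2.0) * np.cosh(d),
                       c / np.cosh(L / 2.0) * np.cosh(L - d))
    return u0

# two-peakon data of Section 5.2: c1=3, c2=1, x1=-8, x2=0, L=25, N=1024
x2 = 25.0 * np.arange(1024) / 1024
u0_two = peakon_sum(x2, cs=[3.0, 1.0], xs=[-8.0, 0.0], L=25.0)

# three-peakon data of Section 5.3: c1=2, c2=1, c3=0.8, x1=-5, x2=-3, x3=-1, L=30, N=2048
x3 = 30.0 * np.arange(2048) / 2048
u0_three = peakon_sum(x3, cs=[2.0, 1.0, 0.8], xs=[-5.0, -3.0, -1.0], L=30.0)
\end{verbatim}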
The errors in the invariants over the time interval $t\in[0,10]$ are displayed in Fig. 6, which demonstrates that our scheme conserves the invariants well. 5.4 A solution with a discontinuous derivative Finally, we consider the following initial condition, which has a discontinuous derivative, $$\displaystyle u_{0}(x)=\frac{10}{(3+|x|)^{2}},\ -30\leq x\leq 30,$$ with the periodic boundary condition. Fig. 7 shows the solution with the discontinuous derivative under $N$=1024 and $\tau$=0.001 at $t=5,10,15$ and $20$. Fig. 8 shows the errors in the invariants over the time interval $t\in[0,20]$. From Figs. 7 and 8, it is clearly demonstrated that the proposed scheme resolves the solution well, comparably with Refs. [11, 21], and can preserve the modified energy exactly. 6 Concluding remarks In this paper, we present a novel linearization (energy quadratization) strategy to develop a second-order, fully discrete, linearly implicit scheme for the CH equation (1.1). The proposed scheme is proven to preserve the discrete modified energy and enjoys the same computational advantages as the schemes obtained by the classical SAV approach. Several numerical examples are presented to illustrate the efficiency of our numerical scheme. Compared with some existing structure-preserving schemes of the same order in both time and space, our scheme shows remarkable efficiency. The linearization idea is rather general and useful, so it can be applied to study a broad class of energy-conserving systems, such as the KdV equation, the nonlinear Klein-Gordon equation, etc. In addition, the MSAV reformulation also provides an elegant platform for developing arbitrarily high-order energy-preserving schemes. Thus, a further direction of research will focus on arbitrarily high-order energy-preserving schemes for energy-conserving systems based on the MSAV approach. Acknowledgments This work is supported by the National Natural Science Foundation of China (Grant Nos. 11771213, 61872422, 11801269), the National Key Research and Development Project of China (Grant Nos. 2016YFC0600310, 2018YFC0603500, 2018YFC1504205), the Major Projects of Natural Sciences of University in Jiangsu Province of China (Grant Nos. 15KJA110002, 18KJA110003), the Natural Science Foundation of Jiangsu Province, China (Grant Nos. BK20180413, BK20171480), the Priority Academic Program Development of Jiangsu Higher Education Institutions and the Foundation of Jiangsu Key Laboratory for Numerical Simulation of Large Scale Complex Systems (201905) and the Yunnan Provincial Department of Education Science Research Fund Project (2019J0956). References [1] W. Cai, C. Jiang, and Y. Wang. Structure-preserving algorithms for the two-dimensional sine-Gordon equation with Neumann boundary conditions. arXiv preprint arXiv:1809.02704, 2018. [2] R. Camassa and D. Holm. An integrable shallow water equation with peaked solitons. Phys. Rev. Lett., 71:1661–1664, 1993. [3] R. Camassa, D. Holm, and J. Hyman. A new integrable shallow water equation. Adv. Appl. Mech., 31:1–33, 1994. [4] J. Chen and M. Qin. Multi-symplectic Fourier pseudospectral method for the nonlinear Schrödinger equation. Electr. Trans. Numer. Anal., 12:193–204, 2001. [5] D. Cohen and X. Raynaud. Geometric finite difference schemes for the generalized hyperelastic-rod wave equation. J. Comput. Appl. Math., 235:1925–1940, 2011. [6] A. Constantin. On the scattering problem for the Camassa-Holm equation. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 457:953–970, 2001. [7] M.
Dahlby and B. Owren. A general framework for deriving integral preserving numerical methods for PDEs. SIAM J. Sci. Comput., 33:2318–2340, 2011. [8] S. Eidnes, L. Li, and S. Sato. Linearly implicit structure-preserving schemes for Hamiltonian systems. arXiv preprint arXiv:1901.03573, 2019. [9] Y. Gong and Y. Wang. An energy-preserving wavelet collocation method for general multi-symplectic formulations of Hamiltonian PDEs. Commun. Comput. Phys., 20:1313–1339, 2016. [10] Y. Gong, J. Zhao, X. Yang, and Q. Wang. Fully discrete second-order linear schemes for hydrodynamic phase field models of binary viscous fluid flows with variable densities. SIAM J. Sci. Comput., 40:B138–B167, 2018. [11] H. Holden and X. Raynaud. Convergence of a finite difference scheme for the Camassa-Holm equation. SIAM J. Numer. Anal., 44:1655–1680, 2006. [12] Q. Hong, Y. Gong, and Z. Lv. Linear and Hamiltonian-conserving Fourier pseudo-spectral schemes for the Camassa-Holm equation. Appl. Math. Comput., 346:86–95, 2019. [13] C. Jiang, Y. Wang, and Y. Gong. Arbitrarily high-order energy-preserving schemes for the Camassa-Holm equation. preprint. [14] H. Kalisch and J. Lenells. Numerical study of traveling-wave solutions for the Camassa-Holm equation. Chaos Solitons Fractals, 25:287–298, 2005. [15] T. Matsuo and H. Yamaguchi. An energy-conserving Galerkin scheme for a class of nonlinear dispersive equations. J. Comput. Phys., 228:4346–4358, 2009. [16] Y. Miyatake and T. Matsuo. Energy-preserving $H^{1}$-Galerkin schemes for shallow water wave equations with peakon solutions. Phys. Lett. A, 376:2633–2639, 2012. [17] J. Shen and T. Tang. Spectral and High-Order Methods with Applications. Science Press, Beijing, 2006. [18] J. Shen, J. Xu, and J. Yang. A new class of efficient and robust energy stable schemes for gradient flows. arXiv:1710.01331, 2017. [19] J. Shen, J. Xu, and J. Yang. The scalar auxiliary variable (SAV) approach for gradient. J. Comput. Phys., 353:407–416, 2018. [20] Q. Cheng. J. Shen. Multiple scalar auxiliary variable (MSAV) approach and its application to the phase-field vesicle membrane model. SIAM J. Sci. Comput., 40:A3982–A4006, 2018. [21] Y. Xu and C.-W Shu. A local discontinuous Galerkin method for the Camassa-Holm equation. SIAM J. Numer. Anal., 46:1998–2021, 2008. [22] X. Yang, J. Zhao, and Q. Wang. Numerical approximations for the molecular beam epitaxial growth model based on the invariant energy quadratization method. J. Comput. Phys., 333:104–127, 2017. [23] J. Zhao, X. Yang, Y. Gong, and Q. Wang. A novel linear second order unconditionally energy stable scheme for a hydrodynamic-tensor model of liquid crystals. Comput. Methods Appl. Mech. Engrg., 318:803–825, 2017. [24] H. Zhu, S. Song, and Y. Tang. Multi-symplectic wavelet collocation method for the Schrödinger equation and the Camassa-Holm equation. Comput. Phys. Comm., 182:616–627, 2011.
Global existence for the kinetic chemotaxis model without pointwise memory effects, and including internal variables 111The authors thank B. Perthame for fruitful discussions and challenging directions of research about these subjects. VC is grateful to University of Edinburgh for the kind hospitality during a one week visit. (December 15, 2007) Abstract This paper is concerned with the kinetic model of Othmer-Dunbar-Alt for bacterial motion. Following a previous work, we apply the dispersion and Strichartz estimates to prove global existence under several borderline growth assumptions on the turning kernel. In particular we study the kinetic model with internal variables taking into account the complex molecular network inside the cell. Nikolaos Bournaveas University of Edinburgh, School of Mathematics JCMB, King’s Buildings, Edinburgh EH9 3JZ, UK N.Bournaveas@ed.ac.uk Vincent Calvez École Normale Supérieure, Département de Mathématiques et Applications 45 rue d’Ulm, F 75230, Paris, cedex 05, France Vincent.Calvez@ens.fr Classification (AMS 2000) Primary: 92C17, 82C40; Secondary: 35Q80, 92B05 Keywords: kinetic model, bacterial motion, chemotaxis, dispersion estimates, Strichartz estimates, internal variables 1 Introduction and results In biology, several key processes of cellular spatial organisation are driven by chemotaxis. The effective mechanism by which individual cells undergo directed motion varies among organisms. We are particularly interested here in bacterial migration, characterized by the smallness of the cells, and their ability to swim up to several orders of magnitude in the attractant concentration. Several models, depending on the level of description, have been developed mathematically for the collective motion of cells [33, 35]. Among them the kinetic model due to Othmer, Dunbar and Alt (ODA) [1, 31], describes a population of bacteria in motion (e.g. E. Coli or B. Subtilis) [13] in a field of chemoattractant (a process called chemokinesis). These small cells are not capable of measuring any head-to-tail gradient of the chemical concentration, and to choose directly some preferred direction of motion towards high concentrated regions. Therefore they develop an indirect strategy to select favourable areas, by detecting a sort of time derivative in the concentration along their pathways, and to react accordingly [27]. In fact they undergo a jump process where free movements (runs) are punctuated by reorientation phenomena (tumbles) [38]. For instance it is known that E. Coli increases the time spent in running along a favourable direction [27, 4, 13]. This jump process can be described by two different informations. First cells switch the rotation of their flagella, from counter-clockwise CCW (free runs) to clockwise CW (reorientation, or tumbling phase), and conversely. This decision is the result of a complex chain of reactions inside the cells, driven by the external concentration of the chemoattractant [15, 37, 38]. Then cells select a new direction. Although we expect large organisms (like the slime mold amoebae D. discoideum) to choose directly a favourable direction, bacteria are unable to do so, and they randomly choose a new direction of motion. Actually some directional persistence may influence this selection, privileging some angles better than others. However we will not consider inertia here for simplicity. 
From the molecular point of view, the frequency of tumbling events is driven by a regulatory protein network made of the membrane receptor complex (MCP), the switch complex located at the flagellar motor, and six main proteins in between (namely CheA, CheW, CheY, CheZ, CheB and CheR – more are involved in B. Subtilis, but the whole picture is similar [15]). This regulatory network exhibits a remarkable excitation/adaptation process [4, 36, 37]. When the attractant concentration increases suddenly, the tumbling frequency decreases on a short time scale (excitation), but returns to the basal activity after a while (adaptation). This allows bacteria to follow favorable pathways over several orders of magnitude in the concentration. Note that a similar adaptation process is involved in larger organisms like D. discoideum [20, 32]. Realistic models have been proposed based on the complete regulatory network [17, 37], as well as toy models capturing the key behavior (basically made of a two-species relaxation ODE system [14]). Note that this network is also known to select positive perturbations of the chemoattractant concentration only [4], and to be highly sensitive to very small changes in the chemoattractant concentration [36]. As a drift-diffusion limit of the ODA kinetic model, one recovers the so-called Keller-Segel model [18, 7, 6], where the diffusion and chemosensitivity coefficients can be derived from the mesoscopic description. The Keller-Segel model exhibits a remarkable dichotomy where cells aggregate if they are sufficiently numerous, and disperse if not [21]. In particular, in the two-dimensional case, the total mass of cells is the key parameter which selects between these phenomena (respectively global existence versus blow-up in finite time). This simple alternative is depicted in the whole space ${\mathbb{R}}^{2}$ in [2]. In the three-dimensional case however, the relevant quantity ensuring global existence is rather the $L^{3/2}$ norm of the initial cell density [9]. Therefore it is of interest to ask the question of global existence at the mesoscopic level. As far as we know, no blow-up phenomenon has been found in the ODA kinetic model. The goals of this paper are twofold. First, we investigate global existence theory for several kinetic models depending on the growth of the reorientation kernel with respect to the chemical. In a previous work we successfully applied dispersion and Strichartz estimates to kinetic models including delocalization effects [3], which can be either a time-delay effect due to intracellular dynamics, or some measurement at the tip of a cell protrusion. Those techniques are applied here to a class of assumptions where the reorientation kernel is actually independent of the (inner and outer) velocities. Those assumptions are very rough from the biological point of view, but they aim to determine the critical growth of the turning kernel ensuring global existence. On the other hand, we apply those ideas to a more realistic kinetic model including internal molecular variables, improving the results of [12]. We present general assumptions for global existence that can be satisfied by the two-species excitation/adaptation ODE system, or more generally by a complex network.
We consider the following ODA kinetic model for bacterial chemotaxis: $$\displaystyle\partial_{t}f+v\cdot\nabla_{x}f$$ $$\displaystyle=\int_{v^{\prime}\in V}T[S](t,x,v,v^{\prime})f(t,x,v^{\prime})dv^% {\prime}$$ $$\displaystyle\qquad-\int_{v^{\prime}\in V}T[S](t,x,v^{\prime},v)f(t,x,v)dv^{% \prime}\ ,\quad t>0\ ,x\in{\mathbb{R}}^{d}$$ (1a) $$\displaystyle-\Delta S+S$$ $$\displaystyle=\rho(t,x)=\int_{v\in V}f(t,x,v)dv\ ,$$ (1b) associated with the initial condition $f(0,x,v)=f_{0}(x,v)$. The space density of cells is denoted by $\rho(t,x)$. We assume in this paper that the space dimension is $d=2$ or $d=3$. We assume as usual that the set $V\in{\mathbb{R}}^{d}$ of admissible cell velocities is bounded. The free transport operator $\partial_{t}f+v\cdot\nabla_{x}f$ describes the free runs of the bacteria which have velocity $v$. On the other hand, the scattering operator in the right hand side of (1a) expresses the reorientation process (tumbling) occuring during the bacterial migration towards regions of high concentration in chemoattractant $S$. Partial review of plausible reorientation mechanisms. We review below the assumptions existing in the literature concerning the reorientation kernel, in order to motivate the forthcoming work. Delocalization effects. In a previous article [3], we considered mild assumptions of the type: $$0\leq T[S](t,x,v,v^{\prime})\leq C\Big{(}1+S(t,x-\varepsilon v^{\prime})+|% \nabla S|(t,x-\varepsilon v^{\prime})\Big{)},$$ (2) or, $$0\leq T[S](t,x,v,v^{\prime})\leq C\Big{(}1+S(t,x+\varepsilon v)+|\nabla S|(t,x% +\varepsilon v)+|D^{2}S|(t,x+\varepsilon v)\Big{)}.$$ (3) Those assumptions were studied for example in [7, 22] in two or three dimensions of space. Under assumption (2), the bacteria take the decision to reorient with the probability $\lambda[S]=\int T[S](t,x,v^{\prime},v)\ dv^{\prime}$, and then choose a new direction randomly. Therefore the turning frequency increases once cells have entered a favourable area, say (where some delay effect due to internal dynamics is expressed by the space shifting $-\varepsilon v^{\prime}$; the concentration measurement is performed at position $x-\varepsilon v^{\prime}$ by the cell with velocity $v^{\prime}$, turning at position $x$). Intuitively, the cells increase the turning frequency to be confined in highly concentrated areas. The hypothesis (3) is even more intuitive: cells, when they decide to turn (due to a complex averaging over the surrounding area within a radius $\simeq\varepsilon$), simply choose a better new direction $v$ with higher probability. This anticipation measurement can be the result of sending protrusions in the surrounding, or considering that the cells have some finite radius with receptors located all over the membrane (see also [19] for a similar interpretation at the parabolic level – volume effects have also been considered at the kinetic level in [8]). However this interpretation is hardly relevant for bacteria which are small cells, unable to feel gradients and to send protrusions. Remark 1. The gradient in assumption (2) has to be motivated because we highlight that bacteria cannot feel gradients of chemical concentration. As a matter of fact, from a homogeneity viewpoint, $\nabla S$ has the same weight as the time derivative $\partial_{t}S$. 
Therefore we can replace indeed (2) by the assumption $$0\leq T[S](t,x,v,v^{\prime})\leq C\Big{(}1+S(t,x-\varepsilon v^{\prime})+|% \partial_{t}S|(t,x-\varepsilon v^{\prime})\Big{)}\ ,$$ which makes sense biologically (although a more realistic assumption is expressed for example in [10], see below (5)). To see that $\nabla S$ and $\partial_{t}S$ do have the same homogeneity, observe that $$\partial_{t}S=G*\partial_{t}\rho=-G*\nabla\cdot j\ ,$$ where $G$ is the Bessel potential, and the flux $j$ is given by $$j(t,x)=\int_{V}vf(t,x,v)\ dv\ ,\quad|j(t,x)|\leq(\max_{v\in V}|v|)\rho(t,x)\ .$$ As a consequence, in the three dimensional case we have $$|\partial_{t}S|\simeq\left|\frac{1}{|x|}*(\nabla\cdot j)\right|\simeq\frac{1}{% |x|^{2}}*|j|\lesssim\frac{1}{|x|^{2}}*\rho\simeq|\nabla S|\ .$$ The dispersion lemma turned out to be a powerful tool for dealing with those assumptions (even the second derivative of $S$ can be added in (3)). It turns out that putting together those two hypotheses (2) and (3) is a much harder task (due to the fact that we loose the benefit of the decay term in the balance law along the estimates). Some progress in this direction was recently made in [7, 22, 3] but the whole picture is not clear so far. For example in [3] it was shown that in $d=3$ dimensions we have global existence of weak solutions if $$0\leq T[S](t,x,v,v^{\prime})\lesssim 1+S(t,x+\varepsilon v)+S(t,x-\varepsilon v% ^{\prime})+\left|\nabla S(t,x+\varepsilon v)\right|+\left|\nabla S(t,x-% \varepsilon v^{\prime})\right|,$$ (4) provided that the initial data are small in the critical space $L^{3/2}$. If (4) is strengthened by dropping the last term, then a global existence result was established without a smallness assumption on the initial data. The proofs use the dispersion and Strichartz estimates of [5] and rely on the delocalization effects induced by $x+\varepsilon v$ and $x-\varepsilon v^{\prime}$. Interestingly the fact that some directed motion emerges from turning kernels which resemble to assumptions (2), (3) or (4) – as pointed out by the diffusion limit – seems to involve a completely different mechanism from the following commonly described behaviour in E. coli. Persistence of motion in the good directions. As opposed to the previous set of hypothesis, it is commonly accepted that bacteria increase the time spent in running in a favourable direction [38, 13]. That is, the turning kernel is expected to decrease as the chemical concentration increases along the cell’s trajectory, like $$T[S](v,v^{\prime})=T_{0}+\psi(S_{t}+v^{\prime}\cdot\nabla S)\ ,$$ (5) where $\psi$ is nonnegative and decreasing, and $S_{t}+v^{\prime}\cdot\nabla S$ denotes the directional derivative along the free run before turning (see [12], [10] where this hypothesis is injected in a model for D. discoideum self-organization, and its drift-diffusion limit is derived). One may think of $\psi$ to be: $\psi(\eta)=0$ if $\eta>0$ and $\psi(\eta)=1$ if $\eta<0$ for instance. Actually, in [12] the authors explicit two behaviour caricatures, where cells might ”perfectly avoid going in wrong directions”, or ”perfectly follow good directions”. The latter is stressed out there and leads the system to regular solutions, intuitively, whereas the former might develop singularities where cells aggregate. The above mechanism is also part of more complex models including internal variables (which is reviewed and analysed further below). 
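To make (5) concrete, here is a small sketch of such a persistence-type kernel with the step caricature of $\psi$ quoted above; the way the directional derivative is supplied, and all numerical values, are purely illustrative and not part of any model analysed below.
\begin{verbatim}
import numpy as np

# Illustrative evaluation of the persistence-type kernel (5),
# T[S](v, v') = T0 + psi(S_t + v'.grad S), with the step caricature
# psi(eta) = 1 for eta < 0 and 0 for eta > 0 quoted above.  The values of
# T0, S_t and grad S below are made up purely for illustration.
T0 = 1.0
psi = lambda eta: np.where(eta < 0.0, 1.0, 0.0)

def turning_rate(S_t, grad_S, v_prime):
    """T[S] for a cell that was running with velocity v' (independent of v)."""
    directional = S_t + np.dot(v_prime, grad_S)   # derivative along the run
    return T0 + psi(directional)

# a cell running up the gradient keeps running (low rate); one running down tumbles more
grad_S, S_t = np.array([1.0, 0.0]), 0.0
print(turning_rate(S_t, grad_S, np.array([0.5, 0.0])))   # 1.0 (favourable run)
print(turning_rate(S_t, grad_S, np.array([-0.5, 0.0])))  # 2.0 (unfavourable run)
\end{verbatim}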
In fact some molecular concentration denoted by $y$ (standing for the phosphorylated CheY-P) which induces a tumbling behavior, is actually reduced under attractant binding to the membrane receptor (excitation phase). The chemical chain of reactions is in fact inhibited under activation of the membrane complex receptor. On the contrary, expression of a repellent activates this internal network, favouring tumbling. Global existence theory for such a class of models has been discussed in [12] for the one dimensional case. Internal dynamics Complex models of bacterial motility include a cascade of chemical reactions. This chain of activator/inhibitor reactions links the evaluation of the chemical concentration by the membrane receptors to the rotational switch of the flagella, inducing or inhibiting the tumbling phase. Several works propose a chemical network describing this complexity [17, 37]. In particular, the global short term excitation/mid term adaptation is crucial for the cells to crawl up across levels of magnitude of the chemical concentration. Caricatures of such an excitation/adaptation process are depicted in [13, 10] for instance. However we will keep in this paper the necessary abstract level required for our purpose ( for an illustrative example, see section 5). In the following, $y\in{\mathbb{R}}^{m}$ denotes the whole internal state of the cells, which can correspond to huge data of molecular concentrations in the chemical network (in fact $m=2$ in the caricatural excitating/adaptating system). In accordance with previous notations, $p(t,x,v,y)$ denotes the cell density at position $x$, velocity $v$, and with internal state $y$. As before, $f(t,x,v)=\int_{y}p(t,x,v,y)\ dy$ is the cell density in position$\times$velocity space. On the other hand we introduce $\mu(t,x,y)=\int_{v}p(t,x,v,y)\ dv$, and as usual $\rho(t,x)=\int_{v,y}p(t,x,v,y)\ dvdy$. The chemical potential is given by a mean-field equation $-\Delta S+S=\rho(t,x)$. But this could be extended to a more realistic influence of the internal state on the chemical secretion (as it is in [10]) $$-\Delta S+S=\int_{y}\omega(y)\mu(t,x,y)\ dy\ ,$$ under suitable assumptions on the weight $\omega$. The dynamic inside an individual cell is driven by an ODE system representing the protein network in an abstract way: $$\frac{dy}{dt}=G\big{(}y,S(t,x)\big{)}\ ,\quad y\in{\mathbb{R}}^{m}\ .$$ The cell master equation describing the run and tumble processes, and the chemical potential equation are respectively: $$\displaystyle\partial_{t}p+v\cdot\nabla_{x}p+\nabla_{y}\cdot\Big{(}G(y,S)p\Big% {)}$$ $$\displaystyle=\int_{v^{\prime}\in V}T(t,x,v,v^{\prime},y)p(t,x,v^{\prime},y)dv% ^{\prime}$$ $$\displaystyle\qquad-\int_{v^{\prime}\in V}T(t,x,v^{\prime},v,y)p(t,x,v,y)dv^{% \prime}\ ,$$ (6a) $$\displaystyle-\Delta S+S$$ $$\displaystyle=\rho\ ,$$ (6b) The turning kernel $T$ can be decomposed in this context as product between a turning frequency $\lambda[y]$, depending on the internal state only, and a reorientation $K(v,v^{\prime})$ which may describe some persistence in the choice of a new direction with respect to the old one. Without loss of generality here we assume that $K(v,v^{\prime})$ is constant and renormalized as being $K(v,v^{\prime})=1/|V|$. It is worth noticing that this realistic kinetic model may contain enormous informations on the microscopic cell biology, and links different scales of description, because we eventually end up with a cell population $\rho(t,x)$. 
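As an illustration of the internal dynamics $\frac{dy}{dt}=G\big{(}y,S(t,x)\big{)}$ with $m=2$, the following sketch integrates one possible excitation/adaptation caricature along a single cell trajectory. The specific right-hand side is our own illustrative choice (a fast variable tracking $S$ and a slow one adapting to it); it is not the system of [14] nor the networks of [17, 37].
\begin{verbatim}
import numpy as np

# Minimal sketch of internal dynamics dy/dt = G(y, S) with m = 2 variables,
# in the spirit of the excitation/adaptation caricature mentioned above.
# The specific right-hand side is an illustrative assumption: y[0] is a fast
# "excitation" variable and y[1] a slow "adaptation" variable, so that a step
# in S produces a transient deviation y[0] - y[1] that relaxes back to zero.
t_e, t_a = 0.1, 2.0          # fast and slow relaxation times (illustrative)

def G(y, S):
    return np.array([(S - y[0]) / t_e,      # excitation tracks S quickly
                     (S - y[1]) / t_a])     # adaptation tracks S slowly

def S_of_t(t):
    return 1.0 if t > 1.0 else 0.0          # sudden increase of attractant

# explicit Euler along a single cell trajectory (transport omitted)
dt, T = 1e-3, 10.0
y = np.zeros(2)
response = []
for n in range(int(T / dt)):
    y = y + dt * G(y, S_of_t(n * dt))
    response.append(y[0] - y[1])            # proxy for deviation from basal activity

print(max(response), response[-1])          # transient response, then back near 0
\end{verbatim}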
As a partial conclusion, we observe that several scenarios with different underlying kinds of hypotheses, drive the system to positive chemotaxis (at least considering the formal drift-diffusion limit of those). Statement of the main results In this paper we investigate the critical growth of the turning kernel in terms of space norms of the chemical which ensures the global existence for the kinetic model. In particular we consider control on the turning kernel without any dependence upon the velocity variables, that is with some abuse of notations: $$0\leq T[S](t,x,v,v^{\prime})\leq T[S](t,x)\ ,$$ under suitable conditions on the growth of $T[S]$. We exhibit examples in 2D and 3D, restricting ourselves to some $L^{p}$ norms of the chemical (and not of its gradient for instance) for which our method appears to be borderline. In particular, it is natural to ask (see Section 3 of [7] and the concluding remarks in [3]) whether global existence can be established under a hypothesis of the form $$0\leq T[S](t,x,v,v^{\prime})\leq C\Big{(}1+\left\|S(t,\cdot)\right\|_{L^{% \infty}({\mathbb{R}}^{d})}^{\alpha}\Big{)}\ ,$$ (7) where $\alpha>0$. Exponential growth in dimension 2 Consider first the case of dimension $d=2$. It is easy to see using the methods of [7] that we have global existence for any exponent $\alpha>0$ within (7). In analogy with global existence results for nonlinear wave or Schrödinger equations [23, 24, 29, 30] we can ask whether the turning kernel can grow exponentially: $$0\leq T[S](t,x,v,v^{\prime})\leq C\left(1+\exp\left[\left\|S(t,\cdot)\right\|_% {L^{\infty}({\mathbb{R}}^{2})}^{\beta}\right]\right).$$ (8) We will show that this is actually possible: if $0<\beta<1$ then we have global existence for large data; if $\beta=1$ we have global existence for initial data of small mass. Our proof requires $M<\pi$, but we don’t know if this bound is optimal. Also, we don’t know if we may have blow-up for large $M$ or for exponents $\beta>1$. We shall prove the following Theorem 1.1. Consider the system (1) in $d=2$ dimensions under hypothesis (8) and let $1<p<2$. Assume $0<\beta\leq 1$. If $\beta=1$ assume also that $M<\pi$, where $M=\int_{V}f_{0}(x,v)dv$ is the total mass of cells. Then if $f_{0}\in L^{1}_{x}L^{p}_{v}\cap L^{1}_{x,v}$ then (1) has a global weak solution $f$ with $f(t)\in L^{p}_{x}L^{1}_{v}\cap L^{1}_{x,v}$. Almost $L^{\infty}$ growth in 3D Naturally, from the global existence point of view we address the question of a $\|S\|_{\infty}$ growth of the turning kernel in the case of $d=3$ dimensions: $T[S]\leq C\big{(}1+\|S\|_{\infty}\big{)}$. Actually it cannot be handled using our method in three dimensions so far. Even in the simpler case $T[S]\leq C\big{(}1+S(t,x)\big{)}$ our dispersion method fails. Puzzling enough, if $T[S]=C\big{(}1+S(t,x)\big{)}$ then a very simple symmetrization trick does perfectly the job (see section 2). It was noticed in [7] that if $\alpha<1$ in (7) then we have global existence (a sketch of the proof will be given in Section 4.3). The case $\alpha=1$ remains open. In this direction we will use the methods of [3] to show that we have global existence under the assumption $$0\leq T[S](t,x,v,v^{\prime})\leq C\left(1+\left\|S(t,\cdot)\right\|_{L^{r}({% \mathbb{R}}^{3})}^{\alpha}\right)\ ,$$ (9) where $0<\alpha<\frac{r}{r-3}$ and $r$ can be arbitrarily large. Notice that $\frac{r}{r-3}\to 1^{+}$ as $r\to\infty$, which is coherent with the above obstruction. More precisely we shall prove the following Theorem 1.2. Let $d=3$ and $1<p<3/2$. 
Suppose that that the turning kernel $T[S]$ satisfies hypothesis (9) for some $r$ and $\alpha$ verifying: if $1\leq r\leq 3$, $\alpha$ can be any positive number, whereas in case of $3<r<\infty$, $0<\alpha<\frac{r}{r-3}$. Then if $f_{0}\in L^{1}_{x}L^{p}_{v}\cap L^{1}_{x,v}$ then the kinetic model (1) has a global weak solution with $f(t)\in L^{p}_{x}L^{1}_{v}\cap L^{1}_{x,v}$. If we assume that the turning kernel satisfies (7) with $\alpha=1$, we can use the Strichartz estimates of [5] to show global existence, provided that the critical norm $\left\|f_{0}\right\|_{L^{3/2}\left({\mathbb{R}}^{6}_{x,v}\right)}$ is small. Theorem 1.3. Let $d=3$ and assume that the turning kernel satisfies $$0\leq T[S](t,x,v,v^{\prime})\leq C\left(1+\left\|S(t,\cdot)\right\|_{L^{\infty% }({\mathbb{R}}^{3})}\right)\ .$$ Assume also that $f_{0}\in L^{1}\left({\mathbb{R}}^{6}_{x,v}\right)\cap L^{3/2}\left({\mathbb{R}% }^{6}_{x,v}\right)$ and that $\left\|f_{0}\right\|_{L^{3/2}\left({\mathbb{R}}^{6}_{x,v}\right)}$ is sufficiently small. Then (1) has a global weak solution. Internal dynamics. We shall prove the following theorem for global existence in three dimensions of space. Theorem 1.4. Let $d=3$. Assume that the turning kernel has the form $T=\lambda[y]\times K(v,v^{\prime})$ where $K$ is uniformly bounded, and $\lambda$ grows at most linearly: $\lambda[y]\leq C\big{(}1+|y|\big{)}$. On the other hand, assume that $G$ has a (sub)critical growth with respect to $y$ and $S$: there exists $0\leq\alpha<1$ such that $$|G|(y,S)\leq C\Big{(}1+|y|+S^{\alpha}\Big{)}\ .$$ Then there exists an exponent $1<p<3/2$ such that the system (6) admits globally existing solutions with $p\in L^{p}_{x}L^{1}_{v}L^{1}_{y}$. 2 The dispersion lemma applied to kinetic chemotaxis, and the symmetrization trick In this section we present a direct application of the dispersion lemma [5] to system (1). As a consequence we are led to the following question, which is decoupled from (1): Investigate the critical norm for the turning kernel ensuring the bound $$0\leq T[S]\leq C\Big{(}1+\|\rho(t)\|_{L^{p}}\Big{)}\ ,$$ for $p<d^{\prime}$ in dimension $d$. The rest of this paper will be devoted to this question of critical growth. Lemma 2.1. Assume the turning kernel can be controlled without any dependence on the velocity variables $v$ nor $v^{\prime}$: $$0\leq T[S](t,x,v,v^{\prime})\leq T[S](t,x).$$ Then, applying the dispersion estimate, we get the following for $p\in[1,d^{\prime})$: $$\|\rho(t)\|_{L^{p}}\leq\|f_{0}(x-tv,v)\|_{L^{p}_{x}L^{1}_{v}}+|V|^{1/p}\int_{s% =0}^{t}(t-s)^{-\lambda}\int_{x}T[S](s,x)\rho(s,x)\ dxds\ ,$$ where $\lambda=d/p^{\prime}$. Observe that the condition $d<p^{\prime}$ is crucially required here to ensure further the time integrability of the right-hand-side. Proof. As usual we represent the solution of (1) as $$f(t,x,v)\leq f_{0}(x-tv,v)+\int_{0}^{t}T[S](s,x-(t-s)v)\rho(s,x-(t-s)v)ds\ .$$ Using dispersion we get immediately $$\|f(t,x,v)\|_{L^{p}_{x}L^{1}_{v}}\leq\|f_{0}(x-tv,v)\|_{L^{p}_{x}L^{1}_{v}}+% \int_{0}^{t}\frac{1}{(t-s)^{d(1-1/p)}}\Big{\|}T[S](s,x)\rho(s,x)\Big{\|}_{L^{1% }_{x}L^{p}_{v}}\!ds\ .$$ ∎ As an observation, we state also a second lemma, which is interesting in its own right, but which will not be used in the sequel. Following [34], it claims that a kernel which is symmetric with respect to $v$ and $v^{\prime}$ ensures global existence. It is relevant from the mathematical point of view because we consider bounds that do not depend on $v$ and $v^{\prime}$. 
It is biologically irrelevant however in the case of a purely symmetric kernel because no directed motion emerges in the drift-diffusion limit [7]. Lemma 2.2. Consider the scattering equation, $$\partial_{t}f+v\cdot\nabla_{x}f=\int_{V}\left(K(t,x,v,v^{\prime})f(t,x,v^{% \prime})-K(t,x,v^{\prime},v)f(t,x,v)\right)dv^{\prime}\ .$$ (10) and assume that $K$ is symmetric w.r.t. $v$ and $v^{\prime}$, i.e. $$K(t,x,v,v^{\prime})=K(t,x,v^{\prime},v)\geq 0.$$ (11) Then all $L^{p}_{x}L^{p}_{v}-$norms of the density $f$ ($1\leq p<\infty$) are uniformly estimated like $\left\|f(t)\right\|_{L^{p}_{x,v}}\leq\left\|f_{0}\right\|_{L^{p}_{x,v}}$. Proof. First rewrite (10) using the symmetry property. It becomes $$\partial_{t}f+v\cdot\nabla_{x}f=\int_{V}K(t,x,v,v^{\prime})\left(f(t,x,v^{% \prime})-f(t,x,v)\right)dv^{\prime}\ .$$ (12) Next multiply (12) by $pf^{p-1}(t,x,v)$ to get $$\displaystyle\partial_{t}f^{p}+v\cdot\nabla_{x}f^{p}=p\int f^{p-1}(t,x,v)K(t,x% ,v,v^{\prime})\left(f(t,x,v^{\prime})-f(t,x,v)\right)dv^{\prime}$$ Integrate with respect to $x$ and $v$ to get $$\displaystyle\dfrac{d}{dt}\iint f^{p}dvdx=p\iiint f^{p-1}(t,x,v)K(t,x,v,v^{% \prime})\left(f(t,x,v^{\prime})-f(t,x,v)\right)dv^{\prime}dvdx\ .$$ We can symmetrize the latter expression to obtain eventually $$\displaystyle\dfrac{d}{dt}\iint f^{p}dvdx=\\ \displaystyle-\frac{p}{2}\iiint K(v,v^{\prime})\left(f^{p-1}(t,x,v)-f^{p-1}(t,% x,v^{\prime})\right)\left(f(t,x,v)-f(t,x,v^{\prime})\right)dv^{\prime}dvdx\ .$$ Since $f\geq 0$ we have $$\left(f^{p-1}(t,x,v)-f^{p-1}(t,x,v^{\prime})\right)\left(f(t,x,v)-f(t,x,v^{% \prime})\right)\geq 0\ ,$$ because these two factors always have the same sign. It follows that $$\dfrac{d}{dt}\iint f^{p}dvdx\leq 0\ .$$ ∎ 3 Exponential growth in $L^{\infty}$ in dimension 2 In this Section we prove Theorem 1.1. Working as in the proof of Trudinger’s inequality we expand the exponential into a power series and use Young’s inequality as in [7, 3] to estimate each term. The dispersion method is then used as in [3] through (2.1). Throughout these processes we keep track of the growth of the various constants in order to make sure that the resulting series converges. A similar approach has been used in [23, 24, 29, 30] to study nonlinear wave and Schrödinger equations. We will need the following two Lemmas. Lemma 3.1. Let $G(x)=\frac{1}{4\pi}\int_{0}^{\infty}e^{-\pi\frac{|x|^{2}}{s}}e^{-\frac{s}{4\pi% }}\frac{ds}{s}$. There exists a positive constant $A$ such that $$G(x)\leq A+\frac{1}{2\pi}\left|\log|x|\,\right|\ \ ,\ \ |x|\leq 1.$$ (13) Proof. Fix $x$ with $|x|\leq 1$. Write $G(x)=G_{1}(x)+G_{2}(x)+G_{3}(x)$ where $$\displaystyle G_{1}(x)$$ $$\displaystyle=\frac{1}{4\pi}\int_{0}^{|x|^{2}}e^{-\pi\frac{|x|^{2}}{s}}e^{-% \frac{s}{4\pi}}\frac{ds}{s},$$ $$\displaystyle G_{2}(x)$$ $$\displaystyle=\frac{1}{4\pi}\int_{|x|^{2}}^{1}e^{-\pi\frac{|x|^{2}}{s}}e^{-% \frac{s}{4\pi}}\frac{ds}{s},$$ $$\displaystyle G_{3}(x)$$ $$\displaystyle=\frac{1}{4\pi}\int_{1}^{\infty}e^{-\pi\frac{|x|^{2}}{s}}e^{-% \frac{s}{4\pi}}\frac{ds}{s}.$$ For $G_{1}$ use $e^{-s/4\pi}\leq 1$ and then change variables $s\mapsto t$, where $s=|x|^{2}t$, to get $G_{1}(x)\leq\frac{1}{4\pi}\int_{0}^{1}e^{-\pi\frac{1}{t}}\frac{dt}{t}=:A_{1}$. For $G_{2}$ we have $G_{2}(x)\leq\frac{1}{4\pi}\int_{|x|^{2}}^{1}\frac{ds}{s}=\frac{-\log|x|}{2\pi}$. For $G_{3}$ use $e^{-\pi|x|^{2}/s}\leq 1$ to get $G_{3}(x)\leq\frac{1}{4\pi}\int_{1}^{\infty}e^{-\frac{s}{4\pi}}ds=:A_{2}$. ∎ Remark 2. 
In fact the exact asymptotics of $G$ near the origin is: $$G(x)=-\frac{1}{2\pi}\log|x|+\gamma+\frac{1}{2\pi}\log 2+o(1)\ ,$$ where $\gamma$ is the Euler constant. Lemma 3.2. For $x>0$ define $\Gamma(x)=\int_{0}^{\infty}t^{x-1}e^{-t}dt.$ Then (Stirling’s formula) $$\displaystyle n!=\Gamma(n+1)\sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\ \ (% n\to+\infty),$$ (14) $$\displaystyle\Gamma(x+1)\sim\sqrt{2\pi x}\left(\frac{x}{e}\right)^{x}\ \ (x\to% +\infty).$$ (15) Moreover, for all $\beta>0$, $x>0$, $$e^{x}>\frac{x^{\beta}}{\Gamma(\beta+1)}.$$ (16) Proof. (14) and (15) are well known. For (16) we have $$\Gamma(\beta+1)=\int_{0}^{\infty}t^{\beta}e^{-t}dt>\int_{x}^{\infty}t^{\beta}e% ^{-t}dt>x^{\beta}\int_{x}^{\infty}e^{-t}dt=x^{\beta}e^{-x}.$$ ∎ Proof of Theorem 1.1. Recall from section 2 that a control of the turning kernel like $T[S]\leq C\big{(}\|\rho(t)\|_{L^{p}}\big{)}$, is sufficient to guarantee global existence. The rest of this section is devoted to the proof of this estimate. Pick $1<p<2$ and set $\mu=p^{\prime}>2$. In case of $\beta=1$, assume in addition that $\mu<\frac{2\pi}{M}$. Write $S=S^{long}+S^{short}$ where $$S^{long}=\left(\mathbbm{1}_{|x|>1}G(x)\right)\ast\rho\ ,\mbox{and}\quad S^{% short}=\left(\mathbbm{1}_{|x|\leq 1}G(x)\right)\ast\rho\ .$$ Since $0<\beta\leq 1$ we have $$\left\|S\right\|_{L^{\infty}}^{\beta}\leq\left(\left\|S^{\text{long}}\right\|_% {L^{\infty}}+\left\|S^{\text{short}}\right\|_{L^{\infty}}\right)^{\beta}\leq% \left\|S^{\text{long}}\right\|_{L^{\infty}}^{\beta}+\left\|S^{\text{short}}% \right\|_{L^{\infty}}^{\beta}\ ,$$ where we have used the fact that $(x+y)^{\beta}\leq x^{\beta}+y^{\beta}$ for $x,y>0$ and $0<\beta\leq 1$. Therefore $$\exp\left\{\left\|S(t,\cdot)\right\|_{L^{\infty}}^{\beta}\right\}\leq\exp\left% \{\left\|S^{long}(t,\cdot)\right\|_{L^{\infty}}^{\beta}\right\}\cdot\exp\left% \{\left\|S^{short}(t,\cdot)\right\|_{L^{\infty}}^{\beta}\right\}.$$ For $S^{long}$ we have $$\left\|S^{long}\right\|_{L^{\infty}}^{\beta}\leq\left\|\mathbbm{1}_{|x|>1}G(x)% \right\|_{L^{\infty}}^{\beta}\left\|\rho\right\|_{L^{1}}^{\beta}\leq cM^{\beta},$$ where $c$ is a positive constant (depending on $\beta$), therefore $$\displaystyle\exp\left\{\left\|S(t,\cdot)\right\|_{L^{\infty}}^{\beta}\right\}$$ $$\displaystyle\leq e^{cM^{\beta}}\exp\left\{\left\|S^{short}(t,\cdot)\right\|_{% L^{\infty}}^{\beta}\right\}$$ $$\displaystyle=e^{cM^{\beta}}\left(1+\sum_{j=1}^{\infty}\frac{1}{j!}\left\|S^{% short}(t,\cdot)\right\|_{L^{\infty}}^{j\beta}\right).$$ For $S^{short}$ we have $$\left\|S^{short}(t,\cdot)\right\|_{L^{\infty}}\leq\left\|G(x)\mathbbm{1}_{|x|% \leq 1}\right\|_{L^{\mu j}}\left\|\rho\right\|_{L^{\frac{\mu j}{\mu j-1}}}$$ For all $j\geq 1$ we have $\frac{\mu j}{\mu j-1}\leq\frac{\mu}{\mu-1}=p$ therefore $$\displaystyle\left\|S^{short}(t,\cdot)\right\|_{L^{\infty}}$$ $$\displaystyle\leq\left\|G(x)\mathbbm{1}_{|x|\leq 1}\right\|_{L^{\mu j}}\left\|% \rho(t,\cdot)\right\|_{L^{1}}^{1-\frac{1}{j}}\left\|\rho(t,\cdot)\right\|_{L^{% p}}^{\frac{1}{j}}$$ $$\displaystyle=\left\|G(x)\mathbbm{1}_{|x|\leq 1}\right\|_{L^{\mu j}}M^{1-\frac% {1}{j}}\left\|\rho(t,\cdot)\right\|_{L^{p}}^{\frac{1}{j}},$$ therefore $$\left\|S^{short}\right\|_{L^{\infty}}^{j\beta}\leq\left\|G(x)\mathbbm{1}_{|x|% \leq 1}\right\|_{L^{\mu j}}^{j\beta}M^{j\beta-\beta}\left\|\rho(t,\cdot)\right% \|_{L^{p}}^{\beta}.$$ Consequently $$\exp\left\{\left\|S(t,\cdot)\right\|_{L^{\infty}}^{\beta}\right\}\leq e^{cM^{% \beta}}\left(1+\left[\sum_{j=1}^{\infty}\frac{1}{j!}\left\|G(x)\mathbbm{1}_{|x% |\leq 
1}\right\|_{L^{\mu j}}^{j\beta}M^{j\beta}\right]M^{-\beta}\left\|\rho% \right\|_{L^{p}}^{\beta}\right)\ .$$ (17) We need to guarantee that the series in the above right-hand-side converges. Using (13) we have: $$\displaystyle\left\|G(x)\mathbbm{1}_{|x|\leq 1}\right\|_{L^{\mu j}}$$ $$\displaystyle\leq\left\|A+\frac{1}{2\pi}\left|\log|x|\right|\,\right\|_{L^{\mu j% }(|x|\leq 1)}$$ $$\displaystyle\leq A\pi^{\frac{1}{\mu j}}+\frac{1}{2\pi}\left\|\log|x|\,\right% \|_{L^{\mu j}(|x|\leq 1)}\ ,$$ and also $$\displaystyle\left\|\log|x|\,\right\|_{L^{\mu j}(|x|\leq 1)}$$ $$\displaystyle=\left(2\pi\int_{0}^{1}\left(\,-\log r\,\right)^{\mu j}rdr\right)% ^{1/\mu j}$$ $$\displaystyle\leq\left(2\pi\int_{0}^{\infty}s^{\mu j}e^{-2s}ds\right)^{1/\mu j}$$ $$\displaystyle\leq\left(2\pi\int_{0}^{\infty}\frac{s^{\mu j}}{\frac{s^{\mu j}}{% \Gamma(\mu j+1)}}e^{-s}ds\right)^{1/\mu j}\ \ \ \text{by}\ \ \ \eqref{exp6D}$$ $$\displaystyle=\left(2\pi\right)^{\frac{1}{\mu j}}\left(\Gamma(\mu j+1)\right)^% {\frac{1}{\mu j}}\ .$$ As a consequence $$\left\|G(x)\mathbbm{1}_{|x|\leq 1}\right\|_{L^{\mu j}}\leq A\pi^{\frac{1}{\mu j% }}+\frac{1}{2\pi}\left(2\pi\right)^{\frac{1}{\mu j}}\left(\Gamma(\mu j+1)% \right)^{\frac{1}{\mu j}}.$$ (18) Then the infinite sum in (17) can be estimated by $$\sum_{j=1}^{\infty}\frac{1}{j!}\left(A\pi^{\frac{1}{\mu j}}+\frac{1}{2\pi}% \left(2\pi\right)^{\frac{1}{\mu j}}\left(\Gamma(\mu j+1)\right)^{\frac{1}{\mu j% }}\right)^{j\beta}M^{j\beta}.$$ (19) We’ll show that for $\beta<1$ the series converges for any mass $M$, and that for $\beta=1$ it converges thanks to the restriction $\frac{M\mu}{2\pi}<1$. Using the root test we have $$\displaystyle\left(\frac{1}{j!}\left(A\pi^{\frac{1}{\mu j}}+\frac{1}{2\pi}% \left(2\pi\right)^{\frac{1}{\mu j}}\left(\Gamma(\mu j+1)\right)^{\frac{1}{\mu j% }}\right)^{j\beta}M^{j\beta}\right)^{\frac{1}{j}}$$ $$\displaystyle\ \ \ \ =\frac{1}{\left(j!\right)^{\frac{1}{j}}}\left(A\pi^{\frac% {1}{\mu j}}+\frac{1}{2\pi}\left(2\pi\right)^{\frac{1}{\mu j}}\left(\Gamma(\mu j% +1)\right)^{\frac{1}{\mu j}}\right)^{\beta}M^{\beta}$$ $$\displaystyle\ \ \ \ \leq\frac{1}{\left(j!\right)^{\frac{1}{j}}}\left(A^{\beta% }\pi^{\frac{\beta}{\mu j}}+\left(\frac{1}{2\pi}\right)^{\beta}\left(2\pi\right% )^{\frac{\beta}{\mu j}}\left(\Gamma(\mu j+1)\right)^{\frac{\beta}{\mu j}}% \right)M^{\beta},$$ We have $\frac{1}{\left(j!\right)^{\frac{1}{j}}}A^{\beta}\pi^{\frac{\beta}{\mu j}}\to 0$, therefore it remains to examine the limit of $$\frac{1}{\left(j!\right)^{\frac{1}{j}}}\left(\frac{1}{2\pi}\right)^{\beta}% \left(2\pi\right)^{\frac{\beta}{\mu j}}\left(\Gamma(\mu j+1)\right)^{\frac{% \beta}{\mu j}}M^{\beta}.$$ (20) From (14) we have $j!\sim\sqrt{2\pi j}\left(\frac{j}{e}\right)^{j}$ therefore $\left(j!\right)^{\frac{1}{j}}\sim\left(2\pi j\right)^{\frac{1}{2j}}\frac{j}{e}% \sim\frac{j}{e}.$ From (15) we have $\Gamma(\mu j+1)\sim\sqrt{2\pi\mu j}\left(\frac{\mu j}{e}\right)^{\mu j}$ therefore $$\left(\Gamma(\mu j+1)\right)^{\frac{\beta}{\mu j}}\sim\left(2\pi\mu j\right)^{% \frac{\beta}{2\mu j}}\left(\frac{\mu j}{e}\right)^{\beta}\sim\left(\frac{\mu j% }{e}\right)^{\beta}.$$ Therefore $$\eqref{exp6E}\ \sim\left(\frac{M}{2\pi}\right)^{\beta}\frac{\left(\frac{\mu j}% {e}\right)^{\beta}}{\frac{j}{e}}\to\begin{cases}0&,\ \ \text{if}\ \ \beta<1\\ \frac{M\mu}{2\pi}&,\ \ \text{if}\ \ \beta=1\end{cases}.$$ The limit is smaller than 1 in all cases, therefore the series converges. 
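As a purely numerical sanity check of the root-test asymptotics above (it plays no role in the proof), one can evaluate the $j$-th root of the general term of (19) with log-Gamma arithmetic; the constant $A$ from (13) and the values of $M$, $\mu$, $\beta$ below are illustrative.
\begin{verbatim}
import math

# Numerical check of the root test: the j-th root of the general term of (19)
# should tend to 0 for beta < 1 and to M*mu/(2*pi) for beta = 1.
A = 1.0                      # illustrative value of the constant in (13)

def root_test_term(j, mu, beta, M):
    log_fact = math.lgamma(j + 1)                                # log j!
    gamma_part = math.exp(math.lgamma(mu * j + 1) / (mu * j))    # Gamma(mu*j+1)**(1/(mu*j))
    inner = A * math.pi ** (1 / (mu * j)) \
            + (2 * math.pi) ** (1 / (mu * j)) * gamma_part / (2 * math.pi)
    log_aj = -log_fact + j * beta * (math.log(inner) + math.log(M))
    return math.exp(log_aj / j)                                  # a_j**(1/j)

mu, M = 2.5, 1.0             # mu = p' > 2; for beta = 1 we need mu < 2*pi/M (holds here)
for beta in (0.5, 1.0):
    print(beta, [round(root_test_term(j, mu, beta, M), 4) for j in (10, 100, 1000)])
# expected: values tending to 0 for beta = 0.5, and to M*mu/(2*pi) ~ 0.398 for beta = 1
\end{verbatim}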
Summing up, we obtain $$T[S](t,x)\leq C\left(1+\exp\left\{\left\|S(t,\cdot)\right\|_{L^{\infty}}^{\beta}\right\}\right)\leq C+C\left\|\rho(t,\cdot)\right\|_{L^{p}}^{\beta}\ .$$ Recall that we have chosen $p<2$ such that Lemma 2.1 applies. We end up with $$\left\|\rho(t,x)\right\|_{L^{p}}\leq t^{-\lambda}\|f_{0}(x,v)\|_{L^{1}_{x}L^{p}_{v}}+C\int_{0}^{t}\Big{(}1+\left\|\rho(s,x)\right\|_{L^{p}_{x}}^{\beta}\Big{)}\frac{ds}{(t-s)^{\lambda}}\ ,$$ where $\lambda=2/p^{\prime}<1$, so that we can bootstrap. ∎ 4 (Almost) $L^{\infty}$ growth in dimension $3$ 4.1 Almost $L^{\infty}$ growth In this section we consider the kinetic model (1) in $d=3$ dimensions under hypothesis (9). Proof of Theorem 1.2. If $1\leq r<3$, $\alpha>0$ and $T[S]$ satisfies (9), then $T[S]$ can be estimated a priori in terms of the mass $M$. Indeed, $$\left\|S(t,\cdot)\right\|_{L^{r}({\mathbb{R}}^{3})}\leq\left\|G\right\|_{L^{r}({\mathbb{R}}^{3})}\left\|\rho(t,\cdot)\right\|_{L^{1}({\mathbb{R}}^{3})}\leq CM,$$ because $G(x)\sim\frac{C}{|x|}$ for small $|x|$, and $G(x)$ decays exponentially for large $|x|$. Therefore $T[S](t,x,v,v^{\prime})\leq C+CM^{\alpha}$ and global existence follows easily. Assume now that $3\leq r<\infty$ and $0<\alpha<\frac{r}{r-3}$. Choose $p$ defined by $$\frac{1}{p^{\prime}}=\frac{\alpha(r-3)}{3r}<\frac{1}{3}\ ,$$ and define $B$ such that $$\frac{1}{B^{\prime}}=\frac{1}{3}-\frac{1}{r}=\frac{1}{\alpha p^{\prime}}\ .$$ Using fractional integration [26] we get for the signal $S=G*\rho\leq\frac{C}{|x|}*\rho(x)$ (both short and long range parts), $$\displaystyle\left\|S\right\|_{L^{r}}$$ $$\displaystyle\leq C\left\|\frac{1}{|x|}\ast\rho\right\|_{L^{r}}\leq C\left\|\rho\right\|_{L^{B}}\leq CM^{1-\frac{p^{\prime}}{B^{\prime}}}\left\|\rho\right\|_{L^{p}}^{\frac{p^{\prime}}{B^{\prime}}}.$$ Consequently we get the crucial estimate required in Lemma 2.1: $$T[S](t,x)\leq C+C\left\|S\right\|_{L^{r}}^{\alpha}\leq C+C\left\|\rho\right\|_{L^{p}}\ ,$$ where $p$ is smaller than $3/2$. We can complete the proof as in Theorem 1.1. ∎ 4.2 $L^{\infty}$ growth: global existence for small data Proof of Theorem 1.3. We have $$\partial_{t}f+v\cdot\nabla_{x}f\leq C\int_{V}\Big{(}1+\left\|S(t)\right\|_{L^{\infty}}\Big{)}f(t,x,v^{\prime})\ dv^{\prime}=C\Big{(}1+\left\|S(t)\right\|_{L^{\infty}}\Big{)}\rho(t,x).$$ (21) To apply the Strichartz estimate [5] we need four parameters $q,p,r,a$ such that $$\displaystyle 1\leq r\leq p\leq\infty$$ (22a) $$\displaystyle 0\leq\frac{1}{r}-\frac{1}{p}<\frac{1}{3}$$ (22b) $$\displaystyle 1\leq\frac{1}{r}+\frac{1}{p}$$ (22c) $$\displaystyle\frac{2}{q}=3\left(\frac{1}{r}-\frac{1}{p}\right)$$ (22d) $$\displaystyle a=\frac{2pr}{p+r}$$ (22e) More conditions will be imposed later. We get: $$\displaystyle\left\|f\right\|_{L^{q}_{t}L^{p}_{x}L^{r}_{v}}$$ $$\displaystyle\leq\left\|f_{0}\right\|_{L^{a}_{x,v}}+C\Big{\|}(1+\left\|S(t)\right\|_{L^{\infty}})\rho(t,x)\Big{\|}_{L^{q^{\prime}}_{t}L^{r}_{x}L^{p}_{v}}$$ $$\displaystyle=\left\|f_{0}\right\|_{L^{a}_{x,v}}+C(|V|)\left\|(1+\left\|S(t)\right\|_{L^{\infty}})\left\|\rho(t,x)\right\|_{L^{r}_{x}}\ \right\|_{L^{q^{\prime}}_{t}}.$$ (23) In the sequel we omit the constant part in the growth of the turning kernel for the sake of clarity. 
Assume $$p>\frac{3}{2}.$$ (24) Then $p^{\prime}<3$ therefore, $$\left\|S(t)\right\|_{L^{\infty}}\leq\left\|G*\rho(t)\right\|_{L^{\infty}}\leq% \left\|G\right\|_{L^{p^{\prime}}}\left\|\rho(t)\right\|_{L^{p}}\leq C\left\|% \rho(t)\right\|_{L^{p}},$$ (25) because $G(x)\sim\frac{C}{|x|}$ for small $|x|$, and $G(x)$ decays rapidly for large $|x|$. Moreover, since $r\leq p$ we have by interpolation, $$\left\|\rho(t)\right\|_{L^{r}}\leq\left\|\rho(t)\right\|_{L^{1}}^{1-\frac{p^{% \prime}}{r^{\prime}}}\left\|\rho(t)\right\|_{L^{p}}^{\frac{p^{\prime}}{r^{% \prime}}}=M^{1-\frac{p^{\prime}}{r^{\prime}}}\left\|\rho(t)\right\|_{L^{p}}^{% \frac{p^{\prime}}{r^{\prime}}}.$$ Therefore $$\displaystyle\left\|\ \left\|S(t)\right\|_{L^{\infty}}\left\|\rho(t,x)\right\|% _{L^{r}_{x}}\ \right\|_{L^{q^{\prime}}_{t}}$$ $$\displaystyle\leq C\left\|\ \left\|\rho(t)\right\|_{L^{p}}\ \left\|\rho(t)% \right\|_{L^{p}}^{\frac{p^{\prime}}{r^{\prime}}}\ \right\|_{L^{q^{\prime}}_{t}}$$ $$\displaystyle=\left\|\ \left\|\rho(t)\right\|_{L^{p}}\ \right\|_{L^{q^{\prime}% \left(1+\frac{p^{\prime}}{r^{\prime}}\right)}_{t}}^{1+\frac{p^{\prime}}{r^{% \prime}}}$$ Now $$\left\|\rho(t)\right\|_{L^{p}}=\left\|f(t,x,v)\right\|_{L^{p}_{x}L^{1}_{v}}% \leq C(|V|)\left\|f(t,x,v)\right\|_{L^{p}_{x}L^{r}_{v}}$$ therefore $$\left\|\ \left\|S(t)\right\|_{L^{\infty}}\left\|\rho(t,x)\right\|_{L^{r}_{x}}% \ \right\|_{L^{q^{\prime}}_{t}}\leq C\left\|f(t,x,v)\right\|_{L^{q^{\prime}% \left(1+\frac{p^{\prime}}{r^{\prime}}\right)}_{t}L^{p}_{x}L^{r}_{v}}^{1+\frac{% p^{\prime}}{r^{\prime}}}.$$ Suppose that $$q^{\prime}\left(1+\frac{p^{\prime}}{r^{\prime}}\right)=q.$$ (26) Then $$\left\|\ \left\|S(t)\right\|_{L^{\infty}}\left\|\rho(t,x)\right\|_{L^{r}_{x}}% \ \right\|_{L^{q^{\prime}}_{t}}\leq C\left\|f(t,x,v)\right\|_{L^{q}_{t}L^{p}_{% x}L^{r}_{v}}^{1+\frac{p^{\prime}}{r^{\prime}}}$$ (27) and plugging this into (23) we get $$\left\|f(t,x,v)\right\|_{L^{q}_{t}L^{p}_{x}L^{r}_{v}}\leq\left\|f_{0}\right\|_% {L^{a}_{x,v}}+C\left\|f(t,x,v)\right\|_{L^{q}_{t}L^{p}_{x}L^{r}_{v}}^{1+\frac{% p^{\prime}}{r^{\prime}}}$$ If $\left\|f_{0}\right\|_{L^{a}_{x,v}}$ is small enough then we can bootstrap. We need to verify that there exist $(q,p,r,a)$ satisfying (22), (24) and (26). There are many possible choices. For example, if we want initial data $f_{0}\in L^{a}_{x,v}$ with $a=\frac{3}{2}$ (critical exponent in dimension 3) we must choose $p$ and $r$ so that $\frac{1}{p}+\frac{1}{r}=\frac{4}{3}$. The complete set of exponents solving these constraints is: $$q=1+\sqrt{2}\ ,\ p=\frac{9+3\sqrt{2}}{7}\ ,\ r=3\left(\sqrt{2}-1\right)\ ,$$ where all conditions are fulfilled. ∎ 4.3 Sublinear $L^{\infty}$ growth To close this section we give a quick sketch of the observation in [7] that the hypothesis $$0\leq T[S](t,x,v,v^{\prime})\leq C\Big{(}1+\left\|S(t,\cdot)\right\|_{L^{% \infty}}^{\alpha}\Big{)}\ ,$$ implies global existence. 
Fix $p$ and $q$ such that $$\frac{\alpha}{3}+\frac{1}{p}=\frac{1}{q}\ ,\quad p>\frac{3}{2}\ .$$ Then we have the following elliptic estimate (see below), $$\left\|S(t,\cdot)\right\|_{L^{\infty}}=\left\|G*\rho(t)\right\|_{L^{\infty}}% \leq C(M)\left\|\rho(t)\right\|_{L^{p}}^{p^{\prime}/3}\ .$$ (28) Therefore (again omitting the constant contribution of the turning kernel) $$f(t,x,v)\leq f_{0}(x-tv,v)+C\int_{0}^{t}\left\|\rho(s)\right\|_{L^{p}}^{\alpha% }\rho(s,x-(t-s)v)ds.$$ Take the $L^{p}_{x}L^{q}_{v}$ norm and use the dispersion estimate with $\lambda=3(1/q-1/p)=\alpha$ to get $$\displaystyle\left\|f(t)\right\|_{L^{p}_{x}L^{q}_{v}}$$ $$\displaystyle\leq t^{-\alpha}\|f_{0}(x,v)\|_{L^{q}_{x}L^{p}_{v}}+|V|^{1/p}\int% _{0}^{t}\frac{1}{(t-s)^{\alpha}}\left\|\rho(s)\right\|_{L^{p}}^{p^{\prime}% \alpha/3}\left\|\rho(s)\right\|_{L^{q}_{x}}ds$$ (29) $$\displaystyle\leq t^{-\alpha}\|f_{0}(x,v)\|_{L^{q}_{x}L^{p}_{v}}+C\int_{0}^{t}% \frac{1}{(t-s)^{\alpha}}\left\|\rho(s)\right\|_{L^{p}}^{p^{\prime}\alpha/3+p^{% \prime}/q^{\prime}}\ ds,$$ (30) where $p^{\prime}\alpha/3+p^{\prime}/q^{\prime}=1$ by definition. To prove the elliptic estimate (28) write $$S\leq C\rho*\frac{\chi_{|x|\leq R}}{|x|}+C\rho*\frac{\chi_{|x|\geq R}}{|x|}$$ Then, if $p^{\prime}<3$, $$\displaystyle\|S\|_{L^{\infty}}$$ $$\displaystyle\leq C\|\rho\|_{L^{p}}\|\frac{\chi_{|x|\leq R}}{|x|}\|_{L^{p^{% \prime}}}+C\|\rho\|_{L^{1}}\|\frac{\chi_{|x|\geq R}}{|x|}\|_{L^{\infty}}$$ $$\displaystyle\leq C\Big{(}\|\rho\|_{L^{p}}R^{\frac{3}{p^{\prime}}-1}+\|\rho\|_% {L^{1}}R^{-1}\Big{)}\ .$$ Choose $R$ so that $\|\rho\|_{L^{p}}R^{\frac{3}{p^{\prime}}-1}=\|\rho\|_{L^{1}}R^{-1}$, i.e. choose $$R=\left(\frac{\|\rho\|_{L^{1}}}{\|\rho\|_{L^{p}}}\right)^{p^{\prime}/3}.$$ This gives $$\|S\|_{L^{\infty}}\leq C\|\rho\|_{L^{p}}^{p^{\prime}/3}\|\rho\|_{L^{1}}^{1-p^{% \prime}/3}=CM^{1-p^{\prime}/3}\|\rho\|_{L^{p}}^{p^{\prime}/3}\ .$$ 5 Extension to internal dynamics Recall the kinetic model with internal dynamics: $$\displaystyle\partial_{t}p+v\cdot\nabla_{x}p+\nabla_{y}\cdot\Big{(}G(y,S)p\Big% {)}=$$ $$\displaystyle\int_{v^{\prime}\in V}\lambda[y]K(v,v^{\prime})p(t,x,v^{\prime},y% )dv^{\prime}$$ $$\displaystyle\qquad\qquad-\lambda[y]p(t,x,v,y)\ ,$$ (31a) $$\displaystyle-\Delta S+S=\rho\ ,$$ (31b) Assuming that $K$ is bounded we reduce to $K=1/|V|$ without loss of generality. This model takes into account the transport along characteristics of the internal cellular dynamics $$\frac{dy}{dt}=G(y,S(t,x))\ ,\quad y\in{\mathbb{R}}^{m}\ .$$ For E. coli, the regulatory network described by $G$ is made of six main proteins essentially (named Che-proteins), and the main events are methylation and phosphorylation. Indeed, in the absence of a chemoattractant (basal activity), the phosphorylated protein Che-Y is supposed to diffuse inside the cell and to reach the flagella motor complex, enhancing switch between CCW rotation and CW rotation, that is tumbling. This transduction pathway is in fact inhibited when the chemoattractant (say aspartate) binds a membrane receptor, triggering methylation of the membrane receptor complex, and eventually inhibition of the tumbling process. This network exhibits a remarkable excitation/adaptation behavior, which is crucial for cell migration. For the sake of simplicity, one deals in general with a system of two coupled ODEs which captures the same features. 
This system should be excitable with slow adaptation – there is a single, stable equilibrium state, but a perturbation above a small threshold triggers a large excursion in the phase plane (see figure 1 (left)) – and possibly one-sided – in case of positive chemotaxis, the cells do not respond specifically to a decrease of the chemoattractant concentration [4]. This characterization of dynamical systems is very well known in biological modeling, as it is the basis of the FitzHugh-Nagumo models [28] for potential activity in axons. Furthermore, it is often associated with the phenomenon of pulse wave propagation (e.g. calcium waves) [25]. In the context of cell migration, it is also involved in the slime mold amoebae D. discoideum aggregation process, where the chemoattractant cAMP is relayed by the cells [20, 10]. To be more concrete, the following set of equations is generally proposed [13] $$\left\{\begin{array}[]{rll}\dfrac{dy_{1}}{dt}=&\dfrac{1}{\tau_{e}}\big{(}h(S)-(y_{1}+y_{2})\big{)}&\mbox{(excitation)}\vspace{.2cm}\ ,\\ \dfrac{dy_{2}}{dt}=&\dfrac{1}{\tau_{a}}\big{(}h(S)-y_{2}\big{)}&\mbox{(adaptation)}\ .\end{array}\right.$$ (32) When decoupled from the transport equation, these two internal quantities relax respectively to $$\lim_{t\to\infty}y_{1}=0\ ,\quad\lim_{t\to\infty}y_{2}=h(S)\ ,$$ with a slow time scale associated with adaptation provided that $\tau_{e}\ll\tau_{a}$. However, this system cannot reproduce true excitability with a large gain factor for small perturbations because it is linear with respect to the variable $y$. In a slightly different context (pulsatory cAMP waves), Dolak and Schmeiser considered an even simpler system [10], namely $$\left\{\begin{array}[]{rll}y_{1}=&\big{(}h(S)-y_{2}\big{)}_{+}&\mbox{(excitation)}\ ,\\ \dfrac{dy_{2}}{dt}=&\dfrac{1}{\tau_{a}}\big{(}h(S)-y_{2}\big{)}&\mbox{(adaptation)}\ .\end{array}\right.$$ (33) This particular choice does select one-sided responses, but fails to reproduce true excitability. We suggest considering the following phenomenological translated slow-fast system of FHN type, $$\left\{\begin{array}[]{rll}\dfrac{dy_{1}}{dt}=&\dfrac{1}{\tau_{e}}\big{(}h(S)-q(y_{1})-y_{2}\big{)}&\mbox{(excitation)}\vspace{.2cm}\ ,\\ \dfrac{dy_{2}}{dt}=&\dfrac{1}{\tau_{a}}\big{(}h(S)+y_{1}-y_{2}\big{)}&\mbox{(adaptation)}\ ,\end{array}\right.$$ (34) where $q$ is a cubic function depicted in figure 1. Proof of Theorem 1.4. Our next step is to prove global existence under the general and manageable assumptions stated in Theorem 1.4. But let us begin with an important remark on the methodology. Remark 3. To obtain a priori estimates, one possible strategy would be to use the characteristics to handle (31), as is done in [12] in 1D. For this purpose, integrating the hyperbolic equation (31) along the backward-in-time auxiliary problem $$\dot{X}(s)=v\ ,\quad\dot{Y}(s)=G(Y,S(s,X))\ ,\quad(X(t),Y(t))=(x,y)\ ,$$ gives the estimate $$\dfrac{d}{ds}p(s,X(s),v,Y(s))+\Big{(}\nabla_{y}\cdot G\Big{)}p\leq\lambda[Y]\mu(s,X,Y)\ .$$ The difficulty arises at two levels here. First, one has to control the $\nabla_{y}\cdot G$ contribution; secondly, one has to perform later the change of variables $z=Y(y)$. This induces a Jacobian contribution $\left|\dfrac{\partial Y}{\partial y}\right|^{-1}$, and one has to control it too. In the sequel we avoid these two difficulties by working with averaged quantities. 
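Note in passing that the two difficulties are not independent: by Liouville's formula, the Jacobian of the map $y\mapsto Y(s)$ satisfies $$\dfrac{d}{ds}\det\left(\dfrac{\partial Y(s)}{\partial y}\right)=\Big{(}\nabla_{y}\cdot G\Big{)}\big(Y(s),S(s,X(s))\big)\,\det\left(\dfrac{\partial Y(s)}{\partial y}\right)\ ,$$ so a pointwise bound on $\nabla_{y}\cdot G$ is exactly what would also control the Jacobian contribution.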
We use a partial representation formula of the solution from the free transport operator $\partial_{t}p+v\cdot\nabla_{x}p$. First, integrating the equation with respect to $y$ (the divergence term in $y$ integrates to zero), we obtain $$\partial_{t}f+v\cdot\nabla_{x}f+0\leq\frac{1}{|V|}\int_{y}\lambda[y]\mu(t,x,y)\ dy\ ,$$ so that $$f(t,x,v)\leq f_{0}(x-tv,v)+\frac{1}{|V|}\int_{s=0}^{t}\int_{y}\lambda[y]\mu(s,x-(t-s)v,y)\ dyds\ .$$ Using the $L^{p}_{x}L^{1}_{v}$ dispersion estimate of Lemma 2.1 we get as usual $$\displaystyle\|\rho(t)\|_{L^{p}}$$ $$\displaystyle\leq\|f_{0}(x-tv,v)\|_{L^{p}_{x}L^{1}_{v}}+\frac{1}{|V|}\int_{s=0}^{t}\left\|\int_{y}\lambda[y]\mu(s,x-(t-s)v,y)\ dy\right\|_{L^{p}_{x}L^{1}_{v}}ds$$ $$\displaystyle\leq t^{-\lambda}\|f_{0}(x,v)\|_{L^{1}_{x}L^{p}_{v}}$$ $$\displaystyle\qquad+|V|^{1/p-1}\int_{s=0}^{t}\frac{1}{(t-s)^{\lambda}}\iint_{x,y}\lambda[y]\mu(s,x,y)\ dxdyds\ ,$$ (35) where $\lambda=3/p^{\prime}$. We now use the two growth assumptions on $\lambda$ and $G$: $$\lambda[y]\leq C(1+|y|)\ ,\quad|G|(y,S)\leq C(1+|y|+S^{\alpha})\ ,\ 0\leq\alpha<1\ ,$$ to control the time growth of the average quantity $\iint_{x,y}|y|\mu(s,x,y)\ dxdy$. Remark 4. Note that in dimension $d=2$ we can handle any nonnegative $\alpha$. We test the master equation (31) against $|y|$ (the transport term in $x$ integrates to zero and the turning terms cancel after integration in $v$): $$\dfrac{d}{ds}\iint_{x,y}|y|\mu(s,x,y)\ dxdy+0+\iint_{x,y}|y|\nabla_{y}\cdot(G(y,S)\mu(s,x,y))\ dydx=0\ ,$$ therefore, using $|G|\leq C(1+|y|+S^{\alpha})$, $$\displaystyle\dfrac{d}{ds}\iint_{x,y}|y|\mu(s)\ dxdy$$ $$\displaystyle=\iint_{x,y}\frac{y}{|y|}\cdot G(y,S)\mu(s,x,y)\ dydx$$ $$\displaystyle\leq\iint_{x,y}|G|(y,S)\mu(s,x,y)\ dydx$$ $$\displaystyle\leq C+C\iint_{x,y}|y|\mu(s,x,y)\ dydx+C\int_{x}|S(s,x)|^{\alpha}\rho(s,x)\ dx\ .$$ (36) Remark 5. If we agree to diminish $\alpha$ it is possible to deal with higher exponents in $\lambda[y]\leq C(1+|y|^{\gamma})$. For instance we have by Young's inequality: $$\displaystyle\dfrac{d}{ds}\iint_{x,y}|y|^{\gamma}\mu(s)\ dxdy$$ $$\displaystyle\leq\gamma\iint_{x,y}|y|^{\gamma-1}|G|(y,S)\mu(s,x,y)\ dydx$$ $$\displaystyle\leq C\iint_{x,y}|y|^{\gamma}\mu(s,x,y)\ dydx$$ $$\displaystyle\quad+C\gamma\iint_{x,y}\left(\frac{\gamma-1}{\gamma}|y|^{\gamma}+\frac{1}{\gamma}|S(s,x)|^{\alpha\gamma}\right)\mu(s,x,y)\ dydx\ .$$ The same argument follows provided that $\alpha\gamma<1$. More general combinations of exponents could be considered. We have chosen here a simple framework for the sake of clarity. We can use the Duhamel formula to represent the inequality (36) as $$\displaystyle\iint_{x,y}|y|\mu(s,x,y)\ dxdy\leq Ce^{Cs}+e^{Cs}\iint_{x,y}|y|\mu_{0}(x,y)\ dxdy\\ \displaystyle+C\int_{\tau=0}^{s}e^{C(s-\tau)}\int_{x}|S(\tau,x)|^{\alpha}\rho(\tau,x)\ dxd\tau\ .$$ Plugging that into (35) gives $$\|\rho\|_{L^{p}}\leq C_{0}(t)+C\int_{s=0}^{t}\frac{1}{(t-s)^{\lambda}}\int_{\tau=0}^{s}e^{C(s-\tau)}\int_{x}|S(\tau,x)|^{\alpha}\rho(\tau,x)\ dxd\tau ds\ .$$ We choose $p<3/2$ so that $\lambda=3/p^{\prime}<1$. Since $\alpha<1$ we have $3<\frac{3}{\alpha}$ and we can choose $p$ sufficiently close to $3/2$ so that $3<p^{\prime}<\frac{3}{\alpha}$. Then $$\int S(t,x)^{\alpha}\rho(t,x)dx\leq\left\|S^{\alpha}\right\|_{L^{p^{\prime}}}\left\|\rho\right\|_{L^{p}}=\left\|S\right\|_{L^{\alpha p^{\prime}}}^{\alpha}\left\|\rho\right\|_{L^{p}}\ .$$ From the mean field chemical equation (31b) $-\Delta S+S=\rho$ we deduce the following elliptic estimate. We have $\alpha p^{\prime}<3$, and $S=G*\rho$ where $G(x)\sim\frac{C}{|x|}$ for small $|x|$ (short range) and $G(x)$ decreases exponentially fast for large $|x|$ (long range). 
Thus we obtain $$\left\|S\right\|_{L^{\alpha p^{\prime}}}=\left\|G*\rho\right\|_{L^{\alpha p^{% \prime}}}\leq\left\|G\right\|_{L^{\alpha p^{\prime}}}\left\|\rho\right\|_{L^{1% }}\leq CM\ ,$$ therefore $$\int S(t,x)^{\alpha}\rho(t,x)dx\leq CM^{\alpha}\left\|\rho\right\|_{L^{p}}\ .$$ We obtain $$\displaystyle\|\rho(t)\|_{L^{p}}$$ $$\displaystyle\leq C_{0}(t)+C\int_{0}^{t}\frac{1}{(t-s)^{\lambda}}\int_{\tau=0}% ^{s}e^{C(s-\tau)}\left\|\rho(\tau)\right\|_{L^{p}}d\tau ds$$ $$\displaystyle\leq C_{0}(t)+C\int_{0}^{t}\left\|\rho(\tau)\right\|_{L^{p}}\int_% {\tau}^{t}\frac{e^{C(s-\tau)}}{(t-s)^{\lambda}}dsd\tau\ ,$$ Using the boundedness of $\int_{s=\tau}^{t}\frac{1}{(t-s)^{\lambda}}e^{C(s-\tau)}\ ds$ with respect to $\tau$, we conclude thanks to a Gronwall estimate. ∎ References [1] W. Alt, Biased random walk models for chemotaxis and related diffusion approximations, J. Math. Biol. 9 (1980), 147–177. [2] A. Blanchet, J. Dolbeault and B. Perthame, Two-dimensional Keller-Segel model: optimal critical mass and qualitative properties of the solutions, Electron. J. Differential Equations 44 (2006), 32 pp. (electronic). [3] N. Bournaveas, V. Calvez, S. Gutiérrez and B. Perthame, Global existence for a kinetic model of chemotaxis via dispersion and Strichartz estimates, to appear in Comm. Partial Differential Equations. Preprint arXiv:0709.4171v1. [4] D.A. Brown and H.C. Berg, Temporal stimulation of chemotaxis in Escherichia coli, Proc. Natl. Acad. Sci. USA 71 (1974), 1388–1392. [5] F. Castella and B. Perthame, Estimations de Strichartz pour les équations de transport cinétique, C. R. Math. Acad. Sci. Paris 322 (1996), 535–540. [6] F.A.C.C. Chalub, Y. Dolak-Struß, P.A. Markowich, D. Oelz, C. Schmeiser and A. Soreff, Model hierarchies for cell aggregation by chemotaxis, Math. Models Methods Appl. Sci. 16 (2006), 1173–1197. [7] F.A.C.C. Chalub, P. Markowich, B. Perthame and C. Schmeiser, Kinetic models for chemotaxis and their drift-diffusion limits, Monatsh. Math. 142 (2004), 123–141. [8] F.A.C.C. Chalub and J.F.A. Rodrigues, A class of kinetic models for chemotaxis with threshold to prevent overcrowding, Port. Math. (N.S.) 63 (2006), 227–250. [9] L. Corrias, B. Perthame and H. Zaag, Global solutions of some chemotaxis and angiogenesis systems in high space dimensions, Milan J. Math. 72 (2004), 1–28. [10] Y. Dolak and C. Schmeiser, Kinetic models for chemotaxis: hydrodynamic limits and spatio-temporal mechanisms, J. Math. Biol. 51 (2006), 595–615. [11] J. Dolbeault and B. Perthame, Optimal critical mass in the two-dimensional Keller-Segel model in $\mathbb{R}^{2}$, C. R. Math. Acad. Sci. Paris 339 (2004), 611–616. [12] R. Erban and H.J. Hwang, Global existence results for complex hyperbolic models of bacterial chemotaxis, Discrete Contin. Dyn. Syst. Ser. B 6 (2006), 1239–1260. [13] R. Erban and H.G. Othmer, From individual to collective behavior in bacterial chemotaxis, SIAM J. Appl. Math. 65 (2004), 361–391. [14] R. Erban and H.G. Othmer, Taxis equations for amoeboid cells, J. Math. Biol. 54 (2007), 847–885. [15] L.F. Garrity and G.W. Ordal, Chemotaxis in Bacillus subtilis: how bacteria monitor environmental signals, Pharmacol. Ther. 68 (1995), 87–104. [16] D. Gilbarg and N. Trudinger, “Elliptic Partial Differential Equations of Second Order”, 3rd edition, Grundlehren der Mathematischen Wissenschaften 224, Springer-Verlag, Berlin, 1998. [17] D.C. Hauri and J. Ross, A model of excitation and adaptation in bacterial chemotaxis, Biophys. J. 68 (1995), 708–722. [18] T. Hillen and H.G. 
Othmer, The diffusion limit of transport equations derived from velocity-jump processes, SIAM J. Appl. Math. 61 (2000), 751–775. [19] T. Hillen, K. Painter and C. Schmeiser, Global existence for chemotaxis with finite sampling radius, Discrete Contin. Dyn. Syst. Ser. B 7 (2007), 125–144. [20] T. Höfer, J.A. Sherratt and P.K. Maini, Dictyostelium discoideum: cellular self-organisation in an excitable medium, Proc. Roy. Soc. Lond. B 259 (1995), 249–257. [21] D. Horstmann, From 1970 until present: the Keller-Segel model in chemotaxis and its consequences I, Jahresber. Deutsch. Math.-Verein. 105 (2003), 103–165. [22] H.J. Hwang, K. Kang and A. Stevens, Global solutions of nonlinear transport equations for chemosensitive movement, SIAM J. Math. Anal. 36 (2005), 1177–1199. [23] S. Ibrahim, M. Majdoub and N. Masmoudi, Global solutions for a semilinear, two-dimensional Klein-Gordon equation with exponential-type nonlinearity, Comm. Pure Appl. Math. 59 (2007), 1639–1658. [24] S. Ibrahim, M. Majdoub and N. Masmoudi, Double logarithmic inequality with a sharp constant, Proc. Amer. Math. Soc. 135 (2007), 87–97. [25] J. Keener and J. Sneyd, “Mathematical Physiology”, Interdisciplinary Applied Mathematics 8, Springer-Verlag, New York, 1998. [26] E.H. Lieb and M. Loss, “Analysis”, 2nd edition, Graduate Studies in Mathematics 14, American Mathematical Society, Providence, RI, 2001. [27] R.M. Macnab and D.E. Koshland Jr, The gradient-sensing mechanism in bacterial chemotaxis, Proc. Natl. Acad. Sci. USA 69 (1972), 2509–2512. [28] J.D. Murray, “Mathematical Biology. I. An Introduction”, 3rd edition, Interdisciplinary Applied Mathematics 17, Springer-Verlag, New York, 2002. [29] M. Nakamura and T. Ozawa, Nonlinear Schrödinger equations in the Sobolev space of critical order, J. Funct. Anal. 155 (1998), 364–380. [30] M. Nakamura and T. Ozawa, Global solutions in the critical Sobolev space for the wave equations with nonlinearity of exponential growth, Math. Z. 231 (1999), 479–487. [31] H.G. Othmer, S.R. Dunbar and W. Alt, Models of dispersal in biological systems, J. Math. Biol. 26 (1988), 263–298. [32] H.G. Othmer and P. Schaap, Oscillatory cAMP signaling in the development of Dictyostelium discoideum, Comments Theor. Biol. 5 (1998), 175–282. [33] B. Perthame, PDE models for chemotactic movements: parabolic, hyperbolic and kinetic, Appl. Math. 49 (2004), 539–564. [34] B. Perthame, Mathematical tools for kinetic equations, Bull. Amer. Math. Soc. 41 (2004), 205–244. [35] B. Perthame, “Transport Equations in Biology”, Frontiers in Mathematics, Birkhäuser Verlag, Basel, 2007. [36] J.E. Segall, S.M. Block and H.C. Berg, Temporal comparisons in bacterial chemotaxis, Proc. Natl. Acad. Sci. USA 83 (1986), 8987–8991. [37] P.A. Spiro, J.S. Parkinson and H.G. Othmer, A model of excitation and adaptation in bacterial chemotaxis, Proc. Natl. Acad. Sci. USA 94 (1997), 7263–7268. [38] D.J. Webre, P.M. Wolanin and J.B. Stock, Bacterial chemotaxis, Curr. Biol. 13 (2003), R47–49.
A simple energy pump for the surface quasi-geostrophic equation Alexander Kiselev and Fedor Nazarov Abstract. We consider the question of growth of high order Sobolev norms of solutions of the conservative surface quasi-geostrophic equation. We show that if $s>0$ is large enough, then for every given $A$ there exist initial data, small in $H^{s}$, such that the corresponding solution's $H^{s}$ norm exceeds $A$ at some time. The idea of the construction is quasilinear. We use a small perturbation of a stable shear flow. The shear flow can be shown to create small scales in the perturbation part of the flow. The control is lost once the nonlinear effects become too large. Department of Mathematics, University of Wisconsin, Madison, WI 53706, USA; email: kiselev@math.wisc.edu, nazarov@math.wisc.edu 1. Introduction In this paper, we consider the surface quasi-geostrophic equation $$\partial_{t}\theta=(u\cdot\nabla)\theta,\,\,\,\theta(x,0)=\theta_{0}(x),$$ (1) $u=\nabla^{\perp}(-\Delta)^{-1/2}\theta,$ set on the torus $\mathbb{T}^{2}$ (which is equivalent to working with periodic initial data in $\mathbb{R}^{2}$). Observe that the structure of the SQG equation is similar to that of the 2D Euler equation written for vorticity, but the velocity is less regular in the SQG case ($u=\nabla^{\perp}(-\Delta)^{-1}\theta$ for the 2D Euler equation). The SQG equation comes from atmospheric science, and can be derived via formal asymptotic expansion (assuming small Rossby and Ekman numbers) from a larger system of 3D Navier-Stokes equations in a rotating frame coupled with a temperature equation through the gravity-induced buoyancy force (see [8, 4]). Equation (1) describes the evolution of the potential temperature on the surface, and its solution can be used to determine the main order approximation for the solution of the full three dimensional problem. In the mathematical literature, the SQG equation was introduced for the first time by Constantin, Majda and Tabak in [1], where a parallel between the structure of the conservative SQG equation and the 3D Euler equation was drawn. Numerical experiments carried out in [1] showed steep growth of the gradient of the solution in the saddle point scenario for the initial data, and suggested the possibility of singularity formation in finite time. Subsequent numerical experiments [7] suggested that the solutions stay regular. Later, Cordoba [2] ruled out singularity formation in the scenario suggested by [1]. Despite significant effort by many researchers, whether blow up for the solutions of (1) can happen in finite time remains open. Moreover, there are no examples that exhibit just infinite growth in time for some high order Sobolev norm. This paper is a step towards a better understanding of this phenomenon. Before stating the main result, we would like to compare the situation with what is known for the two-dimensional Euler equation, which in vorticity form coincides with (1) but with the velocity given by $u=\nabla^{\perp}(-\Delta)^{-1}\theta.$ The global existence of smooth solutions is known in this case, and there is an upper bound on the gradient and higher order Sobolev norms of $\theta$ that is double exponential in time (see, e.g. [5]). However the examples with actual growth are much weaker: the best current result is just superlinear in time (Denisov [3], with earlier works by Nadirashvili [6] and Yudovich [10] giving linear or weaker rates of growth). Since the SQG equation is more singular, it may appear that it should be easier to prove infinite growth in this case. 
However to prove infinite in time growth, one needs to produce an example of ”stable instability”, a controllable mechanism of small scale production. This control is more difficult for the SQG than for two-dimensional Euler equation. Let us denote $H^{s}$ the usual scale of Sobolev spaces on $\mathbb{T}^{2}.$ The main purpose of this short note is to show that the identically zero solution is strongly unstable in $H^{s}$ for any $s$ sufficiently large. Namely, we will prove the following Theorem 1.1. Assume that $s$ is sufficiently large ($s\geq 11$ will do). Given any $A>0,$ there exists $\theta_{0}$ such that $\|\theta_{0}\|_{H^{s}}\leq 1,$ but the corresponding solution of (1) satisfies $${\rm\limsup}_{t\rightarrow\infty}\|\theta(\cdot,t)\|_{H^{s}}\geq A.$$ (2) Remark. 1. In our example, the initial data $\theta_{0}$ will be simply a trigonometric polynomial with a few nonzero harmonics. Its size will be well controlled in any $H^{s}$ norm. 2. From the argument, it will be clear that it is not difficult to derive a lower bound on time when the bound (2) is achieved. This time scales as a certain power of $A.$ 3. The arguments of Denisov [3] can also be used to produce similar result, with better control of constants and time - but in a different scenario. Denisov considers perturbation of an explicitly given saddle point flow. In this note, we will consider a technically simpler but less singular case of a shear flow. 2. The Proof We shall view the solutions $\theta(x,t)$ of (1) as sequences of Fourier coefficients $\hat{\theta}_{k}$, $k=(k_{1},k_{2})\in\mathbb{Z}^{2}$. On the Fourier side, after symmetrization, our solution satisfies the following equation. $$\frac{d}{dt}\hat{\theta}_{k}=\frac{1}{2}\sum_{l+m=k}(l\wedge m)\left(\frac{1}{% |l|}-\frac{1}{|m|}\right)\hat{\theta}_{l}\hat{\theta}_{m},\,\,\,\hat{\theta}_{% k}(0)=(\hat{\theta_{0}})_{k}.$$ (3) Here $l\wedge m=l_{1}m_{2}-l_{2}m_{1}.$ Our initial data $\theta_{0}$ will be just a simple trigonometric polynomial $p,$ given by $\hat{p}_{e}=\hat{p}_{-e}=1$, $\hat{p}_{g}=\hat{p}_{g+e}=\hat{p}_{-g}=\hat{p}_{-g-e}=\tau$ where $e=(1,0)$, $g=(0,2)$, and $\tau=\tau(A)>0$ is a small parameter to be chosen later. Then it follows from (3) that the solution is an even real-valued function with $\hat{\theta}_{0}=0$ for all times. Moreover, $\hat{\theta}_{k}(t)\equiv 0$ whenever $k_{2}$ is odd. We have two easy to check conservation laws: $\sum_{k}\hat{\theta}_{k}(t)^{2}=2+4\tau^{2}$ and $\sum_{k}\frac{\hat{\theta}_{k}^{2}(t)}{|k|}=2+2\tau^{2}\left(\frac{1}{2}+\frac% {1}{\sqrt{5}}\right)$. After subtraction, we obtain that $$\sum_{k}\hat{\theta}_{k}(t)^{2}\left(1-\frac{1}{|k|}\right)=\left(3-\frac{2}{% \sqrt{5}}\right)\tau^{2},$$ for all $t\geq 0.$ Since $\hat{\theta}_{\pm g/2}(t)=0,$ this implies $$\sum_{k\neq\pm e}\hat{\theta}_{k}(t)^{2}\leq\left(\frac{\sqrt{2}}{\sqrt{2}-1}% \right)\left(3-\frac{2}{\sqrt{5}}\right)\tau^{2}\leq 10\tau^{2}$$ for all times. Then the first conservation law also implies $\hat{\theta}_{e}(t)\in(1-8\tau^{2},1+2\tau^{2})\subset(1/2,2)$ for all times, provided that $\tau$ is sufficiently small. 
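For concreteness, the stated values of the two conserved sums are obtained by evaluating them at $t=0$: the nonzero coefficients are those at $\pm e$, $\pm g$ and $\pm(g+e)$, with $|e|=1$, $|g|=2$ and $|g+e|=\sqrt{5}$, so that $$\sum_{k}\hat{\theta}_{k}(0)^{2}=2\cdot 1^{2}+4\tau^{2}=2+4\tau^{2}\ ,\qquad\sum_{k}\frac{\hat{\theta}_{k}(0)^{2}}{|k|}=\frac{2}{1}+\frac{2\tau^{2}}{2}+\frac{2\tau^{2}}{\sqrt{5}}=2+2\tau^{2}\left(\frac{1}{2}+\frac{1}{\sqrt{5}}\right)\ .$$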
Consider the quadratic form $$\mathcal{J}(\hat{\theta})=\sum_{k\in\mathbb{Z}^{2}_{+}}\Phi(k)\hat{\theta}_{k}\hat{\theta}_{k+e}\,.$$ We have $$\displaystyle\frac{d}{dt}\mathcal{J}(\hat{\theta})=\frac{1}{2}\sum_{k\in\mathbb{Z}^{2}_{+}}\Phi(k)\left[\hat{\theta}_{k}\sum_{l+m=k+e\,,\,l,m\neq\pm e}(l\wedge m)\left(\frac{1}{|l|}-\frac{1}{|m|}\right)\hat{\theta}_{l}\hat{\theta}_{m}\right.\\ \displaystyle+\left.\hat{\theta}_{k+e}\sum_{l+m=k\,,\,l,m\neq\pm e}(l\wedge m)\left(\frac{1}{|l|}-\frac{1}{|m|}\right)\hat{\theta}_{l}\hat{\theta}_{m}\right]+\\ \displaystyle\frac{1}{2}\hat{\theta}_{e}\sum_{k\in\mathbb{Z}^{2}_{+}}(e\wedge k)\Phi(k)\left[\left(1-\frac{1}{|k|}\right)\hat{\theta}_{k}^{2}-\left(1-\frac{1}{|k+2e|}\right)\hat{\theta}_{k}\hat{\theta}_{k+2e}\right.\\ \displaystyle-\left.\left(1-\frac{1}{|k+e|}\right)\hat{\theta}_{k+e}^{2}+\left(1-\frac{1}{|k-e|}\right)\hat{\theta}_{k+e}\hat{\theta}_{k-e}\right]\equiv\sigma+\Sigma\,,$$ where $\sigma$ denotes the first sum and $\Sigma$ the second. Since for $l+m=k$ we have $\left|(l\wedge m)\left(\frac{1}{|l|}-\frac{1}{|m|}\right)\right|\leqslant 2|k|$ and $|k+e|\asymp|k|$ for $k\in\mathbb{Z}^{2}_{+}$, we conclude that $$|\sigma|\leqslant\left(\sum_{k\in\mathbb{Z}^{2}_{+}}|k||\Phi(k)||\hat{\theta}_{k}|\right)\left(\sum_{l\neq\pm e}\hat{\theta}_{l}^{2}\right)\leqslant C\tau^{2}\sum_{k\in\mathbb{Z}^{2}_{+}}|k||\Phi(k)||\hat{\theta}_{k}|\,.$$ On the other hand, $\Sigma$ can be rewritten as $$\displaystyle\hat{\theta}_{e}\sum_{k_{2}>0}k_{2}\sum_{k_{1}\in\mathbb{Z}}\frac{1}{4}\times\\ \displaystyle\left[(\Phi(k-e)-\Phi(k-2e))\left(1-\frac{1}{|k-e|}\right)\hat{\theta}_{k-e}^{2}+(\Phi(k+e)-\Phi(k))\left(1-\frac{1}{|k+e|}\right)\hat{\theta}_{k+e}^{2}\right.+\\ \displaystyle\left.2\left\{\Phi(k)\left(1-\frac{1}{|k-e|}\right)-\Phi(k-e)\left(1-\frac{1}{|k+e|}\right)\right\}\hat{\theta}_{k-e}\hat{\theta}_{k+e}\right]$$ Now let $\Phi(k)=k_{1}+\frac{1}{2}.$ We get the sum of quadratic forms with the coefficients $$1-\frac{1}{\sqrt{(k_{1}-1)^{2}+k_{2}^{2}}}\ ,\ 1-\frac{1}{\sqrt{(k_{1}+1)^{2}+k_{2}^{2}}}$$ at the squares and $$\left(k_{1}+\frac{1}{2}\right)\left(1-\frac{1}{\sqrt{(k_{1}-1)^{2}+k_{2}^{2}}}\right)-\left(k_{1}-\frac{1}{2}\right)\left(1-\frac{1}{\sqrt{(k_{1}+1)^{2}+k_{2}^{2}}}\right)$$ at the double product. A straightforward computation shows that when $k_{1}=0$, this form is degenerate and when $k_{1}\neq 0$, it is strictly positive definite and dominates $\frac{c}{|k|^{3}}(\hat{\theta}_{k-e}^{2}+\hat{\theta}_{k+e}^{2})$. Using the fact that $\hat{\theta}_{e}(t)\geq 1/2$ for all times, we obtain $$\Sigma\geqslant c\sum_{k\in\mathbb{Z}^{2}_{+}}\frac{\hat{\theta}_{k}^{2}}{|k|^{3}}\,.$$ Now there are several possibilities. A) At some time $t,$ we will have $\sum_{k\in\mathbb{Z}^{2}_{+}}|k|^{2}|\hat{\theta}_{k}|\geqslant\sum_{k\in\mathbb{Z}^{2}_{+}}|k||\Phi(k)||\hat{\theta}_{k}|\geqslant\tau^{1/2}\,.$ Observe that $$\sum_{k\in\mathbb{Z}^{2}_{+}}|k|^{2}|\hat{\theta}_{k}|\leqslant\left(\sum_{k\in\mathbb{Z}^{2}_{+}}\hat{\theta}_{k}^{2}\right)^{1/3}\left(\sum_{k\in\mathbb{Z}^{2}_{+}}|k|^{21}\hat{\theta}_{k}^{2}\right)^{1/6}\left(\sum_{k\in\mathbb{Z}^{2}_{+}}|k|^{-3}\right)^{1/2}$$ Then, since $\sum_{k\in\mathbb{Z}^{2}_{+}}\hat{\theta}_{k}^{2}\leqslant 10\tau^{2}$, we get that the $H^{11}$ norm of the solution gets large: $\|\theta\|_{H^{11}}\geq C\tau^{-1/6}.$ B) The case (A) never occurs but $\sum_{k\in\mathbb{Z}^{2}_{+}}|k|^{-3}\hat{\theta}_{k}^{2}$ becomes comparable with $\tau^{5/2}$. 
Note that until this moment $\mathcal{J}(\hat{\theta})$ increases from its initial value, which is about $\tau^{2}$. Also, $\mathcal{J}(\hat{\theta})\leqslant\sum_{k\in\mathbb{Z}^{2}_{+}}|k|\hat{\theta}_{k}^{2}$. Thus, in this case, we use $$\sum_{k\in\mathbb{Z}^{2}_{+}}|k|\hat{\theta}_{k}^{2}\leqslant\left(\sum_{k\in\mathbb{Z}^{2}_{+}}|k|^{-3}\hat{\theta}_{k}^{2}\right)^{5/6}\left(\sum_{k\in\mathbb{Z}^{2}_{+}}|k|^{21}\hat{\theta}_{k}^{2}\right)^{1/6}\,,$$ and, again, it follows that the $H^{11}$ norm becomes large: $\|\theta\|_{H^{11}}\geq C\tau^{-\frac{1}{12}}$. Finally, if neither (A) nor (B) occurs, then $\mathcal{J}(\hat{\theta})$ grows without bound and the $H^{1/2}$-norm gets large eventually. Now given $A$ just choose $\tau$ sufficiently small and Theorem 1.1 follows. Acknowledgement. Research of AK has been supported in part by the NSF-DMS grant 0653813. Research of FN has been partially supported by the NSF-DMS grant 0800243. References [1] P. Constantin, A. Majda and E. Tabak. Formation of strong fronts in the 2D quasi-geostrophic thermal active scalar. Nonlinearity, 7, (1994), 1495–1533 [2] D. Cordoba, Nonexistence of simple hyperbolic blow up for the quasi-geostrophic equation, Ann. of Math., 148, (1998), 1135–1152 [3] S. Denisov, Infinite superlinear growth of the gradient for the two-dimensional Euler equation, Discrete Contin. Dyn. Syst. A, 23 (2009), 755–764 [4] I. Held, R. Pierrehumbert, S. Garner and K. Swanson. Surface quasi-geostrophic dynamics, J. Fluid Mech., 282, (1995), 1–20 [5] A. Majda and A. Bertozzi, Vorticity and Incompressible Flow, Cambridge University Press, 2002 [6] N.S. Nadirashvili, Wandering solutions of the two-dimensional Euler equation (Russian), Funktsional. Anal. i Prilozhen. 25 (1991), 70–71; translation in Funct. Anal. Appl. 25 (1991), 220–221 (1992) [7] K. Ohkitani and M. Yamada, Inviscid and inviscid-limit behavior of a SQG flow, Phys. Fluids 9 (1997), 876–882 [8] J. Pedlosky, Geophysical Fluid Dynamics, Springer, New York, 1987 [9] V.I. Judovich, The loss of smoothness of the solutions of the Euler equation with time (Russian), Dinamika Sploshn. Sredy 16 Nestacionarnye Problemy Gidrodinamiki (1974), 71–78 [10] V.I. Yudovich, On the loss of smoothness of the solutions to the Euler equation and the inherent instability of flows of an ideal fluid, Chaos 10 (2000), 705–719
Bökstedt periodicity and quotients of DVRs Achim Krause, Thomas Nikolaus Abstract In this note we compute the topological Hochschild homology of quotients of DVRs. Along the way we give a short argument for Bökstedt periodicity and generalizations over various other bases. Our strategy also gives a very efficient way to redo the computations of $\operatorname{THH}$ (resp. logarithmic $\operatorname{THH}$) of complete DVRs originally due to Lindenstrauss-Madsen (resp. Hesselholt–Madsen). Introduction Topological Hochschild homology, THH, together with its induced variant topological cyclic homology, TC, has been one of the major tools to compute algebraic $K$-theory in recent years. It also is an important invariant in its own right, due to its connection to $p$-adic Hodge theory and crystalline cohomology [BMS18, BMS19]. The key point is that topological Hochschild homology $\operatorname{THH}_{*}(R)$, as opposed to algebraic $K$-theory, can be completely identified for many rings $R$. Let us list some examples here. 1. The most fundamental result in the field is Bökstedt periodicity, which states that $\operatorname{THH}_{*}(\mathbb{F}_{p})=\mathbb{F}_{p}[x]$ for a class $x$ in degree $2$. This is also the input for the work of Bhatt–Morrow–Scholze [BMS19]. 2. The $p$-adic computation of $\operatorname{THH}_{*}(\mathbb{Z}_{p})$ was also done by Bökstedt and eventually lead to the $p$-adic identification of $K_{*}(\mathbb{Z}_{p})$, see [BM94, Rog99]. 3. More generally, Lindenstrauss and Madsen identify $\operatorname{THH}_{*}(A)$ $p$-adically for a complete DVR $A$ with perfect residue field $k$ of characteristic $p$ [LM00]. This computation was one of the key inputs for Hesselholt and Madsen’s seminal computation of K-theory of rings of integers in $p$-adic number fields. 4. Brun computed $\operatorname{THH}_{*}(\mathbb{Z}/p^{n})$ in [Bru00]. This gives some information about $K_{*}(\mathbb{Z}/p^{n})$, which is still largely unknown, see [Bru01]. In this paper we revisit all the $\operatorname{THH}$-computations mentioned above from scratch, and give new, easier and more conceptual proofs. We will go one step further and give a complete formula for $\operatorname{THH}_{*}(A^{\prime})$ where $A^{\prime}=A/\pi^{k}$ is a quotient of a DVR $A$ with perfect residue field of characteristic $p$. We identify $\operatorname{THH}_{*}(A^{\prime})$ with the homology of an explicitly described DGA, see Theorem 5.2. This for example recovers the computation of $\operatorname{THH}_{*}(\mathbb{Z}/p^{k})$ by Brun and also identifies the ring structure in this case (which was unknown so far). The result shows an interesting dichotomy depending on how large $k$ is when compared to the $p$-adic valuation of the derivative of the minimal polynomial of a uniformizer of $A$ (relative to the Witt vectors of the residue field), see Section 6. The main new idea employed in this paper is to first compute $\operatorname{THH}$ of $A$ and $A/\pi^{k}$ relative to the spherical group ring $\mathbb{S}[z]$. This relative $\operatorname{THH}$ of $A$ satisfies a form of Bökstedt periodicity, which was to our knowledge first observed by J. Lurie, P. Scholze and B.Bhatt. It appeared in work of Bhatt-Morrow-Scholze [BMS19] as well as in [AMN18]. But the maneuver of working relative to the uniformizer is much older in the algebraic context, for example in the theory of Breuil-Kisin modules [Kat94, Bre99, Kis09].111We would like to thank Matthew Morrow and Lars Hesselholt for pointing this out and explaining the history to us. 
Finally, having computed THH relative to $\mathbb{S}[z]$ we use a descent-style spectral sequence (see Section 4 and Section 5) to recover the absolute $\operatorname{THH}$. In Section 10 we also deduce the computation of logarithmic $\operatorname{THH}$ of CDVRs (due to Hesselholt–Madsen) from the computation of relative $\operatorname{THH}$ using a similar spectral sequence. Contents
1 Bökstedt periodicity for $\mathbb{F}_{p}$
2 Bökstedt periodicity for perfect rings
3 Bökstedt periodicity for CDVRs
4 Absolute $\operatorname{THH}$ for CDVRs
5 Absolute $\operatorname{THH}$ for quotients of DVRs
6 Evaluation of the result
7 The general spectral sequences
8 Comparison of spectral sequences
9 Bökstedt periodicity for complete regular local rings
10 Logarithmic THH of CDVRs
A Relation to the Hopkins-Mahowald result
Conventions We freely use the language of $\infty$-categories and spectra. The sphere spectrum is denoted by $\mathbb{S}$. For a commutative ring $R$ there is an associated commutative ring spectrum which we abusively also denote by $R$. In this situation we have the ring spectra $\operatorname{HH}(R)$ (‘Hochschild homology’) and $\operatorname{THH}(R)$ (‘Topological Hochschild homology’) defined as $$\operatorname{HH}(R/\mathbb{Z})=R\otimes_{R\otimes_{\mathbb{Z}}R}R\qquad\operatorname{THH}(R)=R\otimes_{R\otimes_{\mathbb{S}}R}R\ .$$ We denote the homotopy groups of these spectra by $\operatorname{THH}_{*}(R)$ and $\operatorname{HH}_{*}(R)$. More generally there are relative versions for a ring $R$ over a base ring (spectrum) $S$ given as $\operatorname{THH}(R/S)=R\otimes_{R\otimes_{S}R}R$ and similarly for $\operatorname{HH}$. Note that Hochschild homology as defined here is equivalent to $\operatorname{THH}(R/\mathbb{Z})$ and is automatically fully derived. It thus agrees with what is classically called Shukla homology. We shall denote the $p$-completion of the spectrum $\operatorname{THH}(R)$ by $\operatorname{THH}(R;\mathbb{Z}_{p})$ and the homotopy groups accordingly by $\operatorname{THH}_{*}(R;\mathbb{Z}_{p})$. Note that these are in general not the $p$-completions of the groups $\operatorname{THH}_{*}(R)$, but this is true in the case that the groups $\operatorname{THH}_{*}(R)$ have $p^{\infty}$-torsion of bounded order. There is the commonly used conflicting notation $\operatorname{THH}_{*}(R;R^{\prime})$ for THH with coefficients in an $R$-algebra $R^{\prime}$, given by the homotopy groups of $\operatorname{THH}(R)\otimes_{R}R^{\prime}$. To avoid confusion we do not use the notation $\operatorname{THH}(R;R^{\prime})$ in this paper. Finally, there are useful equivalences $$\displaystyle\operatorname{THH}(A\otimes_{\mathbb{S}}B)=\operatorname{THH}(A)\otimes_{\mathbb{S}}\operatorname{THH}(B)$$ $$\displaystyle\operatorname{THH}(R/S)=\operatorname{THH}(R)\otimes_{\operatorname{THH}(S)}S$$ $$\displaystyle\operatorname{THH}(A)\otimes_{\mathbb{S}}B=\operatorname{THH}(A\otimes_{\mathbb{S}}B/B)$$ and some variants which are straightforward to prove and will be used frequently. Acknowledgments We would like to thank Lars Hesselholt, Eva Höning, Mike Mandell, Matthew Morrow, Peter Scholze and Guozhen Wang for helpful conversations. We also thank Lars Hesselholt and Eva Höning for comments on a draft. The authors were funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2044 390685587, Mathematics Münster: Dynamic–Geometry–Structure. 
1 Bökstedt periodicity for $\mathbb{F}_{p}$ We want to give a proof of the fundamental result of Bökstedt, that topological Hochschild homology of $\mathbb{F}_{p}$ is a polynomial ring on a degree 2 generator. The proof presented here is closely related to the Thom spectrum proof in [Blu10] based on a result of Hopkins-Mahowald, but in our opinion it is more direct, see Appendix A for a precise discussion. Let us first give a slightly more conceptual formulation of Bökstedt’s result. Theorem 1.1 (Bökstedt). The spectrum $\operatorname{THH}(\mathbb{F}_{p})$ is as an $\mathbb{E}_{1}$-algebra spectrum over $\mathbb{F}_{p}$ free on a generator $x$ in degree 2, i.e. equivalent to $\mathbb{F}_{p}[\Omega S^{3}]$. Here $\mathbb{F}_{p}[\Omega S^{3}]$ is the group ring of the $\mathbb{E}_{1}$-group $\Omega S^{3}$ over $\mathbb{F}_{p}$ i.e. the $\mathbb{F}_{p}$-homology $\mathbb{F}_{p}\otimes_{\mathbb{S}}\Sigma^{\infty}_{+}\Omega S^{3}$. The equivalence between the two formulations relies on the fact that $\Omega S^{3}$ is the free $\mathbb{E}_{1}$-group on $S^{2}$, where $S^{2}$ is considered as a pointed space. Our proof relies on a structural result about the dual Steenrod algebra $\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$. We consider this spectrum as an $\mathbb{F}_{p}$-algebra using the inclusion into the left factor.222If we use the right factor this produces an equivalent $\mathbb{F}_{p}$-algebra where the equivalence is the conjugation. It is an $\mathbb{E}_{\infty}$-algebra over $\mathbb{F}_{p}$, but has a universal description as an $\mathbb{E}_{2}$-algebra. This result seems to be known, at least to some experts, but we have not been able to find it written up in the literature. Theorem 1.2. As an $\mathbb{E}_{2}$-$\mathbb{F}_{p}$-algebra, the spectrum $\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$ is free on a single generator of degree $1$, i.e. it is as an $\mathbb{E}_{2}$-$\mathbb{F}_{p}$-algebra equivalent to $\mathbb{F}_{p}[\Omega^{2}S^{3}]$. We will give a proof of Theorem 1.2 in the next section. But let us first deduce Theorem 1.1 from it. Proof of Theorem 1.1. We have an equivalence of $\mathbb{E}_{1}$-algebras $$\displaystyle\operatorname{THH}(\mathbb{F}_{p})$$ $$\displaystyle\simeq\mathbb{F}_{p}\otimes_{\mathbb{F}_{p}\otimes_{\mathbb{S}}% \mathbb{F}_{p}}\mathbb{F}_{p}$$ $$\displaystyle\simeq\mathbb{F}_{p}\otimes_{\mathbb{F}_{p}[\Omega^{2}S^{3}]}% \mathbb{F}_{p}$$ $$\displaystyle\simeq\mathbb{F}_{p}\left[\operatorname{Bar}(\operatorname{pt},% \Omega^{2}S^{3},\operatorname{pt})\right]$$ $$\displaystyle\simeq\mathbb{F}_{p}[\Omega S^{3}].$$ The third equivalence uses that $\mathbb{F}_{p}[-]$ sends products to tensor products and preserves colimits. ∎ Remark 1.3. If one only wants to use that $\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$ is free as an abstract $\mathbb{E}_{2}$-algebra and avoid space level arguments, one can observe that in any pointed presentably symmetric monoidal $\infty$-category $\mathcal{C}$ one has for every object $X\in C$ an equivalence $$\mathds{1}\otimes_{\operatorname{Free}_{\mathbb{E}_{n+1}(X)}}\mathds{1}\simeq% \operatorname{Free}_{\mathbb{E}_{n}}(\Sigma X)\ .$$ This is proven in [Lur18, Corollary 5.2.2.13] for $n=0$ and the case $n>0$ can be reduced to this case using Dunn Additivity by replacing $\mathcal{C}$ with the $\infty$-category of augmented $\mathbb{E}_{n}$-algebras $\operatorname{Alg}_{\mathbb{E}_{n}}^{\operatorname{aug}}(\mathcal{C})$. 
This $\infty$-category satisfies the assumptions of [Lur18, Corollary 5.2.2.13] by [Lur18, Proposition 5.1.2.9]. 1.1 Proof of Theorem 1.2 In order to prove this result we first recall that for every $\mathbb{E}_{2}$-ring spectrum $R$ over $\mathbb{F}_{2}$ there exist Dyer-Lashof operations $$Q^{i}:\pi_{k}R\to\pi_{k+i}R$$ for $i\leq k+1$ and they satisfy all the relations of the usual Dyer-Lashof operations as long as they make sense. For an $\mathbb{E}_{2}$-algebra $R$ over $\mathbb{F}_{p}$ with odd $p$, there exist operations $$Q^{i}:\pi_{k}R\to\pi_{k+2i(p-1)}R\ ,\qquad\beta Q^{i}:\pi_{k}R\to\pi_{k+2i(p-1)-1}R$$ for $i\leq 2k+1$. Proposition 1.4. Let $R$ be the free $\mathbb{E}_{2}$-algebra over $\mathbb{F}_{p}$ on a generator in degree 1. Then 1. for $p=2$ we have $$\pi_{*}R=\mathbb{F}_{2}[x_{1},x_{2},\ldots]$$ where $|x_{i}|=2^{i}-1$. The element $x_{i+1}$ is given by $Q^{2^{i}}Q^{2^{i-1}}\ldots Q^{8}Q^{4}Q^{2}x_{1}$. In addition, $\beta x_{i}=x_{i-1}^{2}$. 2. for $p$ odd we have $$\pi_{*}R=\Lambda_{\mathbb{F}_{p}}(y_{0},y_{1},\ldots)\otimes\mathbb{F}_{p}[z_{1},z_{2},\ldots]$$ where $|y_{i}|=2p^{i}-1$, $|z_{i}|=2p^{i}-2$. The element $y_{i+1}$ is given by $Q^{p^{i}}\ldots Q^{p}Q^{1}y_{0}$, the element $z_{i+1}$ is given by $\beta Q^{p^{i}}\ldots Q^{p}Q^{1}y_{0}$. Any $\mathbb{E}_{2}$-algebra $R$ over $\mathbb{F}_{p}$ whose homotopy ring, together with the action of the Dyer-Lashof operations, is of the above form is also free on a generator in degree 1. Proof. We use that $R\simeq\mathbb{F}_{p}[\Omega^{2}S^{3}]$, i.e. we are computing the Pontryagin ring of the space $\Omega^{2}S^{3}$. Then the first part is due to Araki and Kudo [KA56, Theorem 7.1], the second part is due to Dyer-Lashof [DL62, Theorem 5.2]. These results are relatively straightforward computations using the Serre spectral sequence and the Kudo transgression theorem. Now for the last part assume that we are given any such $R$ and any non-trivial element $x_{1}\in\pi_{1}(R)$. We get an induced map from the free algebra $\operatorname{Free}_{\mathbb{E}_{2}}(x_{1})\to R$. Since this map is an $\mathbb{E}_{2}$-map the induced map on homotopy groups is compatible with the ring structure as well as the Dyer-Lashof operations. But everything is generated from $x_{1}$ under these operations in the same way, so the map is an equivalence. ∎ Proof of Theorem 1.2. By Proposition 1.4 we only have to verify that the homotopy groups of $\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$ have the correct ring structure and Dyer-Lashof operations. This is a classical calculation due to Milnor for the ring structure and to Steinberger [BMMS86, Chapter 3, Theorem 2.2 and 2.3] for the Dyer-Lashof operations: at $p=2$, the generator $x_{i}$ corresponds to the Milnor basis element $\overline{\zeta}_{i}$, at $p$ odd $z_{i}$ corresponds to the element $\overline{\xi}_{i}$ and $y_{i}$ to $\overline{\tau}_{i}$. ∎ Remark 1.5. We want to remark that Theorem 1.1 also implies Theorem 1.2. Thus assume that Theorem 1.1 holds. We have that $\pi_{1}(\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p})$ is isomorphic to $\mathbb{F}_{p}$, generated by an element $b$. We can thus choose an $\mathbb{E}_{2}$-map $$\operatorname{Free}_{\mathbb{E}_{2}}(b)\to\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$$ which induces an equivalence on $1$-types.333The computation of the first two homotopy groups of $\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$ is everything that we input about the dual Steenrod algebra. 
So in fact even Milnor’s computation, as well as the results of Steinberger cited here, could be recovered from an independent proof of Bökstedt’s result. We can form the Bar construction on these augmented $\mathbb{F}_{p}$-algebras, and the resulting map $$\operatorname{Free}_{\mathbb{E}_{1}}(x)\to\operatorname{THH}(\mathbb{F}_{p})$$ is an equivalence on $\pi_{2}$, so by Theorem 1.1 it is an equivalence. Thus, Theorem 1.2 follows from the following lemma. Lemma 1.6. Let $A\to B$ be a map augmented connected $\mathbb{E}_{1}$-algebras over $\mathbb{F}_{p}$. Then if the map $$\mathbb{F}_{p}\otimes_{A}\mathbb{F}_{p}\to\mathbb{F}_{p}\otimes_{B}\mathbb{F}_% {p}$$ is an equivalence, so is $A\to B$. Proof. Assume $A\to B$ is not an equivalence. Let $d$ denote the connectivity of the cofiber of $A\to B$, i.e. $\pi_{i}(B/A)=0$ for $i<d$, but $\pi_{d}(B/A)\neq 0$. $\mathbb{F}_{p}\otimes_{A}\mathbb{F}_{p}$ admits a filtration (obtained by filtering the Bar construction over $\mathbb{F}_{p}$ by its skeleta) whose associated graded is given in degree $n$ by $\Sigma^{n}(A/\mathbb{F}_{p})^{\otimes_{\mathbb{F}_{p}}n}$. Here $A/\mathbb{F}_{p}$ is the cofiber of $\mathbb{F}_{p}\to A$ and $1$-connective by assumption. The map $$\Sigma^{n}(A/\mathbb{F}_{p})^{\otimes_{\mathbb{F}_{p}}n}\to\Sigma^{n}(B/% \mathbb{F}_{p})^{\otimes_{\mathbb{F}_{p}}n}$$ has $(d+2n-1)$-connective cofiber. Thus, the $(d+1)$-type of the cofiber of $\mathbb{F}_{p}\otimes_{A}\mathbb{F}_{p}\to\mathbb{F}_{p}\otimes_{B}\mathbb{F}_% {p}$ receives no contribution from the terms for $n\geq 2$, and coincides with the $(d+1)$-type of the cofiber of $\Sigma(A/\mathbb{F}_{p})\to\Sigma(B/\mathbb{F}_{p})$, which is $\Sigma(B/A)$ and has nonvanishing $\pi_{d+1}$ by assumption. So $\mathbb{F}_{p}\otimes_{A}\mathbb{F}_{p}\to\mathbb{F}_{p}\otimes_{B}\mathbb{F}_% {p}$ cannot have been an equivalence. ∎ 2 Bökstedt periodicity for perfect rings Now we also want to recover the well-known calculation of $\operatorname{THH}$ for a perfect $\mathbb{F}_{p}$-algebra $k$. This can directly be reduced to Bökstedt’s theorem. Let us first note that there is a morphism $\operatorname{THH}(\mathbb{F}_{p})\to\operatorname{THH}(k)$ induced from the map $\mathbb{F}_{p}\to k$. Moreover the spectrum $\operatorname{THH}(k)$ is a $k$-module, so that we get an induced map $$k[x]\cong k\otimes_{\mathbb{F}_{p}}\operatorname{THH}(\mathbb{F}_{p})\to% \operatorname{THH}(k)$$ (1) where the first term $k[x]$ denotes the free $\mathbb{E}_{1}$-algebra on a generator in degree 2. Proposition 2.1. For a perfect $\mathbb{F}_{p}$-algebra $k$ the map (1) is an equivalence. Proof. Recall that for every perfect $\mathbb{F}_{p}$-algebra $k$ there is a $p$-complete $\mathbb{E}_{\infty}$-ring spectrum $\mathbb{S}_{W(k)}$, called the spherical Witt vectors, with $\pi_{0}(\mathbb{S}_{W(k)})=W(k)$ and which is flat over $\mathbb{S}_{p}$. It follows that the homology $\mathbb{Z}\otimes_{\mathbb{S}}\mathbb{S}_{W(k)}$ is given by $W(k)$ and thus the $\mathbb{F}_{p}$-homology $\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{S}_{W(k)}$ by $k$. 
In particular we get that $$\displaystyle\operatorname{THH}(k)$$ $$\displaystyle=\operatorname{THH}(\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{S}_% {W(k)})$$ $$\displaystyle=\operatorname{THH}(\mathbb{F}_{p})\otimes_{\mathbb{S}}% \operatorname{THH}(\mathbb{S}_{W(k)})$$ $$\displaystyle=\operatorname{THH}(\mathbb{F}_{p})\otimes_{\mathbb{F}_{p}}(% \mathbb{F}_{p}\otimes_{\mathbb{S}}\operatorname{THH}(\mathbb{S}_{W(k)}))$$ $$\displaystyle=\operatorname{THH}(\mathbb{F}_{p})\otimes_{\mathbb{F}_{p}}% \operatorname{HH}(k/\mathbb{F}_{p})\ .$$ where $\operatorname{HH}(k/\mathbb{F}_{p})$ is the Hochschild homology of $k$ relative to $\mathbb{F}_{p}$. The result now follows once we know that this is given by $k$ concentrated in degree $0$. This immediately follows from the vanishing of the cotangent complex of $k$ but we want to give a slightly different argument here: It suffices to show that the positive dimensional groups $\operatorname{HH}_{i}(k/\mathbb{F}_{p})$ are zero. To see this it is enough to show that for every $\mathbb{F}_{p}$-algebra $A$ the Frobenius $\varphi:A\to A$ induces the zero map $\operatorname{HH}_{i}(A/\mathbb{F}_{p})\to\operatorname{HH}_{i}(A/\mathbb{F}_{% p})$ for $i>0$, since for $A=k$ perfect the Frobenius is also an isomorphism. Now for general $A$ this follows since $\operatorname{HH}(A/\mathbb{F}_{p})$ is a simplicial commutative $\mathbb{F}_{p}$-algebra and the Frobenius $\varphi$ acts through the levelwise Frobenius. But the levelwise Frobenius for every simplicial commutative $\mathbb{F}_{p}$-algebra induces the zero map in positive dimensional homotopy. 444This follows since for every simplicial commutative $\mathbb{F}_{p}$-algebra $R_{\bullet}$ the Frobenius can be factored as $\pi_{n}(R_{\bullet})\to\pi_{n}(R_{\bullet})^{\times p}\to\pi_{n}(R_{\bullet})$ where the latter map is induced by the multiplication $R_{\bullet}^{\times p}\to R_{\bullet}$ considered as a map of underlying simplicial sets. For $n>0$ it follows by an Eckmann-Hilton argument that the multiplication map $\pi_{n}(R_{\bullet})\times\pi_{n}(R_{\bullet})\to\pi_{n}(R_{\bullet})$ is at the same time multilinear and linear, hence zero. ∎ Remark 2.2. Note that the proof in particular shows that $\operatorname{THH}(\mathbb{S}_{W(k)})$ is $p$-adically equivalent to $\mathbb{S}_{W(k)}$ as this can be checked on $\mathbb{F}_{p}$-homology. We will also write $\operatorname{THH}(\mathbb{S}_{W(k)};\mathbb{Z}_{p})$ for the $p$-completion of $\operatorname{THH}(\mathbb{S}_{W(k)})$ so that we have $$\operatorname{THH}(\mathbb{S}_{W(k)};\mathbb{Z}_{p})\simeq\mathbb{S}_{W(k)}\ .$$ Integrally this is not quite the case, as one encounters contributions form the cotangent complex $L_{W(k)/\mathbb{Z}}$ which only vanishes after $p$-completion. We also note that one can also deduce Proposition 2.1 from a statement similar to Theorem 1.2 which we want to list for completeness. Proposition 2.3. For $k$ a perfect $\mathbb{F}_{p}$-algebra, we have $$k\otimes_{\mathbb{S}_{W(k)}}k=\operatorname{Free}^{k}_{\mathbb{E}_{2}}(\Sigma k).$$ i.e. the spectrum $k\otimes_{\mathbb{S}_{W(k)}}k$ is as an $\mathbb{E}_{2}$-$k$-algebra free on a single generator in degree 1. Proof. As $\mathbb{S}_{W(k)}\otimes_{\mathbb{S}}\mathbb{F}_{p}=k$, we have $$k\otimes_{\mathbb{S}_{W(k)}}k=k\otimes_{\mathbb{S}}\mathbb{F}_{p}=k\otimes_{% \mathbb{F}_{p}}(\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}),$$ so the statement follows from base-changing the statement over $\mathbb{F}_{p}$. 
∎ 3 Bökstedt periodicity for CDVRs Now we want to turn our attention to complete discrete valuation rings, abbreviated as CDVRs. We will determine their absolute $\operatorname{THH}$ later, but for the moment we focus on an analogue of Bökstedt’s theorem which works relative to the $\mathbb{E}_{\infty}$-ring spectrum $$\mathbb{S}[z]:=\mathbb{S}[\mathbb{N}]=\Sigma^{\infty}_{+}\mathbb{N}\ .$$ For a CDVR $A$ we let $\pi$ be a uniformizer, i.e. a generator of the maximal ideal, and consider $A$ as an $\mathbb{S}[z]$-algebra via $z\mapsto\pi$. Everything that follows will implicitly depend on such a choice. By assumption $A$ is complete with respect to $\pi$. Since $\pi$ is a non-zero divisor this is equivalent to being derived $\pi$-complete. Moreover, if $A$ has residue field of characteristic $p$ then $A$ is also (derived) $p$-complete, since $p$ is contained in the maximal ideal. The following result is, at least in mixed characteristic, due to Bhargav Bhatt, Jacob Lurie and Peter Scholze in private communication, but versions of it also appear in [BMS19] and in [AMN18]. Theorem 3.1. Let $A$ be a CDVR with perfect residue field of characteristic $p$. Then we have $$\operatorname{THH}_{*}(A/\mathbb{S}[z];\mathbb{Z}_{p})=A[x]$$ for $x$ in degree 2. Proof. We distinguish the cases of equal and of mixed characteristic. In mixed characteristic we have the equation $(p)=(\pi^{e})$ where $e$ is the ramification index. We deduce that $\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})$ is $\pi$-complete since it is $p$-complete. Now we have $$\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{A}k=\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathbb{S}[z]}\mathbb{S}=\operatorname{THH}(k)$$ and thus this is, by Proposition 2.1, given by a polynomial ring over $k$ concentrated in even degrees. Thus $\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})$ is $\pi$-torsion free and the result follows. If $A$ is of equal characteristic $p$ then $A$ is isomorphic to the formal power series ring $k[\![z]\!]$ where $k$ is the residue field (which is perfect by assumption). We consider the $\mathbb{E}_{\infty}$-ring $\mathbb{S}_{W(k)}[\![z]\!]$ obtained as the $z$-completion of $\mathbb{S}_{W(k)}[z]$. Then we have an equivalence $$k[\![z]\!]\simeq\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{S}_{W(k)}[\![z]\!]$$ which uses that $\mathbb{F}_{p}$ is of finite type over the sphere. As a result, we get an equivalence $$\displaystyle\operatorname{THH}(k[\![z]\!]/\mathbb{S}[z])$$ $$\displaystyle\simeq\operatorname{THH}(\mathbb{F}_{p})\otimes_{\mathbb{S}}\operatorname{THH}(\mathbb{S}_{W(k)}[\![z]\!]/\mathbb{S}[z])$$ $$\displaystyle\simeq\operatorname{THH}(\mathbb{F}_{p})\otimes_{\mathbb{F}_{p}}(\mathbb{F}_{p}\otimes_{\mathbb{S}}\operatorname{THH}(\mathbb{S}_{W(k)}[\![z]\!]/\mathbb{S}[z]))$$ $$\displaystyle\simeq\operatorname{THH}(\mathbb{F}_{p})\otimes_{\mathbb{F}_{p}}\operatorname{HH}(k[\![z]\!]/\mathbb{F}_{p}[z])\ .$$ Now in order to show the claim it suffices to show that $\operatorname{HH}(k[\![z]\!]/\mathbb{F}_{p}[z])$ is concentrated in degree 0 (where it is given by $k[\![z]\!]$). In order to prove this we first note that $\mathbb{F}_{p}[z]\to k[\![z]\!]$ is (derived) relatively perfect, i.e. 
the square $$\xymatrix{\mathbb{F}_{p}[z]\ar[r]\ar[d]^{\varphi}&k[\kern-1.0pt[z]\kern-1.0pt]% \ar[d]^{\varphi}\\ \mathbb{F}_{p}[z]\ar[r]&k[\kern-1.0pt[z]\kern-1.0pt]}$$ (2) is a pushout of commutative ring spectra, where $\varphi$ is the Frobenius. This holds because $1,z,...,z^{p-1}$ is basis for $\mathbb{F}_{p}[z]$ as a $\varphi(\mathbb{F}_{p}[z])=\mathbb{F}_{p}[z^{p}]$-module and also for $k[\kern-1.0pt[z]\kern-1.0pt]$ as a $\varphi(k[\kern-1.0pt[z]\kern-1.0pt])=k[\kern-1.0pt[z^{p}]\kern-1.0pt]$-algebra. Now the map $$\pi_{i}(\operatorname{HH}(k[\kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])% \otimes_{\mathbb{F}_{p}[z]}\mathbb{F}_{p}[z])\to\pi_{i}\operatorname{HH}(k[% \kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])$$ induced from the square (2) is an equivalence since the square is a pushout. We claim again, as in the proof of Proposition 2.1, that this map is zero for $i>0$. Since $\varphi:\mathbb{F}_{p}[z]\to\mathbb{F}_{p}[z]$ is flat, we have $$\pi_{i}\operatorname{HH}(k[\kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])% \otimes_{\mathbb{F}_{p}[z]}\mathbb{F}_{p}[z])=\pi_{i}\operatorname{HH}(k[\kern% -1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])\otimes_{\mathbb{F}_{p}[z]}\mathbb{F}_% {p}$$ as right $\mathbb{F}_{p}[z]$-modules. The map $$\pi_{i}\operatorname{HH}(k[\kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])% \otimes_{\mathbb{F}_{p}[z]}\mathbb{F}_{p}[z]\to\pi_{i}\operatorname{HH}(k[% \kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])$$ is induced up from the map $\pi_{i}\operatorname{HH}(k[\kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])\to\pi% _{i}\operatorname{HH}(k[\kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])$ induced by the Frobenius of $k[\kern-1.0pt[z]\kern-1.0pt]$, which is given by the Frobenius of the simplicial commutative ring $\operatorname{HH}(k[\kern-1.0pt[z]\kern-1.0pt]/\mathbb{F}_{p}[z])$. Thus it is zero on positive dimensional homotopy groups. ∎ Remark 3.2. The isomorphism $\operatorname{THH}_{*}(A/\mathbb{S}[z];\mathbb{Z}_{p})\cong A[x]$ of Theorem 3.1 depends on the choice of generator $x$ of $\operatorname{THH}_{2}(A/\mathbb{S}[z];\mathbb{Z}_{p})$. The proof of Theorem 3.1 determines $x$ in mixed characteristic only modulo $\pi$. We will see later that there is in fact a preferred choice of generator $x$ which then makes the isomorphism of Theorem 3.1 canonical, see Remark 4.3. Remark 3.3. Let $A$ be a not necessarily complete DVR of mixed characteristic $(0,p)$ with perfect residue field. Then we have that $$\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})\to\operatorname{THH}(A_{p}/% \mathbb{S}[z];\mathbb{Z}_{p})$$ is an equivalence where $A_{p}$ is the $p$-completion of $A$. This is true for every ring $A$. But for a DVR the $p$-completion $A_{p}$ is the same as the completion of $A$ with respect to the maximal ideal so that Theorem 3.1 applies to yield that $$\operatorname{THH}_{*}(A/\mathbb{S}[z];\mathbb{Z}_{p})=A_{p}[x]\ .$$ For every prime $\ell\neq p$ we have that $$\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{\ell})=0$$ since $\ell$ is invertible in $A$. If we can show that $\operatorname{THH}(A/\mathbb{S}[z])$ is finitely generated in each degree we can therefore even get that $\operatorname{THH}_{*}(A/\mathbb{S}[z])=A[x]$ without $p$-completion. For example if $A=\mathbb{Z}_{(p)}$ or more generally localizations of rings of integers at prime ideals. 
But in general one can not control the rational homotopy type of $\operatorname{THH}(A/\mathbb{S}[z])$, as the example of $\mathbb{Z}_{p}$ shows, where we get contributions from $\mathbb{Z}_{p}\otimes_{\mathbb{Z}}\mathbb{Z}_{p}$. In equal characteristic we do not know how to compute $\operatorname{THH}_{*}(A/\mathbb{S}[z];\mathbb{Z}_{p})$ if $A$ is not complete, since in general the cotangent complex $L_{A/\mathbb{F}_{p}[z]}$ does not vanish.555For an explicit counterexample consider an element $f$ in the fraction field $Q(\mathbb{F}_{p}[\kern-1.0pt[z]\kern-1.0pt])$ which is transcendental over $Q(\mathbb{F}_{p}[z])$. This exists for cardinality reasons. Now the cotangent complex $L_{Q(\mathbb{F}_{p}[z])(f)/Q(\mathbb{F}_{p}[z])}$ is nontrivial. Since it agrees with a localisation of $L_{A/\mathbb{F}_{p}[z]}$, where $A=\mathbb{F}_{p}[\kern-1.0pt[z]\kern-1.0pt]\cap Q(\mathbb{F}_{p}[z])(f)$, $A$ is a DVR with nontrivial $L_{A/\mathbb{F}_{p}[z]}$. Remark 3.4. One can also deduce the mixed characteristic version of Theorem 3.1 from an analogue of Theorem 1.2 which under the same assumptions as Theorem 3.1 and in mixed characteristic states that $A\otimes_{\mathbb{S}_{W(k)}[z]}A$ is $p$-adically the free $\mathbb{E}_{2}$-algebra on a single generator in degree 1. We also want to remark that there are some equivalent ways of stating Theorem 3.1 which might be a bit more canonical from a certain point of view. Proposition 3.5. In the situation of Theorem 3.1 the map $\mathbb{S}[z]\to A$ extends to a map $\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt]\to A$ by completeness of $A$. The induced canonical maps $$\xymatrix{\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})\ar[r]^{\simeq}\ar% [d]^{\simeq}&\operatorname{THH}(A/\mathbb{S}[\kern-1.0pt[z]\kern-1.0pt];% \mathbb{Z}_{p})\ar[d]_{\simeq}\\ \operatorname{THH}(A/\mathbb{S}_{W(k)}[z];\mathbb{Z}_{p})\ar[r]^{\simeq}&% \operatorname{THH}(A/\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt];\mathbb{Z}_{% p})\\ \operatorname{THH}(A/\mathbb{S}_{W(k)}[z])\ar[u]_{\simeq}\ar[r]^{\simeq}&% \operatorname{THH}(A/\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt])\ar[u]^{% \simeq}}$$ are all equivalences. Proof. For the upper four maps this follows from the equivalences $$\displaystyle\operatorname{THH}(\mathbb{S}[\kern-1.0pt[z]\kern-1.0pt]/\mathbb{% S}[z];\mathbb{Z}_{p})\simeq\mathbb{S}[\kern-1.0pt[z]\kern-1.0pt]^{\wedge}_{p}$$ $$\displaystyle\operatorname{THH}(\mathbb{S}_{W(k)}[z]/\mathbb{S}[z];\mathbb{Z}_% {p})\simeq\mathbb{S}_{W(k)}[z]^{\wedge}_{p}$$ $$\displaystyle\operatorname{THH}(\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt]/% \mathbb{S}_{W(k)}[z];\mathbb{Z}_{p})\simeq\mathbb{S}_{W(k)}[\kern-1.0pt[z]% \kern-1.0pt]$$ $$\displaystyle\operatorname{THH}(\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt]/% \mathbb{S}[\kern-1.0pt[z]\kern-1.0pt]];\mathbb{Z}_{p})\simeq\mathbb{S}_{W(k)}[% \kern-1.0pt[z]\kern-1.0pt]$$ which can all be checked in $\mathbb{F}_{p}$-homology (see Remark 2.2 and the proof of Theorem 3.1). The last two vertical equivalences follows since $\operatorname{THH}(A/\mathbb{S}_{W(k)}[z])$ and $\operatorname{THH}(A/\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt])$ are already $p$-complete. If $A$ is of equal characteristic this is clear anyhow (and in the whole diagram we did not need the $p$-completions). 
In mixed characteristic this follows from Lemma 3.6 below, since $A$ is of finite type over $\mathbb{S}_{W(k)}[z]$ and over $\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt]$, which can be seen by the presentation $$A\cong W(k)[z]/\phi(z)\cong W(k)[\kern-1.0pt[z]\kern-1.0pt]/\phi(z),$$ where $\phi$ is the minimal polynomial of the uniformizer $\pi$. ∎ Recall that a connective ring spectrum $A$ over a connective, commutative ring spectrum $R$ is said to be of finite type if $A$ is, as an $R$-module, a filtered colimit of perfect modules along increasingly connective maps (i.e. has a cell structure with finite ‘skeleta’). Lemma 3.6. If $A$ is $p$-complete and of finite type over $R$, then $\operatorname{THH}(A/R)$ is also $p$-complete. Proof. We first observe that all tensor products $A\otimes_{R}...\otimes_{R}A$ are of finite type over $A$ (say by action from the right), which follows inductively. Thus they are $p$-complete. Finally, the $n$-truncation of $\operatorname{THH}(A/R)$ is equivalent to the $n$-truncation of the colimit of the restriction of the cyclic Bar construction to $\Delta^{\mathrm{op}}_{\leq n+1}$. This colimit is finite and its stages are $p$-complete by the above. ∎ We now consider quotients $A^{\prime}$ of a CDVR $A$ as in Theorem 3.1. Every ideal is of the form $(\pi^{k})\subseteq A$ and thus $A^{\prime}=A/\pi^{k}$ for some $k\geq 1$. Proposition 3.7. In the situation above we have a canonical equivalence $$\operatorname{THH}(A^{\prime}/\mathbb{S}[z])\simeq\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathbb{Z}[z]}\operatorname{HH}\big{(}(\mathbb{Z}[z]/z^{k})\,/\mathbb{Z}[z]\big{)}$$ and on homotopy groups we get $$\operatorname{THH}_{*}(A^{\prime}/\mathbb{S}[z])=A^{\prime}[x]\langle y\rangle$$ where $y$ is a divided power generator in degree 2. Proof. Since $\pi$ is a non-zero divisor we can write $A^{\prime}=A\otimes_{\mathbb{S}[z]}(\mathbb{S}[z]/z^{k})$ where $\mathbb{S}[z]/z^{k}$ is the reduced suspension spectrum of the pointed monoid $\mathbb{N}/[k,\infty)$. Thus we find $$\displaystyle\operatorname{THH}(A^{\prime}/\mathbb{S}[z])$$ $$\displaystyle\simeq\operatorname{THH}(A/\mathbb{S}[z])\otimes_{\mathbb{S}[z]}\operatorname{THH}((\mathbb{S}[z]/z^{k})/\mathbb{S}[z])$$ $$\displaystyle\simeq\operatorname{THH}(A/\mathbb{S}[z])\otimes_{\mathbb{Z}[z]}\Big{(}\mathbb{Z}\otimes_{\mathbb{S}}\operatorname{THH}\big{(}(\mathbb{S}[z]/z^{k})/\mathbb{S}[z]\big{)}\Big{)}$$ $$\displaystyle\simeq\operatorname{THH}(A/\mathbb{S}[z])\otimes_{\mathbb{Z}[z]}\operatorname{HH}\big{(}(\mathbb{Z}[z]/z^{k})\,/\mathbb{Z}[z]\big{)}$$ $$\displaystyle\simeq\operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathbb{Z}[z]}\operatorname{HH}\big{(}(\mathbb{Z}[z]/z^{k})\,/\mathbb{Z}[z]\big{)}$$ where in the last step we have used that $p$ is nilpotent in $A^{\prime}$ and thus we are already $p$-complete. Finally $\operatorname{HH}\big{(}(\mathbb{Z}[z]/z^{k})\,/\mathbb{Z}[z]\big{)}$ is given by a divided power algebra $(\mathbb{Z}[z]/z^{k})\langle y\rangle$. To see this we first observe that $\mathbb{Z}[z]/z^{k}\otimes_{\mathbb{Z}[z]}\mathbb{Z}[z]/z^{k}$ is given by the exterior algebra $\Lambda_{\mathbb{Z}[z]/z^{k}}(e)$ with $e$ in degree $1$.
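To make this observation explicit (a standard Koszul-type computation, spelled out here only for convenience), one can resolve the second tensor factor by the free $\mathbb{Z}[z]$-module resolution $0\to\mathbb{Z}[z]\xrightarrow{\;z^{k}\;}\mathbb{Z}[z]\to\mathbb{Z}[z]/z^{k}\to 0$ and tensor it over $\mathbb{Z}[z]$ with $\mathbb{Z}[z]/z^{k}$: $$\mathbb{Z}[z]/z^{k}\otimes_{\mathbb{Z}[z]}\mathbb{Z}[z]/z^{k}\simeq\Big{(}\mathbb{Z}[z]/z^{k}\xrightarrow{\ 0\ }\mathbb{Z}[z]/z^{k}\Big{)},$$ so the homotopy groups are $\mathbb{Z}[z]/z^{k}$ in degrees $0$ and $1$ and vanish otherwise; the degree $1$ class $e$ squares to zero for degree reasons, so the ring structure is indeed exterior.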
Then it follows that $\operatorname{HH}\big{(}(\mathbb{Z}[z]/z^{k})\,/\mathbb{Z}[z]\big{)}$, which is the Bar construction on that, is given by $$\operatorname{Tor}_{*}^{\Lambda_{\mathbb{Z}[z]/z^{k}}(e)}\big{(}\mathbb{Z}[z]/% z^{k},\mathbb{Z}[z]/z^{k}\big{)}=(\mathbb{Z}[z]/z^{k})\langle y\rangle\ .$$ This implies the claim. ∎ 4 Absolute $\operatorname{THH}$ for CDVRs For $A$ a CDVR with perfect residue field of characteristic $p$ we have computed $\operatorname{THH}$ relative to $\mathbb{S}[z]$. In order to compute the absolute $\operatorname{THH}$ we are going to employ a spectral sequence which works very generally (see Proposition 7.1). Proposition 4.1. For every commutative algebra $A$ (over $\mathbb{Z}$) with an element $\pi\in A$ considered as a $\mathbb{S}[z]$-algebra there is a multiplicative, convergent spectral sequence $$\operatorname{THH}_{*}(A/\mathbb{S}[z];\mathbb{Z}_{p})\otimes_{\mathbb{Z}[z]}% \Omega^{*}_{\mathbb{Z}[z]/\mathbb{Z}}\Rightarrow\operatorname{THH}_{*}(A;% \mathbb{Z}_{p}).$$ Proof. This is a special case of the spectral sequence of Proposition 7.1. ∎ Now for $A$ a CDVR we want to use this spectral sequence to determine $\operatorname{THH}_{*}(A;\mathbb{Z}_{p})$. From Theorem 3.1 we see that this spectral sequence takes the form $$E^{2}=A[x]\otimes\Lambda(dz)\Rightarrow\operatorname{THH}_{*}(A;\mathbb{Z}_{p})$$ with $|x|=(2,0)$ and $|dz|=(0,1)$. $$A$$$$A\{x\}$$$$A\{x^{2}\}$$…$$A\{dz\}$$$$A\{xdz\}$$…$$0$$$$0$$$$0$$$$0$$$$0$$$$0$$$$0$$$$\ldots$$$$\vdots$$ Using the multiplicative structure one only has to determine a single differential $$d^{2}:A\{x\}\to A\{dz\}\ .$$ In the equal characteristic case this has to vanish since $x$ can be chosen to lie in the image of the map $\operatorname{THH}(\mathbb{F}_{p})\to\operatorname{THH}(A;\mathbb{Z}_{p})\to% \operatorname{THH}(A/\mathbb{S}[z];\mathbb{Z}_{p})$ and thus has to be a permanent cycle. Thus the spectral sequence degenerates and we get $\operatorname{THH}_{*}(A)=A[x]\otimes\Lambda(dz)$ as there can not be any extension problems for degree reasons. 666This can also be seen directly using that $A=k[\kern-1.0pt[z]\kern-1.0pt]=\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{S}_{W% (k)}[\kern-1.0pt[z]\kern-1.0pt]$ which implies $\operatorname{THH}(A)=\operatorname{THH}(\mathbb{F}_{p})\otimes_{\mathbb{S}}% \operatorname{THH}(\mathbb{S}_{W(k)}[\kern-1.0pt[z]\kern-1.0pt])=\operatorname% {THH}(\mathbb{F}_{p})\otimes_{\mathbb{F}_{p}}\operatorname{HH}(A/\mathbb{F}_{p% })\ .$ Let us now assume that $A$ is a CDVR of mixed characteristic. Once we have chosen a uniformizer $\pi$ we get a minimal polynomial $\phi(z)\in W(k)[z]$ which we normalize such that $\phi(0)=p$. Note that usually $\phi$ is taken to be monic, of the form $\phi(z)=z^{e}+p\theta(z)$. This differs from our convention by the unit $\theta(0)$. Lemma 4.2. There is a choice of generator $x\in\operatorname{THH}_{2}(A/\mathbb{S}[z];\mathbb{Z}_{p})$ such that $d^{2}(x)=\phi^{\prime}(\pi)dz$. Proof. $\operatorname{THH}(A;\mathbb{Z}_{p})$ agrees with $\operatorname{THH}(A/\mathbb{S}_{W(k)};\mathbb{Z}_{p})$, since $\operatorname{THH}(\mathbb{S}_{W(k)};\mathbb{Z}_{p})=\mathbb{S}_{W(k)}$. Since $A$ is of finite type over $\mathbb{S}_{W(k)}$ we use Lemma 3.6 to see that $\operatorname{THH}(A/\mathbb{S}_{W(k)};\mathbb{Z}_{p})=\operatorname{THH}(A/% \mathbb{S}_{W(k)})$. 
For connectivity reasons, $$\operatorname{THH}_{1}(A/\mathbb{S}_{W(k)})=\operatorname{HH}_{1}(A/W(k))=% \Omega^{1}_{A/W(k)}.$$ Since $A=W(k)[z]/\phi(z)$, we have $$\Omega^{1}_{A/W(k)}={A\{dz\}}/{\phi^{\prime}(\pi)dz}.$$ Comparing with the spectral sequence, this means that the image of $d^{2}:E^{2}_{2,0}\to E^{2}_{0,1}$ is precisely the submodule of $A\{dz\}$ generated by $\phi^{\prime}(\pi)dz$. Since $A$ is a domain, any two generators of a principal ideal differ by a unit, and thus for any generator $x$ in degree $(2,0)$, $d^{2}(x)$ differs from $\phi^{\prime}(\pi)dz$ by a unit. In particular, we can choose $x$ such that $d^{2}(x)=\phi^{\prime}(\pi)dz$. ∎ Remark 4.3. The generator $x\in\operatorname{THH}_{2}(A/\mathbb{S}[z];\mathbb{Z}_{p})$ determined by Lemma 4.2 maps under basechange along $\mathbb{S}[z]\to\mathbb{S}$ to a generator of $\operatorname{THH}_{2}(A/\pi;\mathbb{Z}_{p})=\operatorname{THH}_{2}(k)$. The choice of normalization of $\phi$ with $\phi(0)=p$ is chosen such that this is compatible with the generator obtained from the generator of $\operatorname{THH}_{2}(\mathbb{F}_{p})$ under the map $\operatorname{THH}_{2}(\mathbb{F}_{p})\to\operatorname{THH}_{2}(k)$ induced by $\mathbb{F}_{p}\to k$. Lemma 4.2 implies that $\operatorname{THH}_{*}(A,\mathbb{Z}_{p})$ is isomorphic to the homology of the DGA $$(A[x]\otimes\Lambda(d\pi),\partial)\qquad\qquad|x|=2,|d\pi|=1$$ with differential $\partial x=\phi^{\prime}(\pi)\cdot d\pi$ and $\partial(d\pi)=0$ as there are no multiplicative extensions possible. Here we have named the element detected by $dz$ by $d\pi$ as it is given by Connes operator $d:\operatorname{THH}_{*}(A,\mathbb{Z}_{p})\to\operatorname{THH}_{*}(A,\mathbb{% Z}_{p})$ applied to the uniformizer $\pi$. This follows from the identification of the degree $1$ part with $\Omega^{1}_{A/W(k)}$ as in the proof of Lemma 4.2. We warn the reader that we have obtained this description for $\operatorname{THH}_{*}(A;\mathbb{Z}_{p})$ from the relative $\operatorname{THH}$ which depends on a choice of uniformizer. As a result the DGA description is only natural in maps that preserve the chosen uniformizer. The homology of this DGA can easily be additively evaluated to yield the following result, which was first obtained in [LM00, Theorem 5.1], but with completely different methods. Theorem 4.4 (Lindenstrauss-Madsen). For a CDVR $A$ of mixed characteristic $(0,p)$ with perfect residue field we have non-natural isomorphisms 777In the sense that they are only natural in maps that preserve the chosen uniformizer. $$\operatorname{THH}_{*}(A;\mathbb{Z}_{p})\cong\begin{cases}A&\text{for }*=0\\ A/n\phi^{\prime}(\pi)&\text{for }*=2n-1\\ 0&\text{otherwise }\end{cases}$$ where $\pi$ is a uniformizer with minimal polynomial $\phi$. In this case the multiplicative structure is necessarily trivial, so that we do not really get more information from the DGA description. But we also obtain a spectral sequence analogous to the one of Proposition 4.1 for $p$-completed $\operatorname{THH}$ of $A$ with coefficients in a discrete $A$-algebra $A^{\prime}$, which is $\operatorname{THH}(A;\mathbb{Z}_{p})\otimes_{A}A^{\prime}$. This takes the same form, just base-changed to $A^{\prime}$. Thus we get the following result, which was of course also known before. Proposition 4.5. 
For a CDVR $A$ of mixed characteristic and any map of commutative algebras $A\to A^{\prime}$ we have a non-natural ring isomorphism $$\pi_{*}(\operatorname{THH}(A;\mathbb{Z}_{p})\otimes_{A}A^{\prime})\cong H_{*}(A^{\prime}[x]\otimes\Lambda(d\pi),\partial)$$ with $\partial x=\phi^{\prime}(\pi)\,d\pi$ and $\partial d\pi=0$.∎ 5 Absolute $\operatorname{THH}$ for quotients of DVRs Now we come back to the case of quotients of DVRs. Thus let $A^{\prime}=A/\mathfrak{m}^{k}\cong A/\pi^{k}$ where $A$ is a DVR with perfect residue field of characteristic $p$. Recall that in Proposition 3.7 we have shown that $$\operatorname{THH}_{*}(A^{\prime}/\mathbb{S}[z])\cong A^{\prime}[x]\langle y\rangle.$$ We want to consider the spectral sequence of Proposition 4.1, which in this case takes the form $$E^{2}=A^{\prime}[x]\langle y\rangle\otimes\Lambda(dz)\Rightarrow\operatorname{THH}_{*}(A^{\prime})$$ with $|x|=(2,0)$, $|y|=(2,0)$ and $|dz|=(0,1)$. (The $E^{2}$-page thus has $A^{\prime}$, $A^{\prime}\{x,y\}$, $A^{\prime}\{x^{2},xy,y^{[2]}\},\ldots$ in the row $j=0$, the row $j=1$ is $A^{\prime}\{dz\}$, $A^{\prime}\{x\,dz,y\,dz\},\ldots$, and all higher rows vanish.) Here we write $y^{[n]}$ for the $n$-th divided power of $y$. The reader should think of $y^{[n]}$ as ‘$y^{n}/n!$’. Lemma 5.1. We can choose the generator $y$ and its divided powers in such a way that in the associated spectral sequence, $d^{2}(y^{[i]})=k\pi^{k-1}\cdot y^{[i-1]}dz$. In particular the differential is a PD derivation, i.e. satisfies $d^{2}(y^{[i+1]})=d^{2}(y)y^{[i]}$ for all $i\geq 0$. (Note that since $A^{\prime}$ is not a domain this does not uniquely determine $y$. One could fix a choice of such a $y$ by comparison with elements in the Bar complex, but this is not necessary for our applications.) Proof. The construction of the spectral sequence of Proposition 4.1 (given in the proof of Proposition 7.1) applies generally to any $\operatorname{HH}(\mathbb{Z}[z]/\mathbb{Z})$-module $M$ to produce a spectral sequence $$\pi_{*}(M\otimes_{\operatorname{HH}(\mathbb{Z}[z]/\mathbb{Z})}\mathbb{Z}[z])\otimes_{\mathbb{Z}[z]}\operatorname{HH}_{*}(\mathbb{Z}[z]/\mathbb{Z})\Rightarrow\pi_{*}(M).$$ Since we can write $A^{\prime}=A\otimes_{\mathbb{S}[z]}(\mathbb{S}[z]/z^{k})$, we have $$\operatorname{THH}(A^{\prime})\simeq\operatorname{THH}(A)\otimes_{\operatorname{THH}(\mathbb{S}[z])}\operatorname{THH}(\mathbb{S}[z]/z^{k})\simeq\operatorname{THH}(A)\otimes_{\operatorname{HH}(\mathbb{Z}[z])}\operatorname{HH}(\mathbb{Z}[z]/z^{k}).$$ So we have a map of $\operatorname{HH}(\mathbb{Z}[z])$-algebras $\operatorname{HH}(\mathbb{Z}[z]/z^{k})\to\operatorname{THH}(A^{\prime})$, and thus a multiplicative map of the corresponding spectral sequences. The spectral sequence for $\operatorname{HH}(\mathbb{Z}[z]/z^{k})$ is of the form $$\operatorname{HH}_{*}((\mathbb{Z}[z]/z^{k})/\mathbb{Z}[z])\otimes\Lambda(dz)\Rightarrow\operatorname{HH}(\mathbb{Z}[z]/z^{k}).$$ We have that $\operatorname{HH}_{*}((\mathbb{Z}[z]/z^{k})/\mathbb{Z}[z])=(\mathbb{Z}[z]/z^{k})\langle y\rangle$. Since the spectral sequence is multiplicative, we get $$i!d^{2}(y^{[i]})=d^{2}(y^{i})=id^{2}(y)y^{i-1}=i!d^{2}(y)y^{[i-1]},$$ and since the $E^{2}$-page consists of torsion free abelian groups, we can divide this equation by $i!$ to get $$d^{2}(y^{[i]})=d^{2}(y)y^{[i-1]},$$ i.e. the differential is compatible with the divided power structure. Now, $\operatorname{HH}_{1}((\mathbb{Z}[z]/z^{k})/\mathbb{Z}[z])=\Omega^{1}_{(\mathbb{Z}[z]/z^{k})/\mathbb{Z}[z]}=(\mathbb{Z}[z]/z^{k})\{dz\}/kz^{k-1}dz$.
In particular, in the spectral sequence $$\operatorname{HH}_{*}((\mathbb{Z}[z]/z^{k})/\mathbb{Z}[z])\otimes\Lambda(dz)\Rightarrow\operatorname{HH}(\mathbb{Z}[z]/z^{k})$$ $d^{2}(y)$ is a unit multiple of $kz^{k-1}dz$. We can thus choose our generator $y$ of $\operatorname{HH}_{2}((\mathbb{Z}[z]/z^{k})/\mathbb{Z}[z])$ in such a way that $d^{2}(y)=kz^{k-1}dz$, and by compatibility with divided powers, $d^{2}(y^{[i]})=kz^{k-1}\cdot y^{[i-1]}dz$. After base-changing along $\mathbb{Z}[z]\to A$, this implies the claim. ∎ Theorem 5.2. Let $A^{\prime}\cong A/\pi^{k}$ be a quotient of a DVR $A$ with perfect residue field of characteristic $p$. Then $\operatorname{THH}_{*}(A^{\prime})$ is as a ring non-naturally isomorphic to the homology of the DGA $$(A^{\prime}[x]\langle y\rangle\otimes\Lambda(d\pi),\partial)\qquad\qquad|x|=2,|y|=2,|d\pi|=1$$ with differential $\partial$ given by $\partial(d\pi)=0$ and $\partial(y^{[i]})=k\pi^{k-1}\cdot y^{[i-1]}d\pi$ and $$\partial(x)=\begin{cases}\phi^{\prime}(\pi)\cdot d\pi&\text{if $A$ is of mixed characteristic}\\ 0&\text{if $A$ is of equal characteristic}\end{cases}$$ Here $\pi\in A$ is a uniformizer and $\phi$ its minimal polynomial. Proof. This follows immediately from Lemma 5.1 together with the fact that there are no extension problems for degree reasons. ∎ 6 Evaluation of the result In this section we want to make the results of Theorem 5.2 explicit. We start by considering the case of the $p$-adic integers $\mathbb{Z}_{p}$, in which Theorem 5.2 reduces additively to Brun’s result but gives some more multiplicative information. We note that all the computations in this section depend on the presentation $A^{\prime}=A/\pi^{k}$ and are in particular highly non-natural in $A^{\prime}$. Example 6.1. We start by discussing the case $A=\mathbb{Z}_{p}$ and $k\geq 2$. We pick the uniformizer $\pi=p$. The minimal polynomial is $\phi(z)=z-p$, and $A^{\prime}=\mathbb{Z}/p^{k}$. The resulting groups $\operatorname{THH}_{*}(\mathbb{Z}/p^{k})$ were additively computed by Brun [Bru00]. We have $\partial y^{[i]}=kp^{k-1}y^{[i-1]}d\pi$, and since the minimal polynomial is given by $z-p$ we get $\partial x=d\pi$. If $k\geq 2$, then $y^{\prime}=y-kp^{k-1}x$ still has divided powers, given by $$(y^{\prime})^{[i]}=\sum_{l\geq 0}(-1)^{l}\frac{k^{l}p^{l(k-1)}}{l!}y^{[i-l]}x^{l},$$ which makes sense since $v_{p}(l!)<\frac{l}{p-1}\leq l(k-1)$ by Lemma 6.6 below. Now $\partial(y^{\prime})^{[i]}=0$, and we get a map of DGAs $$\left((\mathbb{Z}/p^{k})[x]\otimes\Lambda(d\pi),\partial\right)\otimes_{\mathbb{Z}}\left(\mathbb{Z}\langle y^{\prime}\rangle,0\right)\to\left((\mathbb{Z}/p^{k})[x]\langle y\rangle\otimes\Lambda(d\pi),\partial\right)$$ which is an isomorphism by a straightforward filtration argument. By Proposition 4.5, the homology of $((\mathbb{Z}/p^{k})[x]\otimes\Lambda(d\pi),\partial)$ coincides with $\pi_{*}(\operatorname{THH}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k})$. Thus applying the Künneth theorem we get $$\operatorname{THH}_{*}(\mathbb{Z}/p^{k})=\pi_{*}(\operatorname{THH}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k})\otimes_{\mathbb{Z}}\mathbb{Z}\langle y^{\prime}\rangle$$ as rings.
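For the reader's convenience we record the homology of the first Künneth factor (a direct check from Proposition 4.5, added here as an illustration): in the DGA $((\mathbb{Z}/p^{k})[x]\otimes\Lambda(d\pi),\partial)$ with $\partial x=d\pi$ one has $\partial(x^{n})=nx^{n-1}d\pi$, so $$\pi_{m}\big{(}\operatorname{THH}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k}\big{)}\cong\begin{cases}\mathbb{Z}/p^{k}&\text{for }m=0\\ \mathbb{Z}/\gcd(p^{k},n)&\text{for }m=2n\text{ and for }m=2n-1,\ n\geq 1.\end{cases}$$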
Concretely we get $$\displaystyle\operatorname{THH}_{*}(\mathbb{Z}/p^{k})$$ $$\displaystyle=\bigoplus_{i\geq 0}\pi_{*-2i}(\operatorname{THH}(\mathbb{Z}_{p})% \otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k})$$ $$\displaystyle=\begin{cases}\mathbb{Z}/p^{k}\oplus\bigoplus_{1\leq i\leq n}% \mathbb{Z}/{\gcd(p^{k},i)}&\text{for }*=2n\\ \bigoplus_{1\leq i\leq n}\mathbb{Z}/{\gcd(p^{k},i)}&\text{for }*=2n-1\ .\end{cases}$$ So in the case $k\geq 2$, we can replace the divided power generator of our DGA by one in the kernel of $\partial$. We contrast this with the case $k=1$. In this case, of course, we expect to recover Bökstedt’s result $\operatorname{THH}_{*}(\mathbb{Z}/p)=(\mathbb{Z}/p)[x]$, but it is nevertheless interesting to analyze this result in terms of Theorem 5.2 and observe how this differs from Example 6.1. Example 6.2. For $A=\mathbb{Z}_{p}$ with uniformizer $p$ and $k=1$, i.e. $A^{\prime}=\mathbb{Z}/p$, we have $\partial x=d\pi$ and $\partial y=d\pi$. Here, we can set $x^{\prime}=x-y$ to obtain an isomorphism of DGAs $$\left(\mathbb{Z}[x^{\prime}],0\right)\otimes_{\mathbb{Z}}\left((\mathbb{Z}/p)% \langle y\rangle\otimes\Lambda(d\pi),\partial\right)\to\left((\mathbb{Z}/p)[x]% \langle y\rangle\otimes\Lambda(d\pi),\partial\right).$$ Since $\partial y^{[i]}=y^{[i-1]}dz$, and thus the homology of the second factor is just $\mathbb{Z}/p$ in degree $0$, Künneth applies to show that $\operatorname{THH}_{*}(\mathbb{Z}/p)\cong(\mathbb{Z}/p)[x^{\prime}]$. The two qualitatively different behaviours illustrated in Examples 6.1 and 6.2 also appear in the general case: For sufficiently big $k$, we can modify the divided power generator $y$ to a $y^{\prime}$ that splits off, and obtain a description in terms of $\operatorname{THH}(A;A^{\prime})$ (Proposition 6.7). For sufficiently small $k$, we can modify the polynomial generator to an $x^{\prime}$ that splits off, and obtain a description in terms of $\operatorname{HH}(A^{\prime})$ (Proposition 6.4. In the general case, as opposed to the case of the integers, these two cases do not cover all possibilities, and for $k$ in a certain region the homology groups of the DGA of Theorem 5.2 are possibly without a clean closed form description. Recall that, in the DGA of Theorem 5.2, we have $\partial x=\phi^{\prime}(\pi)dz$ and $\partial y=k\pi^{k-1}dz$. The behavior of the DGA depends on which of the two coefficients has greater valuation. Lemma 6.3. In mixed characteristic, we have $$\operatorname{THH}_{2}(A^{\prime})\cong A^{\prime}\oplus A/{\gcd(\phi^{\prime}% (\pi),k\pi^{k-1},\pi^{k})}.$$ 1. If $p|k$ and $\pi^{k}|\phi^{\prime}(\pi)$, we can take as generators $$\operatorname{THH}_{2}(A^{\prime})\cong A^{\prime}\left\{x,y\right\}.$$ 2. If $p|k$ and $\phi^{\prime}(\pi)|\pi^{k}$, we can take as generators $$\operatorname{THH}_{2}(A^{\prime})\cong A^{\prime}\left\{y\right\}\oplus(A/% \phi^{\prime}(\pi))\left\{\frac{\pi^{k}}{\phi^{\prime}(\pi)}\right\}.$$ 3. If $p\nmid k$ and $\phi^{\prime}(\pi)|\pi^{k-1}$, we can take as generators $$\operatorname{THH}_{2}(A^{\prime})\cong A^{\prime}\left\{y^{\prime}=y-\frac{k% \pi^{k-1}}{\phi^{\prime}(\pi)}x\right\}\oplus(A/\phi^{\prime}(\pi))\left\{% \frac{\pi^{k}}{\phi^{\prime}(\pi)}x\right\}.$$ 4. 
If $p\nmid k$ and $\pi^{k-1}|\phi^{\prime}(\pi)$, we can take as generators $$\operatorname{THH}_{2}(A^{\prime})\cong A^{\prime}\left\{x^{\prime}=x-\frac{\phi^{\prime}(\pi)}{k\pi^{k-1}}y\right\}\oplus(A/\pi^{k-1})\left\{\pi y\right\}.$$ We now want to discuss the structure of $\operatorname{THH}_{*}(A^{\prime})$ in the cases appearing in Lemma 6.3. We start with the simplest case, which is analogous to Example 6.2: Proposition 6.4. Assume we are in the situation of Theorem 5.2 and that either $A$ is of equal characteristic, or $A$ is of mixed characteristic and we are in case (1) or (4) of Lemma 6.3, i.e. $p|k$ and $\pi^{k}|\phi^{\prime}(\pi)$, or $p\nmid k$ and $\pi^{k-1}|\phi^{\prime}(\pi)$. Then we have $$\operatorname{THH}_{*}(A^{\prime})\cong\mathbb{Z}[x^{\prime}]\otimes_{\mathbb{Z}}H_{*}\left(A^{\prime}\langle y\rangle\otimes\Lambda(d\pi),\partial\right)\qquad|x^{\prime}|=2$$ which evaluates additively (for $n\geq 1$) to $$\displaystyle\operatorname{THH}_{2n}(A^{\prime};\mathbb{Z}_{p})\cong A/\pi^{k}\oplus\bigoplus_{i=1}^{n}A/\gcd(k\pi^{k-1},\pi^{k})$$ $$\displaystyle\operatorname{THH}_{2n-1}(A^{\prime};\mathbb{Z}_{p})\cong\bigoplus_{i=1}^{n}A/\gcd(k\pi^{k-1},\pi^{k}).$$ Proof. We set $x^{\prime}=x$ if $A$ is of equal characteristic or if $p|k$ and $\pi^{k}|\phi^{\prime}(\pi)$, and $x^{\prime}=x-\frac{\phi^{\prime}(\pi)}{k\pi^{k-1}}y$ if $p\nmid k$ and $\pi^{k-1}|\phi^{\prime}(\pi)$. Then $\partial x^{\prime}=0$. We get a map of DGAs $$\left(\mathbb{Z}[x^{\prime}],0\right)\otimes_{\mathbb{Z}}\left(A^{\prime}\langle y\rangle\otimes\Lambda(d\pi),\partial\right)\to\left(A^{\prime}[x]\langle y\rangle\otimes\Lambda(d\pi),\partial\right)$$ which is an isomorphism by a straightforward filtration argument. By Künneth, we get an isomorphism $$\operatorname{THH}_{*}(A^{\prime})\cong\mathbb{Z}[x^{\prime}]\otimes_{\mathbb{Z}}H_{*}\left(A^{\prime}\langle y\rangle\otimes\Lambda(d\pi),\partial\right).$$ The additive description of the homology is easily seen from the fact that $\partial y^{[i]}=k\pi^{k-1}(d\pi)y^{[i-1]}$. ∎ Remark 6.5. In fact, we can identify $H_{*}\left(A^{\prime}\langle y\rangle\otimes\Lambda(d\pi),\partial\right)$ with the Hochschild homology $\operatorname{HH}_{*}(\mathbb{Z}[z]/z^{k}\otimes A^{\prime}/A^{\prime})$. Compare Section 8. Essentially, the takeaway of Proposition 6.4 is that in cases (1) and (4) of Lemma 6.3 we can modify the polynomial generator $x$ to a cycle which splits a polynomial factor off $\operatorname{THH}(A^{\prime})$. One would hope that, complementarily, in cases (2) and (3), we can split off a divided power factor. This is only true under more restrictive conditions. To formulate those, we will require the following lemma on the valuation of factorials: Lemma 6.6 (Legendre). For a natural number $l\geq 1$ and a prime $p$ we have $$v_{p}(l!)<\frac{l}{p-1}.$$ Proof. We count how often $p$ divides $l!$. Every multiple of $p$ not greater than $l$ provides a factor of $p$, every multiple of $p^{2}$ provides an additional factor of $p$, and so on. We get the following formula, due to Legendre: $$v_{p}(l!)=\sum_{i\geq 1}\left\lfloor\frac{l}{p^{i}}\right\rfloor,$$ where $\lfloor-\rfloor$ denotes rounding down to the nearest integer. In particular, $$v_{p}(l!)<\sum_{i\geq 1}\frac{l}{p^{i}}=\frac{l}{p-1}.\qed$$ Proposition 6.7. Assume we are in the situation of Theorem 5.2 and that, if $A$ is of equal characteristic, $p|k$; if $A$ is of mixed characteristic, assume that either $p|k$ (i.e.
we are in case (1) or (2) of Lemma 6.3), or we have the following strengthening of case (3): $$v_{p}\left(\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}\right)\geq\frac{1}{p-1}.$$ Then we have an isomorphism of rings $$\operatorname{THH}_{*}(A^{\prime})\cong\pi_{*}(\operatorname{THH}(A)\otimes_{A}A^{\prime})\otimes_{\mathbb{Z}}\mathbb{Z}\langle y^{\prime}\rangle\qquad|y^{\prime}|=2$$ In particular, we get additively (for $n\geq 1$) $$\displaystyle\operatorname{THH}_{2n}(A^{\prime};\mathbb{Z}_{p})\cong A/\pi^{k}\oplus\bigoplus_{i=1}^{n}A/\gcd(i\phi^{\prime}(\pi),\pi^{k})$$ $$\displaystyle\operatorname{THH}_{2n-1}(A^{\prime};\mathbb{Z}_{p})\cong\bigoplus_{i=1}^{n}A/\gcd(i\phi^{\prime}(\pi),\pi^{k})\ .$$ Proof. If $p|k$, all $y^{[i]}$ are cycles, and we set $y^{\prime}:=y$. If $$v_{p}\left(\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}\right)\geq\frac{1}{p-1},$$ we set $y^{\prime}=y-\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}x$. In either case, $y^{\prime}$ admits divided powers, defined in the first case just by $(y^{\prime})^{[i]}=y^{[i]}$, and in the second case by $$(y^{\prime})^{[i]}=\sum_{l\geq 0}(-1)^{l}\frac{k^{l}\pi^{l(k-1)}}{\phi^{\prime}(\pi)^{l}l!}y^{[i-l]}x^{l},$$ which is well-defined because $$v_{p}\left(\frac{k^{l}\pi^{l(k-1)}}{\phi^{\prime}(\pi)^{l}}\right)\geq\frac{l}{p-1}>v_{p}(l!)$$ by assumption and Lemma 6.6. We get a map of DGAs $$\left(A^{\prime}[x]\otimes\Lambda(d\pi),\partial\right)\otimes\left(\mathbb{Z}\langle y^{\prime}\rangle,0\right)\to\left(A^{\prime}[x]\langle y\rangle\otimes\Lambda(d\pi),\partial\right)$$ which is an isomorphism by a straightforward filtration argument. By Proposition 4.5 and Künneth, we then get $$\operatorname{THH}_{*}(A^{\prime};\mathbb{Z}_{p})\cong\pi_{*}(\operatorname{THH}(A)\otimes_{A}A^{\prime})\otimes_{\mathbb{Z}}\mathbb{Z}\langle y^{\prime}\rangle\qed$$ Finally, we want to illustrate that the case ‘in between’ Propositions 6.7 and 6.4 is more complicated and probably doesn’t admit a simple uniform description. Example 6.8. For a mixed characteristic CDVR $A$ with perfect residue field and $A^{\prime}=A/\mathfrak{m}^{k}=A/\pi^{k}$, Theorem 5.2 implies that the even-degree part of $\operatorname{THH}_{*}(A^{\prime})$ is given by the kernel of $\partial$ in the DGA $(A^{\prime}[x]\langle y\rangle\otimes\Lambda(d\pi),\partial)$. We can thus consider $\bigoplus\operatorname{THH}_{2n}(A^{\prime})$ as a subring of $A^{\prime}[x]\langle y\rangle$. Suppose we are in the situation of case (3) of Lemma 6.3. Then generators for $\operatorname{THH}_{2}(A^{\prime})$ are given by $$\displaystyle y-\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}x,$$ $$\displaystyle\frac{\pi^{k}}{\phi^{\prime}(\pi)}x.$$ Now suppose the valuations of the coefficients $\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}$ and $\frac{\pi^{k}}{\phi^{\prime}(\pi)}$ are positive, but small, say smaller than $\frac{1}{p}$. Then observe that $$\left(y-\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}x\right)^{p}=\frac{k^{p}\pi^{p(k-1)}}{\phi^{\prime}(\pi)^{p}}x^{p}\text{ mod }p,$$ in particular, under our assumptions, $\left(y-\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}x\right)^{p}$ is divisible by $\pi$ but not $p$. Similarly, $$\left(\frac{\pi^{k}}{\phi^{\prime}(\pi)}x\right)^{p}=\frac{\pi^{kp}}{\phi^{\prime}(\pi)^{p}}x^{p}$$ is divisible by $\pi$ but not $p$. So both of our generators of $\operatorname{THH}_{2}(A^{\prime})$ are nilpotent, but cannot admit divided powers. It is not hard to see that this holds more generally for any element of $\operatorname{THH}_{2}(A^{\prime})$ that is nonzero mod $\pi$.
So in this situation, $\operatorname{THH}_{*}(A^{\prime})$ cannot admit a description similar to Proposition 6.4 or 6.7. One example for $A^{\prime}$ fulfilling the requirements used here is given by $A=\mathbb{Z}_{p}[\sqrt[e]{p}]$ with uniformizer $\pi=\sqrt[e]{p}$, and $k=e+1$, as long as $p\nmid e,k$ and $e>2p$. 7 The general spectral sequences We now want to establish a spectral sequence to compute absolute $\operatorname{THH}$ from relative ones of which Proposition 4.1 is a special case. This will come in two slightly different flavours. We let $R\to A$ be a map of commutative rings and let $\mathbb{S}_{R}$ be a lift of $R$ to the sphere, i.e. a commutative ring spectrum with an equivalence $$\mathbb{S}_{R}\otimes_{\mathbb{S}}\mathbb{Z}\simeq R.$$ The example that will lead to the spectral sequence of Proposition 4.1 is $R=\mathbb{Z}[z]$ and $\mathbb{S}_{R}=\mathbb{S}[z]$. Recall that for every commutative ring $R$ we can form the derived de Rham complex $L\Omega_{R/\mathbb{Z}}$, which has a filtration whose associated graded is in degree $*=i$ given by a shift the non-abelian derived functor of the $i$-term of the de Rham complex $\Omega^{i}_{R/\mathbb{Z}}$ (considered as a functor in $R$). Concretely this is done by simplically resolving $R$ by polynomial algebras $\mathbb{Z}[x_{1},...,x_{k}]$, taking $\Omega^{i}_{\bullet/\mathbb{Z}}$ levelwise and considering the result via Dold-Kan as an object of $\mathcal{D}(\mathbb{Z})$. This derived functor agrees with the $i$-th derived exterior power $\Lambda^{i}L_{R/\mathbb{Z}}$ of the cotangent complex $L_{R/\mathbb{Z}}$. For $R$ smooth over $\mathbb{Z}$ this just recovers the usual terms in the de Rham complex. In general one should be aware that $L\Omega_{R/\mathbb{Z}}$ is a filtered chain complex, hence has two degrees, one homological and one filtration degree. We shall only need its associated graded $L\Omega^{*}_{R/\mathbb{Z}}$ which is a graded chain complex. We warn the reader that the homological direction comes from deriving and has nothing to do with the de Rham differential. Proposition 7.1. In the situation described above there are two multiplicative, convergent spectral sequences $$\displaystyle\pi_{i}\Big{(}\operatorname{THH}(A/\mathbb{S}_{R})\otimes_{R}% \operatorname{HH}_{j}(R/\mathbb{Z})\Big{)}\Rightarrow\pi_{i+j}\operatorname{% THH}(A)$$ $$\displaystyle\pi_{i}\Big{(}\operatorname{THH}(A/\mathbb{S}_{R})\otimes_{R}L% \Omega^{j}_{R/\mathbb{Z}}\Big{)}\Rightarrow\pi_{i+j}\operatorname{THH}(A)\ .$$ Here we use homological Serre grading, i.e. the displayed bigraded ring is the $E_{2}$-page and the $d^{r}$-differential has $(i,j)$-bidegree $(-r,r-1)$. A similar spectral sequence with everything all terms $p$-completed (including the tensor products) exists as well. Proof. We consider the lax symmetric monoidal functor $$\displaystyle\operatorname{Mod}_{\operatorname{HH}(R/\mathbb{Z})}$$ $$\displaystyle\to\operatorname{Mod}_{\operatorname{THH}(A)}$$ (3) $$\displaystyle M$$ $$\displaystyle\mapsto\operatorname{THH}(A)\otimes_{\operatorname{HH}(R/\mathbb{% Z})}M$$ where we have used the equivalence $\operatorname{HH}(R/\mathbb{Z})=\operatorname{THH}(\mathbb{S}_{R})\otimes_{% \mathbb{S}}\mathbb{Z}$ to get the $\operatorname{HH}(R/\mathbb{Z})$-module structure on $\operatorname{THH}(A)$. 
Now we filter $\operatorname{HH}(R/\mathbb{Z})$ by two different filtrations: either by the Whitehead tower $$...\to\tau_{\geq 2}\operatorname{HH}(R/\mathbb{Z})\to\tau_{\geq 1}% \operatorname{HH}(R/\mathbb{Z})\to\tau_{\geq 0}\operatorname{HH}(R/\mathbb{Z})% =\operatorname{HH}(R/\mathbb{Z})$$ or by the HKR-filtration [NS18, Proposition IV.4.1] $$...\to F^{2}_{\operatorname{HKR}}\to F^{1}_{\operatorname{HKR}}\to F^{0}_{% \operatorname{HKR}}=\operatorname{HH}(R/\mathbb{Z})\ .$$ The HKR-filtration is in fact the derived version of the Whitehead tower, in particular for $R$ smooth (or more generally ind-smooth) both filtrations agree. Both filtrations are complete and multiplicative, in particular they are filtrations through $\operatorname{HH}(R/\mathbb{Z})$ modules. On the associated graded pieces the $\operatorname{HH}(R/\mathbb{Z})$-module structure factors through the map $\operatorname{HH}(R/\mathbb{Z})\to R$ of ring spectra. This is obvious for the Whitead tower and thus also follows for the HKR filtration. Thus the graded pieces are only $R$-modules and as such given by $\operatorname{HH}_{i}(R)$ in the first case and by $\Lambda^{j}L_{R/\mathbb{Z}}$ in the second case. After applying the functor (3) to this filtration we obtain two multiplicative filtrations of $\operatorname{THH}(A)$: $$\operatorname{THH}(A)\otimes_{\operatorname{HH}(R/\mathbb{Z})}\big{(}\tau_{% \geq j}\operatorname{HH}(R/\mathbb{Z})\big{)}\qquad\text{and}\qquad% \operatorname{THH}(A)\otimes_{\operatorname{HH}(R/\mathbb{Z})}F^{j}_{% \operatorname{HKR}}$$ which are complete since the connectivity of the pieces tends to infinity. Let us identify the associated gradeds for the HKR filtration, the case of the Whitehead tower works the same: $$\displaystyle\operatorname{THH}(A)\otimes_{\operatorname{HH}(R/\mathbb{Z})}% \Lambda^{j}L_{R/\mathbb{Z}}$$ $$\displaystyle\simeq\operatorname{THH}(A)\otimes_{\operatorname{HH}(R/\mathbb{Z% })}R\otimes_{R}\Lambda^{j}L_{R/\mathbb{Z}}$$ $$\displaystyle\simeq(\operatorname{THH}(A)\otimes_{\operatorname{THH}(\mathbb{S% }_{R})\otimes_{\mathbb{S}}\mathbb{Z}}(\mathbb{S}_{R}\otimes_{\mathbb{S}}% \mathbb{Z}))\otimes_{R}\Lambda^{j}L_{R/\mathbb{Z}}$$ $$\displaystyle\simeq(\operatorname{THH}(A)\otimes_{\operatorname{THH}(\mathbb{S% }_{R})}\mathbb{S}_{R})\otimes_{R}\Lambda^{j}L_{R/\mathbb{Z}}$$ $$\displaystyle\simeq(\operatorname{THH}(A/\mathbb{S}_{R})\otimes_{R}\Lambda^{j}% L_{R/\mathbb{Z}}\ .$$ Thus by the standard construction we get conditionally convergent, multiplicative spectral sequences which are concentrated in a single quadrant and therefore convergent. ∎ If $R$ is smooth (or more generally ind-smooth) over $\mathbb{Z}$ then both spectral sequences of Proposition 7.1 agree and take the form $$\operatorname{THH}_{*}(A/\mathbb{S}_{R})\otimes_{R}\Omega^{*}_{R/\mathbb{Z}}% \Rightarrow\operatorname{THH}_{*}(A)\ .$$ In general the HKR spectral sequence seems to be slightly more useful even though the other one looks easier (at least easier to state). We will explain the difference in the example of a quotient of a DVR in Section 8 where $R=\mathbb{Z}[z]/z^{k}$ and $\mathbb{S}_{R}=\mathbb{S}[z]/z^{k}$. Remark 7.2. 
With basically the same construction as in Proposition 7.1 (and if $R\otimes_{\mathbb{Z}}A$ is discrete in the first case) one gets variants of these spectral sequences which take the form $$\displaystyle\pi_{i}\Big{(}\operatorname{THH}(A/\mathbb{S}_{R})\otimes_{A}% \operatorname{HH}_{j}(R\otimes_{\mathbb{Z}}A/A)\Big{)}\Rightarrow\pi_{i+j}% \operatorname{THH}(A)$$ $$\displaystyle\pi_{i}\Big{(}\operatorname{THH}(A/\mathbb{S}_{R})\otimes_{A}L% \Omega^{j}_{R\otimes_{\mathbb{Z}}A/A}\Big{)}\Rightarrow\pi_{i+j}\operatorname{% THH}(A).$$ These spectral sequences agree with the ones of Proposition 7.1 as soon as $A$ is flat over $R$ or $R$ is smooth over $\mathbb{Z}$, which covers all cases of interest for us. These modified spectral sequences are probably in general the ‘correct’ ones but we have decided to state Proposition 7.1 in the more basic form. Finally we end this section by construction a slightly different spectral sequence in the situation of a map of rings $A\to A^{\prime}$. This was constructed in Theorem 3.1 of [Lin00]. See also Brun [Bru00], which contains the special case $A=\mathbb{Z}_{p}$. We will explain how it was used by Brun to compute $\operatorname{THH}_{*}(\mathbb{Z}/p^{n})$ in the next section and compare that approach to ours. Proposition 7.3. In general for a map of rings $A\to A^{\prime}$ there is a multiplicative, convergent spectral sequence $$\pi_{i}\Big{(}\operatorname{HH}(A^{\prime}/A)\otimes_{A^{\prime}}\pi_{j}\big{(% }\operatorname{THH}(A)\otimes_{A}A^{\prime}\big{)}\Big{)}\Rightarrow% \operatorname{THH}_{i+j}(A^{\prime}).$$ Proof. We filter $\operatorname{THH}(A)\otimes_{A}A^{\prime}=:T$ by its Whitehead tower $\tau_{\geq\bullet}T$ and consider the associated filtration $$\operatorname{THH}(A^{\prime})\otimes_{T}\tau_{\geq\bullet}T.$$ This filtration is multiplicative, complete and the colimit is given by $\operatorname{THH}(A^{\prime})$. The associated graded is given by $$\displaystyle\operatorname{THH}(A^{\prime})\otimes_{T}\pi_{j}T$$ $$\displaystyle\simeq\operatorname{THH}(A^{\prime})\otimes_{T}A^{\prime}\otimes_% {A^{\prime}}\pi_{j}T$$ $$\displaystyle\simeq\left(\operatorname{THH}(A^{\prime})\otimes_{\operatorname{% THH}(A)\otimes_{A}A^{\prime}}A^{\prime}\right)\otimes_{A^{\prime}}\pi_{j}T$$ $$\displaystyle\simeq\left(\operatorname{THH}(A^{\prime})\otimes_{\operatorname{% THH}(A)}A\right)\otimes_{A^{\prime}}\pi_{j}T$$ $$\displaystyle\simeq\operatorname{HH}(A^{\prime}/A)\otimes_{A^{\prime}}\pi_{j}T\ .$$ where we have again used various base change formulas for $\operatorname{THH}$. ∎ 8 Comparison of spectral sequences Let us consider the situation of Section 5 i.e. $A^{\prime}=A/\pi^{k}$ is a quotient of a DVR $A$ with perfect residue field of characteristic $p$. We want to compare four different multiplicative spectral sequences converging to $\operatorname{THH}(A^{\prime})$ that can be used in such a situation. They all have absolutely isomorphic (virtual) $E^{0}$-pages give by $A[x]\langle y\rangle\otimes\Lambda(dz)$ but totally different grading and differential structure. 1. In Section 5 we have constructed a spectral sequence which ultimately identifies $\operatorname{THH}_{*}(A^{\prime})$ as the homology of a DGA $(A^{\prime}[x]\langle y\rangle\otimes\Lambda(d\pi),\partial)$, see Theorem 5.2. This spectral sequence takes the form $$1$$$$x,y$$$$xdz,ydz$$$$x^{2},xy,y^{[2]}$$$$0$$$$0$$$$0$$$$0$$$$0$$$$\ldots$$$$\ldots$$$$\ldots$$$$dz$$ i.e. 
we have both $x$ and $y$ along the lower edge, and they both support differentials hitting certain multiples of $dz$ (here $dz$ corresponds to $d\pi$). The main point is that it suffices to determine the differential on $x$ and $y$ and the rest follows using multiplicative and divided power structures. There is no space for higher differentials. 2. We now consider Brun’s spectral sequence, see Proposition 7.3. It also computes $\operatorname{THH}_{*}(A^{\prime})$ but has $E^{2}$-term $$E^{2}=\operatorname{HH}_{*}(A^{\prime}/A)\otimes_{A^{\prime}}\pi_{*}(% \operatorname{THH}_{*}(A)\otimes_{A}A^{\prime})$$ Since $\operatorname{HH}_{*}(A^{\prime}/A)$ is a divided power algebra $A^{\prime}\langle y\rangle$, and $\pi_{*}(\operatorname{THH}_{*}(A)\otimes_{A}A^{\prime})$ can be computed as the homology of the DGA $(A^{\prime}[x]\otimes\Lambda(dz),\partial)$ by Proposition 4.5, one can introduce a virtual zeroth page of the form $$E^{0}=A^{\prime}[x]\langle y\rangle\otimes\Lambda(dz)\qquad|y|=(2,0),|x|=(0,2)% ,|dz|=(0,1)$$ We interpret $\partial$ as the $d^{0}$-differential999We do not claim that there is a direct algebraic construction of a spectral sequence with this zeroth page. We simply define the spectral sequence by defining $E^{0}$ and $d^{0}$ as explained and from $E^{2}$ and higher on we take Brun’s spectral sequence. This should be seen as a mere tool of visualization. and get the following picture: $$1$$$$y$$$$x$$$$\vdots$$$$xdz$$$$x^{2}$$$$dz$$$$0$$$$y^{[2]}$$$$ydz$$$$\ldots$$$$\vdots$$$$\ldots$$$$0$$$$\vdots$$$$0$$$$0$$$$0$$ This spectral sequence behaves well and degenerates in the ‘big $k$’ case discussed in 6.7, since then we have divided power elements $(y^{\prime})^{[i]}\in\operatorname{THH}_{*}(A^{\prime})$ that are detected by the $y^{[i]}$, but this is not obvious from this spectral sequence, and Brun [Bru00] has to do serious work to determine its structure in the case $A^{\prime}=\mathbb{Z}/p^{k}$ for $k\geq 2$. In fact, for $A^{\prime}=\mathbb{Z}/p^{k}$ with $k=1$ the spectral sequence becomes highly nontrivial. After $d^{0}$, determined by $d^{0}(x)=dz$, the leftmost column consists of elements of the form $x^{ip}$ and $x^{ip-1}dz$. From Example 6.2, we know that $\operatorname{THH}_{*}(\mathbb{F}_{p})$ is polynomial on $x^{\prime}=x-y$. This is detected as $y$ in this spectral sequence. Since $y$ is a divided power generator, its $p$-th power is $0$ on the $E_{\infty}$-page. But $p^{k-1}(x-y)^{p}=p^{k-1}x^{p}$, and thus there is a multiplicative extension. In addition, the elements $x^{kp-1}dz$ and the divided powers of $y$ cannot exist on the $E_{\infty}$-page, so there are also longer differentials. While these phenomena might seem like a pathology in the case $A=\mathbb{Z}_{p}$ – after all, we knew $\operatorname{THH}(\mathbb{Z}/p)$ before – qualitatively, they generally appear whenever we are not in the ‘big $k$’ case discussed in Proposition 6.4. 3. We can also consider the first spectral constructed in Proposition 7.1, which takes the form $$E^{2}=\operatorname{THH}_{*}(A^{\prime}/(\mathbb{S}[z]/z^{k}))\otimes_{A^{% \prime}}\operatorname{HH}_{i}((A^{\prime}[z]/z^{k})/A^{\prime})\Rightarrow% \operatorname{THH}_{*}(A^{\prime}).$$ One gets $\operatorname{THH}_{*}(A^{\prime}/(\mathbb{S}[z]/z^{k}))=A^{\prime}[x]$ by a version of Theorem 3.1, and $\operatorname{HH}((A^{\prime}[z]/z^{k})/A^{\prime})$ is computed as the homology of the DGA $(A^{\prime}\langle y\rangle\otimes\Lambda(dz),\partial)$ where $y$ sits in degree $2$ and $dz$ in degree 1. 
Thus we again introduce a virtual $E^{0}$-term $$E^{0}=A^{\prime}[x]\langle y\rangle\otimes\Lambda(dz)\qquad|y|=(0,2),|x|=(2,0)% ,|dz|=(0,1)$$ and consider $\partial$ as a $d^{0}$ differential. Then the spectral sequence visually looks as follows: $$1$$$$x$$$$y$$$$\vdots$$$$dz$$$$0$$$$x^{2}$$$$xdz$$$$\ldots$$$$\ldots$$$$\ldots$$$$0$$$$0$$$$0$$ This spectral sequence behaves well and degenerates in the ‘small $k$’ case discussed in Proposition 6.4, since then we have a polynomial generator $x^{\prime}\in\operatorname{THH}_{*}(A^{\prime})$ whose powers are detected by the $x^{i}$. If we are not in this case, we generally have nontrivial extensions. For example, let $A^{\prime}$ be chosen such that $p\nmid k$, and $\pi\phi^{\prime}(\pi)|\pi^{k-1}$. In this case, $\operatorname{THH}_{2}(A^{\prime})$, using 5.2, is of the form $$A^{\prime}\{y^{\prime}\}\oplus(A/\phi^{\prime}(\pi))\left\{\frac{\pi^{k}}{\phi% ^{\prime}(\pi)}x\right\},$$ with $$y^{\prime}=y-\frac{k\pi^{k-1}}{\phi^{\prime}(\pi)}x.$$ In this spectral sequence, the $E^{\infty}$ page consists in total degree $2$ of a copy of $(A/\pi^{k-1})\{\pi y\}$ in degree $(0,2)$, and a copy of $(A/\pi\phi^{\prime}(\pi))\left\{\frac{\pi^{k-1}}{\phi^{\prime}(\pi)}x\right\}$ in degree $(2,0)$. The element $y^{\prime}\in\operatorname{THH}_{2}(A^{\prime})$ is detected as a generator of the degree $(2,0)$ part, but it is not actually annihilated by $\pi\phi^{\prime}(\pi)$. Rather, $\pi\phi^{\prime}(\pi)y^{\prime}$ agrees with $\pi\phi^{\prime}(\pi)y$, detected as a $\phi^{\prime}(\pi)$-multiple of the generator in degree $(0,2)$ and nonzero under our assumption $\pi\phi^{\prime}(\pi)|k\pi^{k-1}$. 4. Finally we can consider the second spectral sequence constructed in Proposition 7.1 which takes the form $$E^{2}=\operatorname{THH}(A^{\prime}/(\mathbb{S}[z]/z^{k}))\otimes_{A^{\prime}}% L\Omega_{A^{\prime}/A}\Rightarrow\operatorname{THH}(A^{\prime}).$$ One again has $\operatorname{THH}_{*}(A^{\prime}/(\mathbb{S}[z]/z^{k}))=A^{\prime}[x]$ and $L\Omega_{A^{\prime}/A}$ is computed as the homology of the DGA $(A^{\prime}\langle y\rangle\otimes\Lambda(dz),\partial)$ where this time $y$ sits in grading $1$ and homological degree $1$ (recall that $L\Omega$ has a grading and a homological degree). Thus our virtual $E^{0}$-term this time takes the form $$E^{0}=A^{\prime}[x]\langle y\rangle\otimes\Lambda(dz)\qquad|y|=(1,1),|x|=(2,0)% ,|dz|=(0,1)\ .$$ and the differential $\partial$ becomes a $d^{1}$. The spectral sequence looks graphically as follows: $$1$$$$x$$$$y$$$$0$$$$0$$$$\iddots$$$$\iddots$$$$\vdots$$$$dz$$$$0$$$$x^{2}$$$$xdz$$$$\ldots$$$$y^{[2]}$$$$\ldots$$$$ydz$$$$y^{[2]}dz$$$$0$$$$0$$ This spectral sequence is a slightly improved version of spectral sequence (3) as there are way less higher differentials possible. The whole wedge above the diagonal line through $1$ on the $j$-axis is zero. Again this spectral sequence behaves well and degenerates in the ‘small $k$’ case 6.4, but behaves as badly in the other cases. Essentially, one should view Proposition 6.4 as degeneration result for the spectral sequences (3) and (4), and Proposition 6.7 as a degeneration result for the Brun spectral sequence (2). 
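For orientation, the following table (added here as a summary of the preceding discussion) records the bidegrees of the generators on the (virtual) $E^{0}$-pages of the four spectral sequences, all of which have $|dz|=(0,1)$, together with the cases in which they are best behaved: $$\begin{array}{llll}\text{spectral sequence}&|x|&|y|&\text{well behaved}\\ (1)\ \text{Section 5}&(2,0)&(2,0)&\text{always; yields the DGA of Theorem 5.2}\\ (2)\ \text{Brun (Proposition 7.3)}&(0,2)&(2,0)&\text{`big }k\text{' case, Proposition 6.7}\\ (3)\ \text{Proposition 7.1, first}&(2,0)&(0,2)&\text{`small }k\text{' case, Proposition 6.4}\\ (4)\ \text{Proposition 7.1, second}&(2,0)&(1,1)&\text{`small }k\text{' case, Proposition 6.4}\end{array}$$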
By putting both the Bökstedt element $x$ and the divided power element $y$ (coming from the relation $\pi^{k}=0$) in the same filtration, the spectral sequence (1) that we have used allows us to uniformly treat both of these cases, as well as still behaving well in the cases not covered by Propositions 6.7 and 6.4 (like Example 6.8), where the homology of the DGA of Theorem 5.2 becomes more complicated and all of the three alternative spectral sequences discussed here can have nontrivial extension problems, seen in our spectral sequence in the form of cycles which are interesting linear combinations of powers of $x$ and $y$. 9 Bökstedt periodicity for complete regular local rings In this section we want to discuss the more general case of a complete regular local ring $A$, that is, a complete local ring $A$ whose maximal ideal $\mathfrak{m}$ is generated by a regular sequence $(a_{1},\ldots,a_{n})$, see [Stacks, Tag 00NQ] and [Stacks, Tag 00NU]. Assume furthermore that $A/\mathfrak{m}=k$ is perfect of characteristic $p$. We focus on the mixed characteristic case, since by a result of Cohen [Coh46], $A$ agrees with a power series ring over $k$ in the equal characteristic case. We can regard $A$ as an algebra over $\mathbb{S}[z_{1},\ldots,z_{n}]=\mathbb{S}[\mathbb{N}\times\ldots\times\mathbb{% N}]$. We then have the following generalisation of Theorem 3.1: Theorem 9.1. For a complete regular local ring $A$ of mixed characteristic with perfect residue field of characteristic $p$ we have $$\operatorname{THH}_{*}(A/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p})=A[x]$$ with $x$ in degree $2$. We will give a proof which is completely analogous to the one of Theorem 3.1. We first need the following Lemma. Lemma 9.2. If $A$ is a complete regular local ring as above, it is of finite type over $W(k)[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]$. More precisely, it takes the form $$A=W(k)[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]/\phi(z_{1},\ldots,z_{n})$$ for a power series $\phi$ with $\phi(0,\ldots,0)=p$. Proof. The map $W(k)[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]\to A$ is a surjective $W(k)[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]$-module map, and its kernel $K$ base-changes along $W(k)[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]\to W(k)$ to the kernel of $W(k)\to k$, i.e. $pW(k)$. $K$ is therefore free of rank $1$, on a generator $\phi\in W(k)[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]$ reducing to $p$ modulo $(z_{1},\ldots,z_{n})$. ∎ Proof of Theorem 9.1. From Lemma 9.2, one can deduce as in Proposition 3.5 that the following all agree: These statements can again all be checked modulo $p$, observing that the lower right hand term $\operatorname{THH}(A/\mathbb{S}_{W(k)}[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.% 0pt])$ is already $p$-complete by Lemma 3.6 since $A$ is of finite type over $\mathbb{S}_{W(k)}[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]$. The key is (as in the proof of Proposition 3.5) that the maps $$\xymatrix{\mathbb{F}_{p}[z_{1},\ldots,z_{n}]\ar[r]\ar[d]&\mathbb{F}_{p}[\kern-% 1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]\ar[d]\\ k[z_{1},\ldots,z_{n}]\ar[r]&k[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]}$$ are all relatively perfect. Note that, as opposed to the DVR case, $A$ is not of finite type over the ring spectrum $\mathbb{S}_{W(k)}[z_{1},\ldots,z_{n}]$, and thus $\operatorname{THH}(A/\mathbb{S}_{W(k)}[z_{1},\ldots,z_{n}])$ is not necessarily $p$-complete. ∎ From Proposition 7.1, we now obtain: Proposition 9.3. 
There is a multiplicative, convergent spectral sequence $$\operatorname{THH}_{*}(A/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p})\otimes% _{\mathbb{Z}}\Omega^{*}_{\mathbb{Z}[z_{1},\ldots,z_{n}]/\mathbb{Z}}\Rightarrow% \operatorname{THH}_{*}(A;\mathbb{Z}_{p}).$$ Analogously to Lemma 4.2 we can describe the differential $d^{2}$: Lemma 9.4. We can choose the generator $x\in\operatorname{THH}_{2}(A/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p})$ in such a way that $$d^{2}x=\sum_{i}\frac{\partial\phi}{\partial z_{i}}dz_{i}.$$ Proof. We have $\operatorname{THH}_{1}(A;\mathbb{Z}_{p})=\Omega^{1}_{A/W(k)[\kern-1.0pt[z_{1},% \ldots,z_{n}]\kern-1.0pt]}$. For a polynomial $\phi$ as in Lemma 9.2, we get $$\Omega^{1}_{A/W(k)[\kern-1.0pt[z_{1},\ldots,z_{n}]\kern-1.0pt]}=A\{dz_{1},% \ldots,dz_{n}\}\Big{/}\left(\sum_{i}\frac{\partial\phi}{\partial z_{i}}dz_{i}% \right).$$ So the image of $d^{2}$ in degree $(0,1)$ has to agree with the ideal generated by $\sum_{i}\frac{\partial\phi}{\partial z_{i}}dz_{i}$. Up to a unit, we thus have $$d^{2}x=\sum_{i}\frac{\partial\phi}{\partial z_{i}}dz_{i}.\qed$$ For $n=2$, this differential again completely determines $\operatorname{THH}_{*}(A;\mathbb{Z}_{p})$, since $d^{2}$ in degrees $(2k,0)\mapsto(2k-2,1)$ is injective, and therefore the $E^{3}$-page is concentrated in degrees $(0,0)$, $(2k,1)$ and $(2k,2)$ for $k\geq 0$ and the spectral sequence degenerates thereafter without potential for extensions. For $n\geq 3$, there could be extensions, and for $n\geq 4$, there could be longer differentials, both of which we do not know how to control. Finally, we want to remark a couple of things about computing $\operatorname{THH}_{*}(A^{\prime};\mathbb{Z}_{p})$ for $A^{\prime}=A/(f_{1},\ldots,f_{d})$, with $(f_{1},\ldots,f_{d})$ a regular sequence analogously to Section 5. We still have a spectral sequence $$\operatorname{THH}_{*}(A^{\prime}/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p% })\otimes_{\mathbb{Z}}\operatorname{HH}_{*}(\mathbb{Z}[z_{1},\ldots,z_{n}])% \Rightarrow\operatorname{THH}_{*}(A^{\prime}),$$ but the study of $\operatorname{THH}_{*}(A^{\prime}/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p})$ turns out to be potentially more subtle. As opposed to Proposition 3.7, we only have a spectral sequence $$\operatorname{THH}_{*}(A/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p})\otimes% _{A}\operatorname{HH}_{*}(A^{\prime}/A)\Rightarrow\operatorname{THH}_{*}(A^{% \prime}/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p}),$$ but this does not necessarily degenerate into an equivalence since there is no analogue of the spherical lift $\mathbb{S}[z]/z^{k}$ used in the proof of Proposition 3.7. In our case, $\operatorname{THH}_{*}(A/\mathbb{S}[z_{1},\ldots,z_{n}];\mathbb{Z}_{p})$ is $A[x]$, and $\operatorname{HH}_{*}(A^{\prime}/A)$ is easily seen to be a divided power algebra on $d$ generators. So the spectral sequence is even and cannot have nontrivial differentials. However, there is potential for multiplicative extensions. We have been informed by Guozhen Wang that these do indeed show up, which will be part of forthcoming work of Guozhen Wang with Ruochuan Liu. 10 Logarithmic THH of CDVRs In this section we want to explain how to deduce results about logarithmic topological Hochschild homology from our methods. This way we recover known computations of Hesselholt–Madsen [HM03] for logarithmic $\operatorname{THH}$ of DVRs. We thank Eva Höning for asking about the relation between relative and logarithmic $\operatorname{THH}$, which inspired this section. 
First we recall the definition of logarithmic $\operatorname{THH}$ following [HM03, Lei18] and [Rog09]. For an abelian monoid $M$ we consider the spherical group ring $\mathbb{S}[M]$ and have $$\operatorname{THH}(\mathbb{S}[M])=\mathbb{S}[B^{\mathrm{cyc}}M]$$ where $B^{\mathrm{cyc}}M$ is the cyclic Bar construction, i.e. the unstable version of topological Hochschild homology. We denote by $M\to M^{\operatorname{gp}}$ the group completion and define the logarithmic $\operatorname{THH}$ of $\mathbb{S}[M]$ relative to $M$ by $$\operatorname{THH}\big{(}\mathbb{S}[M]\,|\,M\big{)}:=\mathbb{S}[M\times_{M^{% \operatorname{gp}}}B^{\mathrm{cyc}}M^{\operatorname{gp}}]\ .$$ There are induced maps of commutative ring spectra $$\operatorname{THH}(\mathbb{S}[M])\to\operatorname{THH}\big{(}\mathbb{S}[M]\,|% \,M\big{)}\to\mathbb{S}[M]$$ whose composition is the canonical map. These are induced from the maps $B^{\mathrm{cyc}}M\to M\times_{M^{\operatorname{gp}}}B^{\mathrm{cyc}}M^{% \operatorname{gp}}\to M$. Definition 10.1. For a commutative ring $R$ with a map $\mathbb{S}[M]\to R$ we define logarithmic THH as the commutative ring spectrum $$\operatorname{THH}(R\,|\,M):=\operatorname{THH}(R)\otimes_{\operatorname{THH}(% \mathbb{S}[M])}\operatorname{THH}\big{(}\mathbb{S}[M]\,|\,M\big{)}\ .$$ In practice, we will only need the case $M=\mathbb{N}$ with the map $\mathbb{S}[\mathbb{N}]=\mathbb{S}[z]\to R$ given by sending $z$ to an element $\pi\in R$. In this case we will also denote $\operatorname{THH}(R\,|\,\mathbb{N})$ by $\operatorname{THH}(R\,|\,\pi)$. Lemma 10.2. We have an equivalence of commutative ring spectra $$\operatorname{THH}(R/\mathbb{S}[M])\simeq\operatorname{THH}(R\,|\,M)\otimes_{% \operatorname{THH}(\mathbb{S}[M]\,|\,M)}\mathbb{S}[M]\ .$$ Proof. We have $$\displaystyle\operatorname{THH}(R/\mathbb{S}[M])$$ $$\displaystyle\simeq\operatorname{THH}(R)\otimes_{\operatorname{THH}(\mathbb{S}% [M])}\mathbb{S}[M]$$ $$\displaystyle\simeq\operatorname{THH}(R)\otimes_{\operatorname{THH}(\mathbb{S}% [M])}\operatorname{THH}\big{(}\mathbb{S}[M]\,|\,M\big{)}\otimes_{\operatorname% {THH}(\mathbb{S}[M]\,|\,M)}\mathbb{S}[M]$$ $$\displaystyle\simeq\operatorname{THH}(R\,|\,M)\otimes_{\operatorname{THH}(% \mathbb{S}[M]\,|\,M)}\mathbb{S}[M]\ .\qed$$ We will use this Lemma to get a spectral sequence similar to the one of Proposition 7.1. To this end let us introduce some further notation. We set $$\operatorname{HH}(\mathbb{Z}[M]\,|\,M):=\operatorname{THH}\big{(}\mathbb{S}[M]% \,|\,M\big{)}\otimes_{\mathbb{S}}\mathbb{Z}=\mathbb{Z}[M\times_{M^{% \operatorname{gp}}}B^{\mathrm{cyc}}M^{\operatorname{gp}}],$$ which comes with a canonical map $\operatorname{HH}(\mathbb{Z}[M])\to\operatorname{HH}(\mathbb{Z}[M]\,|\,M)$ . Example 10.3. For $M=\mathbb{N}$ we have $\mathbb{Z}[M]=\mathbb{Z}[z]$ and we get that the logarithmic Hochschild homology $\operatorname{HH}_{*}(\mathbb{Z}[M]\,|\,M)=\operatorname{HH}_{*}(\mathbb{Z}[z]% \,|\,z)$ is the exterior algebra over $\mathbb{Z}[z]$ on a generator $\operatorname{dlog}z$. One should think of $\operatorname{dlog}z$ as ‘$dz/z$’. Indeed, under the canonical map $$\Omega^{*}_{\mathbb{Z}[z]/\mathbb{Z}}=\operatorname{HH}_{*}(\mathbb{Z}[z])\to% \operatorname{HH}_{*}(\mathbb{Z}[z]\,|\,z)$$ the element $dz\in\Omega^{1}_{\mathbb{Z}[z]/\mathbb{Z}}$ gets mapped to $z\cdot\operatorname{dlog}z$ as one easily checks. In particular one should think of $\operatorname{HH}_{*}(\mathbb{Z}[z]\,|\,z)$ as differential forms on the space $\mathbb{A}^{1}\setminus 0$ with logarithmic poles at $0$. 
This is a subalgebra of the differential forms on $\mathbb{A}^{1}\setminus 0$, as is witnessed topologically by the injective map $\operatorname{HH}_{*}(\mathbb{Z}[z]\,|\,z)\to\operatorname{HH}_{*}(\mathbb{Z}[z^{\pm}])$; the map from $\operatorname{HH}_{*}(\mathbb{Z}[z])$ then includes the forms on $\mathbb{A}^{1}$. Proposition 10.4. For every map $\mathbb{S}[M]\to R$ of commutative rings there is a multiplicative and convergent spectral sequence $$\pi_{i}\Big{(}\operatorname{THH}(R/\mathbb{S}[M])\otimes_{\mathbb{Z}[M]}\operatorname{HH}_{j}(\mathbb{Z}[M]\,|\,M)\Big{)}\Rightarrow\pi_{i+j}\operatorname{THH}(R\,|\,M)\ .$$ Moreover this spectral sequence receives a multiplicative map from the spectral sequence $$\pi_{i}\Big{(}\operatorname{THH}(R/\mathbb{S}[M])\otimes_{\mathbb{Z}[M]}\operatorname{HH}_{j}(\mathbb{Z}[M])\Big{)}\Rightarrow\pi_{i+j}\operatorname{THH}(R)$$ of Proposition 7.1, which refines on the abutment the canonical map $\operatorname{THH}_{*}(R)\to\operatorname{THH}_{*}(R\,|\,M)$ and on the $E^{2}$-page the map $\operatorname{HH}_{*}(\mathbb{Z}[M])\to\operatorname{HH}_{*}(\mathbb{Z}[M]\,|\,M)$. Similarly, there is a $p$-completed version of this spectral sequence. Proof. We proceed exactly as in the proof of Proposition 7.1 and define a filtration on $\operatorname{THH}(R\,|\,M)$ by $$\operatorname{THH}(R\,|\,M)\otimes_{\operatorname{HH}(\mathbb{Z}[M]\,|\,M)}\tau_{\geq i}{\operatorname{HH}(\mathbb{Z}[M]\,|\,M)}.$$ By the same manipulations as there we get the result using Lemma 10.2. ∎ Now for a CDVR $A$ of mixed characteristic with perfect residue field of characteristic $p$, we want to use this spectral sequence to determine the logarithmic $\operatorname{THH}_{*}(A\,|\,\pi;\mathbb{Z}_{p})$. As usual, this denotes the homotopy groups of the $p$-completion of $\operatorname{THH}(A\,|\,\pi)$. From Theorem 3.1 we see that the spectral sequence of Proposition 10.4 takes the form $$E^{2}=A[x]\otimes\Lambda(\operatorname{dlog}z)\Rightarrow\operatorname{THH}_{*}(A\,|\,\pi;\mathbb{Z}_{p})$$ with $|x|=(2,0)$ and $|\operatorname{dlog}z|=(0,1)$. (The $E^{2}$-page thus has $A\{x^{k}\}$ in bidegree $(2k,0)$ and $A\{x^{k}\operatorname{dlog}z\}$ in bidegree $(2k,1)$, and vanishes in all rows $j\geq 2$.) The spectral sequence receives a map from the spectral sequence $$E^{2}=A[x]\otimes\Lambda(dz)\Rightarrow\operatorname{THH}_{*}(A;\mathbb{Z}_{p})$$ used in Section 4. This map sends $x$ to $x$ and $dz$ to $\pi\operatorname{dlog}z$. Thus from our knowledge of the differential in this second spectral sequence, where we have $d^{2}(x)=\phi^{\prime}(\pi)dz$ (Lemma 4.2), we can conclude that $d^{2}$ in the first spectral sequence has to send $x$ to $\pi\phi^{\prime}(\pi)\operatorname{dlog}z$. Thus we get the following result of Hesselholt–Madsen [HM03, Theorem 2.4.1 and Remark 2.4.2]. Proposition 10.5. For a CDVR $A$ of mixed characteristic with perfect residue field of characteristic $p$, the ring $\operatorname{THH}_{*}(A\,|\,\pi;\mathbb{Z}_{p})$ is isomorphic to the homology $$H_{*}\big(A[x]\otimes\Lambda(\operatorname{dlog}\pi),\partial\big)$$ of the DGA with $\partial x=\pi\phi^{\prime}(\pi)\operatorname{dlog}\pi$ and $\partial\operatorname{dlog}\pi=0$.
In particular $$\operatorname{THH}_{*}(A\,|\,\pi;\mathbb{Z}_{p})=\begin{cases}A&\text{for }*=0% \\ A/n\pi\phi^{\prime}(\pi)&\text{for }*=2n-1\\ 0&\text{otherwise }\end{cases}$$ Similarly to Proposition 4.5 one can also obtain a version with coefficients in an $A$-algebra $A^{\prime}$, namely that $\pi_{*}(\operatorname{THH}(A\,|\,\pi;\mathbb{Z}_{p})\otimes_{A}A^{\prime})$ is given by the homology of the DGA $H_{*}(A^{\prime}[x]\otimes\Lambda(\operatorname{dlog}\pi),\partial)$ with $\partial$ as above. Note that one could alternatively also deduce the differential in the log spectral sequence using the description of $\operatorname{THH}_{1}(A\,|\,\pi;\mathbb{Z}_{p})$ in terms of logarithmic Kähler differentials, similar to the way we have deduced the differential in the absolut spectral sequence for $\operatorname{THH}_{*}(A;\mathbb{Z}_{p})$ in Lemma 4.2. Remark 10.6. We have considered the DVR $A$ together with the map $\mathbb{N}\to A$ as input for our logarithmic $\operatorname{THH}$. This is what is called a pre-log ring. The associated log ring is given by the saturation $M\to A$ with $M=A\cap(A[\pi^{-1}])^{\times}$. However we have $M=A^{\times}\times\mathbb{N}$ as one easily verifies. Chasing through the definitions one sees that this implies that $\operatorname{THH}(A\,|\,\mathbb{N})\simeq\operatorname{THH}(A\,|\,M)$, i.e. that the logarithmic $\operatorname{THH}$ only depends on the logarithmic structure. Appendix A Relation to the Hopkins-Mahowald result Theorem 1.2 about $\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$ is closely related to the following statement due to Hopkins and Mahowald. We thank Mike Mandell for explaining a proof to us. Theorem A.1 (Hopkins, Mahowald). The Thom spectrum of the $\mathbb{E}_{2}$-map $$\Omega^{2}S^{3}\to\operatorname{BGL}_{1}(\mathbb{S}_{p})$$ corresponding to the element $1-p\in\pi_{0}(\operatorname{GL}_{1}(\mathbb{S}_{p}))$ is equivalent to $\mathbb{F}_{p}$. We claim that this result is equivalent to Theorem 1.2. More precisely we will show that each of the two results can be deduced from the other only using formal considerations and elementary connectivity arguments. Lemma A.2. Theorem A.1 is equivalent to Theorem 1.2. Proof. Let us first phrase Theorem A.1 a bit more conceptually following [ACB19]. We can view $\Omega^{2}S^{3}\to\operatorname{BGL}_{1}(\mathbb{S}_{p})$ as the free $\mathbb{E}_{2}$-monoid on $$S^{1}\xrightarrow{1-p}\operatorname{BGL}_{1}(\mathbb{S}_{p})$$ in the category $(\mathcal{S}_{*})_{/\operatorname{BGL}_{1}(\mathbb{S}_{p})}$ of pointed spaces over $\operatorname{BGL}_{1}(\mathbb{S}_{p})$. The Thom spectrum functor $\mathcal{S}_{/\operatorname{BGL}_{1}(\mathbb{S}_{p})}\to\operatorname{Mod}_{% \mathbb{S}_{p}}$ is symmetric-monoidal and thus the Thom spectrum of $\Omega^{2}S^{3}$ can equivalently be described as the free $\mathbb{E}_{2}$-algebra over $\mathbb{S}_{p}$ on the pointed $\mathbb{S}_{p}$-module obtained as the Thom spectrum of $S^{1}\xrightarrow{1-p}\operatorname{BGL}_{1}(\mathbb{S}_{p})$. This is easily seen to be $\mathbb{S}_{p}\to\mathbb{S}_{p}/p$. Since the free $\mathbb{E}_{2}$-$\mathbb{S}$-algebra on the pointed $\mathbb{S}$-module $\mathbb{S}\to\mathbb{S}/p$ is already $p$-complete, it also agrees with this Thom spectrum. We will write this as $\operatorname{Free}^{\mathbb{E}_{2}}(\mathbb{S}\to\mathbb{S}/p)$. There is a map $\mathbb{S}/p\to\mathbb{F}_{p}$ of pointed $\mathbb{S}$-modules which induces an isomorphism on $\pi_{0}$. 
We get an induced map $$\operatorname{Free}^{\mathbb{E}_{2}}(\mathbb{S}\to\mathbb{S}/p)\to\mathbb{F}_{% p}.$$ (4) Theorem A.1 is now equivalently phrased as the statement that the map (4) is an equivalence. Since both sides are $p$-complete, this is equivalent to the claim that the map is an equivalence after tensoring with $\mathbb{F}_{p}$. This is the map $$\operatorname{Free}^{\mathbb{E}_{2}}_{\mathbb{F}_{p}}(\mathbb{F}_{p}\to\mathbb% {F}_{p}\otimes\mathbb{S}/p)\to\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$$ induced by the map $\mathbb{F}_{p}\otimes\mathbb{S}/p\to\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{% F}_{p}$ of pointed $\mathbb{F}_{p}$-modules. It follows by elementary connectivity arguments that this map is an isomorphism on $\pi_{0}$ and $\pi_{1}$. Now we have an equivalence $\mathbb{F}_{p}\otimes\mathbb{S}/p\simeq\mathbb{F}_{p}\oplus\Sigma\mathbb{F}_{p}$ as pointed $\mathbb{F}_{p}$-modules. Thus, we can also write $\operatorname{Free}^{\mathbb{E}_{2}}_{\mathbb{F}_{p}}(\mathbb{F}_{p}\to\mathbb% {F}_{p}\otimes\mathbb{S}/p)$ as the free $\mathbb{E}_{2}$-algebra on the unpointed $\mathbb{F}_{p}$-module $\Sigma\mathbb{F}_{p}$. Thus, the Hopkins-Mahowald result is seen to be equivalent to the claim that the map $$\operatorname{Free}^{\mathbb{E}_{2}}_{\mathbb{F}_{p}}(\Sigma\mathbb{F}_{p})\to% \mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$$ induced by a map $\Sigma\mathbb{F}_{p}\to\mathbb{F}_{p}\otimes_{\mathbb{S}}\mathbb{F}_{p}$ which is an isomorphism on $\pi_{1}$, is an equivalence. But this is precisely Theorem 1.2. ∎ In Section 1 we have deduced Bökstedt’s theorem (Theorem 1.1) directly from Theorem 1.2. Blumberg–Cohen–Schlichtkrull deduce an additive version of Bökstedt’s theorem in [BCS10, Theorem 1.3] from Theorem A.1. A variant of this argument is also given in [Blu10, Section 9]. We note that the argument that they use only works additively and does not give the ring structure on $\operatorname{THH}_{*}(\mathbb{F}_{p})$. We will explain this argument now and also how to modify it to give the ring struture as well. Proof of Theorem 1.1 from Theorem A.1 . The Thom spectrum functor $$\mathcal{S}_{*/\operatorname{BGL}_{1}(\mathbb{S}_{p})}\to\operatorname{Mod}_{% \mathbb{S}_{p}}$$ preserves colimits and sends products to tensor products, and thus sends the unstable cyclic Bar construction of $\Omega^{2}S^{3}$ to the cyclic Bar construction of $\mathbb{F}_{p}$. This identifies $\operatorname{THH}(\mathbb{F}_{p})$ as an $\mathbb{E}_{1}$-ring with a Thom spectrum on the free loop space $LB\Omega^{2}S^{3}\simeq L\Omega S^{3}$. Now, using the natural fibre sequence of $\mathbb{E}_{1}$-monoids in $\mathcal{S}_{/\operatorname{BGL}_{1}(\mathbb{S}_{p})}$, $\Omega^{2}S^{3}\to L\Omega S^{3}\to\Omega S^{3}$, one can identify $\operatorname{THH}(\mathbb{F}_{p})$ with $\mathbb{F}_{p}[\Omega S^{3}]$. For example, since this is a split fibre sequence of $\mathbb{E}_{1}$ monoids, one gets an equivalence $L\Omega S^{3}\simeq\Omega^{2}S^{3}\times\Omega S^{3}$ and thus an identification of $\operatorname{THH}(\mathbb{F}_{p})$ as a tensor product of the Thom spectrum on $\Omega^{2}S^{3}$ (i.e. $\mathbb{F}_{p}$) and the Thom spectrum on $\Omega S^{3}$. Thus, a Thom isomorphism yields an equivalence $\operatorname{THH}(\mathbb{F}_{p})\simeq\mathbb{F}_{p}[\Omega S^{3}]$. But the equivalence $L\Omega S^{3}\simeq\Omega^{2}S^{3}\times\Omega S^{3}$ is not an $\mathbb{E}_{1}$-map, so this argument only describes $\operatorname{THH}(\mathbb{F}_{p})$ additively. One can fix this as follows. 
The Thom spectrum can be interpreted as the colimit of the functor $L\Omega S^{3}\to\operatorname{Sp}$ obtained by postcomposing with the functor $\operatorname{BGL}_{1}(\mathbb{S}_{p})\to\operatorname{Sp}$ that sends the point to $\mathbb{S}_{p}$. Instead of passing to the colimit directly, one can pass to the left Kan extension along the map $L\Omega S^{3}\to\Omega S^{3}$. This yields a functor $\Omega S^{3}\to\operatorname{Sp}$ which sends the basepoint of $\Omega S^{3}$ to the colimit along the fiber, i.e. the Thom spectrum over $\Omega^{2}S^{3}$. But this is precisely $\mathbb{F}_{p}$. We thus obtain a functor $\Omega S^{3}\to\operatorname{BGL}_{1}(\mathbb{F}_{p})$, whose colimit is the Thom spectrum of $L\Omega S^{3}$. Since the original functor $L\Omega S^{3}\to\operatorname{Sp}$ was lax monoidal, because it came from an $\mathbb{E}_{1}$ map, the Kan extension $\Omega S^{3}\to\operatorname{BGL}_{1}(\mathbb{F}_{p})$ is also an $\mathbb{E}_{1}$ map. But the space of $\mathbb{E}_{1}$ maps $\Omega S^{3}\to\operatorname{BGL}_{1}(\mathbb{F}_{p})$ agrees with the space of maps $S^{2}\to\operatorname{BGL}_{1}(\mathbb{F}_{p})$ and is thus trivial. So the resulting colimit $\operatorname{THH}(\mathbb{F}_{p})$ is, as an $\mathbb{E}_{1}$ ring, given by $\mathbb{F}_{p}[\Omega S^{3}]$. ∎ We think that the proof of Bökstedt’s Theorem given in Section 1 directly from Theorem 1.2 is easier than the ‘Thom spectrum proof’ presented in this section, since the latter first uses Theorem 1.2 to deduce the Hopkins-Mahowald theorem and then the (extended) Blumberg–Cohen–Schlichtkrull argument to deduce Bökstedt’s result. However, logically all three results (Theorems 1.1, 1.2 and A.1) are equivalent as shown in Remark 1.5 and Lemma A.2. So either can be deduced from the others. It would be nice to have a proof of one of these that does not rely on computing the dual Steenrod algebra with its Dyer-Lashof operations (or dually the Steenrod algebra and the Nishida relations). References [ACB19] O. Antolín-Camarena and T. Barthel, A simple universal property of thom ring spectra, Journal of Topology 12 (2019), no. 1, 56–78. [AMN18] B. Antieau, A. Mathew, and T. Nikolaus, On the Blumberg-Mandell Künneth theorem for TP, Selecta Math. (N.S.) 24 (2018), no. 5, 4555–4576. MR 3874698 [BCS10] A. J. Blumberg, R. L. Cohen, and C. Schlichtkrull, Topological Hochschild homology of Thom spectra and the free loop space, Geom. Topol. 14 (2010), no. 2, 1165–1242. MR 2651551 [Blu10] A. J. Blumberg, Topological Hochschild homology of Thom spectra which are $E_{\infty}$ ring spectra, J. Topol. 3 (2010), no. 3, 535–560. MR 2684512 [BM94] M. Bökstedt and I. Madsen, Topological cyclic homology of the integers, Astérisque (1994), no. 226, 7–8, 57–143, $K$-theory (Strasbourg, 1992). MR 1317117 [BMMS86] R. R. Bruner, J. P. May, J. E. McClure, and M. Steinberger, $H_{\infty}$ ring spectra and their applications, Lecture Notes in Mathematics, vol. 1176, Springer-Verlag, Berlin, 1986. MR 836132 [BMS18] B. Bhatt, M. Morrow, and P. Scholze, Integral $p$-adic Hodge theory, Publ. Math. Inst. Hautes Études Sci. 128 (2018), 219–397. MR 3905467 [BMS19]  , Topological Hochschild homology and integral $p$-adic Hodge theory, Publ. Math. Inst. Hautes Études Sci. 129 (2019), 199–310. MR 3949030 [Bre99] C. Breuil, Schémas en groupe et modules filtrés, C. R. Acad. Sci. Paris Sér. I Math. 328 (1999), no. 2, 93–97. MR 1669039 [Bru00] M. Brun, Topological Hochschild homology of ${\bf Z}/p^{n}$, J. Pure Appl. Algebra 148 (2000), no. 1, 29–76. 
MR 1750729 [Bru01] M. Brun, Filtered topological cyclic homology and relative $K$-theory of nilpotent ideals, Algebr. Geom. Topol. 1 (2001), 201–230. MR 1823499 [Coh46] I. S. Cohen, On the structure and ideal theory of complete local rings, Trans. Amer. Math. Soc. 59 (1946), 54–106. MR 16094 [DL62] E. Dyer and R. K. Lashof, Homology of iterated loop spaces, Amer. J. Math. 84 (1962), 35–88. MR 0141112 [HM03] L. Hesselholt and I. Madsen, On the $K$-theory of local fields, Ann. of Math. (2) 158 (2003), no. 1, 1–113. MR 1998478 [KA56] T. Kudo and S. Araki, Topology of $H_{n}$-spaces and $H$-squaring operations, Mem. Fac. Sci. Kyūsyū Univ. Ser. A. 10 (1956), 85–120. MR 0087948 [Kat94] K. Kato, Semi-stable reduction and $p$-adic étale cohomology, Astérisque (1994), no. 223, 269–293, Périodes $p$-adiques (Bures-sur-Yvette, 1988). MR 1293975 [Kis09] M. Kisin, Moduli of finite flat group schemes, and modularity, Ann. of Math. (2) 170 (2009), no. 3, 1085–1180. MR 2600871 [Lei18] M. Leip, $\operatorname{THH}$ of log rings, Arbeitsgemeinschaft: Topological Cyclic Homology (L. Hesselholt and P. Scholze, eds.), vol. 15, Oberwolfach Reports, no. 2, 2018, pp. 805–940. MR 3941522 [Lin00] A. Lindenstrauss, A relative spectral sequence for topological Hochschild homology of spectra, J. Pure Appl. Algebra 148 (2000), no. 1, 77–88. MR 1750728 [LM00] A. Lindenstrauss and I. Madsen, Topological Hochschild homology of number rings, Trans. Amer. Math. Soc. 352 (2000), no. 5, 2179–2204. MR 1707702 [Lur18] J. Lurie, Higher Algebra, 2018, www.math.harvard.edu/~lurie/papers/HA.pdf. [NS18] T. Nikolaus and P. Scholze, On topological cyclic homology, Acta Math. 221 (2018), no. 2, 203–409. MR 3904731 [Rog99] J. Rognes, Algebraic $K$-theory of the two-adic integers, J. Pure Appl. Algebra 134 (1999), no. 3, 287–326. MR 1663391 [Rog09]  , Topological logarithmic structures, New topological contexts for Galois theory and algebraic geometry (BIRS 2008), Geom. Topol. Monogr., vol. 16, Geom. Topol. Publ., Coventry, 2009, pp. 401–544. MR 2544395 [Stacks] The Stacks project authors, The stacks project, https://stacks.math.columbia.edu, 2019.
Oscillating Universe from inhomogeneous EoS and coupled dark energy Diego Sáez-Gómez${}^{1}$ (Electronic address: saez@ieec.uab.es) ${}^{1}$Consejo Superior de Investigaciones Científicas ICE/CSIC-IEEC, Campus UAB, Facultat de Ciències, Torre C5-Parell-2a pl, E-08193 Bellaterra (Barcelona), Spain

Abstract The occurrence of an oscillating Universe is demonstrated using an inhomogeneous equation of state (EoS) for the dark energy fluid. The resulting Hubble parameter exhibits a periodic behavior such that early- and late-time acceleration are unified under the same mechanism. A coupling between a dark energy fluid with homogeneous, constant EoS and matter is also considered, which likewise yields a periodic Universe. The possible phantom phases and future singularities of the oscillating Universe under discussion are studied, and the equivalent scalar-tensor representation of the same oscillating Universe is presented as well.

I INTRODUCTION The discovery of cosmic acceleration in 1998 by two independent groups (Ref. DiscAcc–DiscAcc2) led to the proposal of a large number of dark energy models (for recent reviews, see Ref. DErev–DErev2), in which this mysterious cosmic fluid is introduced under the prescription that its equation of state (EoS) parameter should be less than -1/3. In the era of precision cosmology, observational data establish that the EoS parameter for dark energy, $w$, is close to -1 (see Ref. obsData–obsData1). The main task is to describe the nature of this component; for that purpose several candidates have been proposed, among them the cosmological constant model with $w=-1$, the so-called dark fluids with an inhomogeneous EoS (Ref. InhEoS–InhEoSandOscillating1), and the quintessence/phantom scalar field models (Ref. phantom–scalarth3). These kinds of models may reproduce late-time acceleration, but it is not easy to construct a model that leaves the radiation/matter dominated epochs untouched. An additional advantage of models with scalar fields or ideal dark fluids is that they allow early and late-time acceleration to be unified under the same mechanism, in such a way that the history of the Universe may be reconstructed completely. On the other hand, it is important to keep in mind that these models represent just an effective description, with a number of well-known problems, such as how inflation ends. Nevertheless, they may represent a simple and natural way to resolve the coincidence problem; one of the possibilities is an oscillating Universe (Ref. InhEoSandOscillating and OscillaUniverse–oscillate4), where the different phases of the Universe are reproduced by its periodic behavior. The purpose of this paper is to show that an oscillating Universe is obtained from an inhomogeneous EoS for a dark energy fluid, and several examples are given to illustrate this. The possibility of an interaction between a dark energy fluid with homogeneous EoS and matter, which also reproduces this kind of periodic Hubble parameter, is studied as well; such a case has been considered before and is allowed by the observations (see CopuplingObsv). The possible phantom epochs are explored, as is the possibility that the Universe may reach a Big Rip singularity (for a classification of future singularities, see Ref. CouplingAndSingula). The organization of this paper is the following: in Sec. 2, a dark energy fluid with an inhomogeneous EoS, depending on the Hubble parameter, its derivatives and time, is presented, and it is shown that this kind of EoS reproduces an oscillating behavior of the Hubble parameter. In Sec.
3, a matter component is included. In the first part, the problem is addressed assuming no coupling between the matter and the dark fluid, and a periodic Hubble parameter is obtained under certain restrictions on the inhomogeneous EoS for the dark fluid; in the second part a coupling is introduced, and it is shown that for a constant homogeneous EoS for dark energy it is possible to reconstruct early and late-time acceleration in a natural way, due to the interaction between both fluids. Finally, in Sec. 4 the mathematically equivalent scalar-tensor description is presented, where the above solutions are reproduced by canonical/phantom scalar fields.

II INHOMOGENEOUS EQUATION OF STATE FOR DARK ENERGY Let us first consider a Universe filled with a dark energy fluid, neglecting the rest of the possible components (dust matter, radiation, etc.), whose EoS depends on the Hubble parameter and its derivatives; this kind of EoS has been treated in several articles (Ref. InhEoS–InhEoSandOscillating1). We show that for some choices of the EoS an oscillating Universe results (Ref. OscillaUniverse–oscillate4), which may include phantom phases. Then, the whole Universe history, from inflation to cosmic acceleration, is reproduced in such a way that observational constraints may be satisfied (Ref. obsData–obsData1). We work in a spatially flat FRW Universe, so the metric is given by: $$ds^{2}=-dt^{2}+a(t)^{2}\sum_{i=1}^{3}dx_{i}^{2}\ .$$ (1) The Friedmann equations are: $$H^{2}=\frac{\kappa^{2}}{3}\rho,\quad\quad\dot{H}=-\frac{\kappa^{2}}{2}\left(\rho+p\right)\ .$$ (2) By combining the FRW equations, the energy conservation equation for the dark energy density results: $$\dot{\rho}+3H(\rho+p)=0\ .$$ (3) In this section, the EoS considered has the general form: $$p=w\rho+g(H,\dot{H},\ddot{H},\ldots;t)\ ,$$ (4) where $w$ is a constant and $g(H,\dot{H},\ddot{H},\ldots;t)$ is an arbitrary function of the Hubble parameter $H$, its derivatives and the time $t$ (this kind of EoS has been treated in Ref. InhEoS). Using the FRW equations (2) and (4), the following differential equation is obtained: $$\dot{H}+\frac{3}{2}(1+w)H^{2}+\frac{\kappa^{2}}{2}g(H,\dot{H},\ddot{H},\ldots;t)=0\ .$$ (5) Hence, for a given function $g$, the Hubble parameter is determined by solving equation (5). It is possible to reproduce an oscillating Universe by a specific EoS (4). To illustrate this construction, let us consider the following function $g$ as an example: $$g(H,\dot{H},\ddot{H})=-\frac{2}{\kappa^{2}}\left(\ddot{H}+\dot{H}+\omega_{0}^{2}H+\frac{3}{2}(1+w)H^{2}-H_{0}\right)\ ,$$ (6) where $H_{0}$ and $\omega_{0}^{2}$ are constants. By substituting (6) in (5), the equation for the Hubble parameter acquires the form: $$\ddot{H}+\omega_{0}^{2}H=H_{0}\ ,$$ (7) which is the classical equation of a harmonic oscillator. The solution is found (Ref. scalar2): $$H(t)=\frac{H_{0}}{\omega_{0}^{2}}+H_{1}\sin(\omega_{0}t+\delta_{0})\ ,$$ (8) where $H_{1}$ and $\delta_{0}$ are integration constants.
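As a quick consistency check (our own verification, not part of the original text), substituting (8) back into (7) gives $$\ddot{H}+\omega_{0}^{2}H=-H_{1}\omega_{0}^{2}\sin(\omega_{0}t+\delta_{0})+\omega_{0}^{2}\left(\frac{H_{0}}{\omega_{0}^{2}}+H_{1}\sin(\omega_{0}t+\delta_{0})\right)=H_{0}\ ,$$ so (8) solves the harmonic-oscillator equation for any values of $H_{1}$ and $\delta_{0}$.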
To study the system, we calculate the first derivative of the Hubble parameter, $\dot{H}=H_{1}\omega_{0}\cos(\omega_{0}t+\delta_{0})$, so the Universe governed by the dark energy fluid (6) oscillates between phantom and non-phantom phases with a frequency given by the constant $\omega_{0}$, producing the inflationary epoch and the late-time acceleration through the same mechanism, while the Big Rip singularity is avoided. As another example, we consider the following EoS (4) for the dark energy fluid: $$p=w\rho+\frac{2}{\kappa^{2}}Hf^{\prime}(t)\ .$$ (9) In this case $g(H;t)=\frac{2}{\kappa^{2}}Hf^{\prime}(t)$, where $f(t)$ is an arbitrary function of the time $t$ and the prime denotes a derivative with respect to $t$. Equation (5) takes the form: $$\dot{H}+Hf^{\prime}(t)=-\frac{3}{2}(1+w)H^{2}\ .$$ (10) This is the well-known Bernoulli differential equation. For the function $f(t)=-\ln\left(H_{1}+H_{0}\sin\omega_{0}t\right)$, where $H_{1}>H_{0}$ are arbitrary constants, the following solution of (10) is found: $$H(t)=\frac{H_{1}+H_{0}\sin\omega_{0}t}{\frac{3}{2}(1+w)t+k}\ ,$$ (11) where $k$ is an integration constant. As can be seen, for some values of the free constant parameters the Hubble parameter tends to infinity at a finite value of $t$. The first derivative of the Hubble parameter is given by: $$\dot{H}=\frac{H_{0}\omega_{0}(\frac{3}{2}(1+w)t+k)\cos\omega_{0}t-(H_{1}+H_{0}\sin\omega_{0}t)\frac{3}{2}(1+w)}{(\frac{3}{2}(1+w)t+k)^{2}}\ .$$ (12) As shown in Fig. 1, the Universe has a periodic behavior: it passes through phantom and non-phantom epochs, with the corresponding transitions. A Big Rip singularity may take place depending on the value of $w$: it is avoided for $w\geq-1$, while for $w<-1$ the Universe reaches the singularity at the Rip time $t_{s}=\frac{2k}{3|1+w|}$.

III DARK ENERGY IDEAL FLUID AND DUST MATTER III.1 No coupling between matter and dark energy Let us now explore a more realistic model by introducing a matter component with EoS given by $p_{m}=w_{m}\rho_{m}$, while keeping an inhomogeneous EoS for the dark energy component (Ref. InhEoS–InhoEoSandCoupling). It is shown below that an oscillating Universe may be obtained by constructing a specific EoS. In this case, the FRW equations (2) take the form: $$H^{2}=\frac{\kappa^{2}}{3}(\rho+\rho_{m}),\quad\quad\dot{H}=-\frac{\kappa^{2}}{2}\left(\rho+p+\rho_{m}+p_{m}\right)\ .$$ (13) In this section, we consider a matter fluid that does not interact with the dark energy fluid, so the energy conservation equations are satisfied for each fluid separately: $$\dot{\rho_{m}}+3H(\rho_{m}+p_{m})=0,\quad\quad\dot{\rho}+3H(\rho+p)=0\ .$$ (14) It is useful to construct a specific solution for the Hubble parameter by defining the effective EoS with an effective parameter $w_{eff}$: $$w_{eff}=\frac{p_{eff}}{\rho_{eff}},\quad\rho_{eff}=\rho+\rho_{m},\quad p_{eff}=p+p_{m}\ ,$$ (15) for which the energy conservation equation $\dot{\rho}_{eff}+3H(\rho_{eff}+p_{eff})=0$ is satisfied. We consider a dark energy fluid described by the following EoS: $$p=-\rho+\frac{2}{\kappa^{2}}\frac{2(1+w(t))}{3\left(\int(1+w(t))dt\right)^{2}}-(1+w_{m})\rho_{m0}\,{\rm e}^{-3(1+w_{m})\int dt\frac{2}{3\int(1+w(t))dt}}\ ,$$ (16) where $\rho_{m0}$ is a constant and $w(t)$ is an arbitrary function of the time $t$. Then the following solution is found: $$H(t)=\frac{2}{3\int dt(1+w(t))}\ .$$ (17) The effective parameter (15) then takes the form $w_{eff}=w(t)$.
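A one-line check (ours, not in the original text) confirms that (17) indeed gives $w_{eff}=w(t)$: differentiating (17) and using the FRW equations (13), $$\dot{H}=-\frac{2\,(1+w(t))}{3\left(\int(1+w(t))dt\right)^{2}}=-\frac{3}{2}(1+w(t))H^{2}\ ,\qquad w_{eff}=\frac{p_{eff}}{\rho_{eff}}=-1-\frac{2\dot{H}}{3H^{2}}=w(t)\ .$$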
Thus, a solution for the Hubble parameter may be constructed from the EoS (16) by specifying a function $w(t)$. Let us consider an example (Ref. OscillaUniverse) with the following function for $w(t)$: $$w=-1+w_{0}\cos\omega t\ .$$ (18) In this case, the EoS for the dark energy fluid, given by (16), takes the form: $$p=-\rho+\frac{4}{3\kappa^{2}}\frac{\omega^{2}w_{0}\cos\omega t}{(w_{1}+w_{0}\sin\omega t)^{2}}-(1+w_{m})\rho_{m0}\,{\rm e}^{-3(1+w_{m})\int\frac{2\omega\,dt}{3(w_{1}+w_{0}\sin\omega t)}}\ ,$$ (19) where $w_{1}$ is an integration constant. Then, by (17), the Hubble parameter reads: $$H(t)=\frac{2\omega}{3(w_{1}+w_{0}\sin\omega t)}\ .$$ (20) The Universe passes through phantom and non-phantom phases, since the first derivative of the Hubble parameter has the form: $$\dot{H}=-\frac{2\omega^{2}w_{0}\cos\omega t}{3(w_{1}+w_{0}\sin\omega t)^{2}}\ .$$ (21) In this way, a Big Rip singularity takes place provided that $|w_{1}|<w_{0}$, and it is avoided when $|w_{1}|>w_{0}$. As shown, this model reproduces unified inflation and cosmic acceleration in a natural way, with the Universe exhibiting a periodic behavior. In order to identify the accelerated and decelerated phases, the acceleration parameter is studied, which is given by: $$\frac{\ddot{a}}{a}=\frac{2\omega^{2}}{3(w_{1}+w_{0}\sin\omega t)^{2}}\left(\frac{2}{3}-w_{0}\cos\omega t\right)\ .$$ (22) Hence, if $w_{0}>2/3$ the different phases that the Universe passes through are reproduced by the EoS (19), yielding a periodic evolution that may unify all the epochs within the same description. As a second example, we may consider a classical periodic function, the step function: $$w(t)=-1+\left\{\begin{array}[]{lr}w_{0}&0<t<T/2\\ w_{1}&T/2<t<T\end{array}\right.\ ,$$ (23) with $w(t+T)=w(t)$. It is useful to employ a Fourier expansion so that the function (23) becomes continuous. Approximating to third order, $w(t)$ is given by: $$w(t)=-1+\frac{(w_{0}+w_{1})}{2}+\frac{2(w_{0}-w_{1})}{\pi}\left(\sin\omega t+\frac{\sin 3\omega t}{3}+\frac{\sin 5\omega t}{5}\right)\ .$$ (24) Hence, the EoS for the dark energy ideal fluid is given by (16), and the solution (17) takes the following form: $$H(t)=\frac{2}{3}\left[w_{2}+\frac{(w_{0}+w_{1})}{2}t\right.$$ $$\left.-\frac{2(w_{0}-w_{1})}{\pi\omega}\left(\cos\omega t+\frac{\cos 3\omega t}{9}+\frac{\cos 5\omega t}{25}\right)\right]^{-1}\ .$$ (25) The possible phantom epochs can be identified from the first derivative of the Hubble parameter: $$\dot{H}=-\frac{3}{2}H^{2}\left[\frac{(w_{0}+w_{1})}{2}\right.$$ $$\left.+\frac{2}{\pi}(w_{0}-w_{1})\left(\sin\omega t+\frac{\sin 3\omega t}{3}+\frac{\sin 5\omega t}{5}\right)\right]\ .$$ (26) Then, depending on the values of $w_{0}$ and $w_{1}$, the Universe passes through phantom phases. To explore the different epochs of acceleration and deceleration that the Universe passes through, the acceleration parameter is calculated: $$\frac{\ddot{a}}{a}=H^{2}+\dot{H}=$$ $$H^{2}\left[1-\frac{3}{2}\left(\frac{(w_{0}+w_{1})}{2}+\frac{2}{\pi}(w_{0}-w_{1})\left(\sin\omega t+\frac{\sin 3\omega t}{3}+\frac{\sin 5\omega t}{5}\right)\right)\right]\ .$$ (27) Then, in order to obtain both acceleration and deceleration epochs, the constant parameters $w_{0}$ and $w_{1}$ may be chosen such that $w_{0}<2/3$ and $w_{1}>2/3$, as can be seen from (23). For this choice, phantom epochs take place if $w_{0}<0$.
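For completeness (our own remark, not in the original text), the truncated expansion (24) is simply the Fourier series of a square wave: a periodic function equal to $w_{0}$ on $(0,T/2)$ and $w_{1}$ on $(T/2,T)$ has mean $(w_{0}+w_{1})/2$ and, with $\omega=2\pi/T$, $$w(t)+1=\frac{w_{0}+w_{1}}{2}+\frac{2(w_{0}-w_{1})}{\pi}\sum_{n=0}^{\infty}\frac{\sin\left[(2n+1)\omega t\right]}{2n+1}\ ,$$ of which (24) keeps the first three harmonics.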
In any case, the oscillating behavior is damped by the term growing linearly in $t$ in the denominator of (25), as shown in Fig. 2, where the acceleration parameter is plotted for particular values of the free parameters. This term reduces the acceleration and the Hubble parameter, so that the model tends to a static Universe. We now consider a third example, a classical damped oscillator, where the function $w(t)$ is given by: $$w(t)=-1+{\rm e}^{-\alpha t}w_{0}\cos\omega t\ ,$$ (28) here $\alpha$ and $w_{0}$ are two positive constants. Then, the EoS for the dark energy ideal fluid is constructed from (16). The solution (17) for the Hubble parameter is integrated and takes the form: $$H(t)=\frac{2}{3}\frac{\omega^{2}+\alpha^{2}}{w_{1}+w_{0}{\rm e}^{-\alpha t}(\omega\sin\omega t-\alpha\cos\omega t)}\ ,$$ (29) where $w_{1}$ is an integration constant. The Hubble parameter oscillates, damped by an exponential term, and for large times it tends to a constant, $H(t\longrightarrow\infty)=\frac{2}{3}\frac{\omega^{2}+\alpha^{2}}{w_{1}}$, recovering the cosmological constant model. The Universe passes through different phases, as may be seen from the acceleration parameter: $$\frac{\ddot{a}}{a}=H^{2}\left(1-\frac{3}{2}{\rm e}^{-\alpha t}w_{0}\cos\omega t\right)\ .$$ (30) One may restrict $w_{0}>\frac{2}{3}$ in order to obtain deceleration epochs when the matter component dominates. On the other hand, the Universe also passes through phantom epochs, since the derivative of the Hubble parameter gives: $$\dot{H}=-\frac{3}{2}H^{2}{\rm e}^{-\alpha t}w_{0}\cos\omega t\ .$$ (31) Hence, the example (28) describes an oscillating Universe with a frequency given by $\omega$ and damped by a negative exponential term depending on the free parameter $\alpha$; these parameters may be adjusted so that the duration of each phase agrees with the constraints imposed by observational data.

III.2 Dark energy and coupled matter In general, one may consider a Universe filled with a dark energy ideal fluid whose EoS is given by $p=w\rho$, where $w$ is a constant, and matter described by $p_{m}=w_{m}\rho_{m}$, both fluids interacting with each other. In order to preserve the total energy conservation, the equations for the energy densities are written as follows: $$\dot{\rho_{m}}+3H(\rho_{m}+p_{m})=Q,\quad\quad\dot{\rho}+3H(\rho+p)=-Q\ ,$$ (32) here $Q$ is an arbitrary function. In this way, the total energy conservation is satisfied, $\dot{\rho}_{eff}+3H(\rho_{eff}+p_{eff})=0$, where $\rho_{eff}=\rho+\rho_{m}$ and $p_{eff}=p+p_{m}$, and the FRW equations (13) do not change. To solve this set of equations for a given function $Q$, the second FRW equation (13) is combined with the conservation equations (32), which yields: $$\dot{H}=-\frac{\kappa^{2}}{2}\left[(1+w_{m})\frac{\int Q\exp\left(\int 3H(1+w_{m})dt\right)dt}{\exp\left(\int 3H(1+w_{m})dt\right)}\right.$$ $$\left.+(1+w)\frac{-\int Q\exp\left(\int 3H(1+w)dt\right)dt}{\exp\left(\int 3H(1+w)dt\right)}\right]\ .$$ (33) In general, this is difficult to solve for a given function $Q$. A particularly simple case is the cosmological constant, where the dark energy EoS parameter $w=-1$ is considered; the equations then simplify considerably, and (32) yields $\dot{\rho}=-Q$, which can be integrated to give the dark energy density: $$\rho(t)=\rho_{0}-\int dtQ(t)\ ,$$ (34) where $\rho_{0}$ is an integration constant.
Then, the Hubble parameter is obtained by introducing (34) in the FRW equations, which yields: $$\dot{H}+\frac{3}{2}(1+w_{m})H^{2}=\frac{\kappa^{2}}{2}(1+w_{m})\left(\rho_{0}-\int dtQ\right)\ .$$ (35) Hence, the Hubble parameter depends essentially on the form of the coupling function $Q$. This means that a Universe model may be constructed from the coupling between matter and the dark energy fluid, given by the arbitrary function $Q$. It is shown below that some of the models obtained in the previous section from an inhomogeneous-EoS dark energy fluid are reproduced by a dark energy fluid with constant EoS ($w=-1$) coupled to dust matter. By differentiating equation (35), which gives $\ddot{H}+3(1+w_{m})H\dot{H}=-\frac{\kappa^{2}}{2}(1+w_{m})Q$, the function $Q$ may be written in terms of the Hubble parameter and its derivatives: $$Q=-\frac{2}{\kappa^{2}}\frac{1}{1+w_{m}}\left(\ddot{H}+3(1+w_{m})H\dot{H}\right)\ .$$ (36) As an example, we consider an oscillating solution of the same form as (8): $$H(t)=H_{0}+H_{1}\sin(\omega t+\delta_{0})\ .$$ (37) Then, by equation (36) (setting $\delta_{0}=0$ for simplicity), the function $Q$ is given by: $$Q(t)=\frac{2}{\kappa^{2}(1+w_{m})}\left[H_{1}\omega^{2}\sin\omega t-3(1+w_{m})H_{1}\omega\cos\omega t\left(H_{0}+H_{1}\sin\omega t\right)\right]\ .$$ (38) Thus, the oscillating model (37) is reproduced by a coupling between matter and dark energy which also oscillates. More complicated models may be constructed from more involved functions $Q$. As an example let us consider the solution (20): $$H(t)=\frac{2\omega}{3(w_{1}+w_{0}\sin\omega t)}\ .$$ (39) The coupling function (36) takes the form: $$Q(t)=-\frac{4}{3\kappa^{2}}\frac{\omega^{3}w_{0}}{(1+w_{m})(w_{1}+w_{0}\sin\omega t)^{3}}$$ $$\left[\sin\omega t\,(w_{1}+w_{0}\sin\omega t)+2w_{0}\cos^{2}\omega t-2(1+w_{m})\cos\omega t\right]\ .$$ (40) This coupling function reproduces an oscillating behavior that unifies the different epochs of the Universe. Hence, it has been shown that for a constant dark energy EoS with $w=-1$, inflation and late-time acceleration are obtained in a simple and natural way.

IV SCALAR-TENSOR DESCRIPTION Let us now consider the solutions presented in the previous sections in the scalar-tensor description; such an equivalence has been constructed in Ref. scalar-th-fluids. We assume, as before, a flat FRW metric, a Universe filled with an ideal matter fluid with EoS given by $p_{m}=w_{m}\rho_{m}$, and no coupling between matter and the scalar field. Then, the following action is considered: $$S=\int d^{4}x\sqrt{-g}\left[\frac{1}{2\kappa^{2}}R-\frac{1}{2}\omega(\phi)\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)+L_{m}\right]\ ,$$ (41) here $\omega(\phi)$ is the kinetic term and $V(\phi)$ represents the scalar potential.
Then, the corresponding FRW equations are written as: $$H^{2}=\frac{\kappa^{2}}{3}\left(\rho_{m}+\rho_{\phi}\right)\ ,\quad\quad\dot{H}=-\frac{\kappa^{2}}{2}\left(\rho_{m}+p_{m}+\rho_{\phi}+p_{\phi}\right)\ ,$$ (42) where $\rho_{\phi}$ and $p_{\phi}$ are given by: $$\rho_{\phi}=\frac{1}{2}\omega(\phi)\,{\dot{\phi}}^{2}+V(\phi)\ ,\quad\quad p_{\phi}=\frac{1}{2}\omega(\phi)\,{\dot{\phi}}^{2}-V(\phi)\ .$$ (43) By choosing the kinetic term and the potential as $$\displaystyle\omega(\phi)=-\frac{2}{\kappa^{2}}f^{\prime}(\phi)-(w_{m}+1)F_{0}\,{\rm e}^{-3(1+w_{m})F(\phi)}\ ,$$ $$\displaystyle V(\phi)=\frac{1}{\kappa^{2}}\left[3f(\phi)^{2}+f^{\prime}(\phi)\right]+\frac{w_{m}-1}{2}F_{0}\,{\rm e}^{-3(1+w_{m})F(\phi)}\ ,$$ (44) the following solution is found (Ref. scalar2–scalarth3): $$\phi=t\ ,\quad H(t)=f(t)\ ,$$ (45) which yields: $$a(t)=a_{0}{\rm e}^{F(t)},\qquad a_{0}=\left(\frac{\rho_{m0}}{F_{0}}\right)^{\frac{1}{3(1+w_{m})}}.$$ (46) Then, we may consider the solution (29); in that case the function $f(\phi)$ takes the form: $$f(\phi)=\frac{2}{3}\frac{\omega^{2}+\alpha^{2}}{w_{1}+w_{0}{\rm e}^{-\alpha\phi}(\omega\sin\omega\phi-\alpha\cos\omega\phi)}\ ,$$ (47) and by (44) the kinetic term and the scalar potential are given by: $$\displaystyle\omega(\phi)=\frac{3}{\kappa^{2}}f^{2}(\phi)w_{0}{\rm e}^{-\alpha\phi}\cos\omega\phi-(1+w_{m})F_{0}{\rm e}^{-3(1+w_{m})F(\phi)},$$ $$\displaystyle V(\phi)=\frac{3f^{2}(\phi)}{\kappa^{2}}\left(1-\frac{1}{2}w_{0}{\rm e}^{-\alpha\phi}\cos\omega\phi\right)+\frac{w_{m}-1}{2}F_{0}{\rm e}^{-3(1+w_{m})F(\phi)},$$ (48) where $F(\phi)=\int d\phi f(\phi)$ and $F_{0}$ is an integration constant. Thus, the periodic solution (29) is reproduced in the mathematically equivalent scalar-tensor formulation by the action (41), with the explicit kinetic term and scalar potential given in this case by (48).

V DISCUSSIONS Throughout this paper, a Universe model has been presented that reproduces early and late-time acceleration in a natural way through a periodic behavior of the Hubble parameter. The late-time transitions are described by this model: the transition from deceleration to acceleration, and the possible transition from a non-phantom to a phantom epoch. The observational data do not yet restrict the nature and details of the EoS for dark energy, so the possibility that the Universe behaves periodically remains allowed. For that purpose, several examples have been studied in the present paper, some of them driven by an inhomogeneous EoS for dark energy and others by a coupling between dark energy and matter, which may also provide another possible observational constraint to look for. On the other hand, as was mentioned in the introduction, one has to keep in mind that these kinds of models, scalar theories or dark fluids, should be tested to check whether the unification presented here is realistic, especially regarding the details of inflation and the structure of perturbations. However, that task is beyond the scope of the present paper, whose main objective is to show the possibility of reconstructing an oscillating Universe from the descriptions detailed above.

Acknowledgements. I thank Emilio Elizalde and Sergei Odintsov for suggesting this problem, and for giving the ideas and fundamental information to carry out this task. This work was supported by MEC (Spain), project FIS2006-02842, and in part by project PIE2007-50/023.

References (1) A. G. Riess, Astron. J. 116, 1009 (1998) (2) S. Perlmutter Astrophys. J. 517, 565 (1999) (3) Edmund Copeland, M. Sami, Shinji Tsujikawa, Int. J. Mod.
Phys.D 15, 1753 (2006), [arxiv:hep-th/0603057] (4) T. Padmanabhan, arXiv:astro-ph/0603114; arXiv:astro-ph/0602117 (5) S. Nojiri and S. D. Odintsov, arXiv:hep-th/0601213 (6) L. Perivolaropoulos, arxiv:astro-ph/0601014 (7) H. Jassal, J. Bagla and T. Padmanabhan, arxiv:astro-ph/0506748 (8) S. Nojiri and S.D.Odintsov, Phys. Rev. D 72, 103522 (2005) [arxiv:hep-th/0505215] (9) S. Capozziello, V. Cardone, E. Elizalde, S. Nojiri and S.D. Odintsov, Phys. Rev. D 73, 043512 (2006), [astro-ph/0508350] (10) S. Nojiri and S. D. Odintsov, Phys. Lett. B 639, 144 (2006), [arxiv:hep-th/0606025] (11) I. Brevik, O.G. Gorbunova and A. V. Timoshkin, Eur.Phys.J.C51:179-183,(2007) [arxiv:gr-qc/0702089] (12) I.Brevik, E. Elizalde, O. Gorbunova. and A. V. Timoshkin Eur. Phys. J. C 52 223 (2007) [arxiv:gr-qc/0706.2072] (13) R. R. Caldwell, M. Kamionkowski, and N. N. Weinberg, Phys. Rev. Lett. 91, 071301 (2003) [arXiv:astro-ph/0302506] (14) B. McInnes, JHEP 0208, 029 (2002) [arXiv:hep-th/0112066 ;hep-th/0502209] (15) S. Nojiri and S. D. Odintsov, Phys. Lett. B 562, 147 (2003) [arXiv:hep-th/0303117] (16) S. Nojiri and S. D. Odintsov, Phys. Lett. B 565, 1 (2003) [arXiv:hep-th/0304131] (17) S. Nojiri and S. D. Odintsov, Phys. Lett. B 595, 1 (2004) [arXiv:hep-th0405078] (18) P. Gonzalez-Diaz, Phys. Lett. B586, 1 (2004) [arXiv:astro-ph/0312579]; arXiv:hep-th/0408225 (19) L. P. Chimento and R. Lazkoz, Phys. Rev. Lett. 91, 211301 (2003) [arXiv:gr-qc/0307111] (20) L. P. Chimento and R. Lazkoz, Mod. Phys. Lett. A19, 2479 (2004) [arXiv:gr-qc/0405020] (21) E. Babichev, V. Dokuchaev, and Yu. Eroshenko, Class. Quant. Grav. 22, 143 (2005) [arXiv:astro-ph/0407190] (22) X. Zhang, H. Li, Y. Piao, and X. Zhang, arXiv:astro-ph/0501652 (23) E. Elizalde, S. Nojiri, S. D. Odintsov, and P. Wang, Phys. Rev. D 71 (2005) 103504 [arXiv:hep-th/0502082]; (24) E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. REv. D 70 043539 (2004) [arxiv:hep-th/0405034] (25) M. Dabrowski and T. Stachowiak, arXiv:hep-th/0411199; (26) F. Lobo, arXiv:gr-qc/0502099; (27) R.-G. Cai, H.-S. Zhang, and A. Wang, arXiv:hep-th/0505186; (28) I. Ya. Arefeva, A. S. Koshelev, and S. Yu. Vernov, arXiv:astro-ph/0412619; arXiv:astro-ph/0507067; (29) W. Godlowski and M. Szydlowski, Phys. Lett. B623 (2005) 10; (30) J. Sola and H. Stefancic, arXiv:astro-ph/0505133; (31) B. Guberina, R. Horvat, and H. Nicolic, arXiv:astro-ph/0507666; (32) M. Dabrowski, C. Kiefer, and B. Sandhofer, Phys. Rev. D 74 (2006) 044022; (33) E. Barboza and N. Lemos, arXiv:gr-qc/0606084; (34) M. Szydlowski, O. Hrycyna, and A. Krawiec, arXiv:hep-th/0608219; (35) W. Chakraborty and U. Debnath, arXiv:0802.3751[gr-qc]. (36) A. Vikman [arXiv:astro-ph/0407107] (37) S. Nojiri and S. D. Odintsov, Gen. Rel. Grav. 38, 1285 (2006) [arXiv:hep-th/0506212]; (38) S. Capozziello, S. Nojiri, and S. D. Odintsov, Phys. Lett. B 632, 597 (2006) [arXiv:hep-th/0507182]; (39) E. Elizalde, S. Nojiri, S. D. Odintsov, D. Sáez-Gómez and V. Faraoni, [arxiv:hep-th/0803.1311] (40) S. Nojiri and S. D. Odintsov, Phys. Lett. B 637:139 (2006) [arxiv:hep-th/0603062] (41) Bo Feng, Migzhe Li, Yung-Son Piao and Xinmin Zhang, Phys. Lett. B 634:101,(2006),[arxiv:astro-ph/0407432]; (42) G. Yang and A. Wang, Gen. Rel. Grav.37, 2201 (2005) [arxiv:astro-ph/0510006] (43) I. Aref’eva, P. H. Frampton and S. Matsuzaki [arxiv:hep-th/0802.1294] (44) Z. K. Guo, N. Ohta and S. Tsujikawa, Phys. Rev.  D 76 (2007) 023508 [arXiv:astro-ph/0702015]. (45) S. Nojiri, S. D. Odintsov and S. Tsujikawa, Phys. Rev. D 71 063004 (2005), [arxiv:hep-th/0501025]. (46) S. 
Capozziello, S. Nojiri and S. D. Odintsov, Phys. Lett. B 634, 93 (2006) [arxiv:hep-th/0512118]
Optimizing Audio Augmentations for Contrastive Learning of Health-Related Acoustic Signals

Louis Blankemeier (blankemeier@google.com), Sebastien Baur (sebastienbaur@google.com), Wei-Hung Weng (ckbjimmy@google.com), Jake Garrison (jakegarrison@google.com), Yossi Matias (yossi@google.com), Shruthi Prabhakara (shruthip@google.com), Diego Ardila (ardila@google.com), Zaid Nabulsi (znabulsi@google.com) — Google Research, USA

Abstract Health-related acoustic signals, such as cough and breathing sounds, are relevant for medical diagnosis and continuous health monitoring. Most existing machine learning approaches for health acoustics are trained and evaluated on specific tasks, limiting their generalizability across various healthcare applications. In this paper, we leverage a self-supervised learning framework, SimCLR with a Slowfast NFNet backbone, for contrastive learning of health acoustics. A crucial aspect of optimizing Slowfast NFNet for this application lies in identifying effective audio augmentations. We conduct an in-depth analysis of various audio augmentation strategies and demonstrate that an appropriate augmentation strategy enhances the performance of the Slowfast NFNet audio encoder across a diverse set of health acoustic tasks. Our findings reveal that when augmentations are combined, they can produce synergistic effects that exceed the benefits seen when each is applied individually. keywords: health acoustics, audio augmentation, contrastive learning

1 Introduction Non-speech, non-semantic sounds, like coughing and breathing, can provide information for doctors to detect various respiratory diseases, cardiovascular diseases and neurological diseases (Boschi et al., 2017; Zimmer et al., 2022). Advances in deep learning-based machine learning (ML) allow us to develop medical assistants and continuous health monitoring applications by learning effective acoustic data representations (Alqudaihi et al., 2021). Current approaches for learning health acoustic representations are mostly trained and evaluated on specific tasks. For example, Botha et al. (2018); Larson et al. (2012); Tracey et al. (2011); Pahar et al. (2021) trained models to detect tuberculosis using cough sounds via supervised learning. However, it can be challenging to adopt these models directly for other health acoustic tasks. Retraining task-specific health acoustic models requires manual data collection and labeling by clinical experts, which can be time consuming and costly. Researchers within the ML community have explored various self-supervised strategies to learn general purpose data representations that overcome the limitations of domain-specific representations (Balestriero et al., 2023). Among these approaches, contrastive learning has proven effective for generating robust representations across multiple data modalities, including images, videos, speech, audio, and periodic data (Chen et al., 2020a; Jiang et al., 2020; Qian et al., 2021; Oord et al., 2018; Yang et al., 2022). Selecting appropriate data augmentations is crucial for performant contrastive learning algorithms (Chen et al., 2020a) (see Related Works for details). Consequently, significant research has been conducted on the utility of various augmentations for images (Chen et al., 2020a), videos (Qian et al., 2021), and speech/audio (Al-Tahan and Mohsenzadeh, 2021; Jiang et al., 2020).
However, the unique characteristics of health-related acoustic signals, such as coughs and breathing sounds, which differ in pitch and tone from speech and music, raise questions about the applicability of existing contrastive learning and augmentation strategies in this specialized domain. To address this research gap, our study systematically explores eight distinct audio augmentation techniques and their combinations in the context of health acoustic representation learning. We employ the self-supervised contrastive learning framework, SimCLR (Chen et al., 2020a), with a Slowfast NFNet backbone (Wang et al., 2022). After identifying the best combination of augmentations, we compare the performance of the resulting Slowfast NFNet against other state-of-the-art off-the-shelf audio encoders on 21 unique binary classification tasks across five datasets. This work offers two major contributions: (1) we identify augmentation parameters that work best when applied to health acoustics, and (2) we investigate the synergistic effects of combining audio augmentations for enhancing health acoustic representations using SimCLR. 2 Related Works In ML, data augmentation serves as a regularization technique to mitigate the risk of model overfitting (Zhang et al., 2021). Within the framework of contrastive learning, the objective is to learn data representations that minimize the distance between representations of semantically similar inputs and maximize the distance between representations of semantically dissimilar inputs. Data augmentations are critical for contrastive learning-based self-supervised learning (SSL), and eliminates the need for labeled data for representation learning. By applying a variety of augmentations to a single input, semantically consistent but distinct variations, commonly referred to as views, are generated (Von Kügelgen et al., 2021). The task then becomes pulling these related views closer together in the representational space, while concurrently pushing views derived from different, unrelated inputs farther apart, via a contrastive loss, such as InfoNCE in SimCLR (Chen et al., 2020a). This approach establishes a form of invariance in the model, rendering it robust to the augmentations applied during the training process. Augmentations have been widely explored as part of contrastive learning-based SSL methods such as SimCLR, BYOL (Grill et al., 2020), MoCo (Chen et al., 2020b), and SwAV (Caron et al., 2020). Data augmentations also enhance the performance of SSL methods broadly across different data modalities, including images (Chen et al., 2020a), videos (Qian et al., 2021), audio (Al-Tahan and Mohsenzadeh, 2021; Niizumi et al., 2021), speech (Jiang et al., 2020), and 1-dimensional signals (e.g., human physiological signals) (Yang et al., 2022). In this study, we turn our attention toward a relatively underexplored domain: the application of data augmentations strategies for contrastive learning of health acoustic signals. The most closely related area of research to our focus on health acoustics is the research investigating augmentation strategies for speech and audio data. Early research by Ko et al. (2015) explored creating two augmented speech signals with speeds relative to the original of 0.9 and 1.1. This yielded performance improvements across four speech recognition tasks. Jansen et al. 
(2018) expanded upon this by introducing a triplet loss for audio representation learning, incorporating random noise, time/frequency translation, example mixing, and temporal proximity augmentations. Jiang et al. (2020) employed an adaptation of SimCLR for speech data, termed Speech SimCLR, where they applied a diverse set of augmentations: random pitch shift, speed perturbation, room reverberation and additive noise to the original waveform, as well as time and frequency masking to the spectrogram. Niizumi et al. (2021) developed a comprehensive audio augmentation module including pre-normalization, foreground acoustic event mixup, random resize cropping and post-normalization. Fonseca et al. (2021c) investigated a multi-modal approach by adopting augmentations from both vision and audio domains, including random resized cropping, random time/frequency shifts, compression, SpecAugment (Park et al., 2019), Gaussian noise addition, and Gaussian blurring. They also used sound separation techniques for sound event detection to enable targeted data augmentations (Fonseca et al., 2021b). Shi et al. (2022) explored the impact of noise injection as an augmentation strategy to bolster the robustness of speech models. CLAR identified six augmentation operations: pitch shift, noise injection in frequency domain, and fade in/out, time masking, time shift, time stretching in the temporal domain, and explored their utility for audio contrastive learning (Al-Tahan and Mohsenzadeh, 2021). In this study, we build upon these ideas to systematically investigate the optimal combination and sequence of augmentation strategies, with a specific focus on developing robust representations for health acoustics. 3 Methods The study is structured into three phases. The first phase consists of finding the best parameters for each augmentation that we consider for use with SimCLR. In the second phase, we investigate various combinations of augmentations, where we apply one or two successive augmentations to create each view of the input. Here, we use the augmentation parameters that we select in the first phase. In the third phase, we compare the results of our best performing model to other state-of-the-art audio encoder models on the validation set used for comparing augmentations. We choose to hold out the test sets due to ongoing model development and these results may thus be optimistic. This evaluation involves 21 unique downstream tasks across five datasets and we investigate the quality of embeddings generated from each audio encoder using linear probing (Köhn, 2015). Our study employs SimCLR with a 63 million parameter SlowFast NFNet-F0 as the neural network backbone (Chen et al., 2020a; Wang et al., 2022). Audio Augmentations We investigate eight augmentations (Figure 1). These include the following time-domain augmentations: crop and pad, noising, Brownian tape speed (Weng et al., 2023), scaling, pitch shift, time stretch, and circular time shift. Additionally, we experiment with SpecAugment which is applied after the transformation of audio inputs into spectrograms (Park et al., 2019). A description of each augmentation strategy is provided in Appendix Table A1. Each of the augmentations offers a tunable parameter space to allow for varying degrees of transformational intensity. To identify the optimal hyperparameters for each specific augmentation, we first conduct an exhaustive grid search. 
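To make the time-domain operations concrete, the following is a minimal, illustrative Python sketch of circular time shift, time stretch and the crop-or-pad step, combined into the two-view pipeline used by SimCLR-style training. It is not the implementation used in this work, and the numeric parameter ranges are placeholders rather than the tuned values reported in Appendix Table A1:

import numpy as np

def circular_time_shift(x, max_frac, rng):
    # Rotate the waveform by a random amount; samples wrap around the clip boundary.
    max_shift = int(len(x) * max_frac)
    return np.roll(x, int(rng.integers(-max_shift, max_shift + 1)))

def time_stretch(x, rate):
    # Naive time stretch via linear resampling; rate > 1 shortens (speeds up) the clip.
    new_len = max(1, int(round(len(x) / rate)))
    return np.interp(np.linspace(0.0, len(x) - 1, num=new_len), np.arange(len(x)), x)

def crop_or_pad(x, target_len):
    # Crop or zero-pad to a fixed length (e.g. a 2 s clip at 16 kHz).
    return x[:target_len] if len(x) >= target_len else np.pad(x, (0, target_len - len(x)))

def two_views(x, rng):
    # Two augmented views of one clip using the 2-step policy
    # circular time shift -> time stretch; parameter ranges are placeholders.
    views = []
    for _ in range(2):
        v = circular_time_shift(x, max_frac=0.5, rng=rng)
        v = time_stretch(v, rate=rng.uniform(0.8, 1.2))
        views.append(crop_or_pad(v, target_len=len(x)))
    return views[0], views[1]

rng = np.random.default_rng(0)
clip = rng.standard_normal(32000)   # stand-in for a 2 s waveform at 16 kHz
view_a, view_b = two_views(clip, rng)

In a SimCLR setup, the two views produced this way would be encoded and pulled together by the contrastive loss, while views of other clips in the batch are pushed apart.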
After we determine the best augmentation parameters, we explore the potential synergistic effects from the sequential application of either one or two successive augmentations. Since we include 8 augmentations, experimenting with every permutation of one or two augmentations would result in 64 experiments. However, in this work, SpecAugment was only applied after the time domain augmentations which reduced the number of 2-step augmentations to 57. Datasets For this study, we curate a training dataset, YT-NS (YouTube Non-Semantic), consisting of two-second long audio clips extracted from one billion non-copyrighted YouTube videos, totalling about 255 million 2s clips or 142k hours. We apply a convolutional neural network-based health acoustic detector model, trained on two public health acoustic AudioSet derivatives, FSD50K and Flusense, as well as another health acoustic dataset. We use this model to filter two-second audio clips from these one billion videos for the following health acoustic signals: coughs, speech, laughing, throat-clearing, baby coughs, and breathing. Estimated numbers of each of these clips is provided in Appendix Table A2. The Slowfast NFNet encoder is trained solely using this dataset. For evaluation, we use five publicly available datasets, FSD50K (Fonseca et al., 2021a), Flusense (Al Hossain et al., 2020), PSG (Korompili et al., 2021), CoughVID (Orlandic et al., 2021), and Coswara (Bhattacharya et al., 2023). We describe evaluation datasets in Appendix Table A3. Evaluation 21 unique downstream binary classification tasks across five datasets are leveraged to evaluate the quality of health acoustic representations generated from the learned audio encoders, including 13 human acoustic event classifications, five sleep apnea-specific tasks, and three cough relevant tasks. The cough tasks include COVID detection, sex classification, and smoking status classification. For phases 1 and 2 of our study where we identify the best parameters for each augmentation, as well as the best combination of augmentations, we develop a composite score that aggregates performance across the various downstream tasks. The PSG, CoughVid, and Coswara datasets are segmented into two-second clips. For Flusense, we preprocess the data by segmenting variable length clips using the labeled timestamps. For FSD50K and Flusense, we adopt a lightweight evaluation strategy where we randomly sample a single two second long clip from each longer clip. We take the average area under the receiver operating characteristic curve (AUROC) across these tasks and use this composite measure to rank augmentation strategies. For phase 3, we segment the PSG data into 10 second clips, and for FSD50K and Flusense, we crop or zero pad each clip to 10 seconds. We adopt a sliding window approach for FSD50K, Flusense, and PSG, where embeddings are generated for two-second windows with a step size of one second. We apply mean pooling to the resulting embeddings to generate our final output embedding. For all phases, we use linear probing to evaluate the quality of the generated representations. We use logistic regression with cross-validated ridge penalty, which is trained to predict binary labels from the frozen precomputed embeddings (Köhn, 2015). We report AUROC for all tasks and use the DeLong method to compute the 95% confidence intervals (CIs) (DeLong et al., 1988). Baseline Models For comparative evaluation, we consider several off-the-shelf audio encoders, each trained on semantic or non-semantic speech data. 
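The same sliding-window embedding and linear-probing protocol described above is applied to every encoder considered here, including the baselines listed next. A minimal sketch (purely illustrative; embed_window stands in for any frozen two-second audio encoder, and the window and step sizes follow the description above):

import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

SAMPLE_RATE = 16000
WINDOW, STEP = 2 * SAMPLE_RATE, 1 * SAMPLE_RATE   # 2 s windows, 1 s step

def clip_embedding(waveform, embed_window):
    # Embed 2 s sliding windows with a frozen encoder and mean-pool them
    # into a single clip-level embedding.
    starts = range(0, max(len(waveform) - WINDOW, 0) + 1, STEP)
    embs = [embed_window(waveform[s:s + WINDOW]) for s in starts]
    return np.mean(np.stack(embs), axis=0)

def linear_probe_auroc(train_emb, train_y, test_emb, test_y):
    # L2-penalised (ridge) logistic regression with cross-validated
    # regularisation strength, trained on frozen embeddings; returns test AUROC.
    clf = LogisticRegressionCV(Cs=10, penalty="l2", max_iter=1000)
    clf.fit(train_emb, train_y)
    return roc_auc_score(test_y, clf.predict_proba(test_emb)[:, 1])
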
Our baseline models include TRILL (Shor et al., 2020), which is a publicly available ResNet50 architecture trained on an AudioSet subset that is enriched with speech labels. FRILL (Peplinski et al., 2020) is a light-weight MobileNet-based encoder distilled from TRILL. BigSSL-CAP12 (Shor et al., 2022) leverages a Conformer-based architecture, trained on YouTube and LibriLight. 4 Results Optimal augmentation parameters In Appendix Table A1, we display the optimal parameters for each augmentation derived from the associated grid searches. We find that, up to a certain threshold, more intense augmentation parameters generally yield better performance. Comparing augmentations Comparing the left and right panels of Figure 2 shows that many augmentations perform better in combination than individually. Our analysis indicates that the most effective single augmentation strategy is SpecAugment (left panel in Figure 2). The most effective 2-step augmentation strategy involves applying circular time shift, followed by time stretch, as depicted in Figure 2. Interestingly, circular time shift does not perform well on its own, and each of these augmentations individually underperforms SpecAugment. However, circular time shift and time stretch are synergistic when applied together. The right panel of Figure 2 shows that on average, time stretch is the most useful first augmentation, excluding SpecAugment, which is always applied second or alone. SpecAugment is the most useful second augmentation on average. Comparing to baselines Appendix Tables A4 and A5 show the performance of the best SimCLR model versus the baseline models on the validation set used for the comparison of augmentations. Overall, the performance of the SimCLR model is similar to BigSSL-CAP12, despite training on about 10x fewer hours of data and using a model that is nearly 10x smaller, and it outperforms the other off-the-shelf audio encoders. 5 Discussion and Conclusion We investigated a comprehensive list of augmentations for use in the health acoustic domain. We demonstrated the synergistic benefit of the circular time shift and time stretch augmentations. Circular time shift and time-stretching may synergistically improve model generalizability by introducing a diverse range of temporal patterns for the same sound. There are a few limitations worth noting. We decided to keep our test sets held out for ongoing model development; thus, our comparisons to baselines may be optimistic. We also confined our analysis to a single SlowFast NFNet architecture. This leaves open the possibility that different architectures could yield varying results. Future research may focus on other augmentations, including frequency-domain augmentations, as well as augmentations that better leverage health acoustic inductive biases. Additionally, incorporating labels during training (Khosla et al., 2020), such as health signal type, may further improve the learned representations. Acknowledgments We thank Yun Liu from Google Research for his critical feedback, Shao-Po Ma for his preliminary work on the PSG dataset, the CoughVID and Project Coswara teams for making the datasets publicly available, and the Google Research team for software and hardware infrastructure support. PSG, CoughVID and Coswara are licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License and follow the Disclaimer of Warranties and Limitation of Liability in the license. References Al Hossain et al.
(2020) Forsad Al Hossain, Andrew A Lover, George A Corey, Nicholas G Reich, and Tauhidur Rahman. Flusense: a contactless syndromic surveillance platform for influenza-like illness in hospital waiting areas. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(1):1–28, 2020. Al-Tahan and Mohsenzadeh (2021) Haider Al-Tahan and Yalda Mohsenzadeh. Clar: Contrastive learning of auditory representations. In International Conference on Artificial Intelligence and Statistics, pages 2530–2538. PMLR, 2021. Alqudaihi et al. (2021) Kawther S Alqudaihi, Nida Aslam, Irfan Ullah Khan, Abdullah M Almuhaideb, Shikah J Alsunaidi, Nehad M Abdel Rahman Ibrahim, Fahd A Alhaidari, Fatema S Shaikh, Yasmine M Alsenbel, Dima M Alalharith, et al. Cough sound detection and diagnosis using artificial intelligence techniques: challenges and opportunities. Ieee Access, 9:102327–102344, 2021. Balestriero et al. (2023) Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, et al. A cookbook of self-supervised learning. arXiv preprint arXiv:2304.12210, 2023. Bhattacharya et al. (2023) Debarpan Bhattacharya, Neeraj Kumar Sharma, Debottam Dutta, Srikanth Raj Chetupalli, Pravin Mote, Sriram Ganapathy, C Chandrakiran, Sahiti Nori, KK Suhail, Sadhana Gonuguntla, et al. Coswara: A respiratory sounds and symptoms dataset for remote screening of sars-cov-2 infection. Scientific Data, 10(1):397, 2023. Boschi et al. (2017) Veronica Boschi, Eleonora Catricala, Monica Consonni, Cristiano Chesi, Andrea Moro, and Stefano F Cappa. Connected speech in neurodegenerative language disorders: a review. Frontiers in psychology, 8:269, 2017. Botha et al. (2018) GHR Botha, Grant Theron, RM Warren, Marisa Klopper, Keertan Dheda, PD Van Helden, and TR Niesler. Detection of tuberculosis by automatic cough sound analysis. Physiological measurement, 39(4):045005, 2018. Caron et al. (2020) Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in neural information processing systems, 33:9912–9924, 2020. Chen et al. (2020a) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020a. Chen et al. (2020b) Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020b. DeLong et al. (1988) Elizabeth R DeLong, David M DeLong, and Daniel L Clarke-Pearson. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics, pages 837–845, 1988. Fonseca et al. (2021a) Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra. Fsd50k: an open dataset of human-labeled sound events. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:829–852, 2021a. Fonseca et al. (2021b) Eduardo Fonseca, Aren Jansen, Daniel PW Ellis, Scott Wisdom, Marco Tagliasacchi, John R Hershey, Manoj Plakal, Shawn Hershey, R Channing Moore, and Xavier Serra. Self-supervised learning from automatically separated sound scenes. In 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pages 251–255. IEEE, 2021b. Fonseca et al. 
(2021c) Eduardo Fonseca, Diego Ortego, Kevin McGuinness, Noel E O’Connor, and Xavier Serra. Unsupervised contrastive learning of sound event representations. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 371–375. IEEE, 2021c. Grill et al. (2020) Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271–21284, 2020. Jansen et al. (2018) Aren Jansen, Manoj Plakal, Ratheet Pandya, Daniel PW Ellis, Shawn Hershey, Jiayang Liu, R Channing Moore, and Rif A Saurous. Unsupervised learning of semantic audio representations. In 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 126–130. IEEE, 2018. Jiang et al. (2020) Dongwei Jiang, Wubo Li, Miao Cao, Wei Zou, and Xiangang Li. Speech simclr: Combining contrastive and reconstruction objective for self-supervised speech representation learning. arXiv preprint arXiv:2010.13991, 2020. Khosla et al. (2020) Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in neural information processing systems, 33:18661–18673, 2020. Ko et al. (2015) Tom Ko, Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur. Audio augmentation for speech recognition. In Sixteenth annual conference of the international speech communication association, 2015. Köhn (2015) Arne Köhn. What’s in an embedding? analyzing word embeddings through multilingual evaluation. 2015. Korompili et al. (2021) Georgia Korompili, Anastasia Amfilochiou, Lampros Kokkalas, Stelios A Mitilineos, Nicolas-Alexander Tatlas, Marios Kouvaras, Emmanouil Kastanakis, Chrysoula Maniou, and Stelios M Potirakis. Psg-audio, a scored polysomnography dataset with simultaneous audio recordings for sleep apnea studies. Scientific data, 8(1):197, 2021. Larson et al. (2012) Sandra Larson, Germán Comina, Robert H Gilman, Brian H Tracey, Marjory Bravard, and José W López. Validation of an automated cough detection algorithm for tracking recovery of pulmonary tuberculosis patients. 2012. Niizumi et al. (2021) Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, and Kunio Kashino. Byol for audio: Self-supervised learning for general-purpose audio representation. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2021. Oord et al. (2018) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Orlandic et al. (2021) Lara Orlandic, Tomas Teijeiro, and David Atienza. The coughvid crowdsourcing dataset, a corpus for the study of large-scale cough analysis algorithms. Scientific Data, 8(1):156, 2021. Pahar et al. (2021) Madhurananda Pahar, Marisa Klopper, Byron Reeve, Rob Warren, Grant Theron, and Thomas Niesler. Automatic cough classification for tuberculosis screening in a real-world environment. Physiological Measurement, 42(10):105014, 2021. Park et al. (2019) Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. Specaugment: A simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779, 2019. Peplinski et al. 
(2020) Jacob Peplinski, Joel Shor, Sachin Joglekar, Jake Garrison, and Shwetak Patel. Frill: A non-semantic speech embedding for mobile devices. arXiv preprint arXiv:2011.04609, 2020. Qian et al. (2021) Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, and Yin Cui. Spatiotemporal contrastive video representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6964–6974, 2021. Shi et al. (2022) Bowen Shi, Wei-Ning Hsu, and Abdelrahman Mohamed. Robust self-supervised audio-visual speech recognition. arXiv preprint arXiv:2201.01763, 2022. Shor et al. (2020) Joel Shor, Aren Jansen, Ronnie Maor, Oran Lang, Omry Tuval, Felix de Chaumont Quitry, Marco Tagliasacchi, Ira Shavitt, Dotan Emanuel, and Yinnon Haviv. Towards learning a universal non-semantic representation of speech. arXiv preprint arXiv:2002.12764, 2020. Shor et al. (2022) Joel Shor, Aren Jansen, Wei Han, Daniel Park, and Yu Zhang. Universal paralinguistic speech representations using self-supervised conformers. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3169–3173. IEEE, 2022. Tracey et al. (2011) Brian H Tracey, Germán Comina, Sandra Larson, Marjory Bravard, José W López, and Robert H Gilman. Cough detection algorithm for monitoring patient recovery from pulmonary tuberculosis. In 2011 Annual international conference of the IEEE engineering in medicine and biology society, pages 6017–6020. IEEE, 2011. Von Kügelgen et al. (2021) Julius Von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, and Francesco Locatello. Self-supervised learning with data augmentations provably isolates content from style. Advances in neural information processing systems, 34:16451–16467, 2021. Wang et al. (2022) Luyu Wang, Pauline Luc, Yan Wu, Adria Recasens, Lucas Smaira, Andrew Brock, Andrew Jaegle, Jean-Baptiste Alayrac, Sander Dieleman, Joao Carreira, et al. Towards learning universal audio representations. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4593–4597. IEEE, 2022. Weng et al. (2023) Wei-Hung Weng, Sebastien Baur, Mayank Daswani, Christina Chen, Lauren Harrell, Sujay Kakarmath, Mariam Jabara, Babak Behsaz, Cory Y McLean, Yossi Matias, et al. Predicting cardiovascular disease risk using photoplethysmography and deep learning. arXiv preprint arXiv:2305.05648, 2023. Yang et al. (2022) Yuzhe Yang, Xin Liu, Jiang Wu, Silviu Borac, Dina Katabi, Ming-Zher Poh, and Daniel McDuff. Simper: Simple self-supervised learning of periodic targets. arXiv preprint arXiv:2210.03115, 2022. Zhang et al. (2021) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021. Zimmer et al. (2022) Alexandra J Zimmer, César Ugarte-Gil, Rahul Pathri, Puneet Dewan, Devan Jaganath, Adithya Cattamanchi, Madhukar Pai, and Simon Grandjean Lapierre. Making cough count in tuberculosis care. Communications medicine, 2(1):83, 2022. Appendix A SimCLR hyperparameters For training, we use 32 TPU-v3 cores with a batchsize of 4096. We use an AdamW optimizer with default parameters and a learning rate of 1.6e-3. We train all models for at least 300k steps, saving checkpoints every 5k steps. 
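The checkpoint-selection rule described in the next sentence (smoothing the validation-metric curve with a bias-corrected exponential moving average and picking the checkpoint with the best smoothed score) can be sketched as follows. This is an illustrative sketch only: the validation curve is synthetic, and interpreting the weight of 0.5 as the EMA decay factor is our assumption.

```python
# Sketch of exponential-moving-average smoothing with bias correction applied
# to a (synthetic) validation curve for checkpoint selection; the weight of 0.5
# is interpreted here as the EMA decay factor, which is an assumption.
import numpy as np

def ema_with_bias_correction(values, weight=0.5):
    smoothed, avg = [], 0.0
    for step, v in enumerate(values, start=1):
        avg = weight * avg + (1.0 - weight) * v
        smoothed.append(avg / (1.0 - weight ** step))  # bias correction
    return np.array(smoothed)

rng = np.random.default_rng(0)
val_curve = 0.80 + 0.02 * np.sin(np.linspace(0, 6, 60)) + 0.01 * rng.standard_normal(60)
smoothed = ema_with_bias_correction(val_curve, weight=0.5)
best_ckpt = int(np.argmax(smoothed))
print("best checkpoint index:", best_ckpt, "smoothed score:", round(float(smoothed[best_ckpt]), 4))
```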
We select checkpoints that exhibit the best performance on the validation data after applying an exponential moving average, with a bias correction and a weight of 0.5, to the validation curves. Appendix B Appendix Tables
Branching-annihilating random walks in one dimension: Some exact results K. Mussawisade${}^{1}$    J. E. Santos${}^{2}$ and G. M. Schütz${}^{1}$ ${}^{1}$Institut für Festkörperforschung, Forschungszentrum Jülich, 52425 Jülich, Germany ${}^{2}$Department of Theoretical Physics, University of Oxford, 1 Keble Road, Oxford, OX1 3NP, United Kingdom (November 19, 2020) Abstract We derive a self-duality relation for a one-dimensional model of branching and annihilating random walkers with an even number of offspring. With the duality relation and by deriving exact results in some limiting cases involving fast diffusion we obtain new information on the location and nature of the phase transition line between an active stationary state (non-zero density) and an absorbing state (extinction of all particles), thus clarifying some so far open problems. In these limits the transition is mean-field-like, but on the active side of the phase transition line the fluctuation in the number of particles deviates from its mean-field value. We also show that well within the active region of the phase diagram a finite system approaches the absorbing state very slowly on a time scale which diverges exponentially in system size.   PACS numbers: 05.70.Ln, 05.40.+j, 64.60.Ht, 02.50.-r I Introduction In a branching-annihilating random walk (BARW) particles hop on a lattice, annihilate pairwise on encounter, but may also spontaneously create offspring on the same or on nearest neighbour lattice sites. Such models appear in a large variety of contexts, in particular in reaction-diffusion mechanisms and in non-equilibrium spin relaxation. A generic feature of these processes is a transition as a function of the annihilation and branching rates between an active stationary state with non-zero particle density and an absorbing, inactive state in which all particles are extinct. Numerical results gained from a large variety of systems suggest that the transition in models with a single offspring (or an odd number of offspring) falls into the universality class of directed percolation (DP) [1], whereas models with an even number of offspring belong to a distinct parity-conserving (PC) universality class [2, 3]. A coherent picture of this scenario is provided from a renormalization point of view [4]. In this paper we use exact methods to derive a self-duality relation and to address some open questions for limiting cases of the BARW model of Ref. [3], which is a model for spin relaxation dynamics far from thermal equilibrium. One considers Ising spins in one dimension with generalized zero-temperature Glauber dynamics [5], but with an independent coupling to an infinite-temperature heat bath which allows for Kawasaki spin-exchange events [6] with rate $\alpha/2$.
This spin-flip process can be visualized in the following way: $$\displaystyle\uparrow\;\downarrow\;\uparrow\;\to\;\uparrow\;\uparrow\;\uparrow% \;\mbox{ and }\downarrow\;\uparrow\;\downarrow\;\to\;\downarrow\;\downarrow\;\downarrow$$ with rate $$\displaystyle\lambda$$ $$\displaystyle\uparrow\;\uparrow\;\downarrow\;\rightleftharpoons\;\uparrow\;% \downarrow\;\downarrow\;\mbox{ and }\downarrow\;\downarrow\;\uparrow\;% \rightleftharpoons\;\downarrow\;\uparrow\;\uparrow$$ with rate $$\displaystyle D/2$$ (1) $$\displaystyle\uparrow\;\downarrow\;\rightleftharpoons\;\downarrow\;\uparrow\;% \mbox{ and }\downarrow\;\uparrow\;\rightleftharpoons\;\uparrow\;\downarrow$$ with rate $$\displaystyle\alpha/2$$ By identifying a domain wall ($\uparrow\;\downarrow$ or $\downarrow\;\uparrow$) with a particle of type $A$ on the dual lattice and two parallel spins with a vacancy $\emptyset$ [7] this process becomes a BARW with rates $$\displaystyle A\;A\;\to\;\emptyset\;\emptyset$$ with rate $$\displaystyle\lambda$$ $$\displaystyle\emptyset\;A\;\rightleftharpoons\;A\;\emptyset$$ with rate $$\displaystyle D/2$$ (2) $$\displaystyle\emptyset\;A\;\emptyset\;\rightleftharpoons\;A\;A\;A$$ with rate $$\displaystyle\alpha/2$$ $$\displaystyle\emptyset\;A\;A\;\rightleftharpoons\;A\;A\;\emptyset$$ with rate $$\displaystyle\alpha/2$$ Without branching (i.e. spin exchange) the system evolves into the single absorbing state with no particles at all. In spin language this is the totally ferromagnetic state with all spins up or all spins down. In the presence of the branching process an intricate competition between the zero-temperature ordering process (particle annihilation) and the disordering high-temperature branching process sets in. The result is a non-trivial phase diagram as a function of the system parameters. Starting from, say, a random initial state with an even number of particles the system evolves ultimately into the inactive empty lattice for dominant ordering dynamics, whereas it remains in an active state with finite density if the disordering branching process dominates. Numerical evidence suggests the phase transition to belong to the PC universality class [3]. We stress that these results are supposed to be valid only in the thermodynamic limit. In any finite system the unique stationary state is the absorbing inactive state (unless $\lambda=0$) because for $\lambda>0$ there is always a small probability of reaching this state from which the system cannot escape any more. However, intuitively, one expects the approach to this state to occur on a time scale $\tau_{act}\sim q^{L}$ which is exponentially large in system size $L$ if parameters are chosen to represent the active phase of the thermodynamic limit. In the absorbing phase, both exact analytical results for $\lambda=D$ [8] and renormalization group results on diffusion-limited annihilation ($\alpha=0$) [9] show that the approach to extinction is algebraic for the infinite system. For a finite system one can infer from these results a crossover time scale $\tau_{abs}\sim L^{2}$ to exponential decay of the particle density. Here we aim at obtaining information on the form of the phase transition line and on the dynamical and stationary behaviour of the system in various limiting cases involving fast diffusion of particles (or spins in terms of the spin-relaxation model). In Sec. II we derive a duality relation which maps the phase diagram onto itself in a non-trivial way. 
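For readers who want to experiment with the process (2), the following is a crude random-sequential Monte Carlo sketch on a ring; it is not part of the paper's analysis (which is exact), the discrete-time scheme is only approximate for small rate·dt, and all parameter values are arbitrary.

```python
# Crude random-sequential Monte Carlo sketch of the BARW (2) on a ring of L
# sites (illustrative only). Each sweep of L attempted local moves advances time
# by roughly dt, which is only accurate for rate*dt << 1.
import numpy as np

def simulate_barw(L=64, D=1.0, lam=0.1, alpha=0.6, t_max=100.0, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    eta = rng.integers(0, 2, size=L)
    if eta.sum() % 2:                 # stay in the even-particle sector
        eta[np.argmax(eta)] = 0
    density = []
    for _ in range(int(t_max / dt)):
        for _ in range(L):
            k = rng.integers(L)
            kp, kpp = (k + 1) % L, (k + 2) % L
            moves = []
            if eta[k] and eta[kp]:
                moves.append(("annihilate", lam))        # A A -> 0 0
            if eta[k] != eta[kp]:
                moves.append(("hop", D / 2))             # 0 A <-> A 0
            if eta[kp]:
                moves.append(("flank_flip", alpha / 2))  # 0A0<->AAA and 0AA<->AA0
            r, acc = rng.random(), 0.0
            for name, rate in moves:
                acc += rate * dt
                if r < acc:
                    if name == "annihilate":
                        eta[k] = eta[kp] = 0
                    elif name == "hop":
                        eta[k], eta[kp] = eta[kp], eta[k]
                    else:                                 # flip the two sites flanking kp
                        eta[k] ^= 1
                        eta[kpp] ^= 1
                    break
        density.append(eta.mean())
    return np.array(density)

rho = simulate_barw()
print("late-time density:", float(rho[-200:].mean()))
```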
The duality derived in Sec. II is different from the domain-wall duality which maps the spin-relaxation dynamics to the BARW. On a self-dual line running across the phase diagram we obtain a relation between the (time-dependent) density expectation value for a half-filled random initial state and the survival probability of two single particles in an initially otherwise empty system. In Sections III and IV we adopt the strategy of treating the fluctuations in the total number of particles separately from the spatial correlations within a configuration of a given fixed number of particles. By separating the hopping time scale from the branching and annihilation time scales one can then gain insight into the behaviour of the system in the absence of spatial correlations. In Sec. III we study the system in the fast-diffusion limit $D\to\infty$ of the BARW (2) (III.1) and in its spin-relaxation formulation (1) in the dual limit of infinite spin-exchange rate $\alpha$. In these limiting cases all spatial correlations are washed out and one expects the PC transition to change into some other, mean-field-type phase transition. However, in contrast to a traditional mean-field approach, our treatment keeps track of the exact fluctuations of the total number of particles (or spins respectively). Our treatment is not an approximation, but yields rigorous results in these limiting cases for which we calculate the stationary density and density fluctuations (III.1) and fluctuations of the magnetization (III.2). Thus we are able to analyze to what extent the system deviates from mean-field behaviour and we also identify the exact phase transition point. In Sec. IV we investigate by similar means the dynamical behaviour of the finite system in the active region of the phase diagram. We show that for fast diffusion the relaxation to the absorbing state in a finite system is indeed exponentially slow, thus confirming the intuitive argument for the signature of the active region in a finite system. In Section V we conclude with some final remarks.
A state at time $t^{\prime}=t_{0}+t$ is given in terms of an initial state at time $t_{0}$ by $$|\,P(t_{0}+t)\,\rangle=\mbox{e}^{-Ht}|\,P(t_{0})\,\rangle.$$ (4) The expectation value $\rho_{k}(t)=\mbox{$\langle\,{s}\,|$}n_{k}\mbox{$|\,{P(t)}\,\rangle$}$ for the density at site $k$ is given by the projection operator $n_{k}$ which has value 1 if there is a particle at site $k$ and 0 otherwise. The summation vector $\mbox{$\langle\,{s}\,|$}=\sum_{\eta\in X}\mbox{$\langle\,{\eta}\,|$}$ performs the average over all possible final states of the stochastic time evolution. Below an initial distribution with $N$ particles placed on sites $k_{1},\dots,k_{N}$ is denoted by the column vectors $|\,{k_{1},\dots,k_{N}}\,\rangle$. The empty lattice is represented by the vector $|\,{0}\,\rangle$. The uncorrelated product distribution where on each lattice site the probability of finding a particle is equal to 1/2, is given in terms of the transposed of the summation vector as $\mbox{$|\,{1/2}\,\rangle$}=\mbox{$\langle\,{s}\,|$}^{T}/(2^{L})$. To obtain the Hamiltonian for the time evolution of the BARW (2) we note that we can represent any two-state particle system as a spin system by identifying a particle (vacancy) on site $k$ with a spin-up (down) state on this site. This allows for a representation of $H$ in terms of Pauli matrices where $n_{k}=(1-\sigma^{z}_{k})/2$ projects on states with a particle on site $k$ and $v_{k}=1-n_{k}$ is the projector on vacancies. The off-diagonal matrices $s^{\pm}_{k}=(\sigma^{x}_{k}\pm i\sigma^{y}_{k})/2$ create ($s^{-}_{k}$) and annihilate ($s^{+}_{k}$) particles. We stress that in the present context the “spins” are just convenient labels for particle occupancies which are conceptually entirely unrelated to the spins of the spin relaxation model (1) which is treated below. Using this pseudospin formalism one finds $$\displaystyle H$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\sum_{k}\left\{D(n_{k}v_{k+1}+v_{k}n_{k+1}-s^{+}_{k}s^% {-}_{k+1}-s^{-}_{k}s^{+}_{k+1})\right.$$ (5) $$\displaystyle\left.+2\lambda(n_{k}n_{k+1}-s^{+}_{k}s^{+}_{k+1})+\alpha(1-% \sigma^{x}_{k}\sigma^{x}_{k+1})n_{k}\right\}.$$ Each part of this stochastic Hamiltonian represents one of the elementary processes (2) and we may write $$H(D,\lambda,\alpha)=DH^{SEP}+\lambda H^{RSA}+\alpha H^{BARW}.$$ (6) Here $H^{SEP}$ represents hopping of hard-core particles, i.e. the symmetric exclusion process [12], the pair-annihilation process encoded in $H^{RSA}$ corresponds to random-sequential adsorption [13] and $H^{BARW}$ describes a special equilibrium branching-annihilating random walk where pair annihilation requires the presence of another particle and hopping occurs only in pairs (see (2). For $\lambda=D$ and $\alpha=0$ this process reduces to the exactly solvable process of diffusion-limited pair annihilation (DLPA) [14]. The time evolution conserves particle number modulo 2. Here we work only on the even subspace defined by the projector $(1+Q)/2$ where $Q=(-1)^{N}=\prod_{k}\sigma_{k}^{z}$. The projection on the even sector of the uncorrelated initial state with a density $1/2$ is given by the vector $\mbox{$|\,{1/2}\,\rangle$}^{even}=(1/2)^{L-1}\mbox{$|\,{s}\,\rangle$}^{even}$. 
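As a small illustration of this formalism, the sketch below builds the generator $H$ explicitly as a $2^L\times 2^L$ matrix for a short ring, directly from the elementary moves (2) rather than from the Pauli-matrix form (5), checks that $\langle\,{s}\,|H=0$, and evolves a distribution with the matrix exponential as in Eq. (4). System size, rates and time are arbitrary illustration values.

```python
# Minimal sketch: construct the stochastic generator H of the BARW (2) as an
# explicit matrix for a small ring (built from the listed elementary moves, not
# from the Pauli-matrix form), verify that all column sums vanish (<s|H = 0),
# and evolve a distribution with |P(t)> = exp(-Ht)|P(0)>.
import numpy as np
from scipy.linalg import expm

def build_generator(L, D, lam, alpha):
    dim = 2 ** L
    H = np.zeros((dim, dim))
    def add(src, dst, rate):          # off-diagonal entry -rate, escape rate on diagonal
        H[dst, src] -= rate
        H[src, src] += rate
    for c in range(dim):
        eta = [(c >> i) & 1 for i in range(L)]
        for k in range(L):
            kp, kpp = (k + 1) % L, (k + 2) % L
            if eta[k] and eta[kp]:    # A A -> 0 0, rate lam
                add(c, c ^ (1 << k) ^ (1 << kp), lam)
            if eta[k] != eta[kp]:     # hopping 0 A <-> A 0, rate D/2
                add(c, c ^ (1 << k) ^ (1 << kp), D / 2)
            if eta[kp]:               # 0A0<->AAA and 0AA<->AA0, rate alpha/2
                add(c, c ^ (1 << k) ^ (1 << kpp), alpha / 2)
    return H

L, D, lam, alpha = 8, 1.0, 0.2, 0.5
H = build_generator(L, D, lam, alpha)
print("max column sum (should be ~0):", np.abs(H.sum(axis=0)).max())

P0 = np.full(2 ** L, 1.0 / 2 ** L)    # uncorrelated product state with density 1/2
Pt = expm(-H * 2.0) @ P0              # |P(t)> at t = 2
occupied0 = np.array([(c >> 0) & 1 for c in range(2 ** L)])
print("density at site 0, t = 2:", float(occupied0 @ Pt))
```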
Within the same framework the stochastic Hamiltonian for the spin-flip process (1) can be written in terms of Pauli matrices as follows $$\displaystyle H^{SF}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{4}\sum_{k}\left[(1-\sigma^{x}_{k})w_{k}(D,\lambda)+\right.$$ (7) $$\displaystyle\left.+\alpha(1-\sigma^{x}_{k}\sigma^{x}_{k+1})(1-\sigma^{z}_{k}% \sigma^{z}_{k+1})\right]$$ with the generalized Glauber spin flip rates encoded in $w_{k}(D,\lambda)=(2-\sigma^{z}_{k-1}\sigma^{z}_{k}-\sigma^{z}_{k}\sigma^{z}_{k% +1})(D+\lambda+(\lambda-D)\sigma^{z}_{k-1}\sigma^{z}_{k+1})/2$. For this process the spins represent the actual spin configurations of the spin-relaxation process. We note that the usual zero-temperature Glauber dynamics - equivalent to DLPA in particle language - correspond to $\lambda=D$, $\alpha=0$. For this model the domain wall correspondence [7] between the BARW and the spin-flip process can be rigorously derived as a similarity transformation on the level of the quantum Hamiltonian description. There exists a transformation ${\cal B}$ such that $H^{SF}={\cal B}H{\cal B}^{-1}$ [15]. The generalization $\lambda=D$, $\alpha>0$ corresponds to the exactly solvable process introduced in Ref. [8]. Consider now the transformation ${\cal D}_{\pm}$ which is, for the even particle sector, defined by $${\cal D}_{+}=\gamma_{1}\gamma_{2}\dots\gamma_{2L-1}$$ (8) where $$\displaystyle\gamma_{2k-1}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\left[(1+i)\sigma_{k}^{z}-(1-i)\right]$$ (9) $$\displaystyle\gamma_{2k}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\left[(1+i)\sigma_{k}^{x}\sigma_{k+1}^{x}-(1-i)\right].$$ (10) and defined by ${\cal D}_{-}=-{\cal D}_{+}\sigma_{L}^{x}$ for the odd particle sector [16, 17]. ${\cal D}_{\pm}$ is unitary and transforms Pauli matrices as follows: $$\displaystyle{\cal D}_{\pm}^{-1}\sigma_{k}^{x}\sigma_{k+1}^{x}{\cal D}_{\pm}$$ $$\displaystyle=$$ $$\displaystyle\left\{\begin{array}[]{ll}\sigma_{k}^{z}&k\neq L\\ Q\sigma_{L}^{z}&k=L\end{array}\right.$$ (11) $$\displaystyle{\cal D}_{\pm}^{-1}\sigma_{k+1}^{z}{\cal D}_{\pm}$$ $$\displaystyle=$$ $$\displaystyle\left\{\begin{array}[]{ll}\sigma_{k}^{x}\sigma_{k+1}^{x}&k\neq L% \\ \pm Q\sigma_{L}^{x}\sigma_{1}^{1}&k=L\end{array}\right..$$ (12) In Ref. [18] it was observed that this transformation maps $H^{DLPA}$, obtained from $H$ by setting $\lambda=D$ and $\alpha=0$, onto its transposed, $H^{DLPA}={\cal D}(H^{DLPA})^{T}{\cal D}^{-1}$ and thus generates a set of relations between various expectation values [19]. Here we go further and apply this transformation to the Hamiltonian $H=H(D,\lambda,\alpha)$ (5) and transpose the operator which results from the transformation. Using (11), (12) we find $$\displaystyle H^{SEP}$$ $$\displaystyle\to$$ $$\displaystyle H^{BARW},$$ $$\displaystyle H^{BARW}$$ $$\displaystyle\to$$ $$\displaystyle H^{SEP},$$ $$\displaystyle H^{RSA}$$ $$\displaystyle\to$$ $$\displaystyle H^{RSA}+H^{SEP}-H^{BARW}$$ and hence the relations $$\tilde{H}=\lambda H^{RSA}+(\lambda+\alpha)H^{SEP}+(D-\lambda)H^{BARW}.$$ (13) which has the same form as the original Hamiltonian (6), but with rates $$\displaystyle\tilde{\lambda}$$ $$\displaystyle=$$ $$\displaystyle\lambda,$$ $$\displaystyle\tilde{D}$$ $$\displaystyle=$$ $$\displaystyle\lambda+\alpha,$$ (14) $$\displaystyle\tilde{\alpha}$$ $$\displaystyle=$$ $$\displaystyle D-\lambda.$$ The transformation (13) is a duality transformation, we obtain the identity transformation if we apply the transformation twice. 
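The algebraic content of the duality map (14) is easy to check directly; the following few lines (a sketch, with arbitrary rate values) verify that applying the map twice returns the original rates and that a point on the self-dual line $D=\lambda+\alpha$ is a fixed point.

```python
# Sketch of the duality map (14) on the rate triple (D, lambda, alpha); the
# numerical rate values below are arbitrary illustrations.
def dual(D, lam, alpha):
    return lam + alpha, lam, D - lam      # (D~, lambda~, alpha~) from Eq. (14)

rates = (1.3, 0.4, 0.9)                    # lies on the self-dual line D = lam + alpha
print(dual(*rates))                        # -> (1.3, 0.4, 0.9), mapped onto itself
print(dual(*dual(2.0, 0.5, 0.3)))          # involution: back to (2.0, 0.5, 0.3)
```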
On the mean-field phase transition line $\lambda=D$ [20] the system is exactly solvable [8] and belongs in its entirety to the inactive phase. Hence the whole region $\lambda>D$ is in the inactive phase. Duality maps the interesting region $\lambda\leq D$ which contains the phase transition line non-trivially onto itself. Thus duality can be used to relate physical quantities at different points of the parameter space. This region contains a self-dual line $$D=\lambda+\alpha$$ (15) in which every point maps onto itself. In the notation of Ref. [3] $D=p_{rw}=\Gamma^{-1}(1-\delta)$, $\lambda=p_{an}=\Gamma^{-1}(1+\delta)$ and $\alpha=2p_{ex}=2(1-2\Gamma^{-1})$, with the rates normalized such that $p_{ex}+p_{rw}+p_{an}=1$. With a parametrization in terms of $\delta$ and $p_{ex}$ the dual rates are given by $\tilde{\delta}=-2p_{ex}/[1+p_{ex}+\delta(1-p_{ex})]$ and $\tilde{p_{ex}}=-\delta(1-p_{ex})/[\delta(1-p_{ex})+2(1+p_{ex})]$. The self-dual line is given by the relation $\delta=-2p_{ex}/(1-p_{ex})$. The duality transformation not only maps the phase-diagram onto itself, but also generates relations between time-dependent expectation values. Consider the expectation value of the density $\rho_{k}(t)=\mbox{$\langle\,{s}\,|$}^{even}n_{k}e^{-Ht}\mbox{$|\,{1/2}\,% \rangle$}^{even}$, where $\mbox{$|\,{1/2}\,\rangle$}^{even}$ is a random initial state with density 1/2, projected over the even sector. This expectation value is defined at the point $(D,\lambda,\alpha)$ of the parameter space of the Hamiltonian. It is straightforward to verify the relations $$\displaystyle{\cal D}^{-1}\mbox{$|\,{s}\,\rangle$}^{even}$$ $$\displaystyle=$$ $$\displaystyle-i(i-1)^{L-1}\mbox{$|\,{0}\,\rangle$}$$ $$\displaystyle\mbox{$\langle\,{1/2}\,|$}^{even}{\cal D}$$ $$\displaystyle=$$ $$\displaystyle i(-i-1)^{L-1}\mbox{$\langle\,{0}\,|$}/2^{L-1}.$$ (16) So if we use these rules of transformation and the rules of transformation for the Pauli matrices, given by (11), (12), we can write the expectation value for the density in the even sector in the form $$\displaystyle\mbox{$\langle\,{s}\,|$}n_{k}e^{-Ht}\mbox{$|\,{1/2}\,\rangle$}^{even}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\mbox{$\langle\,{0}\,|$}e^{-\tilde{H}t}(1-\sigma_{k-1}% ^{x}\sigma_{k}^{x})\mbox{$|\,{0}\,\rangle$}$$ (17) $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\left(1-\mbox{$\langle\,{0}\,|$}e^{-\tilde{H}t}\mbox{$% |\,{k,k+1}\,\rangle$}\right)$$ where we have used (13) and the fact that the expectation value for the density $\rho_{k}(t)$ is a real number. The transformed initial state is a superposition of the steady state (the empty lattice) and the two-particle state with particles at sites $k,k+1$. The quantity on the right-hand side of equation (17) is one-half times the probability that the state with two particles initially placed at sites $k$ and $k+1$ has not decayed at time $t$ to the empty state, measured with the transformed rates $\tilde{\lambda},\tilde{D},\tilde{\alpha}$ (14). This is a specific result for the time-dependent density starting from a random initial state with density 1/2. More general transformation properties of time-dependent correlation functions can be obtained following the strategy of Ref. [18]. We conclude this section by pointing out that analogous enantiodromy relations can be derived for a discrete-time version of the process which corresponds to a sublattice parallel updating scheme rather than the random sequential updating represented by the stochastic Hamiltonians (5), (7).
Such a parallel updating scheme (which we expect to retain all the universal features of the model) consists of four steps. In a first updating sweep, update all spins on the even sublattice in parallel according to the generalized Glauber rules, but with the rates $\lambda,D$ now taken as actual probabilities. In a second step one updates the odd sublattice. In a third step one applies a sublattice parallel pair-updating scheme to implement the Kawasaki spin-exchange process with probability $\alpha/2$: In a first round one divides the lattice into even/odd pairs $(2k,2k+1)$ and exchanges spins within each pair with probability $\alpha/2$. Finally one updates the odd/even pairs. This completes a full updating cycle. The stochastic time evolution of this process may be written in terms of the transfer matrix $$T^{SF}=T^{K}_{odd}(\alpha)\;T^{K}_{even}(\alpha)\;T^{G}_{odd}(\lambda,D)\;T^{G% }_{even}(\lambda,D)$$ (18) where $$T^{G}_{even}(\lambda,D)=\prod_{k=1}^{L/2}\left[1-(1-\sigma^{x}_{k})w_{k}(D,% \lambda)/4\right]$$ (19) and an analogous expression for $T^{G}_{odd}$. The spin exchange transfer matrix $T^{K}_{odd}T^{K}_{even}$ is the well-known transfer matrix of the six-vertex model [21] defined on a diagonal square lattice. One has $$T^{K}_{even}=\prod_{k=1}^{L/2}\left[1-\alpha(1-\sigma^{x}_{2k}\sigma^{x}_{2k+1% })(1-\sigma^{z}_{2k}\sigma^{z}_{2k+1})/4\right].$$ (20) The transfer matrix for the related BARW model can be obtained by applying the similarity transformation ${\cal B}$ of Ref. [15]. One can then derive a duality relation in the way described above. The transformed process has the same elementary transitions, but with a different updating sequence. III Phase transition for fast diffusion It is intuitively clear that the PC phase transition in the system originates in the complicated structure of the density correlations which are built up by the competing processes of branching and annihilation. For a better understanding, consider first $\lambda=0$. This reduced process includes (besides diffusion) branching $A\to 3A$ and conditional pair annihilation $3A\to A$ which both require the presence of a surviving particle to take place. As a result, there are two stationary distributions: the empty lattice, and the random distribution where each particle configuration is equally likely. Since there is no transition channel from the occupied lattice to the empty lattice, the system is in the active phase. On the other hand, for $\alpha=0$ the system is in the absorbing phase: the only stationary distribution is the empty lattice. We conclude that the unconstrained pair annihilation process $2A\to 0$ with rate $\lambda$ is responsible for the phase transition. This scenario is captured in a simple mean-field approach.
The exact equation of motion for the expected particle number $\langle\,{N(t)}\,\rangle$ reads $$\frac{d}{dt}\mbox{$\langle\,{N(t)}\,\rangle$}=\sum_{i}\left[\alpha\mbox{$% \langle\,{n_{i}(t)}\,\rangle$}-2(\alpha+\lambda)\mbox{$\langle\,{n_{i}(t)n_{i+% 1}(t)}\,\rangle$}\right].$$ (21) Replacing the correlators by products of the density $\rho(t)=\mbox{$\langle\,{n_{i}(t)}\,\rangle$}$ yields the mean-field equation $$\frac{d}{dt}\mbox{$\langle\,{N(t)}\,\rangle$}=\alpha\mbox{$\langle\,{N(t)}\,% \rangle$}-\frac{2(\alpha+\lambda)}{L}\mbox{$\langle\,{N(t)}\,\rangle$}^{2}$$ (22) with the stationary mean-field solution for the active phase $$\mbox{$\langle\,{N}\,\rangle$}^{\ast}_{mf}=\frac{\alpha}{2(\alpha+\lambda)}L.$$ (23) Since each lattice site can take only one particle and therefore $n_{i}=0,1$, one can use $n_{i}^{2}=n_{i}$ to show that the mean-field fluctuations $\Delta^{\ast}_{mf}=\mbox{$\langle\,{N^{2}}\,\rangle$}^{\ast}_{mf}-\left(\mbox{% $\langle\,{N}\,\rangle$}^{\ast}_{mf}\right)^{2}$ around the mean are given in terms of the density $\rho^{\ast}_{mf}=\mbox{$\langle\,{N}\,\rangle$}^{\ast}_{mf}/L$ by $$\Delta^{\ast}_{mf}=\rho^{\ast}_{mf}(1-\rho^{\ast}_{mf})L.$$ (24) We conclude that the mean-field phase transition point is given by $\alpha/\lambda=0$, consistent with the considerations above. By duality (14) we also recover the mean-field phase transition line of Ref. [20]. There are several questions that we want to address in this context. The first is the form of the exact phase transition line if both $\alpha$ and $\lambda$ are very small compared to the diffusion rate $D$. The second question is the nature of the phase transition in this limit. If $D\gg\alpha,\lambda$ the spatial correlations built up by the annihilation/branching process are wiped out very quickly by diffusive mixing, leaving a transition which we cannot expect to be a PC transition anymore. Finally, in the next section we study the crossover time scales on which a large, but finite system reaches the absorbing state. III.1 Phase transition in the BARW To tackle these questions we observe that for fast diffusion the process simplifies dramatically: In the absence of spatial correlations the state of the system is fully characterized by the total particle number $N$. For fixed $N$, each particle configuration occurs with equal probability $N!(L-N)!/(L)!$ which is just the inverse of the number of possibilities of placing $N$ particles on a lattice of $L$ sites. As a result, the dynamics reduce to a random walk on the integer set $0,2,4,\dots,2K,\dots,L$ of total particle number $N=2K$. Thus we may represent the dynamics as a random walk on a one-dimensional lattice of $L/2+1$ sites, where the position of the random walker marks the number of particles of the BARW process and $0$, representing the empty lattice, is an absorbing point. It remains only to calculate the hopping rates $r_{N}$ and $\ell_{N}$ from site $N$ to the right ($N+2$) and left ($N-2$) respectively. The state of the system is then given by the solution of the master equation $$\displaystyle\frac{d}{dt}P_{N}(t)$$ $$\displaystyle=$$ $$\displaystyle r_{N-2}P_{N-2}(t)+\ell_{N+2}P_{N+2}(t)$$ (25) $$\displaystyle-(r_{N}+\ell_{N})P_{N}(t)$$ for the probability of finding $N$ particles in the system. The average particle number is given by $\mbox{$\langle\,{N(t)}\,\rangle$}=\sum_{N}NP_{N}(t)$ [22].
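Before specifying the hopping rates, the mean-field equation (22) can be checked numerically; the sketch below integrates it for the density $\rho=\langle N\rangle/L$ with a simple Euler scheme and compares the long-time value with the stationary solution (23). Parameter values are arbitrary illustrations.

```python
# Sketch: integrate the mean-field equation (22), written for rho = <N>/L, and
# compare the long-time value with the stationary density (23); the parameter
# values are arbitrary.
def mean_field_density(alpha, lam, rho0=0.01, t_max=100.0, dt=0.01):
    rho = rho0
    for _ in range(int(t_max / dt)):   # explicit Euler on d rho/dt = alpha*rho - 2(alpha+lam)*rho^2
        rho += dt * (alpha * rho - 2.0 * (alpha + lam) * rho ** 2)
    return rho

alpha, lam = 0.6, 0.4
print("integrated :", mean_field_density(alpha, lam))
print("Eq. (23)   :", alpha / (2.0 * (alpha + lam)))
```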
By counting the number of possibilities of finding two vacancies on neighbouring sites of an occupied site in a random state of $N$ particles one readily finds $$r_{N}=\frac{\alpha}{2}\frac{N(L-N)(L-N-1)}{(L-1)(L-2)}$$ (26) as contribution from the branching process with rate $\alpha/2$. An analogous consideration gives $$\ell_{N}=\frac{1}{2}\left[\frac{\alpha N(N-1)(N-2)}{(L-1)(L-2)}+\frac{2\lambda N% (N-1)}{(L-1)}\right]$$ (27) as the contribution from the annihilation processes $3A\to A$ and $2A\to 0$ respectively. These rates represent a biased random walk which in the thermodynamic limit $L\to\infty$ and for $N$ fixed reduces to a directed random walk in positive direction with increasing hopping rate $r_{N}=\alpha N/2$. Since for $D\to\infty$ the branching process is not diffusion-limited, the particle number increases exponentially in time, $\mbox{$\langle\,{N(t)}\,\rangle$}=\mbox{$\langle\,{N(0)}\,\rangle$}e^{\alpha t}$. Thus for any $\alpha>0$ the system is in the active phase, i.e., the phase transition is at $\alpha=0$ which is consistent with the mean-field result (23). For a large, but finite system with a small initial number of particles one expects a slowing down of the exponential growth when a finite density is reached, i.e. on a time scale of the order $\ln{(L)}/\alpha$. Ultimately, though, the finite system will reach, by a rare fluctuation in the number of particles, the absorbing empty lattice. This second crossover time to absorption is discussed in the next section. To study the stationary behaviour $d/(dt)P_{N}(t)=0$ of the system we rescale the lattice to unit length and expand the r.h.s. of the master equation (25) in a Taylor series in the lattice spacing $1/L$. Setting $x=N/L$ and keeping the leading order term yields the equation $c=2(l_{x}-r_{x})P^{\ast}_{x}$. The integrability condition $\int_{0}^{1}dxP^{\ast}_{x}=1$ on the stationary probability distribution requires the integration constant $c$ to vanish. The resulting equation has the solution $P^{\ast}_{x}=\delta(x)$, corresponding to the absorbing state. The only other integrable solution on the interval $[0,1]$ is the delta-function $P^{\ast}_{x}=\delta(x-\rho^{\ast})$ with $$\rho^{\ast}=\frac{\alpha}{2(\alpha+\lambda)}.$$ (28) This gives the exact stationary density $\rho^{\ast}$ of the active phase which, not very surprisingly, coincides with the mean-field value (23). To determine whether the system in the infinite-diffusion limit actually is a mean-field system we investigate the fluctuations around the mean (28). The mean field result (24) requires studying the fluctuations on a length scale of order $y=\sqrt{L}(x-\rho^{\ast})$. Keeping in the Taylor expansion of the master equation around $x=\rho^{\ast}$ all terms to this order gives the ordinary differential equation $$\frac{d}{dy}P^{\ast}_{y}=-\frac{y}{2\rho^{\ast}(1-\rho^{\ast})^{2}}P^{\ast}_{y}$$ (29) The solution of this equation is a Gaussian which gives the exact fluctuations in the particle number $$\Delta^{\ast}=2\rho^{\ast}(1-\rho^{\ast})^{2}L$$ (30) Except for $\lambda=0$ ($\rho^{\ast}=1/2$) this expression is in disagreement with the mean-field result (24), indicating a non-trivial effect of the unconstrained pair-annihilation process even in the fast diffusion limit. We conclude that the system undergoes a mean-field transition, but with fluctuations in the particle number which deviate from those predicted by mean-field. 
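These statements can be checked by simulating the particle-number walk directly; the following Gillespie-type sketch (illustrative, not part of the original analysis) uses dwell-time-weighted averages in the long-lived active state and compares them with the exact density (28), the exact fluctuations (30) and the mean-field fluctuations (24). System size, rates and run length are arbitrary.

```python
# Sketch: Gillespie simulation of the particle-number random walk (25) with the
# rates (26)-(27); moments are accumulated with dwell-time weights after a
# burn-in period. The parameter values are illustrative only.
import numpy as np

def quasi_stationary_moments(L=4000, alpha=1.0, lam=0.5, t_max=50.0, t_burn=10.0, seed=1):
    rng = np.random.default_rng(seed)
    N, t = L // 2, 0.0
    w = m1 = m2 = 0.0
    while t < t_max and N > 0:
        r = 0.5 * alpha * N * (L - N) * (L - N - 1) / ((L - 1) * (L - 2))
        l = 0.5 * (alpha * N * (N - 1) * (N - 2) / ((L - 1) * (L - 2))
                   + 2.0 * lam * N * (N - 1) / (L - 1))
        dt = rng.exponential(1.0 / (r + l))
        if t > t_burn:                     # time-weighted running moments
            w += dt; m1 += N * dt; m2 += N * N * dt
        t += dt
        N += 2 if rng.random() < r / (r + l) else -2
    mean, var = m1 / w, m2 / w - (m1 / w) ** 2
    return mean / L, var / L

alpha, lam = 1.0, 0.5
rho_star = alpha / (2 * (alpha + lam))
rho_sim, var_sim = quasi_stationary_moments()
print("density  : sim", round(rho_sim, 4), " exact (28)", round(rho_star, 4))
print("Var[N]/L : sim", round(var_sim, 3),
      " exact (30)", round(2 * rho_star * (1 - rho_star) ** 2, 3),
      " mean-field (24)", round(rho_star * (1 - rho_star), 3))
```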
III.2 Spin-relaxation formulation We consider the system in the dual limit $\alpha\to\infty$ where we can study the phase transition between the active phase and the absorbing phase in terms of the dimensionless variable $u=(D-\lambda)/(D+\lambda)$. In the spin-relaxation picture this is the limit of fast Kawasaki spin-exchange where the system is spatially uncorrelated and hence completely characterized by the total magnetization $M=\sum_{k}\sigma_{k}^{z}/2$. The dynamics of the process (1) reduce in this limit to a random walk in the magnetization variable $M$, ranging from $-L/2$ to $L/2$. The master equation reads $$\displaystyle\frac{d}{dt}P_{M}(t)$$ $$\displaystyle=$$ $$\displaystyle r_{M}P_{M-1}(t)+\ell_{M+1}P_{M+1}(t)$$ (31) $$\displaystyle-(r_{M}+\ell_{M})P_{M}(t).$$ The transition rates for this random walk with absorbing boundaries at $M=\pm L/2$ are readily calculated as $$\displaystyle r_{M}$$ $$\displaystyle=$$ $$\displaystyle\frac{D+\lambda}{2L-2}\left(1-\frac{2Mu}{L-2}\right)\left(\frac{L% ^{2}}{4}-M^{2}\right)$$ (32) $$\displaystyle\ell_{M}$$ $$\displaystyle=$$ $$\displaystyle\frac{D+\lambda}{2L-2}\left(1+\frac{2Mu}{L-2}\right)\left(\frac{L% ^{2}}{4}-M^{2}\right).$$ (33) We find a bias towards the boundaries $M=\pm L/2$, i.e., the fully magnetized absorbing states with all spins up or down respectively, for $u<0$. For $u>0$ the system is biased to the center, corresponding to the active phase. For an initial state which is symmetric under spin-flip $s_{i}\to-s_{i}$ the mean $\langle\,{M}\,\rangle$ vanishes for all times in both the active and absorbing phase and hence is not suitable to characterize the system. For the same reason a naive mean-field approach by setting $\mbox{$\langle\,{\sigma_{i}^{z}(t)\sigma_{j}^{z}(t)}\,\rangle$}=\mbox{$\langle% \,{\sigma_{i}^{z}(t)}\,\rangle$}\mbox{$\langle\,{\sigma_{j}^{z}(t)}\,\rangle$}$ would not give any information on the dynamics of the spin-fluctuations. The quantity that characterizes the phase transition is the fluctuation $\mbox{$\langle\,{M^{2}}\,\rangle$}=\sum_{M}M^{2}P_{M}(t)$ in the magnetization, i.e. the mean-square displacement of the random walk. In the active regime this quantity is proportional to the system size whereas in an ordered state $\mbox{$\langle\,{M^{2}}\,\rangle$}\sim L^{2}$. First consider the phase transition point $u=0$. From the considerations above we know that the stationary state is inactive. The only question of interest is the approach to stationarity from some random initial state. From (31) one obtains $d/(dt)\mbox{$\langle\,{M^{2}}\,\rangle$}=2\lambda(L^{2}/4-\mbox{$\langle\,{M^{% 2}}\,\rangle$})/(L-1)$ which is readily solved by $$\mbox{$\langle\,{M^{2}(t)}\,\rangle$}=L^{2}/4+(\mbox{$\langle\,{M^{2}(0)}\,% \rangle$}-L^{2}/4)e^{-2\lambda t/(L-1)}.$$ (34) The approach to the stationary value is exponential on a time scale $$\tau=\frac{L-1}{2\lambda}.$$ (35) For large system size and initial times $t\ll\tau$, the fluctuations in the magnetization grow linearly in time. For $u\neq 0$ the equations of motion for the moments $\langle\,{M^{2k}}\,\rangle$ are too complicated for direct analysis. We define $\hat{M}=M/\sqrt{L}$ and study only the thermodynamic limit.
Using the master equation (31) the stationarity condition $d/(dt)\mbox{$\langle\,{\hat{M}^{2k}}\,\rangle$}=0$ for the moments of $\hat{M}$ yields the recursion relation $$\mbox{$\langle\,{\hat{M}^{2k}}\,\rangle$}=(2k-1)\mbox{$\langle\,{\hat{M}^{2k-2% }}\,\rangle$}/(4u)$$ (36) which shows that the stationary distribution in the active phase $u>0$ is Gaussian with variance $1/(4u)$: $$P^{\ast}(\hat{M})=\sqrt{\frac{2u}{\pi}}e^{-2u\hat{M}^{2}}.$$ (37) This yields the final result $$\mbox{$\langle\,{\hat{M}^{2}}\,\rangle$}=\left\{\begin{array}[]{cc}1/(4u)&u>0% \\ \infty&u\leq 0\end{array}\right.$$ (38) All other stationary moments in the active regime follow from the Gaussian nature (37) of the statistics. We read off a critical exponent $\kappa=1$ for the divergence of $\langle\,{\hat{M}^{2}}\,\rangle$ with $u$ as the system approaches the critical point $u=0$. IV Relaxational behaviour in finite systems The exact solution [8] for the dynamics of the spin-spin correlation function on the line $\lambda=D$ implies a crossover time $\tau\sim L^{2}$ from a power law relaxation to exponential relaxation to the absorbing state. One then expects this to hold throughout the inactive phase. On the other hand, any finite system has only one stationary state, which is the empty lattice in particle language, corresponding to the magnetically ordered states with all spins up or all spins down. It is therefore of interest to study the relaxation towards this state in that region of parameter space that constitutes the active phase of the infinite system. The precise location of the phase transition line is not known, but we know that the line $\lambda=0$ (no unrestricted pair annihilation) belongs to the active phase and thus we may get some insight by studying the system in the immediate neighbourhood $\lambda\ll\alpha,D$ of this line. To this end we adopt a similar strategy as in the previous section by assuming $\lambda$ to be so small that the system had sufficient time to relax to its $\lambda=0$ stationary distribution between two successive pair-annihilation events. This limiting procedure can be made rigorous by taking $\alpha\to\infty$ and keeping $\lambda,D$ fixed. In this limit the system reduces effectively to a two-state system, i.e. the system is completely characterized by stating whether the system is empty (state $|\,{0}\,\rangle$) or not (which we denote by $|\,{1}\,\rangle$). The latter state represents the stationary distribution of the system with $\lambda=0$ in which, because of detailed balance for this reduced process, all states with an even, non-zero number of particles have equal probability $p=1/[2^{(L-1)}-1]$. The transition rates between these two states characterizing the system are then trivial to work out: The transition from $|\,{0}\,\rangle$ to $|\,{1}\,\rangle$ is zero because $|\,{0}\,\rangle$ is an absorbing state. On the other hand, counting the number of states represented by $|\,{1}\,\rangle$ for which a pair annihilation event leads to the empty lattice yields a transition rate $$1/\tau_{act}=(\lambda L)/[2^{(L-1)}-1]$$ (39) for transitions from state 1 to state 0. Hence, at time $t$ the system is in the absorbing state with probability $P_{0}(t)=1-e^{-t/\tau_{act}}$ and in each non-empty state with probability $P_{1}(t)=e^{-t/\tau_{act}}/[2^{(L-1)}-1]$. 
For the particle density this behaviour implies the exact result $$\rho(t)=\frac{e^{-t/\tau_{act}}}{2-(1/2)^{L}}.$$ (40) Because of the fast intermediate relaxation to the equilibrium state of the $\alpha=\infty$ process the density and the density-correlations have no spatial dependence and the diffusion rate does not enter. The crossover time $\tau_{act}$ for reaching the absorbing state in the active region of the phase diagram for the infinite system (small $\lambda$) diverges exponentially in system size. Finally, we study the dynamical behaviour of the system for large, but finite $L$ in the limit of fast diffusion discussed in Sec. III.1. We recall relation (17) which relates the decay of the particle density to the survival probability of two neighbouring particles in an empty lattice. This quantity can be interpreted as a first-passage-time distribution for two annihilating and branching random walkers: When two random walkers in an empty lattice annihilate for the first time, the dynamics stop. Therefore the density decay equals one half this first-passage-time distribution and the mean-first-passage-time (MFPT) $$\displaystyle\tau$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}dt\mbox{$\langle\,{0}\,|$}e^{-Ht}\mbox{$|\,{k,k+% 1}\,\rangle$}$$ (41) $$\displaystyle=$$ $$\displaystyle\lim_{c\to 0}\mbox{$\langle\,{0}\,|$}(H+c)^{-1}\mbox{$|\,{k,k+1}\,% \rangle$}$$ gives the crossover time scale on which the system reaches the absorbing state. This quantity can be evaluated numerically for any point in parameter space by inverting the time evolution operator $c+H$ for finite system size, then taking the matrix element (41) and finally calculating the limit $c\to 0$. For an analytical treatment for large $D$ we note that the MFPT from some site $k$ to an absorbing site $k=0$ for a general random walk with nearest neighbour hops on $L+1$ sites can be expressed in terms of the hopping rates [23]. In the mapping of the BARW to the random walk the MFPT $\tau$ is equal to the MFPT of the random walker starting at site 2. With the hopping rates (26), (27) we find after some rearrangement of terms $$\tau=\frac{1}{\lambda L}\sum_{k=0}^{L/2-1}c_{k}$$ (42) with $$c_{k}=\frac{L!\,k!\,\Gamma(\lambda(L-2)/\alpha+1)}{(2k+2)!\,(L-2k-2)!\,\Gamma(% \lambda(L-2)/\alpha+k+1)}.$$ (43) We note first that in the limit $\alpha\to\infty$ the MFPT coincides with the relaxation time (39), as expected from duality. To study the asymptotic behaviour of $\tau$ for finite $\alpha$ (active phase) we determine the value $k_{0}$ for which $c_{k}$ gives the largest contribution to the sum on the r.h.s. of (42). We find $k_{0}=\rho^{\ast}L/2$ with the stationary density given by (28). Using the Stirling formula for the Gamma-function and expanding $c_{k}$ around $k_{0}$ yields for non-vanishing density $\rho^{\ast}$ the asymptotic form of the crossover time $$\tau_{act}\sim\frac{1}{\lambda}\left[\frac{(1-2\rho^{\ast})^{(1-2\rho^{\ast})/% (2\rho^{\ast})}}{(1-\rho^{\ast})^{(1-\rho^{\ast})/(\rho^{\ast})}}\right]^{L}$$ (44) up to subleading power-law corrections in system size. Therefore, in the active region of the phase diagram the crossover to absorption in a finite system takes place on a time scale which is exponentially large in system size, with a density-dependent amplitude. For $\rho^{\ast}=0$, i.e. in the absorbing phase, the MFPT can be read off directly from (42), (43), since only the term with $k=0$ contributes.
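The formula (42)-(43) is straightforward to evaluate numerically; the sketch below uses log-Gamma functions to avoid overflow, checks the $\alpha\to\infty$ limit against the relaxation time (39), and illustrates the exponential growth of the crossover time with system size in the active region. Parameter values are arbitrary.

```python
# Sketch: evaluate the exact MFPT formula (42)-(43) with log-Gamma functions to
# avoid overflow, compare the alpha -> infinity limit with (39), and show the
# roughly exponential growth of tau with L at finite alpha (illustrative values).
import math

def tau_mfpt(L, lam, alpha):
    a = lam * (L - 2) / alpha
    total = 0.0
    for k in range(L // 2):
        log_ck = (math.lgamma(L + 1) + math.lgamma(k + 1) + math.lgamma(a + 1)
                  - math.lgamma(2 * k + 3) - math.lgamma(L - 2 * k - 1)
                  - math.lgamma(a + k + 1))
        total += math.exp(log_ck)
    return total / (lam * L)

L, lam = 24, 0.3
print("alpha -> inf:", tau_mfpt(L, lam, 1e12), " vs (39):", (2 ** (L - 1) - 1) / (lam * L))
for L in (16, 24, 32, 40):
    print(L, tau_mfpt(L, lam, alpha=1.0))   # grows roughly exponentially with L
```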
Keeping only the $k=0$ term, one finds $$\tau_{abs}=\frac{(L-1)}{2\lambda}$$ (45) This power law differs from the crossover behaviour $\tau_{abs}\sim L^{2}/D$ for finite diffusion constant D. The MFPT for this point in parameter space coincides with the relaxation time (35) in the dual point. V Final remarks The duality relation (13) divides the parameter space into two distinct regions separated by the self-dual line (15). Both regions are mapped onto each other and hence have the same physical properties. This is of practical usefulness for a numerical survey of the system and for the determination of the phase transition line since only part of the parameter space needs to be investigated. In particular, the line $\alpha=0$ maps onto the line $D=\lambda$ and the fast-diffusion limit to the limit $\alpha\to\infty$. The observation that for large $D$ any small $\alpha$ brings the system into the active phase translates into a phase transition at $D=\lambda$ in the limit $\alpha\to\infty$. This is a new result which definitely clarifies the unresolved issue of the location of the phase transition line for large $\alpha$. Numerical analysis of the model for large rates $\alpha$ is reported to be very difficult [3, 20]. Our exact result confirms the conjecture of Ref. [3] on the location of the phase transition point in this numerically intractable limit. In the $\delta\;-\;p_{ex}$ phase diagram of Ref. [3] the limit $\alpha\to\infty$ corresponds to $p_{ex}\to 1$ and the phase transition point $D=\lambda$ corresponds to $\delta=0$. The dual limit $D\to\infty$ covers the neighbourhood of the point $\delta=-1,p_{ex}=0$. Our result translates into an infinite slope of the phase transition line in this representation at this point. In the active phase of this region the exact stationary particle number distribution is Gaussian with a stationary density $\rho^{\ast}$ given by the mean-field value (23). The density fluctuations (30) deviate from the mean-field result (24) by a factor $2(1-\rho^{\ast})$. On the self-dual line we find from (17) that the density expectation value $\rho_{k}(t)$ equals one-half the survival probability at time $t$ of two particles placed initially at two neighbouring sites in an otherwise empty lattice. Hence the phase transition from the absorbing phase to the active phase may be rephrased as a mean-first-passage-time (MFPT) transition for random walkers which branch and annihilate. We expect the MFPT in a finite system to change from a power-law divergence (in system size) to an exponential divergence not only at infinite diffusion rate (Sec. IV), but also for finite $D$. Thus this numerically accessible quantity provides an alternative way of determining and characterizing the PC phase transition. VI Acknowledgements We thank J. Cardy, N. Menyhárd, Z. Rácz and U. Täuber for useful discussions. G.M.S. thanks the Institute for Theoretical Physics of the Eötvös University for hospitality and partial support (Grant OTKA T 019451). References [1] T. E. Harris, Ann. Prob. 2, 969 (1974); H. Takayasu and A. Yu. Tretyakov, Phys. Rev. Lett. 68, 3060 (1992); I. Jensen, Phys. Rev. E 47, R1 (1993), Phys. Rev. Lett. 70, 1465 (1993). [2] P. Grassberger, F. Krause and T. von der Twer, J. Phys. A 17, L105 (1984); P. Grassberger, J. Phys. A 22, L1103 (1989); M. H. Kim and H. Park, Phys. Rev. Lett. 73, 2579 (1994); I. Jensen, J. Phys. A 26, 3921 (1993), Phys. Rev. E 50, 3623 (1994); D. ben-Avraham, F. Leyvraz and S. Redner, Phys. Rev. E 50, 1843 (1994).
Some of the references consider BARW’s with both even and odd number of offsprings. [3] N. Menyhárd, J. Phys. A 27, 6139 (1994). [4] J. Cardy and U. Täuber, Phys. Rev. Lett. 77, 4780 (1996); J. Stat. Phys. 90 (1998) (in press). [5] R. J. Glauber, Math. Phys. 4, 294 (1963). [6] K. Kawasaki, Phys. Rev. 145, 224 (1966). [7] Z. Rácz, Phys. Rev. Lett. 55, 1707 (1985). [8] M. Droz, Z. Rácz and J. Schmidt, Phys. Rev. A 39, 2141 (1989). [9] B. P. Lee, J. Phys. A 27, 2633 (1994). [10] E. Siggia, Phys. Rev. B 16, 2319 (1977); S. Sandow and S. Trimper, Europhys. Lett. 21, 799 (1993); F. C. Alcaraz, M. Droz, M. Henkel and V. Rittenberg, Ann. Phys. (N.Y.) 230, 250 (1994); G. M. Schütz, J. Stat. Phys. 79, 243 (1995). [11] G. M. Schütz, Integrable Reaction-Diffusion Processes and Quantum Spin Chains (to be published). [12] F. Spitzer, Adv. Math. 5, 246 (1970). [13] J. W. Evans, in Nonequilibrium Statistical Mechanics in One Dimension, ed. V. Privman (Cambrige University Press, Cambridge UK, 1996). [14] D. C. Torney and H. M. McConnell, J. Phys. Chem. 87, 1941 (1983). [15] J. E. Santos J. Phys. A 30, 3249 (1997). [16] D. Levy, Phys. Rev. Lett. 67, 1971 (1991). [17] G. Schütz J. Phys. A 26, 4555 (1993). [18] G. M. Schütz, Z. Phys. B 104, 583 (1997); G. M. Schütz and K. Mussawisade, Phys. Rev. E 57 (1998) (in press). [19] Such a relationship between a stochastic processes is called an enantiodromy relation, as opposed to a similarity transformation like the domain-wall duality which relates one stochastic Hamiltonian to another, rather than to its transposed. [20] N. Menyhárd and G. Ódor J. Phys. A 28, 4505 (1995). [21] D. Kandel, E. Domany and B. Nienhuis, J. Phys. A 23, L755 (1990). [22] This and the other limits of fast rates discussed in this paper can be treated rigorously by taking the limit of large rates in the formal solution 4 of the master equation, see [11] for details. [23] G. H. Weiss, J. Stat. Phys. 24, 587 (1981); K. P. N. Murthy and K. W. Kehr, Phys. Rev. A 40, 2082 (1989), and erratum in Phys. Rev. A 41, 1160 (1990).
Scintillation and Attenuation Modelling of Atmospheric Turbulence for Terahertz UAV Channels Weijun Gao Shanghai Jiao Tong University, China. Email: {gaoweijun, chong.han}@sjtu.edu.cn Chong Han Shanghai Jiao Tong University, China. Email: {gaoweijun, chong.han}@sjtu.edu.cn Zhi Chen University of Electronic Science and Technology of China, China. Email: chenzhi@uestc.edu.cn Abstract Terahertz (THz) wireless communications have the potential to realize ultra-high-speed and secure data transfer with miniaturized devices for unmanned aerial vehicle (UAV) communications. Existing THz channel models for aerial scenarios assume a homogeneous medium along the line-of-sight propagation path. However, the atmospheric turbulence due to random airflow leads to temporal and spatial inhomogeneity of the communication medium, motivating analysis and modelling of the THz UAV communication channel. In this paper, we statistically modelled the scintillation and attenuation effect of turbulence on THz UAV channels. Specifically, the frequency- and altitude-dependency of the refractive index structure constant, as a critical statistical parameter characterizing the intensity of turbulence, is first investigated. Then, the scintillation characteristic and attenuation of the THz communications caused by atmospheric turbulence are modelled, where the scintillation effect is modelled by a Gamma-Gamma distribution, and the turbulence attenuation as a function of altitude and frequency is derived. Numerical simulations on the refractive index structure constant, scintillation, and attenuation in the THz band are presented to quantitatively analyze the influence of turbulence for the THz UAV channels. It is discovered that THz turbulence can lead to at most $10~{}\textrm{dB}$ attenuation with frequency less than $1~{}\textrm{THz}$ and distance less than $10~{}\textrm{km}$. I Introduction With the increasing demand for faster and more secure wireless communications, Terahertz ($0.1-10~{}\textrm{THz}$) communications have attracted great research attention for 6G and beyond wireless communication networks [1]. Thanks to the sub-millimeter wavelength and multi-tens-of-GHz continuous bandwidth, the THz band can support fast, highly directional, and secure wireless links. These unique spectrum features are beneficial to wireless communications among unmanned aerial vehicles (UAVs). First, aerial communication applications such as UAV-aided ubiquitous coverage and UAV-based relaying rely on transmissions of high-resolution images and low-latency commands, which requires ultra-high-speed data transmission. Second, miniaturized communication devices can lighten the load of small UAVs. Finally, the high directivity and the enhanced communication security can effectively prevent the UAVs from being eavesdropped through the aerial channels with bare obstructions. Modeling of the THz UAV channel model is one of the most fundamental topics in studying THz wireless communications between flying UAVs. Due to the lack of multi-paths in aerial communication scenarios, the THz UAV channel is assumed to transmit through the line-of-sight (LoS) path between the transceivers. In prior studies [2], the LoS signal attenuation can be modelled by the summation of the free-space path loss, the molecular absorption effect led by molecules mainly consisting of water vapor, and the scattering effect due to small particles like raindrops, dust, and snowflakes under extreme weather conditions. 
These channel models assume that the propagation medium is homogeneous. In aerial communication scenarios, however, atmospheric turbulence caused by wind can randomly change the temperature, pressure, and molecular composition of the propagation medium. These fluctuations make the medium inhomogeneous and render the existing models, which are based on a homogeneous medium, invalid. Experimental studies such as [3] have shown that turbulence can impair the data rate of THz communications, so its influence needs to be investigated. For these reasons, it is necessary to analyze and model the effect of atmospheric turbulence in THz UAV communications. Since the turbulence is deterministically governed by the Navier-Stokes equations [4], modelling the scintillation and attenuation caused by turbulence faces the mathematical difficulty of solving these non-linear equations. As a result, only statistical models are available to characterize the turbulence. Early statistical studies on the influence of turbulence mainly focus on the visible light frequency band for free-space optical communications rather than THz communications [5]. In this paper, we statistically model the scintillation and attenuation effects caused by atmospheric turbulence for THz UAV channels. Specifically, we first analyze the refractive index structure constant (RISC), which is a key parameter characterizing turbulence. We develop the model of the RISC at different frequencies and altitudes in the THz band based on the statistical turbulence model in the visible light frequency band. Second, we model the scintillation caused by turbulence by using a Gamma-Gamma distribution, which is a universal model applicable to turbulence of various intensities. Finally, the scintillation and attenuation caused by turbulence at different propagation distances and RISC are evaluated to quantitatively demonstrate the influence of turbulence on THz UAV channels. The remainder of the paper is organized as follows. In Sec. II, the THz LoS channel models in the homogeneous and inhomogeneous media are described, where the molecular absorption and scattering effects are presented; in particular, for the inhomogeneous medium, the atmospheric turbulence in the THz band is statistically characterized. In Sec. III, the scintillation and attenuation effects of turbulence on THz UAV communications are investigated and computed. Numerical results for the turbulence, including the altitude-dependent refractive index structure constant, the Rytov variance, scintillation, and attenuation, are evaluated in Sec. IV to quantitatively measure the influence of atmospheric turbulence on THz communications. The paper is concluded in Sec. V.
On the other hand, the small particles caused by extreme weather like rain, snow, dust, and fog can scatter the EM wave and lead to additional scattering loss. The signal attenuation caused by these two effects has been well studied in previous work on THz communications [6, 7], which assumes that the propagation medium is homogeneous along the propagation path. However, in UAV scenarios with airflow, the THz propagation medium is not ideally homogeneous along the signal transmission path. At different altitudes, environmental parameters such as temperature, pressure, and moisture are different, so cross-altitude THz wave propagation takes place in a spatially inhomogeneous medium. The inhomogeneity is mainly due to turbulence, which arises from random airflow in the atmosphere. Unlike in the homogeneous medium, the refractive index of the inhomogeneous medium is not uniform and the transmitted LoS signal can thus be randomly distorted. II-A Terahertz Line-of-Sight Wave Propagation in Homogeneous Medium Since the composition of air varies slowly with altitude, the propagation medium can be viewed as homogeneous for THz links with short vertical distances. THz wave propagation in a homogeneous medium experiences free-space path loss, molecular absorption loss, and scattering loss. The LoS attenuation factors due to absorption and scattering can be respectively expressed as $$L_{\textrm{abs}}=e^{k_{\textrm{abs}}(f,h)L},$$ (1) $$L_{\textrm{sca}}=e^{k_{\textrm{sca}}(f)L},$$ (2) where $k_{\textrm{abs}}$ and $k_{\textrm{sca}}$ stand for the attenuation coefficients caused by the molecular absorption effect and the scattering effect, respectively. The molecular absorption coefficient $k_{\textrm{abs}}$ is characterized in [6] and is frequency- and altitude-dependent. $f$ stands for the frequency and $h$ is the altitude. $L$ denotes the propagation distance. Water vapor dominates the molecular absorption, being about six orders of magnitude stronger than oxygen and other molecules, and thus we can express $k_{\textrm{abs}}(f,h)$ as $$k_{\textrm{abs}}(f,h)\approx k_{H_{2}O,\textrm{grd}}(f)\alpha_{H_{2}O}(h),$$ (3) where $k_{H_{2}O,\textrm{grd}}(f)$ denotes the terrestrial water vapor absorption coefficient and $\alpha_{H_{2}O}(h)$ represents the ratio of the water vapor density at altitude $h$ to the terrestrial one. The scattering between the EM wave and a certain type of particle can be classified into two cases, namely Rayleigh scattering and Mie scattering, depending on the relationship between the wavelength and the size of the particle. If the wavelength is much larger than the size of the particle, the scattering is Rayleigh scattering; otherwise, it is Mie scattering. At THz frequencies, the wavelength of millimeters or sub-millimeters is smaller than the radius of common particles such as rain, fog, and snow on the order of $10-100~{}\textrm{mm}$ [8], and therefore the scattering in the THz band is mainly Mie scattering. The Mie scattering loss coefficient $k_{\textrm{sca}}$ in dB/km can be represented by $$k_{\textrm{sca}}[\textrm{dB/km}]=4.343\int_{0}^{\infty}\sigma_{\textrm{ext}}(r)N(r)dr,$$ (4) where $r$ denotes the radius of the particle, and $N(r)$ represents the particle size distribution, commonly modelled by an exponential distribution $N(r)=N_{0}\exp(-\rho_{0}r)$.
$\sigma_{\textrm{ext}}(r)$ stands for the extinction cross-section in Mie theory [9], which can be expressed as $$\sigma_{\textrm{ext}}=\frac{2\pi}{\chi^{2}}\sum_{m=1}^{\infty}(2m+1)\textrm{Re}(a_{m}+b_{m})\approx\frac{2\pi}{\chi^{2}}\sum_{m=1}^{x+4x^{1/3}+2}(2m+1)\textrm{Re}(a_{m}+b_{m}),$$ (5) where $\chi=\frac{2\pi}{\lambda}$ is the wave number, and the size parameter is given by $x=\frac{2\pi r}{\lambda}$. $\textrm{Re}(\cdot)$ returns the real part of a complex number. $a_{m}$ and $b_{m}$ represent the Mie scattering coefficients [9]. In summary, the total path loss of the THz LoS channel in a homogeneous medium, including free-space path loss, molecular absorption, and scattering, can be expressed as $$L_{\textrm{tot}}^{\textrm{hom}}[\textrm{dB}]=20\log_{10}\left(\frac{4\pi fL}{c}\right)+0.434(k_{\textrm{abs}}+k_{\textrm{sca}})L,$$ (6) where $c$ stands for the speed of light. II-B Terahertz Line-of-Sight Wave Propagation in Atmospheric Turbulence Atmospheric turbulence caused by airflow leads to random fluctuations of the temperature, pressure, and water vapor density of the propagation medium in time and space. These fluctuations in turn cause random fluctuations of the refractive index of the medium, which randomly distort the EM wave. Characterizing the turbulent flow is critical in analyzing its influence on THz wave propagation. The Navier-Stokes equations, which are fundamental in describing the motion of viscous fluids, are the most straightforward way to model atmospheric turbulence. However, due to the difficulty of mathematically solving these nonlinear equations, it is impractical to solve them directly to model the effect of turbulence on wave propagation. Moreover, unlike the attenuation and scattering in the homogeneous medium, it is difficult to precisely acquire the refractive index along the propagation path in the turbulent flow at any time or position. Therefore, only statistical models are feasible for THz wave propagation analysis in a turbulent flow. As the most fundamental statistical theory, Kolmogorov’s theory is widely used to model the turbulence [10], where turbulence is regarded as a collection of small unstable air masses called eddies. Each eddy is assumed to be statistically isotropic and homogeneous. The size of such eddies ranges from an inner turbulent scale $l_{0}$ to an outer turbulent scale $L_{0}$. Furthermore, according to dimensional analysis and the law of conservation of energy, the temperature, velocity, and refractive index of two points $i$ and $j$ all satisfy the “2/3” law; for the temperature and refractive-index structure functions, $D_{TT}(L_{ij})=\langle(T_{i}-T_{j})^{2}\rangle$ and $D_{nn}(L_{ij})=\langle(n_{i}-n_{j})^{2}\rangle$, it reads $$D_{TT}(L_{ij})=\begin{cases}C_{T}^{2}L_{ij}^{2/3},&l_{0}<L_{ij}<L_{0}\\ C_{T}^{2}l_{0}^{-4/3}L_{ij}^{2},&L_{ij}\leq l_{0},\end{cases}$$ (7) $$D_{nn}(L_{ij})=\begin{cases}C_{n}^{2}L_{ij}^{2/3},&l_{0}<L_{ij}<L_{0}\\ C_{n}^{2}l_{0}^{-4/3}L_{ij}^{2},&L_{ij}\leq l_{0},\end{cases}$$ (8) where $\langle\cdot\rangle$ represents the statistical mean, $T_{i}$ and $T_{j}$ represent the temperatures of points $i$ and $j$, $n_{i}$ and $n_{j}$ represent the refractive indices of points $i$ and $j$, respectively, and $L_{ij}$ denotes the distance between $i$ and $j$. $C_{T}^{2}$ and $C_{n}^{2}$ stand for the coefficients in (7) and (8), in $\textrm{K}^{2}\,\textrm{m}^{-2/3}$ and $\textrm{m}^{-2/3}$ respectively, and are known as the temperature structure constant and the refractive index structure constant (RISC).
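Before turning to the RISC model, the homogeneous-medium loss (6) is simple enough to illustrate with a few lines of code. The Python sketch below is not from the paper; the coefficient values passed to `k_abs` and `k_sca` are placeholders standing in for the outputs of (3) and (4).

```python
import math

def total_loss_hom_db(f_hz, L_m, k_abs=1e-3, k_sca=1e-4):
    """Total LoS path loss in a homogeneous medium in dB, Eq. (6).

    f_hz  : carrier frequency in Hz
    L_m   : propagation distance in m
    k_abs : molecular absorption coefficient in 1/m (assumed value, cf. Eq. (3))
    k_sca : Mie scattering coefficient in 1/m (assumed value, cf. Eq. (4))
    """
    c = 3.0e8  # speed of light in m/s
    fspl_db = 20 * math.log10(4 * math.pi * f_hz * L_m / c)  # free-space spreading loss
    medium_db = 0.434 * (k_abs + k_sca) * L_m                # absorption + scattering in dB
    return fspl_db + medium_db

# Example: a 300 GHz link over 1 km with the illustrative coefficients above.
print(total_loss_hom_db(300e9, 1e3))
```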
II-C Modelling of Refractive Index Structure Constant in the Terahertz Band $C_{n}^{2}$ is widely used to characterize the intensity of atmospheric turbulence. Statistically, the RISC is uniform along a horizontal path, and the RISC at different altitudes $C_{n,\textrm{vis}}^{2}$ in the visible light frequency band, with wavelength around $0.5~{}\mu\textrm{m}$, can be modelled according to the Hufnagel-Valley model [11] as $$C_{n,\textrm{vis}}^{2}(h)=0.00594(v/27)^{2}(10^{-5}h)^{10}e^{-\frac{h}{1000}}+2.7\times 10^{-16}e^{-\frac{h}{1500}}+Ae^{-\frac{h}{100}},$$ (9) where $h$ denotes the altitude in m, $v$ represents the average wind velocity in $\textrm{m}/\textrm{s}$, and $A$ is the terrestrial RISC in $\textrm{m}^{-2/3}$. However, the RISC model in the THz band has not been investigated due to the lack of measurement data. Instead, we propose a THz statistical RISC model based on the model in the visible light frequency band. This is reasonable for the following reasons. First, since the environmental temperature does not depend on frequency, the temperature structure constant is also frequency-invariant according to the definition (7). Therefore, the temperature structure constant in the THz band is equal to that in the visible light band, i.e., $C_{T,\textrm{thz}}^{2}=C_{T,\textrm{vis}}^{2}$. Second, given the relationships between the refractive index and temperature in the two frequency bands, $n_{\textrm{thz}}(T)$ and $n_{\textrm{vis}}(T)$, we can express the RISC model in the THz band as $$C_{n,\textrm{thz}}^{2}(h)=C_{n,\textrm{vis}}^{2}(h)\cdot\Big(\frac{\partial n_{\textrm{thz}}(T)}{\partial T}\Big)^{2}\Big/\Big(\frac{\partial n_{\textrm{vis}}(T)}{\partial T}\Big)^{2}.$$ (10) By substituting the refractive index-temperature relationships in the THz and visible light frequency bands, i.e., equations (3) and (4) in [3], into (10), we can transform the RISC model in the visible light frequency band into a RISC model in the THz band. III Scintillation and Attenuation Modelling of Turbulence in the Terahertz Band Based on the characterized turbulence model with the RISC, we investigate two statistical effects of turbulence on THz wave propagation: scintillation and attenuation. The scintillation of turbulence characterizes the random power fluctuation of the received signal, and the attenuation of turbulence represents an additional attenuation besides the free-space path loss, molecular absorption, and scattering. III-A Scintillation of Turbulence in the Terahertz Band Unlike traditional multi-path fading caused by random constructive or destructive interference of multi-path signals, turbulence scintillation is due to the random directional distortion of the LoS signal. Denoting the received signal power by $P_{r}$, we define the scintillation parameter $I$ as the instantaneous ratio of the signal intensity to its statistical average, i.e., $I={P_{r}}/{\langle P_{r}\rangle}$. Previous studies have proposed several statistical models to characterize the turbulence scintillation [12], corresponding to the different strengths of turbulence: the log-normal distribution for weak turbulence, the K distribution for strong turbulence, and the exponential distribution for the saturation regime [12].
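Before classifying the strength of the turbulence, it is worth noting that the altitude profile (9) and the rescaling (10) are easy to evaluate numerically. The sketch below is only an illustration: the default wind speed and terrestrial RISC follow the commonly used HV-5/7 parameterization, and `dn_ratio_sq`, standing for $(\partial n_{\textrm{thz}}/\partial T)^{2}/(\partial n_{\textrm{vis}}/\partial T)^{2}$, is an assumed input that would in practice come from equations (3) and (4) of [3].

```python
import math

def cn2_vis(h_m, v=21.0, A=1.7e-14):
    """Hufnagel-Valley profile of the visible-band RISC, Eq. (9).

    h_m : altitude in m
    v   : average wind speed in m/s (default from the HV-5/7 model)
    A   : terrestrial RISC in m^(-2/3) (default from the HV-5/7 model)
    """
    return (0.00594 * (v / 27.0) ** 2 * (1e-5 * h_m) ** 10 * math.exp(-h_m / 1000.0)
            + 2.7e-16 * math.exp(-h_m / 1500.0)
            + A * math.exp(-h_m / 100.0))

def cn2_thz(h_m, dn_ratio_sq, **kwargs):
    """THz-band RISC via Eq. (10); dn_ratio_sq is the squared ratio of the
    refractive-index temperature derivatives (an assumed input here)."""
    return cn2_vis(h_m, **kwargs) * dn_ratio_sq

# Illustrative altitude sweep of the visible-band profile.
for h in (0.0, 100.0, 1000.0, 5000.0):
    print(h, cn2_vis(h))
```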
As the criterion for classification, the strength of turbulence can be distinguished according to the Rytov variance, given by $$\sigma_{R}^{2}=0.5C_{n}^{2}k^{7/6}L^{11/6},$$ (11) where $k=\frac{2\pi}{\lambda}$ denotes the wave number. The three conditions $\sigma_{R}^{2}\ll 1$, $\sigma_{R}^{2}\sim 1$, and $\sigma_{R}^{2}\gg 1$ correspond to the cases of weak turbulence, strong turbulence, and the saturated regime, respectively. In extreme cases, e.g., when the propagation distance is $100~{}\textrm{km}$ and the RISC is $C_{n}^{2}=10^{-11}~{}\textrm{m}^{-2/3}$, the Rytov variance at $300~{}\textrm{GHz}$ is as large as $\sigma_{R}^{2}=396$. Therefore, the strength of turbulence can cover all three conditions in the THz band, and thus it is necessary to use a universal distribution as the THz turbulence scintillation model. A universal model applicable under weak turbulence, strong turbulence, or saturated regime conditions is the Gamma-Gamma distribution [13]. Applying Kolmogorov’s theory, the Gamma-Gamma model assumes that the fluctuation of turbulence is caused by large-scale eddies and small-scale eddies, i.e., $I=I_{a}I_{b}$, and the two terms are governed by independent Gamma distributions, given by $$p_{I_{a}}(I_{a})=\frac{\alpha(\alpha I_{a})^{\alpha-1}}{\Gamma(\alpha)}\exp(-\alpha I_{a}),\quad I_{a}>0,\ \alpha>0,$$ (12) $$p_{I_{b}}(I_{b})=\frac{\beta(\beta I_{b})^{\beta-1}}{\Gamma(\beta)}\exp(-\beta I_{b}),\quad I_{b}>0,\ \beta>0,$$ (13) where $\alpha$ represents the effective number of large-scale cells, and $\beta$ represents the effective number of small-scale ones. Since $I=I_{a}I_{b}$ and $I_{a}$ and $I_{b}$ are independent, the scintillation parameter $I$ follows a Gamma-Gamma distribution, which is expressed by $$p(I)=\frac{2(\alpha\beta)^{(\alpha+\beta)/2}}{\Gamma(\alpha)\Gamma(\beta)}I^{(\alpha+\beta)/2-1}K_{\alpha-\beta}\big(2\sqrt{\alpha\beta I}\big),$$ (14) where $\Gamma(\cdot)$ denotes the Gamma function and $K_{p}(\cdot)$ is the modified Bessel function of the second kind of $p^{\textrm{th}}$ order. The large-scale and small-scale scintillation parameters $\alpha$ and $\beta$ can be modelled according to Andrews’ method [13] and are expressed as $$\alpha=\left[\exp\left(\frac{0.49\sigma_{R}^{2}}{(1+0.18D^{2}+0.56\sigma_{R}^{12/5})^{7/6}}\right)-1\right]^{-1},$$ (15) $$\beta=\left[\exp\left(\frac{0.51\sigma_{R}^{2}(1+0.69D^{2}\sigma_{R}^{12/5})^{-5/6}}{(1+0.9D^{2}+0.62\sigma_{R}^{12/5})^{7/6}}\right)-1\right]^{-1},$$ (16) where $D=\sqrt{kl_{ra}^{2}/(4L)}$ and $l_{ra}$ represents the diameter of the receiving antenna aperture. Given that the effective area of the receiving antenna is $A_{\textrm{eff}}=\lambda^{2}/(4\pi)$, we take $l_{ra}=\lambda/\pi$. The four statistical models for turbulence scintillation are summarized in Table I. The relationships between the Gamma-Gamma distribution and the log-normal, K, and exponential distributions are elaborated as follows. • When $\sigma_{R}^{2}<1$, i.e., in the case of weak turbulence, we have $\alpha\gg 1$ and $\beta\gg 1$, and the Gamma-Gamma distribution is approximately a log-normal distribution. • When $\sigma_{R}^{2}>1$, i.e., in the case of strong turbulence, we have $\beta\approx 1$, and the Gamma-Gamma distribution reduces to a K distribution.
• When $\sigma_{R}^{2}\to\infty$, i.e., in the saturated regime, we have $\alpha\gg 1$ and $\beta\approx 1$, and the Gamma-Gamma distribution approximately follows an exponential distribution. III-B Attenuation Effect of Turbulence in the Terahertz Band Similar to the scintillation model of turbulence, the attenuation model lacks deterministic and closed-form solutions due to the complexity and difficulty of solving the Navier-Stokes equations. An empirical formula developed by Larry C. Andrews for the turbulent attenuation $L_{\textrm{tur}}$ can be expressed as $$L_{\textrm{tur}}=10\log\left|1-\sqrt{\sigma_{I}^{2}}\right|,$$ (17) where $L_{\textrm{tur}}$ denotes the attenuation caused by turbulence and $\sigma_{I}^{2}\triangleq\langle I^{2}\rangle-1$ denotes the variance (scintillation index) of $I$, since $\langle I\rangle=1$ by definition. Given the Gamma-Gamma distribution of the scintillation parameter $I$, we have $$\sigma_{I}^{2}=\langle I_{a}^{2}\rangle\langle I_{b}^{2}\rangle-1=\frac{1}{\alpha}+\frac{1}{\beta}+\frac{1}{\alpha\beta}.$$ (18) By substituting (15) and (16) into (18), the attenuation caused by turbulence in the THz band can be expressed as in (19) at the bottom of the next page. IV Numerical Results In this section, we perform numerical evaluations of the effect of atmospheric turbulence on THz UAV wireless communications. Specifically, the altitude dependence of the RISC, as a critical parameter characterizing the turbulence, is first investigated. Then, the Rytov variance, as an intermediate quantity determining the strength of turbulence at different frequencies and distances, is evaluated. Furthermore, the THz scintillation and attenuation caused by the turbulence are analyzed. IV-A Refractive Index Structure Constant The RISC at different altitudes, modelled by (10), is shown in Fig. 2 for varying terrestrial RISC $A$ and average wind speed $v$. We observe that the RISC shows a decreasing trend as the altitude increases. In the low-altitude region ($h<1~{}\textrm{km}$), the terrestrial RISC $A$ primarily governs the RISC value due to the continuity of the RISC, while in the high-altitude region ($h>1~{}\textrm{km}$), the wind speed dominates the trend of the RISC. IV-B Rytov Variance The Rytov variance determines the strength of the attenuation effect of atmospheric turbulence on THz wave propagation. The Rytov variance at different frequencies in the THz band is shown in Fig. 3 for varying propagation distance $L$ and RISC $C_{n}^{2}$; in the former case the RISC is $C_{n}^{2}=10^{-11}~{}\textrm{m}^{-2/3}$, and in the latter case the propagation distance is $L=10~{}\textrm{km}$. We approximately delimit the ranges of weak turbulence, strong turbulence, and the saturated regime as $\sigma_{R}^{2}<0.1$, $0.1\leq\sigma_{R}^{2}\leq 10$, and $\sigma_{R}^{2}>10$, respectively. IV-C Scintillation Caused by Atmospheric Turbulence The probability density function (PDF) of the Gamma-Gamma-distributed turbulence scintillation is shown in Fig. 4. We plot the scintillation PDF for Rytov variances of 0.1, 1, and 10, which correspond to weak turbulence, strong turbulence, and the saturated regime, respectively. As we can observe, the PDFs for the three cases approximately follow the log-normal, K, and exponential distributions, as analyzed in Sec. III-A. For weak turbulence where the Rytov variance is 0.1, we have $\alpha=20.76$ and $\beta=19.75$.
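These values follow directly from (11) and (15)-(18). The Python sketch below is an illustration rather than the authors' code: it evaluates $\alpha$, $\beta$, the scintillation index and the turbulence attenuation for a given Rytov variance; the function names are placeholders.

```python
import math

def alpha_beta(sigma_R2, D=0.0):
    """Effective numbers of large- and small-scale cells, Eqs. (15)-(16).
    D defaults to 0 since the receiving aperture l_ra = lambda/pi is tiny at THz."""
    s65 = sigma_R2 ** 1.2  # sigma_R^{12/5} = (sigma_R^2)^{6/5}
    a = 1.0 / (math.exp(0.49 * sigma_R2 / (1 + 0.18 * D**2 + 0.56 * s65) ** (7 / 6)) - 1)
    b = 1.0 / (math.exp(0.51 * sigma_R2 * (1 + 0.69 * D**2 * s65) ** (-5 / 6)
                        / (1 + 0.9 * D**2 + 0.62 * s65) ** (7 / 6)) - 1)
    return a, b

def scintillation_index(alpha, beta):
    """Variance of I for the Gamma-Gamma model, Eq. (18)."""
    return 1 / alpha + 1 / beta + 1 / (alpha * beta)

def turbulence_attenuation_db(sigma_R2, D=0.0):
    """Empirical turbulence attenuation, Eq. (17); its magnitude is the loss in dB."""
    a, b = alpha_beta(sigma_R2, D)
    return 10 * math.log10(abs(1 - math.sqrt(scintillation_index(a, b))))

def rytov_variance(cn2, f_hz, L_m):
    """Rytov variance, Eq. (11), with wave number k = 2*pi*f/c."""
    k = 2 * math.pi * f_hz / 3.0e8
    return 0.5 * cn2 * k ** (7 / 6) * L_m ** (11 / 6)

print(alpha_beta(0.1))                 # small-D values close to those quoted in the text
print(turbulence_attenuation_db(1.0))  # strong-turbulence example
```

For $\sigma_{R}^{2}=0.1$ and $D\approx 0$ the sketch gives $\alpha\approx 20.8$ and $\beta\approx 20.0$, in line with the values quoted above.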
This indicates that both the numbers of effective large-scale and small-scale cells are large, and the log-normal distribution is reasonable due to the law of large number. When $\sigma_{R}^{2}=1$, the turbulence is strong and we have $\alpha=2.95$ and $\beta=2.46$. For the saturated regime where $\sigma_{R}^{2}=10$, we have $\alpha=2.48$ and $\beta=0.98$. IV-D Attenuation Caused by Atmospheric Turbulence The attenuation caused by turbulence versus frequency with varying propagation distance $L$ and RISC $C_{n}^{2}$ is shown in Fig. 5. In Fig. 5, the RISC is taken as $10^{-13}~{}\textrm{m}^{-2/3}$, and in Fig. 5, we use $L=1~{}\textrm{km}$. As the propagation distance and the RISC increase, the strength of the turbulence increases. Specifically, the turbulence attenuation at $1~{}\textrm{km}$ and $C_{n}^{2}=10^{-13}~{}\textrm{m}^{-2/3}$ is about $1~{}\textrm{dB}$. However, we observe that the increased turbulence strength does not necessarily lead to an increased turbulence attenuation. When the turbulence is weak, i.e., at a short propagation distance or low RISC, the attenuation increases with frequency, distance, and RISC. As the strength of turbulence changes from weak to strong, the attenuation of turbulence first increases and then decreases. In the THz band with frequency less than $1~{}\textrm{THz}$, the attenuation caused by turbulence within $10~{}\textrm{km}$ is less than $10~{}\textrm{dB}$. V Conclusion In this paper, we have investigated the THz UAV channel model in the inhomogeneous medium, and modelled the effect of atmospheric turbulence on the THz wave propagation. Specifically, the environmental parameter RISC in the THz band characterizing the intensity of turbulence is first analyzed. Then, the scintillation and attenuation characteristics caused by atmospheric turbulence are studied. The PDF of the turbulence scintillation is modelled as a Gamma-Gamma distribution, and the attenuation model based on the Gamma-Gamma scintillation is derived in a closed-form expression. Finally, numerical results demonstrate that the turbulence attenuation at $1~{}\textrm{km}$ and $C_{n}^{2}=10^{-13}~{}\textrm{m}^{-2/3}$ is approximately $1~{}\textrm{dB}$, which increases with the propagation distance, frequency, and RISC under the weak turbulence condition. As the strength of turbulence changes from weak to strong, the attenuation of turbulence first increases and then decreases. In the THz band with frequency less than $1~{}\textrm{THz}$, the attenuation caused by turbulence within $10~{}\textrm{km}$ is less than $10~{}\textrm{dB}$. References [1] C. Han, Y. Wang, Y. Li, Y. Chen, N. A. Abbasi, T. Kürner, and A. F. Molisch, “Terahertz wireless channels: A holistic survey on measurement, modeling, and analysis,” IEEE Commun. Surveys & Tutorials, vol. 24, no. 3, pp. 1670–1707, 2022. [2] Y. Li, N. Li, and C. Han, “Ray-tracing simulation and hybrid channel modeling for low-terahertz uav communications,” in ICC 2021-IEEE International Conference on Commun.   IEEE, 2021, pp. 1–6. [3] J. Ma, L. Moeller, and J. F. Federici, “Experimental comparison of terahertz and infrared signaling in controlled atmospheric turbulence,” Journal of Infrared, Millimeter, and Terahertz Waves, vol. 36, no. 2, pp. 130–143, 2015. [4] R. Temam, Navier-Stokes equations: theory and numerical analysis.   American Mathematical Soc., 2001, vol. 343. [5] M. A. Esmail, “Experimental performance evaluation of weak turbulence channel models for fso links,” Opt. Commun., vol. 486, p. 126776, 2021. [6] J. M. Jornet and I. F. 
Akyildiz, “Channel modeling and capacity analysis for electromagnetic wireless nanonetworks in the terahertz band,” IEEE Trans. on Wireless Commun., vol. 10, no. 10, pp. 3211–3221, 2011. [7] ITU-R, “Attenuation by atmospheric gases and related effects,” Recommendation ITU-R P.676-12, Aug. 2019. [8] J. Ma, J. Adelberg, R. Shrestha, L. Moeller, and D. M. Mittleman, “The effect of snow on a terahertz wireless data link,” Journal of Infrared, Millimeter, and Terahertz Waves, vol. 39, no. 6, pp. 505–508, 2018. [9] F. Norouzian, E. Marchetti, M. Gashinova, E. Hoare, C. Constantinou, P. Gardner, and M. Cherniakov, “Rain attenuation at millimeter wave and low-thz frequencies,” IEEE Trans. on Antennas and Propagation, vol. 68, no. 1, pp. 421–431, 2019. [10] R. H. Kraichnan, “Kolmogorov’s hypotheses and eulerian turbulence theory,” The Physics of Fluids, vol. 7, no. 11, pp. 1723–1734, 1964. [11] R. K. Tyson, “Adaptive optics and ground-to-space laser communications,” Applied optics, vol. 35, no. 19, pp. 3640–3646, 1996. [12] L. Dordová and O. Wilfert, “Calculation and comparison of turbulence attenuation by different methods,” Radioengineering, vol. 19, no. 1, pp. 162–167, 2010. [13] A. Al-Habash, L. C. Andrews, and R. L. Phillips, “Mathematical model for the irradiance probability density function of a laser beam propagating through turbulent media,” Optical engineering, vol. 40, no. 8, pp. 1554–1562, 2001.
The first BritGrav meeting, Southampton, 27/28 March 2001 Many relativists now working in Britain have good memories of the Pacific Coast, Midwest or Nickel and Dime meetings from their postdocs in the US. So it seemed natural to establish a similar annual meeting in Britain, and the Southampton GR group gave it a try. The two distinguishing features of the US regional meetings are very short talks, and keeping it simple and cheap. This concept proved a success east of the Atlantic too: 81 people attended, giving 47 10-minute plenary talks over the two days. 12 of the talks were by PhD students, and 8 by postdocs: a proportion we hope to increase in the future! On the two days, talks were only roughly grouped by subject. The distribution of topics differed noticeably from those of recent US regional meetings. Below are the abstracts of all talks in the order in which they were given. (The electronic preprint references have been added by the organizers at the request of xxx admin, and are indicative only.) The BritGrav02 meeting will be organised by Henk van Elst and Reza Tavakol at Queen Mary and Westfield College, London. All enquiries to them: H.van.Elst@qmw.ac.uk and r.tavakol@maths.qmw.ac.uk. 1. Carlos Sopuerta, U Portsmouth, carlos.sopuerta@port.ac.uk Dynamics of irrotational dust matter in the long wavelength approximation We report results on the long wavelength iteration of the general relativistic equations for irrotational dust matter in the covariant fluid approach. In particular, we discuss the dynamics of these models during the approach to any spacelike singularity where a BKL-type evolution is expected, studying the validity of this approximation scheme and the role of the magnetic part of the Weyl tensor. 2. Spyros S Kouris, U York, ssk101@york.ac.uk Large-distance behavior of graviton two-point functions in de Sitter spacetime (gr-qc/0004097) It has been observed that the graviton two-point functions in de Sitter spacetime in various gauges grow as the distance between the two points increases. We show that this behavior is a gauge artifact in a non-covariant gauge. We argue that it is also a gauge artifact in a two-parameter family of covariant gauges. In particular, we show that the two-point function of the linearized Weyl tensor is well-behaved at large distances. 3. Stanislav Babak, U Cardiff, Stanislav.Babak@astro.cf.ac.uk Finite-range gravity and its role in cosmology, black holes and gravitational waves The Field Theoretical approach to gravity provides us with a natural way to modify general relativity. In this paper we have considered a two parameter family of theories of a finite-range gravitational field. To give a proper physical interpretation, we have considered the exact solutions of linearised equations. They describe plane gravitational waves and static spherically symmetric gravitational field. A certain choice of sign of the free parameters allows us to associate these free parameters with the rest masses of longitudinal and transverse gravitons. In the static and spherically symmetric problem we have obtained Yukawa-type gravitational potentials instead of Coulomb-type and the gravitational field becomes finite ranged. Applying the theory of finite-range gravitational field to the homogeneous and isotropic Universe, we have shown that even a very small mass of longitudinal graviton can drastically alter the late-time evolution of the Universe. 
According to the sign of the free parameters, the expansion of the Universe either slows down or gains an additional acceleration. Numerical and semi-analytical solutions of the exact field equations for the static and spherically symmetric problem (Schwarzschild-like solution) were obtained. It has been demonstrated that the event horizon occurs at the location of the physical singularity. That is, a regular event horizon is unstable with respect to ascribing graviton with a non-zero rest mass. 4. Cristiano Germani, U Portsmouth, Cristiano.Germani@port.ac.uk Gravitational collapse in the brane Abstract: We discuss some aspects of gravitational collapse in the brane world scenario, focusing on the 4-D brane and new features arising from the modified Einstein equations. In particular we report results on collapse of a pure Weyl field and the possible formation of a pure Weyl-charged black hole whose metric is formally that of Reissner-Nordstrom, but with no mass and a negative charge term. 5. Jorma Louko, U Nottingham, jorma.louko@maths.nottingham.ac.uk Brane worlds with bolts We construct a brane-world model that has one compact extra dimension on the brane, two extra dimensions in the bulk, and a nonvanishing bulk magnetic field. The main new feature is that the bulk has no horizons that could develop singularities upon the addition of perturbations. The static scalar propagator is calculated on the brane and shown not to see the extra dimensions in the large distance limit. We argue, in part on grounds of an exact nonlinear gravitational wave solution on the brane-world background, that a similar result should hold for linearised gravity. 6. M.L. Fil’chenkov, Peoples’ Friendship University, Moscow, fil@agmar.ru Tunnelling models of creation and collapse The early Universe and late collapse are considered in terms of quantum tunnelling through some potential barrier constructed from Einstein’s equations. Wave functions, energy levels and a penetration factor are calculated for these quantum systems. Applications to creation of a universe in the laboratory, observational cosmology and miniholes are discussed. A possibility of the creation of open and flat models as well as a role of quintessence (de Sitter vacuum, domain walls and strings) in these processes are investigated. 7. Henk van Elst, Queen Mary and Westfield College, henk@gmunu.mth.uct.ac.za Scale-invariant dynamics for Abelian G2 perfect fluid cosmologies A dynamical formulation at a derivative level $\partial^{2}g$ for Abelian G2 perfect fluid cosmologies is introduced that employs scale-invariant autonomous evolution systems of symmetric hyperbolic format. This allows for a transparent isolation of (i) the physical degrees of freedom in both the gravitational and the matter source fields and (ii) the gauge degrees of freedom associated with the time slicing. In addition, the self-similar (asymptotic) states can be determined systematically. Various applications are highlighted. 8. Sonny Khan, U Aberdeen, S.Khan@maths.abdn.ac.uk Projective symmetries in space-times The existence of proper projective vector fields is discussed in Einstein-Maxwell and spherically symmetric static space-times. The problem is resolved for null Einstein-Maxwell space-times (where none can exist) and under certain restrictions in the non-null case (where none have so far been found to exist). Examples of such vector fields are provided in spherically symmetric static space-times, where a general solution (under a loose restriction) is presented. 9. 
Ghulam Shabbir, U Aberdeen, shabbir@maths.abdn.ac.uk Curvature collineations for certain space-time metrics A approach is suggested using the 6x6 form of the curvature tensor to find the complete set of curvature collineations (CCs) in space- times which possess certain types of (metric) symmetry. This approach immediately rules out the possibilities where proper CCs cannot exist and suggests how to find CCs when they do. The space-times considered include those with plane and spherical symmetric, static symmetry. 10. Graham Hall, U Aberdeen, gsh@maths.abdn.ac.uk Orbits of symmetries in space-times The subject of the orbits of symmetries in general relativity is usually discussed in an ad-hoc way. The object of this talk is to try to clarify the position and to offer precise definitions and outline rigorous proofs. The questions to be answered (totally or partially) include (i) exactly what are these symmetries, how are they and their orbits described and in what sense,if any, are they groups? (ii) how do the standard geometrical invariants behave on an orbit? (iii)what are ”fixed points” of symmetries, what happens there and where can they occur? (iv) are there ”well behaved” and ”badly behaved” orbits and if so, how does one distinguish between them and do the well behaved ones behave as in the ”folklore” of the subject? (v) what rules determine the dimension and type of an orbit once the symmetries are specified? 11. Raul Vera, Queen Mary and Westfield College, R.Vera@qmw.ac.uk Matching preserving the symmetry In the literature, the matchings between spacetimes have been most of the times implicitly assumed to preserve the symmetry. But no definition for such a kind of matching was given until very recently. Loosely speaking, the matching hypersurface is restricted to be tangent to the orbits of a desired group of isometries admitted at both sides of the matching and thus admitted by the whole matched spacetime. This restriction can lead to conditions on the properties of the preserved group of isometries, such as its algebraic type and the geometrical xproperties of the vector fields that generate that group. 12. Bill Bonnor, QMW, 100571.2247@compuserve.com Equilibrium of classical spinning particles Using an approximation method I investigate the stationary axisymmetric solution for two spinning mass particles. It contains, as expected, a conical singularity between the particles representing a strut preventing collapse. However, there is a second singularity which seems to represent a torque preserving the spins of the particles. For certain values of the spins no torque is needed. It does not seem possible to explain this solution in terms of classical mechanics. 13. Alan Barnes, Aston University, barnes@aston.ac.uk On some perfect fluid solutions of Stephani Some years ago Stephani derived several solutions for a geodesic perfect fluid flow with constant pressure. In this paper Stephani’s solutions with non-zero rotation are generalised; all are of Petrov type D and the magnetic part of the Weyl tensor vanishes. In general the solutions admit no Killing vectors and the fluid flow is shearing, twisting and expanding. The solutions can all be matched across a time-like hypersurface of constant curvature to a de Sitter or Minkowski spacetime. A generalisation of Stephani’s ansatz is also considered. The general solution in this case has not yet been derived, but some very simple exact solutions for a fluid with spherical symmetry have been obtained. 14. 
Brian Edgar, U Linkoping, bredg@mai.liu.se Tetrads and symmetry Two of the main successful tools in the search for exact solutions of Einstein’s equations are tetrad formalisms and symmetry groups. However, when these two methods are used together there is a lot of redundancy in the calculations, and the two methods do not complement each other. Chinea, Collinson and Held have in the past looked at the possibility of introducing symmetry conditions at tetrad level, and more recently Fayos and Sopuerto have proposed a new approach to integrating tetrads and symmetry. Building on the results of Chinea, Collinson and Held we propose and illustrate a method which analyses easily and efficiently the Killing vector structure of metrics as they are calculated in tetrad formalisms. 15. Fredrik Andersson, U Linkoping frand@mai.liu.se Potentials and superpotentials of symmetric spinor fields In 1988 Illge proved that an arbitrary symmetric (n,0)-spinor field always has an (n-1,1)-spinor potential, which is symmetric over its n-1 unprimed indices. In particular this gives an alternative proof for the existence of a Lanczos potential of the Weyl spinor. Illge also considered the problem of finding completely symmetric spinor potentials for completely symmetric spinor fields having both primed and unprimed indices. Because of algebraic inconsistencies it turned out to be impossible to prove a general existence theorem for these potentials. However, in one important special case it is possible to prove existence of completely symmetric spinor potentials. In an Einstein spacetime it turns out that if we look for potentials for spinors having only one primed index, the algebraic inconsistencies collapse into differential conditions which can be satisfied using gauge freedom. Thus, in Einstein spacetimes, completely symmetric (n,1)-spinor fields always has a completely symmetric (n-1,2)-spinor potential. This means that the Weyl spinor of an Einstein spacetime always has a completely symmetric (2,2)- spinor potential. This ’superpotential’ seems to be related to quasi-local momentum of the Einstein spacetime. 16. Annelies Gerber, Imperial College, annelies.gerber@ic.ac.uk, and Patrick Dolan, Imperial College, pdolan@inctech.com The Lanczos curvature potential problems with applications The Weyl- and Riemann curvature tensors have both been analysed in terms of a tensor potential $L_{abc}$. The Weyl-Lanczos system of PDE’s is always in involution but the Riemann-Lanczos system needs prolongation to be in involution in general. Examples to illustrate these problems are given. 17. Robin Tucker, U Lancaster, r.tucker@lancaster.ac.uk On the detection of scalar field induced spacetime torsion (gr-qc/0104050) It is argued that the geodesic hypothesis based on autoparallels of the Levi-Civita connection may need refinement in the Brans-Dicke theory of gravitation. Based on a reformulation of this theory in terms of a connection with torsion determined dynamically in terms of the gradient of the Brans-Dicke scalar field, we compute the perihelion shift in the orbit of Mercury on the alternative hypothesis that its worldline is an autoparallel of a connection with torsion. If the Brans-Dicke scalar field couples significantly to matter and test particles move on such worldlines, the current time keeping methods based on the conventional geodesic hypothesis may need refinement. 18. 
Julian Barbour, jbarbour@online.rednet.co.uk Relativity without relativity (gr-qc/0012089) I shall give a brief review of the above paper by myself and Brendan Foster and Niall Ó Murchadha. We give a new derivation of general relativity based entirely on three dimensional principles. We start with a parametrisation invariant, Jacobi-type action on superspace. This will be the product of a square root of a potential times the square root of a kinetic energy term. All we demand is that the action have nontrivial solutions. We find that the only viable action is the Baierlein-Sharp-Wheeler Lagrangian and thus we recover G.R. We impose no spacetime conditions whatsoever. We extend this to include scalar and vector fields. We recover causality (everything travels at the same speed), Maxwellian electrodynamics, and the gauge principle. Thus we derive a large part of modern physics from a purely three dimensional point of view. (gr-qc/0012089) 19. Petros Florides, Trinity College Dublin, florides@maths.tcd.ie The Sagnac effect and the special theory of relativity Contrary to the recent claim by Dr A.G. Kelly and Professor J.P. Vigier, it is shown, in two distinct ways, that the Sagnac Effect and Special Relativity are in complete and perfect harmony. 20. John Barrett, U Nottingham, John.Barrett@nottingham.ac.uk Quantum Gravity and the Lorentz group I will give a brief summary of the state of progress of models of 4d quantum gravity based on the representation theory of the Lorentz group, and its future prospects. 21. Christopher Steele, U Nottingham, Christopher.Steele@maths.nottingham.ac.uk Asymptotics of relativistic spin networks I will discuss Relativistic Spin Networks based on the representation theory of the 4 dimensional rotation group. I will present asymptotic formulae for the evaluation of particular networks and provide a geometrical interpretation. 22. Robert Low, U Coventry, mtx014@coventry.ac.uk Timelike foliations and the shape of space What is the shape of space in a space-time? In the familar case of a globally hyperbolic space-time, one natural answer is to consider the topology of a Cauchy surface. However, there are other approaches which one might also consider. One is to consider edgeless spacelike submanifolds of the space-time; another is to foliate the space-time by timelike curves, and consider the quotient space obtained by identifying points lying on the same curve. I will describe conditions on the family of timelike curves, and on a vector field whose integral curves they are for this to give rise to a meaningful shape of space, and briefly discuss the relationship between this approach and that of considering edgeless spacelike submanifolds. 23. Jonathan Wilson, U Southampton, jpw@maths.soton.ac.uk Generalised hyperbolicity in singular space-times (gr-qc/0001079, gr-qc/0101018) A desirable property of any physically plausible space-time is global hyperbolicity. It is shown that a weaker form of hyperbolicity, defined according to whether the scalar wave equation admits a unique solution, is satisfied in certain space-times with weak singularities such as those containing thin cosmic strings or shells of matter. It therefore evident that such weak singularities may be regarded as internal points of space-time. 24. 
Rod Halburd, U Loughborough, R.G.Halburd@lboro.ac.uk Painleve analysis in General Relativity Painleve analysis uses the singularity structure of solutions of a differential equation in the complex domain as an indicator of the integrability (solvability) of the equation. A large class of charged spherically symmetric models will be identified and solved using this method. 25. Magnus Herberthson, U Linköping, maher@mai.liu.se A nice differentiable structure at spacelike infinity (gr-qc/9712058) By a conformal rescaling and compactification of the (asymptotically flat) physical space-time, spacelike infinity is represented by a single point. It is known that the regularity of the manifold at that point cannot be smooth, and various differentiable structures have been suggested. In this talk we report that in the case of a Kerr solution, the standard $C^{>1}$-structure can be extended to include both spacelike and null directions from spacelike infinity. 26. Jonathan Thornburg, U Vienna, jthorn@thp.univie.ac.at Episodic self-similarity in critical gravitational collapse (gr-qc/0012043) I report on a new behavior found in numerical simulations of spherically symmetric gravitational collapse in self-gravitating SU(2) $\sigma$ models at intermediate gravitational coupling constants: The critical solution (between black hole formation and dispersion) closely approximates the continuously self-similar (CSS) solution for a finite time interval, then departs from this, and then returns to CSS again. This cycle repeats several times, each with a different CSS accumulation point. The critical solution is also approximately discretely self-similar (DSS) throughout this whole process. 27. Jose Maria Martin Garcia, U Southampton, jmm@maths.soton.ac.uk Stability of Choptuik spacetime in the presence of charge and angular momentum. We show that Choptuik spacetime is a codimension-1 exact solution of the full Einstein-Maxwell-Klein-Gordon problem. That is, electromagnetic field perturbations, charged scalar perturbations and perturbations with angular momentum all decay. Only the well known spherical neutral perturbation linking Choptuik spacetime with Schwarzschild and Minkowski is unstable. We calculate critical exponents for charge and angular momentum for near critical collapse. 28. Elizabeth Winstanley, U Sheffield, e.winstanley@sheffield.ac.uk Update on stable hairy black holes in AdS Black holes in anti-de Sitter space can support gauge field hair which is stable under spherically symmetric perturbations. This talk discusses recent work showing that these black holes remain stable under non-spherically symmetric perturbations in the odd-parity sector. 29. VS Manko, CINVESTAV - IPN, VladimirS.Manko@fis.cinvestav.mx Equilibrium configurations of aligned black holes The existence of multi-black hole equilibrium configurations in different axisymmetric systems is discussed. 30. Colin Pendred, U Nottingham, Colin.Pendred@maths.nottingham.ac.uk Black hole formation in (2+1)-dimensional relativity The non-spinning BTZ black hole is introduced and it is shown that it can be formed by the collision of two point particles in (2+1)-dimensional spacetimes of negative cosmological constant. The more general, spinning BTZ black hole is then considered. 31.
Atsushi Higuchi, U York, ah28@york.ac.uk Low-energy absorption cross sections of stationary black holes (gr-qc/0011070) We present a special-function free derivation of the fact shown first by Das, Gibbons and Mathur that the low-energy massless scalar absorption cross section of a spherically symmetric black hole is universally given by the horizon area. Our derivation seems to generalize to any stationary black holes. 32. Brien Nolan, Dublin City U, brien.nolan@dcu.ie Stability of naked singularities in self-similar collapse (gr-qc/0010032) We show that spherically symmetric self-similar space-times possessing naked singularities are stable in the class of spherically symmetric self-similar space-times obeying the strong and dominant energy conditions. The discussion is restricted to space-times obeying a ’no pure outgoing radiation’ condition. 33. Richard I Harrison, U Oxford, harrison@maths.ox.ac.uk A numerical study of the Schroedinger-Newton equation I wish to report on a numerical study of the Schroedinger-Newton equations, that is the set of nonlinear partial differential equations, consisting of the Schroedinger equation coupled with the Poisson equation. The nonlinearity arises from using as potential term in the Schroedinger equation the solution of the Poisson equation with source proportional to the probability density. Penrose [1] has suggested that the stationary solutions of the Schroedinger Newton equation might be the ’preferred basis’ of endpoints for the spontaneous reduction of the quantum-mechanical wave-function. I have computed stationary solutions in the spherically symmetric and the axially symmetric cases, and then tested the linear stability of these solutions. All solutions are unstable except for the ground state. In the spherically symmetric case, I have considered the general time evolution which confirms the picture from linear theory and shows that the general evolution leaves a lump of probability in the ground-state, while the rest disperses to infinity. In the z-independent time evolution, initial indications are that lumps of probability orbit around each other before dispersing. [1] R Penrose Phil.Trans.R.Soc.(Lond.)A 356 (1998) 1927 34. Paul Tod, U Oxford, tod@maths.ox.ac.uk Causality and Legendrian-linking A point $p$ in Minkowski space $M$ can be determined by its ’sky’, which is to say the set $S_{p}$ of null-geodesics through it in the space $N$ of all null-geodesics. It was suggested by Penrose, and proved in his thesis by Robert Low [1], that causal relations in $M$ are reflected by linking in $N$. Thus two points $p$ and $q$ are time-like separated in $M$ if their skies are linked in $N$, and space-like separated if their skies are unlinked (if they are null separated then evidently their skies meet). Penrose also suggested that this relationship should continue to hold for curved but, say, globally-hyperbolic space-times $\cal{M}$. This is much harder. It was explored by Low and later by my student José Natario. It seems to be true in $2+1$-dimensions but is not true in $3+1$, where one has explicit counter-examples. Rather than give up, one can change the question: spaces of null geodesics are contact manifolds and skies are Legendrian submanifolds, so one can ask instead are points causally-related iff their skies are Legendrian-linked - that is, can they be unlinked while remaining Legendrian? There are partial answers and various interesting developments here, and I will describe progress on this programme. 
[1] R. J. Low, Twistor linking and causal relations, Class. Quant. Grav. 7 (1990) 177-187 35. Tim Sumner, Imperial College, t.sumner@ic.ac.uk Fundamental physics experiments in space There is a wide interest in the UK in carrying out so-called ’Fundamental Physics’ experiments in space. Earlier this year the space community produced a summary document for the PPARC SSAC. This talk will summarise that document, which contains suggestions for a number of experiments to do with gravity, as this is one area in which the use of space is particularly beneficial. 36. Mike Plissi, U Glasgow, m.plissi@physics.gla.ac.uk The GEO 600 gravitational wave detector A number of interferometer-based gravitational wave detectors are currently being constructed in several countries. The GEO 600 detector, which is being built near Hannover, Germany, is a 600 m baseline instrument that utilises a Michelson interferometric scheme. The instrument will target the frequency band above about 50 Hz. A basic description of the detector will be given with a report of its current status. 37. Oliver Jennrich, U Glasgow, o.jennrich@physics.gla.ac.uk LISA: An ESA Cornerstone mission to detect low frequency gravitational waves. LISA, a space-borne interferometric gravitational wave detector, has recently been approved as an ESA Cornerstone Mission. LISA makes use of interferometry very similar to the ground-based detectors (LIGO, VIRGO, GEO600, TAMA) but is designed to detect gravitational waves in a much lower frequency band of 0.1 mHz to 100 mHz. The main objective of the LISA mission is to learn about the formation, growth, space density and surroundings of massive black holes, for which there is compelling evidence that they are present in the centers of most galaxies, including our own. Observations of signals from these sources would test General Relativity and particularly black-hole theory to unprecedented accuracy. 38. Mike Cruise, U Birmingham, amc@star.sr.bham.ac.uk Very high frequency gravitational wave detectors A number of theoretical models of the early Universe predict spectra of stochastic gravitational waves rising with frequency. Such spectra satisfy all the known observational upper limits. Detectors are needed in the Megahertz and Gigahertz ranges to detect this radiation. A prototype of one such detector is now in operation and the prospects of it achieving useful sensitivities will be discussed. 39. Edward Porter, U Cardiff, Edward.Porter@astro.cf.ac.uk An improved model of the gravitational wave flux for inspiralling black holes While the orbital energy for an inspiralling binary is known exactly for both the Schwarzschild and Kerr cases, an exact expression for the gravitational wave flux remains elusive. All current analytical models rely on a Post-Newtonian expansion. This has given us a Taylor expansion for the flux in the test-mass case to $v^{11}$ for the Schwarzschild case, and to $v^{8}$ for the Kerr case. The problem with the Taylor approximation is the slow rate of convergence at various approximations. It has been shown that using Pade Approximation gives a better convergence for the flux in the Schwarzschild case. In this work I propose a method for improving the convergence of the flux in the Schwarzschild case by using a modified Pade Approximation and extend the previous work to the Kerr case. The most interesting result from the Kerr case is that we may be able to closely model a Kerr system with some real value of the spin parameter ’a’ with a Pade Approximation using a ’wrong’ value of ’a’.
We also provide scaling laws at various orders of approximation to recover the true spin of the system from the Pade Approximation. 40. Anna Watts, U Southampton Neutron stars as a source of gravitational waves We examine the evolution of the r-mode instability for a magnetized neutron star accreting large amounts of remnant matter in the immediate aftermath of the supernova. We discuss the implications for the neutron star spin rate and gravitational wave signal. 41. John Miller, SISSA / U Oxford, miller@sissa.it Non-stationary accretion onto black holes An update on our project using computer simulations to investigate different pictures for non-stationary accretion onto black holes. This has relevance for explaining the observed time-varying behaviour of galactic X-ray sources and AGN and, possibly, the formation of jets. 42. Uli Sperhake, U Southampton, us@maths.soton.ac.uk A new numerical approach to non-linear oscillations of neutron stars Radial oscillations of neutron stars are studied by decomposing the fundamental variables into a background contribution (taken to be the static TOV solution) and time-dependent perturbations. The perturbations are not truncated at some finite order, but are evolved according to the fully nonlinear evolution equations, which can be written in quasi-linear form in our case. The separation of the background allows us to study oscillations over a wide range of amplitudes with high accuracy. We monitor the onset of nonlinear effects as the amplitude is gradually increased. Problems encountered at the surface in any Eulerian formulation, i.e. the singular behaviour of the equations in both the nonlinear and the linearised case, and their impact on our results, are briefly discussed. 43. Philippos Papadopoulos, U Portsmouth, philippos.papadopoulos@port.ac.uk Non-linear black hole oscillations (gr-qc/0104024) The dynamics of isolated black hole spacetimes is explored in the non-linear regime using numerical simulations. The geometric setup is based on ingoing light cone foliations centered on the black hole. The main features of the framework and the current status of the computations will be presented. 44. Felipe Mena, Queen Mary and Westfield College, F.Mena@qmw.ac.uk Cosmic no hair: second order perturbations of de Sitter universe We study the asymptotic behaviour of second order perturbations in a flat Friedmann-Robertson-Walker universe with dust plus a cosmological constant, a model which is asymptotically de Sitter. We find that, as in the case of linear perturbations, the nonlinear perturbations also tend to constants asymptotically in time. This shows that the earlier results concerning the asymptotic behaviour of linear perturbations are stable to nonlinear (second order) perturbations. It also demonstrates the validity of the cosmic no-hair conjecture in such nonlinear inhomogeneous settings. 45. Kostas Glampedakis, U Cardiff, Costas.Glampedakis@astro.cf.ac.uk Scattering of scalar waves by rotating black holes (gr-qc/0102100) We study the scattering of massless scalar waves by a Kerr black hole, by letting plane monochromatic waves impinge on the black hole. We calculate the relevant scattering phase-shifts using the Prüfer phase-function method, which is computationally efficient and reliable also for high frequencies and/or large values of the angular multipole indices (l,m). We use the obtained phase-shifts and the partial-wave approach to determine differential cross sections and deflection functions.
Results for off-axis scattering (waves incident along directions misaligned with the black hole’s rotation axis) are obtained for the first time. Inspection of the off-axis deflection functions reveals the same scattering phenomena as in Schwarzschild scattering. In particular, the cross sections are dominated by the glory effect and the forward (Coulomb) divergence due to the long-range nature of the gravitational field. In the rotating case the overall diffraction pattern is “frame-dragged” and as a result the glory maximum is not observed in the exact backward direction. We discuss the physical reason for this behaviour, and explain it in terms of the distinction between prograde and retrograde motion in the Kerr gravitational field. Finally, we also discuss the possible influence of the so-called superradiance effect on the scattered waves. 46. Reinhard Prix, U Southampton, rp@maths.soton.ac.uk Covariant multi-constituent hydrodynamics (gr-qc/0004076) I will discuss the covariant formulation of hydrodynamics derived from a “convective” variational principle by Carter. This approach allows a convenient generalisation to several interacting fluids (incorporating the effect known as “entrainment”) and to superfluids. Such a framework is therefore very well suited to neutron star applications, some of which I will briefly describe here. 47. Ian Jones, U Southampton, dij@maths.soton.ac.uk Gravitational waves from freely precessing neutron stars (gr-qc/0008021) The free precession of neutron stars has long been cited as a possible source of detectable gravitational radiation. In this talk we will examine the problem of calculating the gravitational radiation reaction on a star, which we model as an elastic shell containing a fluid core. We will conclude by assessing the likely gravitational wave amplitudes of precessing neutron stars in our Galaxy. The following talks were also scheduled, but the speakers had to cancel: 48. Bernard S. Kay, U York, bsk2@york.ac.uk New paradigm for decoherence and for thermodynamics, new understanding of quantum black holes (hep-th/9802172, hep-th/9810077) I outline my recently proposed new explanation for decoherence and for entropy increase, based on my new postulate that the “quantum gravitational field is unobservable” and on my related new postulate that “physical entropy is matter-gravity entanglement entropy”. I also recall how this proposal offers a resolution to a number of black-hole puzzles, including the “information loss puzzle”. 49. Alberto Vecchio, U Birmingham, vecchio@aei-potsdam.mpg.de Searching for binary systems undergoing precession with GEO and LIGO (gr-qc/0011085) The search for binary systems containing rapidly spinning black holes poses a tremendous computational challenge for the data analysis of gravitational wave experiments. We present a short review of our present understanding of the key issues and discuss possible strategies to tackle the problem efficiently.
Gap statistics and higher correlations for geometric progressions modulo one Christoph Aistleitner, Simon Baker, Niclas Technau, and Nadav Yesha Christoph Aistleitner: Institute of Analysis and Number Theory, TU Graz Steyrergasse 30, 8010 Graz Austria aistleitner@math.tugraz.at Simon Baker: School of Mathematics, University of Birmingham Birmingham, B15 2TT UK simonbaker412@gmail.com Niclas Technau: School of Mathematical Sciences, Tel Aviv University Tel Aviv 69978 Israel; Department of Mathematics University of Wisconsin–Madison 480 Lincoln Dr, Madison WI-53706 USA niclast@mail.tau.ac.il; technau@wisc.edu Nadav Yesha: Department of Mathematics, University of Haifa Haifa 3498838 Israel nyesha@univ.haifa.ac.il (Date:: January 10, 2021) Abstract. Koksma’s equidistribution theorem from 1935 states that for Lebesgue almost every $\alpha>1$, the fractional parts of the geometric progression $(\alpha^{n})_{n\geq 1}$ are equidistributed modulo one. In the present paper we sharpen this result by showing that for almost every $\alpha>1$, the correlations of all finite orders and hence the normalized gaps of $(\alpha^{n})_{n\geq 1}$ mod 1 have a Poissonian limit distribution, thereby resolving a conjecture of the two first named authors. While an earlier approach used probabilistic methods in the form of martingale approximation, our reasoning in the present paper is of an analytic nature and based upon the estimation of oscillatory integrals. This method is robust enough to allow us to extend our results to a natural class of sub-lacunary sequences. 2010 Mathematics Subject Classification: 11K99, 60G55 CA is supported by the Austrian Science Fund (FWF), projects F-5512, I-3466, I-4945 and Y-901. NT received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No. 786758), and Austrian Science Fund (FWF) from project J 4464-N. NY is supported by the ISRAEL SCIENCE FOUNDATION (grant No. 1881/20). 1. Introduction A sequence $(\vartheta_{n})_{n\geq 1}\subseteq\left[0,1\right)$ is called uniformly distributed (or equidistributed) if each test interval $I\subseteq\left[0,1\right)$ contains asymptotically its “fair share” of points, that is, $(\vartheta_{n})_{n\geq 1}$ is equidistributed when $$\frac{\#\left\{n\leq N:\vartheta_{n}\in I\right\}}{N}\underset{N\rightarrow% \infty}{\longrightarrow}\lambda\left(I\right)$$ for all intervals $I\subseteq\left[0,1\right)$, where $\lambda$ denotes the Lebesgue measure. A sequence $(\vartheta_{n})_{n\geq 1}$ of numbers in $\mathbb{R}$ is called uniformly distributed modulo one if the sequence of fractional parts $(\{\vartheta_{n}\})_{n\geq 1}$ is uniformly distributed in $[0,1)$. The classical theory of uniform distribution modulo one dates back to the early twentieth century, when Weyl [25] laid its foundations in his famous paper of 1916. One of the basic results in the area is Koksma’s equidistribution theorem [14], which states that for $\lambda$-almost every $\alpha>1$, the sequence corresponding to the geometric progression $(\alpha^{n})_{n\geq 1}$ is uniformly distributed modulo one. Such sequences with a “typical” value of $\alpha$ have been famously proposed by Knuth in his monograph The art of computer programming [13] as examples of sequences showing strong pseudorandomness properties. 
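The following short Python sketch (ours, not part of the paper; the value of $\alpha$, the number of terms and the test intervals are arbitrary illustrative choices, and a terminating decimal stands in for a "typical" $\alpha$) checks Koksma-type equidistribution numerically: it computes the fractional parts $\{\alpha^{n}\}$ with enough working precision that the large integer part of $\alpha^{n}$ does not swamp the fractional part, and compares the empirical share of points in a few intervals with their Lebesgue measure.

```python
from decimal import Decimal, getcontext

# Numerical illustration of equidistribution of {alpha^n} mod 1.
# alpha, N and the test intervals are arbitrary choices for this sketch;
# a terminating decimal plays the role of a "typical" alpha > 1.
getcontext().prec = 2000          # alpha**N has ~N*log10(alpha) digits; keep enough precision
alpha = Decimal("1.9123847650193")
N = 1500

fracs, power = [], Decimal(1)
for _ in range(N):
    power *= alpha                    # exact up to the working precision
    fracs.append(float(power % 1))    # fractional part {alpha^n}

for a, b in [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]:
    share = sum(a <= x < b for x in fracs) / N
    print(f"[{a}, {b}): empirical share {share:.3f} vs Lebesgue measure {b - a:.3f}")
```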
Koksma’s equidistribution theorem has been extended to so-called complete uniform distribution by Niederreiter and Tichy [17], and quantitative equidistribution estimates were obtained in [1]. A version of Koksma’s equidistribution theorem for self-similar measures was proved in [4]. Describing the behaviour of $(\alpha^{n})_{n\geq 1}$ for specific values of $\alpha$ is a challenging problem. A well known and open problem due to Mahler asks for the range of $(\{\xi(3/2)^{n}\})_{n\geq 1}$, where $\xi>0$ is a real parameter. For more on this topic, and the study of the sequence $(\alpha^{n})_{n\geq 1}$ modulo one, we refer the reader to [5, 6, 7, 10] and the references therein. While the classical notion of equidistribution modulo one addresses the “large-scale” behaviour of the fractional parts of a sequence (counting the number of points in fixed intervals), the study of the fine-scale statistics of sequences modulo one, i.e. statistics on the scale of the mean gap $1/N$, has attracted growing attention in recent years. Among the most popular fine-scale statistics are the $k$-point correlations and the nearest-neighbour gap distribution, which are defined as follows. Let $\vartheta=(\vartheta_{n})_{n\geq 1}\subseteq\mathbb{R}$ be a sequence, and let $k\geq 2$ be an integer. Let $\mathcal{B}_{k}=\mathcal{B}_{k}(N)$ denote the set of integer $k$-tuples $(x_{1},\dots,x_{k})$ such that all components are in the range $\{1,\dots,N\}$ and no two components are equal. For a compactly supported function $f:\mathbb{R}^{k-1}\to\mathbb{R}$, the $k$-point correlation sum $R_{k}\left(f,\vartheta,N\right)$ is defined to be (1.1) $$R_{k}\left(f,\vartheta,N\right)\overset{\mathrm{def}}{=}\frac{1}{N}\sum_{% \mathbf{x}\in\mathcal{B}_{k}}\sum_{\mathbf{m}\in\mathbb{Z}^{k-1}}f\left(N\left% (\Delta\left(\mathbf{x},\vartheta\right)-\mathbf{m}\right)\right)$$ where $\Delta\left(\mathbf{x},\vartheta\right)$ denotes the difference vector (1.2) $$\Delta\left(\mathbf{x},\vartheta\right)=\left(\vartheta_{x_{1}}-\vartheta_{x_{% 2}},\vartheta_{x_{2}}-\vartheta_{x_{3}},\ldots,\vartheta_{x_{k-1}}-\vartheta_{% x_{k}}\right)\in\mathbb{R}^{k-1}.$$ Let $C_{c}^{\infty}(\mathbb{R}^{k-1})$ denote the space of real-valued, smooth, compactly supported functions on $\mathbb{R}^{k-1}$. If $$\lim\limits_{N\to\infty}R_{k}\left(f,\vartheta,N\right)=\int_{\mathbb{R}^{k-1}% }f(\mathbf{x})\leavevmode\nobreak\ \textup{d}\mathbf{x}$$ for all $f\in C_{c}^{\infty}(\mathbb{R}^{k-1})$ (equivalently, if $R_{k}\left(1_{\Pi},\vartheta,N\right)\to\textup{vol}(\Pi)$ as $N\to\infty$ for all axis-parallel boxes $\Pi$, where $1_{\Pi}$ is the indicator function of $\Pi$), then we say that the $k$-point correlation of $(\{\vartheta_{n}\})_{n\geq 1}$ is “Poissonian”. This notion alludes to the fact that such behaviour is in accordance with the (almost sure) behaviour of a Poisson process with intensity one. To define the distribution of the so-called level spacings or nearest-neighbour gaps, i.e. gaps between consecutive elements of $(\{\vartheta_{n}\})_{n\geq 1}$, we need to consider the reordered elements $$\vartheta_{(1)}^{N}\leq\vartheta_{(2)}^{N}\leq\dots\leq\vartheta_{(N)}^{N}\leq% \vartheta_{(N+1)}^{N},$$ which we obtain as a reordering of $\{\vartheta_{1}\},\dots,\{\vartheta_{N+1}\}$. Assume that the limit $N\to\infty$ of the function $$G(s,\vartheta,N)=\frac{1}{N}\#\left\{n\leq N:\leavevmode\nobreak\ N\left(% \vartheta_{(n+1)}^{N}-\vartheta_{(n)}^{N}\right)\leq s\right\}$$ exists for all $s\geq 0$. 
Then the limit function $G(s)$ is called the asymptotic distribution function of the level spacings (or, alternatively, of the nearest-neighbour gaps) of $(\{\vartheta_{n}\})_{n\geq 1}$. We say that the level spacings are Poissonian when $G(s)=1-e^{-s}$, which is in agreement with the well-known fact that the waiting times in the Poisson process are exponentially distributed. The $k$-point correlation of order $k=2,3,\dots$, is also called the pair correlation, triple correlation, etc. Poissonian behaviour of these local statistics can be seen as a pseudorandomness property, since a sequence $X_{1},X_{2},\dots$ of independent, identically distributed random variables with uniform distribution on $[0,1)$ will almost surely have Poissonian correlations/gap distributions. Note that equidistribution is also traditionally seen as a pseudorandomness property, albeit on a “global” rather than on a “local” level. Recently, the first two authors of the present paper proved that $(\{\alpha^{n}\})_{n\geq 1}$ has Poissonian pair correlation for almost all $\alpha>1$; see [2]. This is a refinement of Koksma’s equidistribution theorem mentioned earlier, since it is known that a sequence with Poissonian pair correlations is necessarily equidistributed [3, 11, 16]. In [2] it was conjectured that for almost all $\alpha>1$, the $k$-point correlation of $(\{\alpha^{n}\})_{n\geq 1}$ should also be Poissonian for all $k\geq 2$, and that as a consequence the level spacings of $(\{\alpha^{n}\})_{n\geq 1}$ are Poissonian as well. The main purpose of the present paper is to prove this conjecture. Theorem 1.1. For almost every $\alpha>1$, the $k$-point correlation of $(\{\alpha^{n}\})_{n\geq 1}$ is Poissonian for all $k\geq 2$. It is known that if the $k$-point correlation is Poissonian for all $k\geq 2$, then the level spacings are also Poissonian (see Appendix A of [15]). Thus as a direct consequence of Theorem 1.1 we obtain that for almost all $\alpha>1$, the level spacings of $(\{\alpha^{n}\})_{n\geq 1}$ are Poissonian. The same principle applies to other ordered statistics, such as the second-to-nearest neighbour gaps etc.; whenever all $k$-point correlations are Poissonian, then these ordered statistics also behave in accordance with the Poissonian model. We deduce Theorem 1.1 from a more general result which, due to the robustness of our method, comes at essentially no extra cost. This more general result is the following. Theorem 1.2. Let $\left(a_{n}\right)_{n\geq 1}$ be an increasing sequence of positive real numbers such that (1.3) $$\lim_{n\rightarrow\infty}\frac{a_{n}}{\log n}=\infty,$$ and such that (1.4) $$a_{n+1}-a_{n}\geq n^{-C}$$ for some $C>0$ for all sufficiently large $n$. Then for almost every $\alpha>0$, the $k$-point correlation of the sequence $(\{e^{\alpha a_{n}}\})_{n\geq 1}$ is Poissonian for all $k\geq 2$. Theorem 1.1 follows upon letting $a_{n}=n$ in Theorem 1.2, and observing that the map $\alpha\mapsto e^{\alpha}$ from $(0,\infty)$ to $(1,\infty)$ preserves measure zero sets. We remark that in Theorem 1.2, the sequence $(e^{\alpha a_{n}})_{n\geq 1}$ can very well be a sequence of sub-exponential growth, that is $$\lim_{n\rightarrow\infty}\frac{e^{\alpha a_{n+1}}}{e^{\alpha a_{n}}}=1,$$ and still have a gap statistic following the Poissonian model for almost all $\alpha>0$, e.g. by taking $a_{n}=\sqrt{n}$, $a_{n}=(\log(n+1))^{2}$, $a_{n}=(\log\log(n+2))\log(n+1)$ etc. To put our results into perspective, we mention some earlier related results. 
Fundamental work on the correlations of sequences in the unit interval was carried out by Rudnick, Sarnak and Zaharescu; see for example [18, 19, 20]. As a general principle, proving Poissonian behaviour of the $k$-point correlation of a sequence becomes increasingly difficult when $k$ becomes large. For example, it is known [18] that $(\{n^{2}\alpha\})_{n\geq 1}$ has Poissonian pair correlation for almost all $\alpha$; the same is conjectured to be true for the triple correlation (and probably all higher correlations), but only partial results exist in this direction [22]. Rudnick and Zaharescu proved [21] that for a lacunary sequence of integers $(a_{n})_{n\geq 1}$, i.e. a sequence satisfying $\liminf\limits_{n\to\infty}\frac{a_{n+1}}{a_{n}}>1$, for almost all $\alpha$, the $k$-point correlation of $(\{a_{n}\alpha\})_{n\geq 1}$ is Poissonian for all $k\geq 2$. Recently, the last two authors of the present paper proved [23] that for every $k\geq 2$, the $k$-point correlation of $(\{n^{\alpha}\})_{n\geq 1}$ is Poissonian for almost all $\alpha>4k^{2}-4k-1$; in the notation of Theorem 1.2 this corresponds to $a_{n}=\log n$, so that assumption (1.3) fails to hold. Results of a non-metric nature are particularly sparse. A marked exception is the sequence $(\{\sqrt{n}\})_{n\geq 1}$, for which the level spacings distribution is not Poissonian, as was shown by Elkies and McMullen using methods from ergodic theory [9]; somewhat surprisingly, the pair correlation is, in fact, Poissonian [8]. It is conjectured that the sequence $(\{n^{2}\alpha\})_{n\geq 1}$ has Poissonian pair correlation for every $\alpha$ which cannot be too well approximated by rational numbers, but again only partial results exist [12, 24]. 2. Outline of the argument For the remaining part of this manuscript, we will only be dealing with the sequence $\vartheta\left(\alpha\right)=\left(e^{\alpha a_{n}}\right)_{n\geq 1}$; we shall simply write $\Delta\left(\mathbf{x},\vartheta\right)$ instead of $\Delta\left(\mathbf{x},\vartheta(\alpha)\right)$, and $R_{k}\left(f,\alpha,N\right)$ instead of $R_{k}\left(f,\vartheta(\alpha),N\right)$. The strategy to prove Theorem 1.2 is much in the spirit of [23] and will now be detailed. To begin with, we restrict our attention to intervals of the special form (2.1) $$\mathcal{J}=\mathcal{J}\left(A\right)\overset{\mathrm{def}}{=}\left[A,A+1% \right],\qquad\left(A>0\right)$$ which will remain fixed throughout the proof. It is certainly enough to demonstrate that for each $k\geq 2$ and $A>0$ the assertion of Theorem 1.2 holds, for almost every $\alpha\in\mathcal{J}$. Note that $\mathcal{J}$, equipped with Borel sets and Lebesgue measure, forms a probability space, so it is natural to speak about expectations, variances, etc., of real-valued functions defined on $\mathcal{J}$. To prove Theorem 1.2, we show via a variance estimate that $R_{k}$ concentrates around its mean value $\int_{-\infty}^{\infty}f\left(\mathbf{x}\right)\>\mathrm{d}\mathbf{x}$ with a reasonable error term; Theorem 1.2 will then follow from a routine argument (see [23, Proposition 7.1]). 
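As a concrete illustration of the correlation sums in (1.1) whose concentration is at stake here, the following Python sketch (ours; $\alpha$, $N$ and the test function $1_{[-s,s]}$ are arbitrary illustrative choices, not taken from the paper) estimates the pair correlation $R_{2}$ for $\vartheta_{n}=\alpha^{n}$ and compares it with the Poissonian value $\int_{-s}^{s}\mathrm{d}x=2s$.

```python
from decimal import Decimal, getcontext
from itertools import permutations

# Empirical pair correlation R_2(1_[-s,s], theta, N) for theta_n = alpha^n,
# compared with the Poissonian prediction 2s.  alpha, N and s are arbitrary
# illustrative choices for this sketch.
getcontext().prec = 800           # enough digits that {alpha^n} is reliable for n <= N
alpha, N, s = Decimal("1.7426381950114"), 800, 1.0

fracs, power = [], Decimal(1)
for _ in range(N):
    power *= alpha
    fracs.append(float(power % 1))

# R_2 counts ordered pairs x != y with N * ||theta_x - theta_y|| <= s,
# where ||.|| is the distance to the nearest integer (normalised by 1/N).
count = 0
for x, y in permutations(range(N), 2):
    d = abs(fracs[x] - fracs[y])
    if N * min(d, 1.0 - d) <= s:
        count += 1
print("empirical R_2:", count / N, "  Poissonian value:", 2 * s)
```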
Using Poisson summation, we can phrase the variance estimate in terms of oscillatory integrals of the form: (2.2) $$I\left(\mathbf{u},\mathbf{t}\right)\overset{\mathrm{def}}{=}\int_{\mathcal{J}}% e\left(\phi\left(\mathbf{u},\mathbf{t},\alpha\right)\right)\,\mathrm{d}\alpha$$ where $e(z)\overset{\mathrm{def}}{=}e^{2\pi iz}$, and the phase function $\phi$ is given by (2.3) $$\phi\left(\mathbf{u},\mathbf{t},\alpha\right)\overset{\mathrm{def}}{=}\sum_{i% \leq 2k}u_{i}e^{\alpha a_{t_{i}}}\qquad\mathbf{u}=\left(u_{1},\ldots,u_{2k}% \right),\quad\mathbf{t}=\left(t_{1},\ldots,t_{2k}\right).$$ There are additional constraints on the integer vectors $\mathbf{u},\mathbf{t}$ which naturally arise from the analysis. More precisely, fixing $\varepsilon>0$, we will have $\mathbf{u}=(\mathbf{v},\mathbf{w})$, $\mathbf{t}=(\mathbf{x},\mathbf{y})$ where $\mathbf{x},\mathbf{y}\in\mathcal{B}_{k}$, and $\mathbf{v},\mathbf{w}\in\mathcal{U}_{k}^{\varepsilon}$, where (2.4) $$\mathcal{U}_{k}^{\varepsilon}=\mathcal{U}_{k}^{\varepsilon}\left(N\right)% \overset{\mathrm{def}}{=}\{{\bf u}=\left(u_{1},\dots,u_{k}\right)\in\mathbb{Z}% ^{k}:\,1\leq\left\|\mathbf{u}\right\|_{\infty}\leq 2N^{1+\varepsilon},\,u_{1}+% \dots+u_{k}=0\}.$$ Our desired variance estimate can, after a simple computation, be phrased as a bound for an average of these integrals. It will then be shown that (2.5) $$V_{k}\left(N,\mathcal{J},\varepsilon\right)\overset{\mathrm{def}}{=}\frac{1}{N% ^{2k}}\sum_{\mathbf{z}=\left({\bf u},\mathbf{t}\right)\in\left(\mathcal{U}_{k}% ^{\varepsilon}\right)^{2}\times\mathcal{B}_{k}^{2}}\left|I\left({\bf u},{\bf t% }\right)\right|=O(N^{-1+\varepsilon}).$$ To prove such a bound, we use a variant of Van der Corput’s lemma which requires us to guarantee that at each point $\alpha\in\mathcal{J}$ at least some derivative of $\phi$ with respect to $\alpha$ is large. Such a “repulsion” property is captured by the function (2.6) $$\mathrm{Van}_{\ell}\phi\left(\alpha\right)\overset{\mathrm{def}}{=}\max_{i\leq% \ell}|\phi^{(i)}(\alpha)|,$$ and we shall derive an acceptable lower bound on $\mathrm{Van}_{\ell}\phi$, uniformly throughout $\mathcal{J}$. The aforementioned repulsion principle (see Lemma 4.2 below) is the driving force behind the argument, and the only part of the proof where assumptions (1.3) and (1.4) are used. Moreover, this way of reasoning is robust, and the arithmetic that we require is quite simple and essentially just the structure of the real numbers plus quantitative growth and spacing conditions. A technical complication which often arises in the study of $k$-point correlation sums (see e.g. [23, 21]) is that we have to deal with “degenerate” configurations where $\mathbf{u}$ and $\mathbf{t}$ are such that some of the terms in the function $\phi\left(\mathbf{u},\mathbf{t},\alpha\right)$ vanish; this will be handled by a combinatorial argument (see Proposition 5.5 below). 3. Preliminaries In this section we collect the tools that we will use later and introduce further notation. 3.1. Notation Throughout the rest of this manuscript the implied constants may depend on the sequence $(a_{n})_{n\geq 1}$ from the statement of Theorem 1.2, as well as on $k,f,\mathcal{J},\varepsilon,\eta$ and we shall not indicate this dependence explicitly. The dependence on any other parameter will be indicated. The Bachmann–Landau $O$ symbol, or interchangeably the Vinogradov symbols $\ll$ and $\gg$, have their usual meaning. Throughout the manuscript, $k$ is a fixed integer satisfying $k\geq 2$. 3.2. 
Oscillatory integrals The bulk of our work is concerned with understanding the magnitude of the one-dimensional oscillatory integrals $$I\left(\phi,\mathcal{J}\right)\overset{\mathrm{def}}{=}\int_{\mathcal{J}}e% \left(\phi\left(\alpha\right)\right)\,\mathrm{d\alpha}$$ where $\phi:\mathcal{J}\rightarrow\mathbb{R}$ is a $C^{\infty}$-function, the so called phase function. The phase functions we are required to understand are of the shape $\phi(\alpha)=\phi\left(\mathbf{u},\mathbf{t},\alpha\right)$ as in (2.3). We need the following variant of Van der Corput’s lemma: Lemma 3.1. Let $\phi:\mathcal{J}\rightarrow\mathbb{R}$ be a $C^{\infty}$-function. Fix $\ell\geq 1$, and suppose that $\phi^{\left(\ell\right)}\left(\alpha\right)$ has at most $C$ zeros, and that the inequality $\mathrm{Van}_{\ell}\phi\left(\alpha\right)\geq\lambda>0$ holds throughout the interval $\mathcal{J}$. Then the bound $$I\left(\phi,\mathcal{J}\right)\ll_{\ell,C}\lambda^{-1/\ell}$$ holds when $\ell\geq 2$, or when $\ell=1$ and $\phi^{\prime}$ is monotone on $\mathcal{J}$. Proof. This can be found in [23, Lemma 3.3]. ∎ Lemma 3.1 requires a bound on the number of zeros for the derivatives of $\phi$. For this we prove the following which is a very minor modification of [23, Lemma 4.3]. Lemma 3.2. Let $\psi(\alpha)=\sum_{i\leq\ell}u_{i}e^{\alpha x_{i}}$ for $\mathbf{u}=(u_{1},\ldots,u_{\ell})\in\mathbb{R}^{\ell}_{\neq 0}$ and $\mathbf{x}=(x_{1},\ldots,x_{\ell})\in\mathbb{R}^{\ell}$ such that $x_{1}<\cdots<x_{\ell}$. Then $\psi$ has at most $\ell-1$ zeros in $\mathbb{R}$. Proof. We argue by induction on $\ell$. For $\ell=1$ the correctness of the statement is clear. Assume that the lemma is true for $\ell-1$ ($\ell\geq 2)$, and let $$\psi\left(\alpha\right)=\sum_{i\leq\ell}u_{i}e^{\alpha x_{i}}.$$ The zeros of $\psi$ are exactly the zeros of the function $$\tilde{\psi}\left(\alpha\right)=\sum_{i\leq\ell-1}\tilde{u}_{i}e^{\alpha\tilde% {x}_{i}}+1,$$ where $\tilde{u}_{i}=\frac{u_{i}}{u_{\ell}}$, and $\tilde{x}_{i}=x_{i}-x_{\ell}$ ($1\leq i\leq\ell-1$), since $\psi\left(\alpha\right)=u_{\ell}e^{\alpha x_{\ell}}\tilde{\psi}\left(\alpha\right)$. Moreover, $$\tilde{\psi}^{\prime}\left(\alpha\right)=\sum_{i\leq\ell-1}v_{i}e^{\alpha% \tilde{x}_{i}},$$ where $v_{i}=\tilde{u}_{i}\tilde{x}_{i}$ ($1\leq i\leq\ell-1$). Clearly, the numbers $v_{1},\dots,v_{\ell-1}$ are nonzero, and the $\tilde{x}_{1},\dots,\tilde{x}_{\ell-1}$ are distinct. Therefore, by the induction hypothesis, $\tilde{\psi}^{\prime}$ has at most $\ell-2$ zeros. Hence, by Rolle’s theorem, $\tilde{\psi}$ has at most $\ell-1$ zeros, completing the proof. ∎ 4. The repulsion principle Lemma 4.1. Let $\ell$ be a positive integer. Let $\gamma>0$. Let $0<x_{1}<x_{2}<\ldots<x_{\ell}$ be real numbers such that $x_{i+1}-x_{i}\geq\gamma$ for $1\leq i\leq\ell-1$. Then the matrix (4.1) $$M=M\left(x_{1},\ldots,x_{\ell}\right)=\begin{pmatrix}x_{1}&\ldots&x_{\ell}\\ \vdots&\ddots&\vdots\\ x_{1}^{\ell}&\ldots&x_{\ell}^{\ell}\end{pmatrix}$$ is invertible and the operator norm $\left\|\cdot\right\|_{\infty}$ of its inverse satisfies $$\left\|M^{-1}\right\|_{\infty}\ll_{\ell}x_{\ell}^{\ell-1}x_{1}^{-1}\left(\frac% {1}{\gamma}\right)^{\ell-1}.$$ Proof. The conclusion is trivial when $\ell=1$, so we will assume that $\ell\geq 2$. The matrix $M$ is the transpose of a scaled Vandermonde matrix; the entry $m_{ij}$ of its inverse $M^{-1}$ is given by (see, e.g. [13, Ex. 
40]) $$m_{ij}=(-1)^{j-1}\frac{\sum\limits_{\begin{subarray}{c}1\leq m_{1}<\dots<m_{% \ell-j}\leq\ell,\\ m_{1},\dots,m_{\ell-j}\neq i\end{subarray}}x_{m_{1}}\cdots x_{m_{\ell-j}}}{x_{% i}\prod\limits_{\begin{subarray}{c}1\leq m\leq\ell,\\ m\neq i\end{subarray}}(x_{m}-x_{i})}.$$ Hence (4.2) $$|m_{ij}|\ll_{\ell}x_{\ell}^{\ell-1}x_{1}^{-1}\left(\frac{1}{\gamma}\right)^{% \ell-1}$$ for all $1\leq i,j\leq\ell$. It is well-known that the maximum norm $\left\|\cdot\right\|_{M}$ (maximal absolute value of a matrix entry) dominates the operator norm $\left\|\cdot\right\|_{\infty}$, i.e. we have $\left\|\cdot\right\|_{\infty}\ll_{\ell}\left\|\cdot\right\|_{M}$. Thus, (4.2) gives $$\left\|M^{-1}\right\|_{\infty}\ll_{\ell}x_{\ell}^{\ell-1}x_{1}^{-1}\left(\frac% {1}{\gamma}\right)^{\ell-1},$$ as desired. ∎ As a consequence, we are now able to prove the enunciated repulsion principle. In the statement of the following lemma, as throughout the proof, $(a_{n})_{n\geq 1}$ is the sequence from the statement of Theorem 1.2. Recall that by assumption $(a_{n})_{n\geq 1}$ satisfies (1.3) and (1.4), which will be used in the proof of the lemma. Recall also the definition of $\phi\left(\mathbf{u},\mathbf{t},\alpha\right)$ in (2.3) and the definition of $\mathrm{Van}_{\ell}$ in (2.6). Lemma 4.2 (Repulsion principle). Let $\ell$ be a positive integer such that $\ell\leq 2k$. Let $\mathbf{u}\in\mathbb{Z}_{\neq 0}^{\ell}$, and let $\mathbf{t}=(t_{1},\ldots,t_{\ell})\in\mathbb{N}^{\ell}$ be such that $t_{1}<\dots<t_{\ell}$. Then for any (arbitrarily large) $\eta>0$, (4.3) $$\min_{\alpha\in\mathcal{J}}\mathrm{Van}_{\ell}\left(\phi\left(\mathbf{u},% \mathbf{t},\alpha\right)\right)\gg t_{\ell}^{\eta}.$$ The implied constant in (4.3) depends on $\eta$, the sequence $(a_{n})$, the interval $\mathcal{J}$ and the parameter $k$, which throughout the proof are assumed to be fixed. Proof. Let $\alpha\in\mathcal{J}$. To make the underlying structure more transparent, we denote $\boldsymbol{\tau}=(\partial_{\alpha}^{j}\phi\left(\mathbf{u},\mathbf{t},\alpha% \right))_{j=1,\ldots,\ell}$, $\mathbf{w}=(u_{i}e^{\alpha a_{t_{i}}})_{i=1,\ldots,\ell}$, and $M=M(a_{t_{1}},\dots,a_{t_{\ell}})$ as in (4.1). Then $$\mathrm{Van}_{\ell}\left(\phi\left(\mathbf{u},\mathbf{t},\alpha\right)\right)=% \|\boldsymbol{\tau}\|_{\infty},$$ and (4.4) $$\boldsymbol{\tau}=M\mathbf{w}.$$ Note that we have $a_{t_{i+1}}-a_{t_{i}}\gg t_{\ell}^{-C}$ for some fixed positive constant $C$ by assumption (1.4). Thus by Lemma 4.1 and (4.4) we have $$\left\|\mathbf{w}\right\|_{\infty}\leq\left\|M^{-1}\right\|_{\infty}\left\|% \boldsymbol{\tau}\right\|_{\infty}\ll a_{t_{\ell}}^{\ell-1}t_{\ell}^{C(\ell-1)% }\left\|\boldsymbol{\tau}\right\|_{\infty}$$ (we have used the bound $a_{t_{1}}^{-1}\ll a_{1}^{-1}\ll 1$). Now note that $$\left\|\mathbf{w}\right\|_{\infty}\geq|u_{\ell}|e^{\alpha a_{t_{\ell}}}\geq e^% {\alpha a_{t_{\ell}}}.$$ Combining the two equations above we obtain (4.5) $$e^{\alpha a_{t_{\ell}}}\ll a_{t_{\ell}}^{\ell-1}t_{\ell}^{C(\ell-1)}\left\|% \boldsymbol{\tau}\right\|_{\infty}.$$ Recall that the interval $\mathcal{J}$ and therefore $\alpha$ are bounded away from 0 by assumption, so that $a_{t_{\ell}}^{\ell-1}\ll e^{\frac{\alpha a_{t_{\ell}}}{2}}$, and hence (4.5) gives (4.6) $$\left\|\boldsymbol{\tau}\right\|_{\infty}\gg e^{\frac{\alpha a_{t_{\ell}}}{2}}% t_{\ell}^{-C(\ell-1)}.$$ By assumption (1.3) it follows that $e^{\frac{\alpha a_{t_{\ell}}}{2}}\gg t_{\ell}^{\eta+C(\ell-1)}$. Inserting this into (4.6) gives the required bound (4.3). 
∎ As a corollary, we get the required bound for the integral $I(\mathbf{u},\mathbf{t})$ (recall the notation (2.2)). Corollary 4.3. Let $\ell$ be a positive integer such that $\ell\leq 2k$. Let $\mathbf{u}\in\mathbb{Z}_{\neq 0}^{\ell}$, and let $\mathbf{t}=(t_{1},\ldots,t_{\ell})\in\mathbb{N}^{\ell}$ be such that $t_{1}<\dots<t_{\ell}$. Then for any (arbitrarily large) $\eta>0$, (4.7) $$I(\mathbf{u},\mathbf{t})\ll t_{\ell}^{-\eta}.$$ Proof. This is an immediate consequence of Lemma 3.1, Lemma 3.2 and Lemma 4.2. ∎ 5. Variance estimates We begin this section with our definition of the variance of the $k$-point correlation sum $R_{k}\left(f,\alpha,N\right)$ with respect to $\alpha\in\mathcal{J}$. Recall that $\mathcal{B}_{k}=\mathcal{B}_{k}\left(N\right)$ is the set of integer $k$-tuples $\left(x_{1},\dots,x_{k}\right)$ such that $1\leq x_{i}\leq N$ for all $i=1,\dots,k$ and such that no two components $x_{i}$ are equal. Definition 5.1. The variance of the $k$-point correlation sum $R_{k}\left(f,\alpha,N\right)$ with respect to the interval $\mathcal{\mathcal{J}}$ is defined as $$\mathrm{Var}\left(R_{k}\left(f,\cdot,N\right),\mathcal{J}\right)\overset{% \mathrm{def}}{=}\int_{\mathcal{\mathcal{\mathcal{J}}}}\left(R_{k}\left(f,% \alpha,N\right)-C_{k}\left(N\right)\int_{\mathbb{R}^{k-1}}f\left({\bf x}\right% )\,\text{d}{\bf x}\right)^{2}\,\mathrm{d}\alpha$$ where (5.1) $$C_{k}\left(N\right)\overset{\mathrm{def}}{=}\frac{\#\mathcal{B}_{k}}{N^{k}}=% \left(1-\frac{1}{N}\right)\cdots\left(1-\frac{k-1}{N}\right).$$ The reason for the combinatorial factor (5.1) will be apparent in the proof below. The goal of this section is to show that the variance $\mathrm{Var}\left(R_{k}\left(f,\cdot,N\right),\mathcal{J}\right)$ decays polynomially in $N$: Proposition 5.2. For all $\varepsilon>0$, we have $$\mathrm{Var}\left(R_{k}\left(f,\cdot,N\right),\mathcal{J}\right)=O(N^{-1+% \varepsilon}).$$ The first routine step will be to express $R_{k}$ in terms of an exponential sum. Fix $\varepsilon>0$, and set $$\mathcal{N}_{k-1}^{\varepsilon}=\mathcal{N}_{k-1}^{\varepsilon}\left(N\right)=% \left\{\mathbf{n}\in\mathbb{Z}^{k-1}:\leavevmode\nobreak\ 1\leq\|\mathbf{n}\|_% {\infty}\leq N^{1+\varepsilon}\right\}.$$ For the statement of the following lemma, recall the definition of the difference vector $\Delta\left(\mathbf{x},\alpha\right)=\Delta\left(\mathbf{x},\vartheta(\alpha)\right)$ in (1.2). Lemma 5.3. For all $\eta>0$ we have (5.2) $$\displaystyle R_{k}\left(f,\alpha,N\right)$$ $$\displaystyle=C_{k}\left(N\right)\int_{\mathbb{R}^{k-1}}f\left({\bf x}\right)% \,\mathrm{d}{\bf x}$$ $$\displaystyle+\frac{1}{N^{k}}\sum_{\mathbf{x}\in\mathcal{B}_{k}}\sum_{\mathbf{% n}\in\mathcal{N}_{k-1}^{\varepsilon}}\widehat{f}\left(\frac{\mathbf{n}}{N}% \right)e\left(\left\langle\Delta\left(\mathbf{x},\alpha\right),\mathbf{n}% \right\rangle\right)+O(N^{-\eta})$$ as $N\to\infty$. Proof. By the Poisson summation formula, $$R_{k}\left(f,\alpha,N\right)=\frac{1}{N^{k}}\sum_{\mathbf{x}\in\mathcal{B}_{k}% }\sum_{\mathbf{n}\in\mathbb{Z}^{k-1}}\widehat{f}\left(\frac{\mathbf{n}}{N}% \right)e\left(\left\langle\Delta\left(\mathbf{x},\alpha\right),\mathbf{n}% \right\rangle\right).$$ Formula (5.2) now easily follows by separating the zero-th term and using the fact that the Fourier coefficients of any $f\in C_{c}^{\infty}(\mathbb{R}^{k-1})$ decay to zero faster than the reciprocal of any polynomial, see the proof of [23, Lemma 3.4]. 
∎ Given ${\bf n}\in\mathbb{Z}^{k-1}$, we define the vector $\mathbf{h}\left({\bf n}\right)=\left(h_{1}\left({\bf n}\right),\dots,h_{k}% \left({\bf n}\right)\right)\in\mathbb{Z}^{k}$ by the rule $$h_{i}\left(\mathbf{n}\right)\overset{\mathrm{def}}{=}\begin{cases}n_{1},&% \mathrm{if}\,i=1,\\ n_{i}-n_{i-1},&\mathrm{if}\,2\leq i\leq k-1\\ -n_{k-1},&\mathrm{if}\,i=k.\end{cases},$$ This definition is motivated by the identity (5.3) $$\left\langle\Delta\left(\mathbf{x},\alpha\right),\mathbf{n}\right\rangle=\phi% \left({\bf h}\left(\mathbf{n}\right),\mathbf{x},\alpha\right).$$ Note that the linear map ${\bf n}\mapsto{\bf h}\left({\bf n}\right)$ is injective. Moreover, it satisfies the bound (5.4) $$\left\|{\bf{\bf h}}\left({\bf n}\right)\right\|_{\infty}\leq 2\left\|{\bf n}% \right\|_{\infty}$$ and the relation (5.5) $$\sum_{i=1}^{k}h_{i}\left(\mathbf{n}\right)=0.$$ Let $$\mathcal{U}_{k}^{\varepsilon}=\mathcal{U}_{k}^{\varepsilon}\left(N\right)=% \left\{{\bf u}=\left(u_{1},\dots,u_{k}\right)\in\mathbb{Z}^{k}:\,1\leq\left\|% \mathbf{u}\right\|_{\infty}\leq 2N^{1+\varepsilon},\,u_{1}+\dots+u_{k}=0\right\},$$ and note that the relations (5.4), (5.5) imply that ${\bf h}\left({\bf n}\right)\in\mathcal{U}_{k}^{\varepsilon}$ whenever ${\bf{n}}\in\mathcal{N}_{k-1}^{\varepsilon}$. Lemma 5.4. For all $\eta>0$, we have (5.6) $$\displaystyle\mathrm{Var}\left(R_{k}\left(f,\cdot,N\right),\mathcal{J}\right)$$ $$\displaystyle\ll V_{k}\left(N,\mathcal{J},\varepsilon\right)+N^{-\eta}$$ as $N\to\infty$, where $V_{k}\left(N,\mathcal{J},\varepsilon\right)$ is given in (2.5). Proof. By Lemma 5.3, for all $\tilde{\eta}>0$ we have $$\mathrm{Var}\left(R_{k}\left(f,\cdot,N\right),\mathcal{J}\right)=\int_{% \mathcal{J}}\Bigl{(}N^{-k}\sum_{\mathbf{x}\in\mathcal{B}_{k}}\sum_{\mathbf{n}% \in\mathcal{N}_{k-1}^{\varepsilon}}\widehat{f}\left(\frac{\mathbf{n}}{N}\right% )e\left(\left\langle\Delta\left(\mathbf{x},\alpha\right),\mathbf{n}\right% \rangle\right)+O(N^{-\tilde{\eta}})\Bigr{)}^{2}\>\mathrm{d}\alpha.$$ Expanding the square and taking $\tilde{\eta}$ sufficiently large, the bound $\widehat{f}\ll 1$ readily implies that for all $\eta>0$, $$\displaystyle\mathrm{Var}\left(R_{k}\left(f,\cdot,N\right),\mathcal{J}\right)=% I+O\left(N^{-\eta}\right)$$ where $$I=N^{-2k}\sum_{\begin{subarray}{c}\mathbf{x},{\bf y}\in\mathcal{B}_{k},\\ \mathbf{n},{\bf m}\in\mathcal{N}_{k-1}^{\varepsilon}\end{subarray}}\widehat{f}% \left(\frac{\mathbf{n}}{N}\right)\widehat{f}\left(\frac{{\bf m}}{N}\right)\int% _{\mathcal{J}}e\left(\left\langle\Delta\left(\mathbf{x},\alpha\right),\mathbf{% n}\right\rangle+\left\langle\Delta\left({\bf y},\alpha\right),{\bf m}\right% \rangle\right)\>\mathrm{d}\alpha.$$ By the identity (5.3) and by the injectivity of the map ${\bf n}\mapsto{\bf h}\left({\bf n}\right)$ we conclude that $$I\ll N^{-2k}\sum_{\begin{subarray}{c}\mathbf{x},{\bf y}\in\mathcal{B}_{k},\\ \mathbf{n},{\bf m}\in\mathcal{N}_{k-1}^{\varepsilon}\end{subarray}}\left|\int_% {\mathcal{J}}e\left(\phi\left({\bf h}\left(\mathbf{n}\right),\mathbf{x},\alpha% \right)+\phi\left({\bf h}\left(\mathbf{m}\right),{\bf y},\alpha\right)\right)% \>\mathrm{d}\alpha\right|\ll V_{k}\left(N,\mathcal{J},\varepsilon\right)$$ which gives the claimed bound. ∎ We will now bound $V_{k}\left(N,\mathcal{J},\varepsilon\right)$ using a combinatorial argument. Combined with Lemma 5.4, this will give Proposition 5.2. Proposition 5.5. We have $$V_{k}\left(N,\mathcal{J},\varepsilon\right)=O(N^{-1+\left(2k-1\right)% \varepsilon}).$$ Proof. 
Let $$\left[k\right]\overset{\mathrm{def}}{=}\left\{1,\dots,k\right\}.$$ Let $\mathcal{I}_{1},\mathcal{I}_{1}^{\prime},\mathcal{I}_{2},\mathcal{I}_{2}^{% \prime},\mathcal{I}_{3},\mathcal{I}_{3}^{\prime}\subseteq\left[k\right]$ be (possibly empty) sets of indices. Fixing $$\boldsymbol{\tau}\overset{\mathrm{def}}{=}\left(\mathcal{I}_{1},\mathcal{I}_{1% }^{\prime},\mathcal{I}_{2},\mathcal{I}_{2}^{\prime},\mathcal{I}_{3},\mathcal{I% }_{3}^{\prime}\right),$$ we denote by $\mathcal{V}_{k}^{\varepsilon}\left(\boldsymbol{\tau}\right)$ the set of vectors $\left({\bf u},{\bf t}\right)=\left(\left({\bf v},{\bf{\bf w}}\right),\left({% \bf x},{\bf y}\right)\right)\in\left(\mathcal{U}_{k}^{\varepsilon}\right)^{2}% \times\mathcal{B}_{k}^{2}$ for which $$\displaystyle\{i\in[k]:\,\exists_{j(i)\in[k]}:\,x_{i}=y_{j(i)}\}$$ $$\displaystyle=\mathcal{I}_{1},$$ $$\displaystyle\{j\in[k]:\,\exists_{i(j)\in[k]}:\,x_{i\left(j\right)}=y_{j}\}$$ $$\displaystyle=\mathcal{I}_{1}^{\prime},$$ $$\displaystyle\{i\in\mathcal{I}_{1}:v_{i}+w_{j(i)}=0,\mathrm{where}\,j(i)\,% \mathrm{is}\,\mathrm{s.t.}\,x_{i}=y_{j(i)}\}$$ $$\displaystyle=\mathcal{I}_{2},$$ $$\displaystyle\{j\in\mathcal{I}_{1}^{\prime}:v_{i\left(j\right)}+w_{j}=0,% \mathrm{where}\,i(j)\,\mathrm{is}\,\mathrm{s.t.}\,x_{i\left(j\right)}=y_{j}\}$$ $$\displaystyle=\mathcal{I}_{2}^{\prime},$$ $$\displaystyle\{i\in\left[k\right]\setminus\mathcal{I}_{1}:v_{i}=0\}$$ $$\displaystyle=\mathcal{I}_{3},$$ $$\displaystyle\{j\in\left[k\right]\setminus\mathcal{I}_{1}^{\prime}:w_{j}=0\}$$ $$\displaystyle=\mathcal{I}_{3}^{\prime}.$$ If $\mathcal{V}_{k}^{\varepsilon}\left(\boldsymbol{\tau}\right)$ is non-empty, then clearly $\#\mathcal{I}_{1}=\#\mathcal{I}_{1}^{\prime}$ and $\#\mathcal{I}_{2}=\#\mathcal{I}_{2}^{\prime}$. Amongst the list of $2k$ variables $x_{1},\dots,x_{k},y_{1},\dots,y_{k},$ exactly $2k-\#\mathcal{I}_{1}$ distinct variables appear (to see this, recall that by the definition of $\mathcal{B}_{k}$ all numbers $x_{1},\dots,x_{k}$ are distinct, and similarly all numbers $y_{1},\dots,y_{k}$ are distinct). As such if we group similar terms in the corresponding phase function we have (5.7) $$\displaystyle\phi\left({\bf u},{\bf t},\alpha\right)$$ $$\displaystyle=v_{1}e^{\alpha x_{1}}+\dots+v_{k}e^{\alpha x_{k}}+w_{1}e^{\alpha y% _{1}}+\dots+w_{k}e^{\alpha y_{k}}$$ $$\displaystyle=\sum_{i\in\left[k\right]\setminus\left(\mathcal{I}_{1}\cup% \mathcal{I}_{3}\right)}v_{i}e^{\alpha x_{i}}+\sum_{j\in\left[k\right]\setminus% \left(\mathcal{I}_{1}^{\prime}\cup\mathcal{I}_{3}^{\prime}\right)}w_{j}e^{% \alpha y_{j}}+\sum_{i\in\mathcal{I}_{1}\setminus\mathcal{I}_{2}}\left(v_{i}+w_% {j(i)}\right)e^{\alpha x_{i}},$$ and the number of non-vanishing terms is $$l\overset{\mathrm{def}}{=}2k-\#\mathcal{I}_{1}-\#\mathcal{I}_{2}-\#\mathcal{I}% _{3}-\#\mathcal{I}_{3}^{\prime}.$$ Now let us consider the constraints on the variables $v_{1}\dots,v_{k},w_{1},\dots,w_{k}$: • The constraints $v_{i}=0$ $(i\in\mathcal{I}_{3})$ and $v_{1}+\dots+v_{k}=0$ (recall that ${\bf v}\in\mathcal{U}_{k}^{\varepsilon}$) determine $\#\mathcal{I}_{3}+1$ of the variables $v_{1},\dots,v_{k}$ in terms of the other variables; note that ${\bf v}\neq(0,\dots,0)$, so that $\#\mathcal{I}_{3}<k-1.$ • The constraints $w_{j}=0$ $(j\in\mathcal{I}_{3}^{\prime}$) and $w_{j}=-v_{i\left(j\right)}$ ($j\in\mathcal{I}_{2}^{\prime}$) determine $\#\mathcal{I}_{2}^{\prime}+\#\mathcal{I}_{3}^{\prime}$ of the variables $w_{1},\dots,w_{k}$ in terms of the variables $v_{i}$. 
• Since $w\in\mathcal{U}_{k}^{\varepsilon}$, we also have the constraint $w_{1}+\dots+w_{k}=0$ which is either contained in the previous constraints (this happens if and only if $l=0$), or determines one more variable. To conclude, the constraints on the variables $v_{1},\dots,v_{k},w_{1},\dots,w_{k}$ determine at least $$\left(\#\mathcal{I}_{3}+1\right)+\left(\#\mathcal{I}_{2}^{\prime}+\#\mathcal{I% }_{3}^{\prime}\right)$$ many of these variables. As such if we denote by $m$ the number of independent variables remaining then (5.8) $$m\leq 2k-\#\mathcal{I}_{3}-\#\mathcal{I}_{2}^{\prime}-\#\mathcal{I}_{3}^{% \prime}-1.$$ We relabel these independent variables by $u_{1},\dots,u_{m}.$ Suppose $u_{1},\dots,u_{m}$ are given, then we let $\mathbf{u}^{*}(u_{1},\dots,u_{m})$ denote the unique element of $\mathbb{Z}^{2k}$ for which the conditions corresponding to $\mathcal{I}_{2}^{\prime}$, $\mathcal{I}_{3}$ and $\mathcal{I}_{3}^{\prime}$ are satisfied, and the equations $v_{1}+\cdots+v_{k}=0$ and $w_{1}+\cdots+w_{k}=0$ are satisfied. We now proceed via a case analysis based upon the value of $l$ to obtain a uniform upper bound for $\sum_{\left({\bf u},{\bf t}\right)\in\mathcal{V}_{k}^{\varepsilon}\left(% \boldsymbol{\tau}\right)}\left|I\left({\bf u},{\bf t}\right)\right|$. Case 1, $l=0$. If $l=0$ then $\phi(\mathbf{u},\mathbf{t},\alpha)=0$ so that $I(\mathbf{u},\mathbf{t})=1$. By the above considerations we may conclude that $$\displaystyle\sum_{\left({\bf u},{\bf t}\right)\in\mathcal{V}_{k}^{\varepsilon% }\left(\boldsymbol{\tau}\right)}\left|I\left({\bf u},{\bf t}\right)\right|$$ $$\displaystyle\ll\sum_{\stackrel{{\scriptstyle|u_{1}|,\dots,|u_{m}|\leq 2N^{1+% \varepsilon}}}{{\mathbf{u}^{*}(u_{1},\dots,u_{m})\in(\mathcal{U}_{k}^{\epsilon% })^{2}}}}\sum_{\stackrel{{\scriptstyle 1\leq t_{1},\dots,t_{2k-\#\mathcal{I}_{% 1}}\leq N}}{{t_{i}\neq t_{j}}}}1$$ $$\displaystyle\ll N^{m\left(1+\varepsilon\right)+2k-\#\mathcal{I}_{1}}\leq N^{% \left(2k-1\right)\left(1+\varepsilon\right)}.$$ In the final line we used (5.8) and the fact that $l=0$. Case 2, $l\geq 1$. Assuming $l\geq 1$ we relabel the distinct variables $x_{i},y_{j}$ appearing on the r.h.s. of (5.7) by $t_{1},t_{2},\dots,t_{l}.$ We also relabel by $s_{1},\dots,s_{r}$ the (5.9) $$r\overset{\mathrm{def}}{=}\#\mathcal{I}_{2}+\#\mathcal{I}_{3}+\#\mathcal{I}_{3% }^{\prime}$$ variables $x_{i},y_{j}$ from our list of distinct variables which do not appear on the r.h.s. of (5.7) because their corresponding exponentials $e^{\alpha s_{i}}$ have zero coefficients. Moreover we denote by $\mathbf{t}^{*}(t_{1},\ldots,t_{l},s_{1},\ldots,s_{r})$ the unique element of $\mathcal{B}_{k}^{2}$ determined by $t_{1},\ldots,t_{l},$ $s_{1},\ldots,s_{r},$ and the conditions imposed by $\boldsymbol{\tau}$. Note that by Corollary 4.3, we always have $$I\left(\mathbf{u}^{*}(u_{1},\dots,u_{m}),{\mathbf{t}^{*}(t_{1},\ldots,t_{l},s_% {1},\ldots,s_{r})}\right)\ll|\max_{i}t_{i}|^{-\eta}$$ for any $\eta>0$. 
By the above considerations we may conclude that $$\displaystyle\sum_{\left({\bf u},{\bf t}\right)\in\mathcal{V}_{k}^{\varepsilon% }\left(\boldsymbol{\tau}\right)}\left|I\left({\bf u},{\bf t}\right)\right|$$ $$\displaystyle\ll\sum_{\stackrel{{\scriptstyle|u_{1}|,\dots,|u_{m}|\leq 2N^{1+% \varepsilon}}}{{\mathbf{u}^{*}(u_{1},\dots,u_{m})\in(\mathcal{U}_{k}^{\epsilon% })^{2}}}}\sum_{\stackrel{{\scriptstyle 1\leq s_{1},\dots,s_{r}\leq N}}{{s_{i}% \neq s_{j}}}}\sum\limits_{\stackrel{{\scriptstyle 1\leq t_{1},\dots,t_{l}\leq N% }}{{t_{i}\neq t_{j},t_{i}\neq s_{j}}}}|I\left(\mathbf{u}^{*}(u_{1},\dots,u_{m}% ),{\mathbf{t}^{*}(t_{1},\ldots,t_{l},s_{1},\ldots,s_{r})}\right)|$$ $$\displaystyle\ll\sum_{\stackrel{{\scriptstyle|u_{1}|,\dots,|u_{m}|\leq 2N^{1+% \varepsilon}}}{{\mathbf{u}^{*}(u_{1},\dots,u_{m})\in(\mathcal{U}_{k}^{\epsilon% })^{2}}}}\sum_{1\leq s_{1},\dots,s_{r}\leq N}\sum\limits_{1\leq t_{1},\dots,t_% {l}\leq N}|\max_{i}t_{i}|^{-\eta}$$ $$\displaystyle\ll\sum_{|u_{1}|,\dots,|u_{m}|\leq 2N^{1+\varepsilon}}\sum_{1\leq s% _{1},\dots,s_{r}\leq N}1\ll N^{m\left(1+\varepsilon\right)+r}\leq N^{\left(2k-% 1\right)\left(1+\varepsilon\right)}.$$ In the last line we used (5.8) and (5.9). Combining the above cases, we have shown that for any value of $l$ we always have $$\sum_{\left({\bf u},{\bf t}\right)\in\mathcal{V}_{k}^{\varepsilon}\left(% \boldsymbol{\tau}\right)}\left|I\left({\bf u},{\bf t}\right)\right|\ll N^{% \left(2k-1\right)\left(1+\varepsilon\right)}.$$ Therefore summing over all $O\left(1\right)$ configurations $\boldsymbol{\tau}$, we conclude that $$V_{k}\left(N,\mathcal{J},\varepsilon\right)=\frac{1}{N^{2k}}\sum_{\tau}\sum_{% \left({\bf u},{\bf t}\right)\in\mathcal{V}_{k}^{\varepsilon}\left(\boldsymbol{% \tau}\right)}\left|I\left({\bf u},{\bf t}\right)\right|\ll N^{-1+\left(2k-1% \right)\varepsilon}.$$ This completes our proof. ∎ 6. Proof of Theorem 1.2 Theorem 1.2 can be deduced from Proposition 5.2 following a standard argument whose proof in a fairly general setting was given in [23]. Proposition 6.1 ([23, Proposition 7.1]). Fix $k\geq 2$, $J\subset\mathbb{R}$ a bounded interval, and a sequence $c_{k}(N)$ such that $c_{k}(N)\to 1$ as $N\to\infty$. Let $(\vartheta_{n}(\alpha))_{n\geq 1}\;(\alpha\in J)$ be a parametrised family of sequences such that the map $\alpha\mapsto\vartheta_{n}(\alpha)$ is continuous for each fixed $n\geq 1$. Assume that there exists $\rho>0$ such that for all $f\in C_{c}^{\infty}(\mathbb{R}^{k-1})$ $$\int_{J}\left(R_{k}\left(f,\alpha,N\right)-c_{k}\left(N\right)\int_{\mathbb{R}% ^{k-1}}f\left({\bf x}\right)\,\text{d}{\bf x}\right)^{2}\,\mathrm{d}\alpha=O(N% ^{-\rho})$$ as $N\to\infty$. Then for almost all $\alpha\in J$, the sequence $(\{\vartheta_{n}(\alpha)\})_{n\geq 1}$ has Poissonian $k$-point correlation. Proof. Poissonian $k$-point correlation is first established along a polynomially sparse subsequence $N_{m}$ using the Borel-Cantelli lemma (or an analogous argument). This is extended to Poissonian $k$-point correlation along the full sequence by a simple sandwiching argument, using the fact that $\lim\limits_{m\to\infty}N_{m+1}/N_{m}=1$. For the full details see [23]. ∎ Proof of Theorem 1.2. Theorem 1.2 follows upon letting $\vartheta_{n}(\alpha)=e^{\alpha a_{n}}$, $J=\mathcal{J}$, $c_{k}(N)=C_{k}(N)$ (recall (5.1)) and $\rho=-1+\varepsilon$ in Proposition 6.1. ∎ References [1] C. Aistleitner: Quantitative uniform distribution results for geometric progressions, Israel J. Math. 204(1): 155–197, 2014. [2] C. Aistleitner and S. 
Baker: On the pair correlations of powers of real numbers, Israel J. Math., to appear. arXiv:1910.01437 [3] C. Aistleitner, T. Lachmann and F. Pausinger: Pair correlations and equidistribution, J. Number Theory 182: 206–220, 2018. [4] S. Baker: Equidistribution results for self-similar measures, arXiv:2002.11607. [5] Y. Bugeaud: Distribution modulo one and Diophantine approximation, Cambridge Tracts in Mathematics, 193. Cambridge University Press, Cambridge, 2012. [6] A. Dubickas: On the powers of 3/2 and other rational numbers, Math. Nachr. 281(7): 951–958, 2008. [7] A. Dubickas: Powers of a rational number modulo 1 cannot lie in a small interval, Acta Arith. 137(3): 233–239, 2009. [8] D. El-Baz, J. Marklof and I. Vinogradov. The two-point correlation function of the fractional parts of $\sqrt{n}$ is Poisson, Proc. Amer. Math. Soc. 143(7): 2815–2828, 2015. [9] N. D. Elkies and C. T. McMullen: Gaps in $\sqrt{n}$ mod $1$ and ergodic theory, Duke Math. J. 123(1): 95–139, 2004. [10] L. Flatto, J. Lagarias and A. Pollington: On the range of fractional parts $\{\xi(p/q)^{n}\}$, Acta Arith. 70(2): 125–147, 1995. [11] S. Grepstad and G. Larcher: On pair correlation and discrepancy, Arch. Math. (Basel) 109(2): 143–149, 2017. [12] D. R. Heath-Brown: Pair correlation for fractional parts of $\alpha n^{2}$, Math. Proc. Cambridge Philos. Soc., 148(3):385–407, 2010. [13] D. E. Knuth: The Art of Computer Programming: Volume 1: Fundamental Algorithms (3rd ed.), Addison Wesley, 1997. [14] J. F. Koksma: Ein mengentheoretischer Satz über die Gleichverteilung modulo Eins, Compositio Math., 2: 250–258, 1935. [15] P. Kurlberg and Z. Rudnick: The distribution of spacings between quadratic residues. Duke Math. J. 100(2): 211–242, 1999. [16] J. Marklof: Pair correlation and equidistribution on manifolds, Monatsh. Math. 191(2): 279–294, 2020. [17] H. Niederreiter and R. Tichy: Solution of a problem of Knuth on complete uniform distribution of sequences, Mathematika 32(1): 26–32, 1985. [18] Z. Rudnick and P. Sarnak: The pair correlation function of fractional parts of polynomials, Comm. Math. Phys., 194(1): 61–70, 1998. [19] Z. Rudnick, P. Sarnak and A. Zaharescu: The distribution of spacings between the fractional parts of $n^{2}\alpha$, Invent. Math., 145(1):37–57, 2001. [20] Z. Rudnick and A. Zaharescu: A metric result on the pair correlation of fractional parts of sequences. Acta Arith. 89(3): 283–293, 1999. [21] Z. Rudnick and A. Zaharescu: The distribution of spacings between fractional parts of lacunary sequences. Forum Math. 14(5): 691–712, 2002. [22] N. Technau and W. Walker: On the triple correlations of fractional parts of $n^{2}\alpha$, arXiv:2005.01490 [23] N. Technau and N. Yesha: On the correlations of $n^{\alpha}$ mod $1$, arXiv:2006.16629. [24] J. L. Truelsen: Divisor problems and the pair correlation for the fractional parts of $n^{2}\alpha$, Int. Math. Res. Not., (16):3144–3183, 2010. [25] H. Weyl: Über die Gleichverteilung von Zahlen mod. Eins, Math. Ann., 77(3): 313–352, 1916.
Category coding with neural network application Qizhi Zhang qizhi.zqz@alibaba-inc.com Kuang-chih Lee kuang-chih.lee@alibaba-inc.com Hongying Bao hongying.bhy@alibaba-inc.com Yuan You youyuan.yy@alibaba-inc.com Dongbai Guo dongbai.gdb@alibaba-inc.com Abstract In many applications of neural networks, it is common to introduce huge numbers of input categorical features, as well as output labels. However, since the required network size grows rapidly with the dimensions of the input and output spaces, this entails a huge cost in both computation and memory resources. In this paper, we present a novel method called category coding (CC), whose design philosophy follows the principle of minimal collision in order to reduce the input and output dimensions effectively. In addition, we introduce three types of category coding based on different Euclidean domains. Experimental results show that all three proposed methods outperform existing state-of-the-art coding methods, such as the standard cut-off and error-correcting output coding (ECOC) methods. 1 Introduction In machine learning, many features are categorical, such as color, country, user id, item id, etc. In the multi-class classification problem, the labels are categorical too. There is no ordering relation among the different values of such a category. Usually these categorical variables are represented by one-hot feature vectors. For example, red is encoded as 100, yellow as 010 and blue as 001. But if the number of categories is very large, for example the user ids and item ids in e-commerce applications, the one-hot encoding scheme needs too many resources to compute classification results. In the years when SVMs were widely used, the ECOC (error-correcting output coding) method was proposed for handling huge numbers of output class labels. The idea of ECOC is to reduce a multi-class classification problem with a huge number of classes to several two-class classification problems using a binary error-correcting code. For handling huge numbers of input categorical features, however, no similar method exists, because the categories cannot be separated by a linear model unless one-hot encoding is used. In recent years, deep neural networks have improved greatly in terms of performance and speed. The coding method can be applied to deep neural networks with some new, beneficial modifications. In the classification problem, because the number of labels handled by a single neural network need not be two, if we use a deep network as a base learner it is not necessary to limit the code to be binary. In fact, there is a trade-off between the number of classes per base learner and the number of base learners used. According to information theory, if we use $p$-class classifiers as base learners to solve an $N$-class classification problem, we need at least $\lceil\log_{p}N\rceil$ base learners. For example, if we need to solve a classification problem with 1M classes and we use binary classifiers as base learners, we need at least 20 base learners. For some classical applications, for example CNN image classification, we would need to build a CNN network for every binary classifier, which is a huge cost in computation and memory resources.
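The count $\lceil\log_{p}N\rceil$ of base learners from the preceding paragraph can be checked with a few lines of code; the sketch below (ours, with the 1M-class figure from the text and an arbitrary list of values of $p$) uses exact integer arithmetic to avoid floating-point edge cases.

```python
def min_base_learners(num_classes: int, p: int) -> int:
    """Smallest r with p**r >= num_classes, i.e. ceil(log_p N), via exact integer arithmetic."""
    r, capacity = 0, 1
    while capacity < num_classes:
        capacity *= p
        r += 1
    return r

N = 10**6                      # the 1M-class example from the text
for p in (2, 10, 1000):        # illustrative choices of base-learner class counts
    print(f"p = {p:>4}: need at least {min_base_learners(N, p)} base learners")
```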
But if we instead combine base learners with 1000 classes each, we need at least 2 base learners. We know that the number of parameters in a deep neural network is usually large, hence using a small number of base learners reduces the cost in computation and storage. On the other hand, because a neural network has the ability of non-linear representation, we can use such an encoding for categorical features too. Can we use classical error-correcting coding for categorical features? We know that in machine learning sparsity is a basic requirement, but classical error-correcting codes are not sparse. Hence we need to design a new sparse coding scheme for this application. In this paper, we give some new encoding methods; they can be applied to both label encoding and feature encoding, and give better performance than classical methods. In section 2, we give the definition of category coding (CC) and propose 3 classes of CC, namely Polynomial CC, Remainder CC and Gauss CC, which have good properties. In section 3 we discuss the application of CC to label encoding. In section 4, we discuss the application of CC to feature encoding. Our main tools are finite field theory and number theory, for which we refer to ff and NT . 2 Category coding For an $N$-class categorical feature or label, we define a category coding (CC) as a map $$\begin{array}[]{ccc}f:\mathbb{Z}/N\mathbb{Z}&\longrightarrow&\prod_{i=1}^{r}\mathbb{Z}/N_{i}\mathbb{Z}\\ x&\mapsto&(f_{i}(x))_{i}\end{array}$$ where each $f_{i}:\mathbb{Z}/N\mathbb{Z}\longrightarrow\mathbb{Z}/N_{i}\mathbb{Z}$, for $i=1,2,\ldots,r$, is called a “site-position function” of the category coding. Generally, $N$ is a huge number, and the $N_{i}$ are numbers of moderate size. We can thus reduce an $N$-class classification problem to $r$ classification problems of moderate size through a CC. We can also use an $r$-hot $(\sum_{i=1}^{r}N_{i})$-bit binary encoding instead of the one-hot encoding as the representation of the feature, i.e., use the composite of the CC map $f$ and the natural embedding $$\begin{array}[]{ccl}\prod_{i=1}^{r}\mathbb{Z}/N_{i}\mathbb{Z}&\longrightarrow&\prod_{i=1}^{r}\mathbb{F}_{2}^{N_{i}}=\mathbb{F}_{2}^{\sum_{i}N_{i}}\\ (x_{i})_{i}&\mapsto&(N_{i}\mbox{-bit one-hot representation of }x_{i})_{i}\end{array}$$ to get an $r$-hot encoding. For a CC $f$, we call $\max_{x\neq y}\sharp\{i=1,\cdots,r|f_{i}(x)=f_{i}(y)\}$ the collision number of $f$, and denote it by $C(f)$. We have the following theorem. Theorem 2.1. For a CC $f:\mathbb{Z}/N\mathbb{Z}\longrightarrow\prod_{i=1}^{r}\mathbb{Z}/N_{i}\mathbb{Z}$, where $N_{1}\leq N_{2}\leq\cdots\leq N_{r}$, we have $C(f)\geq\min\{i=1,\cdots,r|N\leq\prod_{j=1}^{i}N_{j}\}-1$. Proof. Let $k:=\min\{i=1,\cdots,r|N\leq\prod_{j=1}^{i}N_{j}\}$. Suppose $C(f)<k-1$, i.e. $$\max_{x\neq y}\sharp\{i=1,\cdots,r|f_{i}(x)=f_{i}(y)\}<k-1.$$ Hence for any $x\neq y\in\mathbb{Z}/N\mathbb{Z}$, the values $f(x)$ and $f(y)$ agree on at most $k-2$ site positions; in particular they cannot agree on all of the first $k-1$ site positions. Hence the projection $\mathbb{Z}/N\mathbb{Z}\longrightarrow\prod_{i=1}^{k-1}\mathbb{Z}/N_{i}\mathbb{Z}$ is an injection, and hence $N\leq\prod_{i=1}^{k-1}N_{i}$. This contradicts the definition of $k$. ∎ If a CC satisfies $C(f)=\min\{i=1,\cdots,r|N\leq\prod_{j=1}^{i}N_{j}\}-1$, we say it has the minimal collision property. For both label encoding and feature encoding, we want the code to have the minimal collision property. We give 3 classes of CC, namely Polynomial CC, Remainder CC and Gauss CC, which satisfy the minimal collision property.
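To make the definitions above concrete, the following Python sketch (ours, with small toy parameters rather than the huge $N$ envisaged in the paper) builds a CC from explicit site-position functions, produces its $r$-hot binary representation, and computes the collision number $C(f)$ by brute force.

```python
from itertools import combinations

# Toy category coding f: Z/NZ -> prod_i Z/N_iZ.  N and the moduli N_i are
# small illustrative choices so that the collision number can be checked by
# brute force; this is a sketch of the definitions, not code from the paper.
N = 30
moduli = [7, 11, 13]                       # N_1, ..., N_r

def code(x):
    """Site-position values (f_1(x), ..., f_r(x)); here f_i(x) = x mod N_i."""
    return tuple(x % m for m in moduli)

def r_hot(x):
    """r-hot binary representation: concatenated one-hot blocks of sizes N_i."""
    bits = []
    for value, m in zip(code(x), moduli):
        bits.extend(1 if j == value else 0 for j in range(m))
    return bits

def collision_number():
    """C(f) = max over x != y of the number of agreeing site positions."""
    return max(sum(a == b for a, b in zip(code(x), code(y)))
               for x, y in combinations(range(N), 2))

# Here N <= N_1 * N_2, so Theorem 2.1 gives C(f) >= 1; the brute-force value is 1.
print("C(f) =", collision_number(), " r-hot length =", len(r_hot(0)))
```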
2.1 Polynomial CC For any prime number $p$, we can represent any non-negative integer $x$ less than $p^{k}$ in the unique form $x=x_{0}+x_{1}p+\cdots+x_{k-1}p^{k-1}\quad(x_{i}\in\mathbb{Z}/p\mathbb{Z})$, which gives a bijection $\mathbb{Z}/p^{k}\mathbb{Z}\longrightarrow\mathbb{F}_{p}^{k}$, where $\mathbb{F}_{p}$ is the Galois field (finite field) with $p$ elements. For an $N$-class classification problem, any small positive integer $k$ (for example, $k=2,3$) and a small real number $\epsilon\in(0,1)$, we take a prime number $p$ in $[N^{\frac{1}{k}},N^{\frac{1}{k-\epsilon}}]$ (according to the Prime Number Theorem ( Riemann , Prime_Number_Theorem ), there are about $\frac{k(N^{\frac{1}{k-\epsilon}}-N^{\frac{1}{k}})}{\log N}$ such prime numbers), and obtain an injection $\mathbb{Z}/N\mathbb{Z}\longrightarrow\mathbb{Z}/p^{k}\mathbb{Z}\longrightarrow\mathbb{F}_{p}^{k}$ via the p-adic representation. Theorem 2.2. For $r$ distinct elements $x_{1},x_{2},\cdots,x_{r}$ in $\mathbb{F}_{p}$, the code defined by the composite map $f$ of the p-adic representation map, the map $$\begin{array}[]{rcl}\phi_{1}:\mathbb{F}^{k}_{p}&\longrightarrow&\mathbb{F}_{p}[x]_{deg<k}\\ (a_{0},\cdots,a_{k-1})&\mapsto&a_{0}+a_{1}x+\cdots a_{k-1}x^{k-1}\end{array}$$ and the map $$\begin{array}[]{rcl}\phi_{2}:\mathbb{F}_{p}[x]_{deg<k}&\longrightarrow&\mathbb{F}_{p}^{r}\\ g(x)&\mapsto&(g(x_{1}),\cdots g(x_{r}))\end{array}$$ has the minimal collision property. Proof. We need to prove that $C(f)\leq\min\{i=1,\cdots r|N\leq p^{i}\}-1$. Because $\min\{i=1,\cdots r|N\leq p^{i}\}=k$, it suffices to prove $C(f)\leq k-1$, i.e. for any $\alpha\neq\beta\in\mathbb{Z}/N\mathbb{Z}$, $\sharp\{i=1,\cdots,r|f_{i}(\alpha)=f_{i}(\beta)\}\leq k-1$. Because the p-adic representation map is an injection and the map $\phi_{1}$ is a bijection, it suffices to show that for any $g_{1}\neq g_{2}\in\mathbb{F}_{p}[x]_{deg<k}$, $\sharp\{i=1,\cdots,r|g_{1}(x_{i})=g_{2}(x_{i})\}\leq k-1$. Suppose there were $g_{1}\neq g_{2}\in\mathbb{F}_{p}[x]_{deg<k}$ such that $\sharp\{i=1,\cdots,r|g_{1}(x_{i})=g_{2}(x_{i})\}>k-1$; this would mean that the non-zero polynomial $g_{1}-g_{2}\in\mathbb{F}_{p}[x]$ of degree at most $k-1$ has at least $k$ roots, which contradicts the fact that a non-zero polynomial of degree at most $k-1$ over a field has at most $k-1$ roots. ∎ Remark. The composite map of $\phi_{1}$ and $\phi_{2}$ in the above theorem is also known as a Reed–Solomon code Reed_and_Solomon . The Reed–Solomon code is a class of non-binary MDS (maximum distance separable) codes Singleton . The MDS property is an excellent property in error-correcting coding, but unfortunately no nontrivial binary MDS code has been found so far; in fact, in some settings it has been proved that no nontrivial binary MDS code exists (Guerrini_and_Sala and Proposition 9.2 on p. 212 in Vermani ). This is another advantage of CC over ECOC for label encoding. 2.2 Remainder CC For the original label set $\mathbb{Z}/N\mathbb{Z}$, a small number $k$ such as 2 or 3, and a small positive number $\epsilon\in(0,1)$, select $r$ pairwise co-prime numbers $p_{1},p_{2},\cdots p_{r}$ in the interval $\left[N^{\frac{1}{k}},N^{\frac{1}{k-\epsilon}}\right)$. (According to the Prime Number Theorem ( Riemann , Prime_Number_Theorem ), there are about $\frac{k(N^{\frac{1}{k-\epsilon}}-N^{\frac{1}{k}})}{\log N}$ prime and hence pairwise co-prime numbers in this interval.)
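Before the Remainder CC is defined from these moduli, the Polynomial CC of Section 2.1 can be sketched in a few lines of Python; the prime $p=181$ with $k=2$ echoes the CJK experiment of Section 3.4, while the input value and the evaluation points are illustrative:

```python
def polynomial_cc(x, p, k, eval_points):
    """Polynomial CC sketch: write x < p**k in base p, read the digits as the
    coefficients of a polynomial of degree < k over F_p, and evaluate it at r
    distinct points of F_p (the Reed-Solomon-style construction of Section 2.1)."""
    digits = [(x // p ** i) % p for i in range(k)]        # p-adic digits a_0, ..., a_{k-1}
    return tuple(sum(a * pow(t, i, p) for i, a in enumerate(digits)) % p
                 for t in eval_points)

# p = 181, k = 2 (so p**2 >= N for N about 21000), r = 6 distinct evaluation points
code = polynomial_cc(12345, p=181, k=2, eval_points=[1, 2, 3, 4, 5, 6])
print(code)  # six site-position values, each in Z/181Z
```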
We define the remainder CC as $$\begin{array}[]{ccc}\mathbb{Z}/N\mathbb{Z}&\longrightarrow&\prod_{i=1}^{n}% \mathbb{Z}/p_{i}\mathbb{Z}\\ x&\mapsto&f_{i}(x)\end{array}$$ where $f_{i}(x)=x\mod p_{i}$, and $\{p_{i}\}$ is called its modules. Then we have the following proposition: Theorem 2.3. The remainder CC has the minimal collision property. Proof. We need only to show that, for any $x\neq y\in\mathbb{Z}/N\mathbb{Z}$, there are at most $k-1$’s $i$ such, that $f_{i}(x)=f_{i}(y)$. Suppose there exist $k$’s different $i$ such, that $f_{i}(x)=f_{i}(y)$, we can suppose that $f_{i}(x)=f_{i}(y)$  for $i=1,2,\cdots k$. Then we have $x\equiv y\mod p_{i}$ for all $i=1,2,\cdots,k$. Because $\{p_{i}\}$ are pairwise co-prime numbers, we have $x\equiv y\mod\prod_{i=1}^{k}p_{i}$. But we know $x,y\in\{0,1,\cdots N-1\}$, which in $\{0,1,\cdots\prod_{i=1}^{k}p_{i}-1\}$, hence $x=y$. ∎ 2.3 Gauss CC We propose a CC based on the ring of Gauss integers Gauss NT , and so called Gauss CC. We write the ring of Gauss integers as $\mathbb{Z}[\sqrt{-1}]:=\{a+b\sqrt{-1}\in\mathbb{C}|a,b\in\mathbb{Z}\}$. For a big integral number $N$, let $t$ is the minimal positive real number such that the number of Gauss integers in the closed disc $\overline{U_{t}(0)}$ is not less than $N$, i.e $\sharp\overline{U_{t}(0)}\cap\mathbb{Z}[\sqrt{-1}]\geq N$ and $\sharp\overline{U_{t-\epsilon}(0)}\cap\mathbb{Z}[\sqrt{-1}]<N$ for any small $\epsilon>0$. In general, we have $\sharp\overline{U_{t}(0)}\cap\mathbb{Z}[\sqrt{-1}]$ is about $\pi t^{2}$, hence we can get such $t$ about $\sqrt{N/\pi}$. We can embed the original IDs to the Gauss integers in Gauss integers in the closed disc. $$\mathbb{Z}/N\mathbb{Z}\hookrightarrow\overline{U_{t}(0)}\cap\mathbb{Z}[\sqrt{-% 1}]$$ Let $k$ be a small positive integral number, like 2,3, and $\epsilon^{\prime}$ be a small positive real number. Let $p_{1},p_{2},\cdots,p_{r}$ be $r$ pairwise co-prime Gauss integral numbers satisfying $|p_{i}|\in[(2t)^{\frac{1}{k}},(2t)^{\frac{1}{k-\epsilon^{\prime}}})\quad\mbox{% for }i=1,2,\cdots,r.$ We define the category mapping $$\begin{array}[]{ccc}\overline{U_{t}(0)}\cap\mathbb{Z}[\sqrt{-1}]&% \longrightarrow&\prod_{i=1}^{r}\mathbb{Z}[\sqrt{-1}]/(p_{i})\\ z&\mapsto&(f_{i}(z))_{i}\end{array}$$ where $(p_{i})$ means the principle ideal of $\mathbb{Z}[\sqrt{-1}]$ generated by $p_{i}$, $f_{i}(z)=z\mod(p_{i})$. $\{p_{i}\}$ is called the modules of this Gauss CC, and we have the following theorem. Theorem 2.4. The Gauss CC has the minimal collision property. Proof. From the method to take $\{p_{i}\}$, we know $k=\min\{i=1,\cdots r|N\leq\prod_{j=1}^{i}|\mathbb{Z}[\sqrt{-1}]/(p_{j})|\}$. Hence we need only to show that, for any $x\neq y\in\overline{U_{t}(0)}\cap\mathbb{Z}[\sqrt{-1}]$, there are at most $k-1$’s $i$ such, that $f_{i}(x)=f_{i}(y)$. Suppose there exist $k$’s different $i$ such, that $f_{i}(x)=f_{i}(y)$, we can suppose that $$f_{i}(x)=f_{i}(y)\quad\mbox{ for }i=1,2,\cdots k$$ Then we have $x-y\equiv 0\mod(p_{i})$ for all $i=1,2,\cdots,k$. Because $\{p_{i}\}$ are pairwise co-prime Gauss integral numbers, hence $\{(p_{i})\}$ are pairwise co-prime ideal of $\mathbb{Z}[\sqrt{-1}]$, and we have $x-y\in\prod_{i=1}^{k}(p_{i})$. Hence $\mathbf{Nm}(x-y)\in\prod_{i=1}^{k}(\mathbf{Nm}(p_{i}))\mathbb{Z}$ i.e, $|x-y|^{2}\in\prod_{i=1}^{k}|p_{i}|^{2}\mathbb{Z}$, and hence $|x-y|\equiv 0\mod\prod_{i=1}^{k}|p_{i}|$. But we know $x,y\in\overline{U_{t}(0)}$, hence $|x-y|\leq 2t$. On the other hand, we know $\prod_{i=1}^{k}|p_{i}|>2t$, hence $|x-y|=0$, and hence $x=y$. 
∎ 3 Application for label encode For a $N$-class classification problem, we use a CC $$\begin{array}[]{ccc}f:\mathbb{Z}/N\mathbb{Z}&\longrightarrow&\prod_{i=1}^{r}% \mathbb{Z}/N_{i}\mathbb{Z}\\ z&\mapsto&(f_{i}(z))_{i}\end{array}$$ to reduce a $N$-classes classification problem to $r$’s classification problems of middle size through a LM. Suppose the training dataset is $\{x_{k},y_{k}\}$, where $x_{k}$ is feature and $y_{k}$ is label, then we train a base learner on the dataset $\{x_{k},f_{i}(y_{k})\}$ for every $i=1,2,\cdots r$. We call it the label encoding method. A CC good for label encoding should satisfy the follow properties: Classes high separable. For two different labels $y,\tilde{y}$, there should be as many as possible site-position functions $f_{i}$ such that $f_{i}(y)\neq f_{i}(\tilde{y})$. Base learners independence. When $y$ are selected randomly uniformly from $\mathbb{Z}/N\mathbb{Z}$, the mutual information of $f_{i}(y)$ and $f_{j}(y)$ approximate to 0 for $i\neq j$. The property “classes high separable” ensures that for any two different classes, there are as many as possible base learners are trained to separate them. The property “base learners independence” ensures that the common part of the information learned by any two different base learners is few. Remark. These properties are the similar of the properties “Row separable” and “Column separable” of ECOC (Dietterich_and_Bakiri ) in non-binary situation. The minimal collision property ensure the CCs satisfy “Class high separable”, we will show that they satisfy “Base learner independence” also. 3.1 Polynomial CC We will prove that, the Polynomial CC satisfies the property “Base learners independence” also. Theorem 3.1. If $u$ is a random variable with uniform distribution on $\mathbb{Z}/N\mathbb{Z}$, $y_{i}$ and $y_{j}$ are the i-site value and j-site value ($i\neq j$) of the codeword of $u$ under the simplex LM described above, then the mutual information of $y_{i}$ and $y_{j}$ approach to $0$ when $N$ grows up. Proof. For any $u$ in $\mathbb{Z}/p^{k}\mathbb{Z}$, the i-th site value is $y_{i}=u_{0}+u_{1}x_{i}+\cdots u_{k-1}x_{i}^{k-1}\quad\mod p$, where $u_{0},u_{1},\cdots u_{k-1}$ are the coefficients of the p-adic representation of $u$. We denote this map by $g_{i}:\mathbb{Z}/p^{k}\mathbb{Z}\longrightarrow\mathbb{Z}/p\mathbb{Z}$. Let $t=\lceil N/p\rceil$, consider the following commutative diagram:  \xymatrix Z/ptZ \ar[r]\ar[d] ^g_i & Z/ptZ \ar[d]^g_i Z/pZ \ar[r] & Z/pZ The horizontal arrow in up line is defined by $u_{0}+u_{1}p+\cdots u_{k-1}p^{k-1}\mapsto(u_{0}+1\mod p)+u_{1}p+\cdots u_{k-1}% p^{k-1}$, and the horizontal arrow in down line is defined by $y\mapsto(y+1\mod p)$. The horizontal arrows are bijections, which shows that the numbers of the pre-images in $\mathbb{Z}/pt\mathbb{Z}$ of every element in $\mathbb{Z}/p\mathbb{Z}$ are same and hence equal to $t$. On the other hand, we have the commutative diagram:  \xymatrix Z/p(t-1)Z \ar[r]\ar[dr] & Z/NZ \ar[r]\ar[d] & Z/ptZ \ar[dl] & Z/pZ & where the horizontal arrows are the natural embedding, and other arrows are the restriction of $g_{i}$. But the number of pre-images in $\mathbb{Z}/pt\mathbb{Z}$ of every element in $\mathbb{Z}/p\mathbb{Z}$ is $t$, and the same logic shows that the number of pre-images in $\mathbb{Z}/p(t-1)\mathbb{Z}$ of every element in $\mathbb{Z}/p\mathbb{Z}$ is $t-1$. Therefore the number of pre-images in $\mathbb{Z}/N\mathbb{Z}$ of every element in $\mathbb{Z}/p\mathbb{Z}$ is $t$ or $t-1$. 
Hence if $u$ is a random variable with uniformly distribution on $\mathbb{Z}/N\mathbb{Z}$, its probability at every point in $\mathbb{Z}/N\mathbb{Z}$ is $1/N$, then the probability of $y_{i}$ at every point in $\mathbb{Z}/p\mathbb{Z}$ are $\frac{t}{N}$ or $\frac{t-1}{N}$. The same logic shows that the probability of $y_{j}$ at every point in $\mathbb{Z}/p\mathbb{Z}$ are $\frac{t}{N}$ or $\frac{t-1}{N}$. Let $s=\lceil N/p^{2}\rceil$, we have the commutative diagram for any $(a,b)\in\mathbb{F}_{p}^{2}$: \xymatrix Z/p^2sZ \ar[r]\ar[d] ^(g_i,g_j) & Z/p^2sZ \ar[d]^(g_i, g_j) Z/pZ×Z/pZ \ar[r] & Z/pZ ×Z/pZ where the up horizontal arrow is defined by $u_{0}+u_{1}p+\cdots u_{k-1}p^{k-1}\mapsto(u_{0}+a\mod p)+(u_{1}+b\mod p)p+% \cdots u_{k-1}p^{k-1}$, and the down horizontal arrow is defined by $(y_{i},y_{j})\mapsto(y_{i}+a+bx_{i}\mod p,y_{j}+a+bx_{j}\mod p)$. Both the horizontal arrows are bijections. Because $x_{i}\neq x_{j}$ we know that when $(a,b)$ runs over all the pairs in $\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$ the down horizontal map maps $(0,0)$ to all the pairs in $\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$. Therefore all the number of pre-images in $\mathbb{Z}/p^{2}s\mathbb{Z}$ of any element in $\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$ are same, and hence equal to $s$. A similar method shows that if $u$ is a random variable with uniformly distribution on $\mathbb{Z}/N\mathbb{Z}$, the joint probability of $(y_{i},y_{j})$ at every point in $\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$ are $\frac{s}{N}$ or $\frac{s-1}{N}$. We know that the mutual information of $y_{i}$ and $y_{j}$ is $I(Y_{i};Y_{j})=\sum_{(y_{i},y_{j})\in\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p% \mathbb{Z}}p_{i,j}(y_{i},y_{j})\log\frac{p_{i,j}(y_{i},y_{j})}{p_{i}(y_{i})p_{% j}(y_{j})}$. a.) When $k=2$, i.e. $p<N\leq p^{2}$, we know $s=1$ and $p_{i,j}(y_{i},y_{j})=\frac{1}{N}$ on $N$’s point in $\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$ and $0$ on other points. Hence we have $$\begin{array}[]{rl}I(Y_{i};Y_{j})&\leq\sum_{(y_{i},y_{j})\in\mathbb{Z}/p% \mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}}p_{i,j}(y_{i},y_{j})\log\frac{p_{i,j}(y% _{i},y_{j})}{(\frac{t-1}{N})^{2}}=N\times\frac{1}{N}\log\frac{1/N}{(\frac{t-1}% {N})^{2}}=2\log\frac{N}{t-1}-\log N\\ &\leq 2\log\frac{N}{N/p-1}-\log N=2\log p-2\log(1-\frac{p}{N})-\log N=2\log p+% 2O(\frac{p}{N})-\log N\end{array}$$ However, $p\in[N^{\frac{1}{2}},N^{\frac{1}{2-\epsilon}}]$ implies that $p=N^{\frac{1}{2}}(1+o(1))$, hence we have $$I(Y_{i};Y_{j})=\log N+2\log(1+o(1))+2O(N^{-\frac{1}{2}})-\log N=o(1)% \rightarrow 0\mbox{ as }N\rightarrow\infty$$ b.) When $k>2$, i.e. 
$N>p^{2}$, we have $$\begin{array}[]{rl}I(Y_{i};Y_{j})=&\sum_{(y_{i},y_{j})\in\mathbb{Z}/p\mathbb{Z% }\times\mathbb{Z}/p\mathbb{Z}}p_{i,j}(y_{i},y_{j})\log p_{i,j}(y_{i},y_{j})-% \sum_{(y_{i},y_{j})\in\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}}p_{i,% j}(y_{i},y_{j})(\log p_{i}(y_{i})+\log p_{j}(y_{j}))\\ =&\sum_{(y_{i},y_{j})\in\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}}p_{% i,j}(y_{i},y_{j})\log p_{i,j}(y_{i},y_{j})-\sum_{y_{i}\in\mathbb{Z}/p\mathbb{Z% }}p_{i}(y_{i})\log p_{i}(y_{i})-\sum_{y_{j}\in\mathbb{Z}/p\mathbb{Z}}p_{j}(y_{% j})\log p_{j}(y_{j})\\ \leq&p^{2}\frac{s}{N}\log(\frac{s}{N})-2p\frac{t-1}{N}\log\frac{t-1}{N}\end{array}$$ Because $(s-1)p^{2}<N\leq sp^{2}$ and $(t-1)p<N\leq tp$, we have $$\begin{array}[]{rl}I(Y_{i};Y_{j})<&(1+\frac{p^{2}}{N})\log(\frac{1}{p^{2}}+% \frac{1}{N})-2(1-\frac{p}{N})\log(\frac{1}{p}-\frac{1}{N})=\log\frac{\frac{1}{% p^{2}}+\frac{1}{N}}{(\frac{1}{p}-\frac{1}{N})^{2}}+\frac{p^{2}}{N}\log(\frac{1% }{p^{2}}+\frac{1}{N})+2\frac{p}{N}\log(\frac{1}{p}-\frac{1}{N})\\ =&\log\frac{1+\frac{p^{2}}{N}}{(1-\frac{p}{N})^{2}}+\frac{p^{2}}{N}(\log(1+% \frac{p^{2}}{N})-2\log p)+2\frac{p}{N}(\log(1-\frac{p}{N})-\log p)<\log\frac{1% +\frac{p^{2}}{N}}{(1-\frac{p}{N})^{2}}+\frac{p^{2}}{N}\log(1+\frac{p^{2}}{N})% \\ =&O(\frac{p^{2}}{N})+O(\frac{p}{N})+\frac{p^{2}}{N}O(\frac{p^{2}}{N})=O(\frac{% p^{2}}{N})\end{array}$$ However, $p\in[N^{\frac{1}{k}},N^{\frac{1}{k-\epsilon}}]$ implies that $p=N^{\frac{1}{k}}(1+o(1))$, hence we have $$I(Y_{i};Y_{j})=O(N^{\frac{2}{k}-1})\rightarrow 0\mbox{ as }N\rightarrow\infty$$ ∎ 3.2 Remainder CC and Gauss CC The theorem 2.3, 2.4 tells us that the Remainder CC and Gauss CC satisfies the “Classes high separable” property. In fact, they satisfy the property “Base learners independence” also. Theorem 3.2. Let $f:\mathbb{Z}/N\mathbb{Z}\longrightarrow\prod_{i=1}^{r}\mathbb{Z}/N_{i}\mathbb{Z}$ be a Remainder CC , and $x$ be uniformly randomly selected from $\mathbb{Z}/N\mathbb{Z}$, we have that for any $i\neq j$, the mutual Information of $f_{i}(x)$ and $f_{j}(x)$ approximate 0. Proof. Let $t_{i}:=\lceil\frac{N}{p_{i}}\rceil$ and $s_{ij}=\lceil\frac{N}{p_{i}p_{j}}\rceil$ for every $i,j$. We have that the probabilities of $f_{i}(x)$ at every point in $\mathbb{Z}/p_{i}\mathbb{Z}$ are $\frac{t_{i}}{N}$ or $\frac{t_{i}-1}{N}$ and the probabilities of $(f_{i}(x),f_{j}(x))$ at every point in $\mathbb{Z}/p_{i}\mathbb{Z}\times\mathbb{Z}/p_{j}\mathbb{Z}$ are $\frac{s_{ij}}{N}$ or $\frac{s_{ij}-1}{N}$ by using the similar method in the proof of Theorem 3.1. We know that the mutual information of $y_{i}=f_{i}(x)$ and $y_{j}=f_{j}(x)$ is $$I(Y_{i};Y_{j})=\sum_{(y_{i},y_{j})\in\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p% \mathbb{Z}}p_{i,j}(y_{i},y_{j})\log\frac{p_{i,j}(y_{i},y_{j})}{p_{i}(y_{i})p_{% j}(y_{j})}$$ a.) When $k=2$, we have $N<p_{i}p_{j}$ and hence $s=1$ and $p_{i,j}(y_{i},y_{j})=\frac{1}{N}$ on $N$’s point in $\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$ and $0$ on other points. 
Hence we have $$\begin{array}[]{rl}I(Y_{i};Y_{j})&\leq\sum_{(y_{i},y_{j})\in\mathbb{Z}/p% \mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}}p_{i,j}(y_{i},y_{j})\log\frac{p_{i,j}(y% _{i},y_{j})}{\frac{(t_{i}-1)(t_{j}-1)}{N^{2}}}\\ &=N\times\frac{1}{N}\log\frac{1/N}{\frac{(t_{i}-1)(t_{j}-1)}{N^{2}}}\\ &=\log N-\log(t_{i}-1)-\log(t_{j}-1)\\ &<\log N-\log(\frac{N}{p_{i}}-1)-\log(\frac{N}{p_{j}}-1)\\ &\leq\log N-2\log(\frac{N^{\frac{1}{2}}}{N^{2-\epsilon}}-1)\\ &=-2\log(\frac{1}{N^{2-\epsilon}}-N^{-\frac{1}{2}})\\ &\rightarrow 0\quad\mbox{ as }N\rightarrow\infty\end{array}$$ b.) When $k\geq 3$, we have $p_{i}p_{j}<N^{\frac{2}{k-\epsilon}}<N$, and $$\begin{array}[]{rl}&I(Y_{i};Y_{j})\\ =&\sum_{(y_{i},y_{j})\in\mathbb{Z}/p_{i}\mathbb{Z}\times\mathbb{Z}/p_{j}% \mathbb{Z}}p_{i,j}(y_{i},y_{j})\log p_{i,j}(y_{i},y_{j})\\ &-\sum_{(y_{i},y_{j})\in\mathbb{Z}/p_{i}\mathbb{Z}\times\mathbb{Z}/p_{j}% \mathbb{Z}}p_{i,j}(y_{i},y_{j})(\log p_{i}(y_{i})+\log p_{j}(y_{j}))\\ =&\sum_{(y_{i},y_{j})\in\mathbb{Z}/p_{i}\mathbb{Z}\times\mathbb{Z}/p_{j}% \mathbb{Z}}p_{i,j}(y_{i},y_{j})\log p_{i,j}(y_{i},y_{j})\\ &-\sum_{y_{i}\in\mathbb{Z}/p_{i}\mathbb{Z}}p_{i}(y_{i})\log p_{i}(y_{i})-\sum_% {y_{j}\in\mathbb{Z}/p_{j}\mathbb{Z}}p_{j}(y_{j})\log p_{j}(y_{j})\\ \leq&p_{i}p_{j}\frac{s_{ij}}{N}\log(\frac{s_{ij}}{N})-p_{i}\frac{t_{i}-1}{N}% \log\frac{t_{i}-1}{N}-p_{j}\frac{t_{j}-1}{N}\log\frac{t_{j}-1}{N}\end{array}$$ Because $$\begin{array}[]{rcl}(s_{ij}-1)p_{i}p_{j}&<N\leq&s_{ij}p_{i}p_{j}\\ (t_{i}-1)p_{i}&<N\leq&t_{i}p_{i}\\ (t_{j}-1)p_{j}&<N\leq&t_{j}p_{j}\end{array}$$ We have $$\begin{array}[]{rl}&I(Y_{i};Y_{j})\\ <&(1+\frac{p_{i}p_{j}}{N})\log(\frac{1}{p_{i}p_{j}}+\frac{1}{N})\\ &-(1-\frac{p_{i}}{N})\log(\frac{1}{p_{i}}-\frac{1}{N})-(1-\frac{p_{j}}{N})\log% (\frac{1}{p_{j}}-\frac{1}{N})\\ =&\log\frac{\frac{1}{p_{i}p_{j}}+\frac{1}{N}}{(\frac{1}{p_{i}}-\frac{1}{N})(% \frac{1}{p_{j}}-\frac{1}{N})}+\frac{p_{i}p_{j}}{N}\log(\frac{1}{p_{i}p_{j}}+% \frac{1}{N})\\ &+\frac{p_{i}}{N}\log(\frac{1}{p_{i}}-\frac{1}{N})+\frac{p_{j}}{N}\log(\frac{1% }{p_{j}}-\frac{1}{N})\\ \leq&\log(1+\frac{p_{i}p_{j}}{N})-\log(1-(\frac{1}{p_{i}}+\frac{1}{p_{j}})% \frac{p_{i}p_{j}}{N}+\frac{1}{N^{2}})\\ \leq&\log(1+\frac{p_{i}p_{j}}{N})-\log(1-(\frac{1}{p_{i}}+\frac{1}{p_{j}})% \frac{p_{i}p_{j}}{N})\\ =&O(\frac{p_{i}p_{j}}{N})+O((\frac{1}{p_{i}}+\frac{1}{p_{j}})\frac{p_{i}p_{j}}% {N})\\ =&O(\frac{p_{i}p_{j}}{N})\\ =&O(N^{\frac{2}{k-\epsilon}}-1)\\ =&O(N^{\frac{2+\epsilon-k}{k-\epsilon}})\rightarrow 0\quad\mbox{as}\quad N% \rightarrow\infty\end{array}$$ $\blacksquare$ This theorem tells us that, the Remainder CC satisfies the property “Base learners independence”. Similarly, we have Theorem 3.3. Let $f:\mathbb{Z}/N\mathbb{Z}\longrightarrow\prod_{i=1}^{r}\mathbb{Z}/N_{i}\mathbb{Z}$ be a Gauss CC, and $x$ be uniformly randomly selected from $\mathbb{Z}/N\mathbb{Z}$, we have that for any $i\neq j$, the mutual Information of $f_{i}(x)$ and $f_{j}(x)$ approximate 0. ∎ This theorem tells us that, the Gauss CC satisfies the property “Base learners independence” also. 3.3 Decode Algorithm Suppose we used the LM $f_{i}:\mathbb{Z}/N\mathbb{Z}\longrightarrow\mathbb{Z}/N_{i}\mathbb{Z}\quad(i=1% ,2,...n)$ to reduce a classification problem of class number $N$ to the classification problems of class number $N_{i}$’s, and trained $n$ base learner for every $f_{i}$, the output of every base learner $i$ is a distribution $P_{i}$ on $\mathbb{Z}/N_{i}\mathbb{Z}$. 
Now, for a input feature data, how we collect the output $\{P_{i}:i=1,2,\cdots,n\}$ of every base learner to get the predict label? In this paper, we search the $x\in\mathbb{Z}/N\mathbb{Z}$ such that $\sum_{i}\log P_{i}(f_{i}(x))$ is maximal, and let such $x$ be the decoded label. (In fact, $\sum_{i}\log P_{i}(f_{i}(a))=-\sum_{i}KL(f_{i\star}\delta(x-a)||P_{i})$ , where $\delta(x-a)$ is the Delta distribution at $a\in\mathbb{Z}/N\mathbb{Z}$, and $f_{i\star}\delta(x-a)$ is the marginal distribution of $\delta(x-a)$ induced by $f_{i}$.) 3.4 Numeric Experiments We use the Inception V3 network and LM on the dataset “CJK characters”. CJK is a collective term for the Chinese, Japanese, and Korean languages, all of which use Chinese characters and derivatives (collectively, CJK characters) in their writing systems. The data set “CJK characters” is the grey-level image of size 139x139 of 20901 CJK characters (0x4e00 $\sim$ 0x9fa5) in 8 fonts. We use 7 fonts as the train set, and other one font as the test set. We use inception v3 network as base learner, and train the networks using batch size=128 and 100 batch per an epoch. We use three CCs as follows, and get the performance like in Table 1. a. The polynomial CCs with k=2 and p=181. These Polynomial CCs are defined by $f:\mathbb{Z}/N\mathbb{Z}\longrightarrow\mathbb{F}_{p}^{r}$, where $N=21901$, and $f_{i}(x)=((x\mod p)+floor(x/p)i)\mod p$, and r=2 or r=6. b. The Remainder CCs with k=2 and $p_{i}\in\{173,191,157,181,193,199\}$. These Remainder CCs are defined by $f:\mathbb{Z}/N\mathbb{Z}\longrightarrow\prod_{i=1}^{r}\mathbb{Z}/p_{i}\mathbb{Z}$, where $N=21901$, $f_{i}(x)=x\mod p_{i}$, and $r=2\mbox{ or }6$. c. the Gauss CCs with k=2 and $p_{i}\in\{10\pm 9\sqrt{-1},13\pm 2\sqrt{-1},12\pm 7\sqrt{-1}\}$. These Gauss CCs are defined by $f:\overline{U_{82}(0)}\cap\mathbb{Z}[\sqrt{-1}]\longrightarrow\prod_{i=1}^{r}% \mathbb{Z}[\sqrt{-1}]/(p_{i})$, where $N=21901$, and $f_{i}(x)=x\mod(p_{i})$, and r=2 or r=6. d. ECOC of 15 bit. We can see, even when the base learner number 2 of CCs is much less than the base learner number 15 of ECOC, the performance of CCs are better than the ECOC which trainable number of parameters of networks bigger than CCs. 4 Application for feature encode For a categorical feature take value in $\mathbb{Z}/N\mathbb{Z}$, where $N$ is a huge integral number, we can use the composite mapping of a CC $\mathbb{Z}/N\mathbb{Z}\longrightarrow\prod_{i=1}^{r}\mathbb{Z}/N_{i}\mathbb{Z}$ and the nature embedding $$\begin{array}[]{ccl}\prod_{i=1}^{r}\mathbb{Z}/N_{i}\mathbb{Z}&\longrightarrow&% \prod_{i=1}^{r}\mathbb{F}_{2}^{N_{i}}=\mathbb{F}_{2}^{\sum_{i}N_{i}}\\ (x_{i})_{i}&\mapsto&(N_{i}\mbox{ bit one hot representation of }x_{i})_{i}\end% {array}$$ to get a $r$-hot encoding. We use this $r$-hot encoding as feature encoding. Apart from the CC feature encoding, the more natural ideas for feature encoding are COO. Cut off of one-hot encoding. We call a $n$-bit binary code the ’Cut off of one-hot’, if the $n-1$ most frequently used ID’s are one-hot encoded in the front $n-1$ bits, and all the other ID’s are encoded to the code ${}^{\prime}0\cdots 01^{\prime}$. RMP. Using a code frequently used in error-correct encoding. For example, a Reed-Muller code RM with punch by a random subset of bits. 
For a binary code $\{f_{i}\}_{i\in\mathbb{Z}/n\mathbb{Z}}:C\hookrightarrow\mathbb{F}_{2}^{n}$ and a subset $Q\subset\mathbb{Z}/n\mathbb{Z}$ of $m$ elements, the punch of $f$ by $Q$ means the code $\{f_{i}\}_{i\in\mathbb{Z}/n\mathbb{Z}\setminus Q}:C\hookrightarrow\mathbb{F}_{% 2}^{n-m}$. We will show that, the performance of our Polynomial CC, Remainder CC and Gauss CC are better than both the code COO and RMP. 4.1 Numeric Experiments We use the dataset “Movie Lends” (Movie_Lends ), which has the columns UserID, MovieID, Rating and Timestamp. The UserIDs range between 1 and 6040, and MovieIDs range between 1 and 3952, ratings are made on a 5-star scale, timestamp is represented in seconds. Each user has at least 20 ratings. We use only the column UserID, MovieID and Rating. and use a DNN with an embedding layer and two full-connected layers. In the embedding layer, the User code and Movie code are embedded to real vectors of dimension 32 respectively, the dimension of the output the two full-connected layers are 64 and 1 respectively. After the first full-connected layer we use ’RELU’, after the second full-connected layer we use $x\mapsto 4*sigmoid(x)+1$. We use this network as a regression model, and train it by minimize MSE. The ratio between train data and validation data is 8:2. We compare the validation loss of the following methods: 1. 582 bit cut off of the one-hot code for UserID, and 474 bit cut off of the one-hot code for MovieID. 2. 582 bit random punch of RM(12,1) for UserID, and 474 bit random punch of RM(11,1) for MovieID. 3. 582 bit 6-hot Polynomial code based on finite field $\mathbb{F}_{97}$ for UserID, and 474 bit 6-hot Polynomial code based on finite field $\mathbb{F}_{73}$ for MovieID. 4. 582 bit Remainder code with modules $\{83,89,97,101,103,109\}$ for UserID, and 474 bit Remainder code with modules $\{67,71,73,79,83,101\}$ for MovieID. 5. 582 bit Gauss code with modules $\{8\pm 5\sqrt{-1},9\pm 4\sqrt{-1},10+\sqrt{-1},10+3\sqrt{-1}\}$ for UserID, and 474 bit Remainder code with modules $\{67,71,73,79,83,101\}$ for MovieID. The validation losses are like in Table 3. We see that the performance of Polynomial CC, Remainder CC and Gauss CC are better than the one-hot cut and RM code with punch of same length significantly. Moreover, the performance of Gauss CC is best, and then the Remainder CC. 4.2 Theoretical analysis for feature coding We see the performance of Polynomial CC, Remainder CC and Gauss CC are good for feature coding, but we don’t know how to choose the non-zero bit number $r$ in the coding. More generally, how to study the performance of codes without experiments? In the theory of error-correcting code, we know the Hamming distance is an important metric for codes. In general, if the original IDs and length of coding is fixed, the error-correcting codes with big Hamming distance have good performance. But for feature coding, Hamming distance is not a good metric. For example, we compare the performance of Method 2 introduced in the previous subsection and the anti-Method 2. The codings used in anti-Method 4 and Method 4 have the relationship: $x\mapsto 1-x$. The corresponding pair of codes in the two method has same Hamming distance, but the performance is difference (in Table 3). Hence the Hamming distance is not a good choose for metric of feature encoding. For a binary $r$-hot codeword $c$ of length $n$, we can view $\frac{1}{r}c$ as a distribution on $\mathbb{Z}/n\mathbb{Z}$, and call it the reduced distribution of x, write it as dist($c$). 
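As a minimal illustration of the reduced distribution just defined (and of the factor $1-\tau/r$ that appears in Lemma 4.2 below), the mass of $dist(c_i)$ that falls outside the support of $dist(c_j)$ can be computed directly; the codewords below are toy values, not taken from the experiments:

```python
import numpy as np

def reduced_dist(codeword):
    """dist(c): an r-hot 0/1 codeword normalised to a probability vector (Section 4.2)."""
    c = np.asarray(codeword, dtype=float)
    return c / c.sum()

def escaping_mass(ci, cj):
    """Fraction of dist(ci)'s mass outside the support of dist(cj).
    For r-hot codewords sharing tau non-zero bits this equals 1 - tau/r,
    the factor multiplying the (infinite) log-ratio in Lemma 4.2."""
    di, dj = reduced_dist(ci), reduced_dist(cj)
    return float(di[dj == 0].sum())

# Toy 3-hot codewords of length 8 sharing one non-zero bit:
ci = [1, 1, 1, 0, 0, 0, 0, 0]
cj = [1, 0, 0, 1, 1, 0, 0, 0]
print(escaping_mass(ci, cj))  # 2/3, i.e. 1 - tau/r with tau = 1, r = 3
```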
The average minimal KL-divergence (AMKL) of a code $I\longrightarrow C$ is defined as $\sum_{i}\min_{j}\mbox{KL}(dist(c_{i})||dist(c_{j}))p_{i}$. We propose that use AMKL as the metric of code, and give the conjecture: Conjucture 4.1. The feature code with bigger AMKL has better performance. To examine the conjecture 4.1, we give a lemma to compute the AMKL firstly: Lemma 4.2. For a $n$ bit $r$-hot code $I\longrightarrow\mathbb{F}_{2}^{n}$, if for any codeword $c_{i}$ the maximal common non-zero bit number between $c_{i}$ and any other codeword in $C$ is $r$, the AMKL equal to $(1-\frac{t}{r})\infty$. Proof. For any $i\in I$, let $x_{i}$ denote the codeword of $i$. For any $i\neq j$ in $I$, the reduced distribution of $x_{i},x_{j}$ are $\mbox{dist}(x)_{i}=\frac{1}{r}x_{i}$, $\mbox{dist}(x)_{j}=\frac{1}{r}x_{j}$ respectively. Hence the KL-divergency of dist($x_{i}$), dist($x_{j}$) is $\mbox{KL}(dist(x_{i})||dist(x_{j}))=\frac{1}{r}\log\frac{1/r}{0}\times(r-\tau)% +\frac{1}{r}\log\frac{1/r}{1/r}\times\tau=(1-\frac{\tau}{r})\log\infty$. Hence $\sum_{i}\min_{j}\mbox{KL}(dist(c_{i})||dist(c_{j}))p_{i}=\sum_{i}(1-\frac{\tau% }{r})p_{i}\log\infty=(1-\frac{\tau}{r})\log\infty$. ∎ We use the some numeric experiments to examine the conjecture 4.1. We use the following encoding Method on dataset “Movie Lends”, and their AMKL and performance is like in table 3. We see that the AMKL has positive effect to performance. Moreover, the performance of Gauss CC > Remainder CC > Polynomial CC with same length and AMKL. Method 1. 582 bit Remainder code with modules $\{289,293\}$ for UserIDs, and 474 bit Remainder code with modules $\{235,239\}$ for MovieIDs. Method 2. 582 bit Remainder code with modules $\{193,194,195\}$ for UserIDs, and 474 bit Remainder code with modules $\{157,158,159\}$ for MovieIDs. Method 3. 582 bit 6-hot Polynomial code based on finite field $\mathbb{F}_{97}$ for UserIDs, and 474 bit 6-hot Polynomial code based on finite field $\mathbb{F}_{73}$ for MovieIDs. Method 4. 582 bit Remainder code with modules $\{83,89,97,101,103,109\}$ for UserIDs, and 474 bit Remainder code with modules $\{67,71,73,79,83,101\}$ for MovieIDs. Method 5. 582 bit Gauss code with modules $\{8\pm 5\sqrt{-1},9\pm 4\sqrt{-1},10+\sqrt{-1},10+3\sqrt{-1}\}$ for UserIDs, and 474 bit Remainder code with modules $\{67,71,73,79,83,101\}$ for MovieIDs. Method 6. 582 bit Remainder code with modules {19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 67 } for UserIDs, and 473 bit Remainder code with modules {17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53} for MovieIDs. 5 Conclusion We propose three classes of category coding (CC) with minimal collision property. They are Polynomial CC, Remainder CC and Gauss CC. In the application for label coding in the classification problem with huge labels number using CNN, we prove that they have good theoretical properties and show that they have good performance in numerical experiments. In the application for feature coding in collaborative filtering using DNN, we show that their performance is better than cut-off method and classical binary coding method. Moreover, we give a metric “AMKL”of feature coding, and show it has positive effect to the performance. In additional, we show that the performance of Gauss CC > Remainder CC > Polynomial CC with same length and AMKL. References (1) Nathan Jacobson. Lectures in Abstract Algebra III: Theory of Fields and Galois Theory. Springer-Verlag New York, 1964. (2) https://en.wikipedia.org/wiki/Gaussian_integer (3) E. 
Guerrini and M. Sala. A classification of MDS binary systematic codes. BCR preprint 2006. www.bcri.ucc.ie/FILES/PUBS/BCRI_57.pdf, (4) http://grouplens.org/datasets/movielens/1m/ (5) Lang, Serge. Algebraic Number Theory. Springer-Verlag New Yor, 1994. (6) Apostol T. M. Introduction to Analytic Number Theory. Springer-Verlag, 1976, New York. (7) Irving S. Reed and Gustav Solomon. Polynomial codes over certain finite fields. JSIAM volume 8(2), jun of 1960, p300–304. url=http://links.jstor.org/sici?sici=0368-4245%28196006%298%3A2%3C300%3APCOCFF%3E2.0.CO%3B2-2 (8) Bernhard Riemann. Ueber die Anzahl der Primzahlen unter einer gegebenen Grosse. Monatsberichte der Berliner Akademie, November 1859. (9) https://en.wikipedia.org/wiki/Prime_number_theorem. (10) Irving S. Reed and Gustav Solomon. Polynomial codes over certain finite fields. J. SIAM, 8:300-304, 1960. (11) Richard C. Singleton. Maximum distance q-nary codes. IEEE Transactions on Information Theory, 10(2):116–118, April 1964. (12) Sejnowski T.J., Rosenberg C.R.(1987).Parallel networks that learn to pronounce english text. Journal of Complex Systems,1(1), 145-168. (13) Shu Lin; Daniel Costello (2005). Error Control Coding (2 ed.). Pearson. ISBN 0-13-017973-6. Chapter 4. (14) L. R. Vermani. Elements of Algebraic Coding Theory. CRC Press, 1996. (15) E. Guerrini and M. Sala. A classification of MDS binary systematic codes. BCRI preprint, www.bcri.ucc.ie 56, UCC, Cork, Ireland, 2006. (16) T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of artificial intelligence research, pp. 263–286, 1995. (17) E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. The Journal of Machine Learning Research, 1:113–141, 2001. (18) A. Passerini, M. Pontil, and P. Frasconi. New results on error correcting output codes of kernel machines. Neural Networks, IEEE Transactions on, 15(1):45–54, 2004. (19) Langford, J., and Beygelzimer, A. 2005. Sensitive error correcting output codes. In COLT. (20) G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006 (21) S. Escalera, O. Pujol, and P. Radeva. Ecoc-one: A novel coding and decoding strategy. In ICPR, volume 3, pp. 578–581, 2006. (22) O. Pujol, P. Radeva, and J. Vitria. Discriminant ECOC: a heuristic method for application dependent design of error correcting output codes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 28(6):1007–1012, 2006. (23) O. Pujol, S. Escalera, and P. Radeva. An incremental node embedding technique for error correcting output codes. Pattern Recognition, 41(2):713–725, 2008. (24) S. Escalera, O. Pujol, and P. Radeva. Separability of ternary codes for sparse designs of errorcorrecting output codes. Pattern Recognition Letters, 30(3):285–297, 2009. (25) G. Zhong, K. Huang, and C.-L. Liu. Joint learning of error-correcting output codes and dichotomizers from data. Neural Computing and Applications, 21(4):715–724, 2012. (26) G. Zhong and M. Cheriet. Adaptive error-correcting output codes. In IJCAI, 2013. (27) G. Zhong and C.-L. Liu. Error-correcting output codes based ensemble feature extraction. Pattern Recognition, 46(4):1091–1100, 2013. (28) Yang, Luo, Loy, Shum, Tang. Deep Representation Learning with Target Coding. Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015 :3848-3854. (29) Berger, A.: Error-Correcting Output Coding for text classification. 
In: IJCAI (1999). (30) Ghani, R. Using error-correcting codes for text classification. In Proceedings of ICML-00, 17th International Conference on Machine Learning, Stanford, US, pp. 303–310. Morgan Kaufmann, San Francisco, US. (31) Ghani, R. Using error-correcting codes for efficient text classification with a large number of categories. KDD Lab Project Proposal.
Enumerating Multiple Equivalent Lasso Solutions Yannis Pantazis${}^{1}$, Vincenzo Lagani${}^{1,2}$, Ioannis Tsamardinos${}^{1,2}$ ${}^{1}$Computer Science Department, University of Crete, Heraklion, 70013, Greece; ${}^{2}$Gnosis Data Analysis PC, Palaiokapa 64, 71305, Heraklion, Greece Abstract Predictive modelling is a data-analysis task common in many scientific fields. However, it is not widely appreciated that multiple predictive models can perform equally well on the same problem. This multiplicity often leads to poor reproducibility when searching for a unique solution in datasets with a low number of samples, a high-dimensional feature space and/or high levels of noise, a common scenario in biology and medicine. Lasso regression is one of the most powerful and popular regularization methods, yet it too produces a single, sparse solution. In this paper, we show that nearly-optimal Lasso solutions exist whose out-of-sample statistical error is practically indistinguishable from that of the optimal one. We formalize various notions of equivalence between Lasso solutions, and we devise an algorithm to enumerate the ones that are equivalent in a statistical sense: we define a tolerance on the root mean squared error (RMSE) which creates an RMSE-equivalent Lasso solution space. Results in both regression and classification tasks reveal that the out-of-sample error due to the RMSE relaxation is within the range of the statistical error due to the sampling size. 1 Introduction Feature selection allows the identification of predictive models that are free of irrelevant or noisy predictors. The latter render the model difficult to interpret and can negatively affect predictive performance. Traditionally, feature selection focuses on identifying a single, optimal set of variables [1]. However, multiple, equivalent sets of predictors often exist [2], especially in fields where low sample size, high dimensionality and the presence of noise can make predictors indistinguishable from each other on the basis of their (partial) association with the outcome. Biology and medicine are prototypical fields where these adverse conditions are met. In a seminal paper [3], Ein-Dor and co-authors demonstrate that multiple, equivalent prognostic signatures for breast cancer can be found just by analyzing the same dataset with different partitions into training and test sets, showing that several genes exist which are practically exchangeable in terms of predictive power. Statnikov and Aliferis [4] further show that the presence of multiple optimal signatures is not a rare occurrence, and is actually common in biological datasets. Retrieving all the equivalent predictive signatures is important for several reasons. First, characterizing all equivalent predictive sets can provide insights into the mechanisms generating the data, which is often a major goal in feature selection studies. Moreover, the availability of alternative signatures can have practical applications whenever one of the signatures is too expensive or impractical to measure and must be replaced with another one. Despite the relevance of the problem, to date only a few algorithms address it directly. Two constraint-based approaches are the Target Information Equivalence (TIE${}^{*}$ [5]) and the Statistically Equivalent Signatures (SES [2]) algorithms. Both operate on the basis of conditional independence tests, which are used to identify and discard predictors that are irrelevant to the problem at hand.
SES applies additional tests for assessing whether a variables is discarded for being equivalent to one that is already selected, while TIE${}^{*}$ identify equivalencies by running its feature selection procedure multiple times on subsets of the same dataset. Another algorithm which has been recently proposed by Cox and Battey [6] searches for multiple solutions across a large number of separate analyses. Unfortunately, both the Cox–Battey method and TIE${}^{*}$ are computationally intensive due to their brute-force nature. In this work, we show that multiple equivalent solutions can be also found for the popular least absolute shrinkage and selection operator (Lasso) method [7]. Lasso is a widely-used inference tool with applications in several scientific areas such as machine learning, bioinformatics, compressed sensing and statistical inference. The success of Lasso stems from its ability to select a subset of variables and thus produce statistical models that are easily interpretable and enjoy enhanced prediction accuracy, especially, when the number of samples is smaller compared to their dimension. Given an outcome vector $y\in\mathbb{R}^{n}$, a matrix $X\in\mathbb{R}^{n\times p}$ with the predictor variables and the coefficient vector $\beta\in\mathbb{R}^{p}$, the Lagrangian form of Lasso is defined as $$\min_{\beta\in\mathbb{R}^{p}}L(\beta):=\min_{\beta\in\mathbb{R}^{p}}\frac{1}{2% }||y-X\beta||_{2}^{2}+\lambda||\beta||_{1}$$ (1) where $\lambda\geq 0$ is a penalty parameter. Lasso solvers such as LARS [8] and FISTA [9] have been extensively applied in thousands of real problems. Additionally, extensions of (1) for generalized linear models such as logistic regression with $l_{1}$-norm regularization have been proposed [10]. When the entries of $X$ are drawn from a continuous probability distribution, the uniqueness of the Lasso solution is almost surely guaranteed [11]. However, when the predictor variables take discrete values or there exists strong collinearity then several optimal Lasso solutions exist and they share the same cost function value, $L(\cdot)$. We name them strongly-equivalent Lasso solutions, SELSs . In [11], the author provided various characterizations of the space of all SELSs which forms a convex polytope (i.e., a bounded polyhedron). Enumerating the vertices of the polytope is adequate to determine all SELSs, which can be done by applying specific algorithms to the set of equalities and inequalities characterizing the polytope [12]. Having the same cost function value is, however, a very strict criterion and in most real datasets the optimal Lasso solution is unique. We propose to relax the definition of equivalence to having the same in-sample root mean squared error (RMSE). This relaxation leads to the definition of RMSE-equivalent Lasso solutions (RELS), which are solutions whose RMSE is below a specific threshold. Inspired by the characterization of the SELS space, we devise an algorithm that computes a maximal subspace of the RELS space. A set of constraints that guarantees sparsity as well as the key characteristics of the Lasso solution(s) is imposed. Furthermore, we provided theoretical bounds for the RMSE that relates the user-defined tolerance with the spectral properties of the measurement matrix, $X$. Two datasets, corresponding to a regression and a classification task respectively, are used for experimental evaluation. 
Each dataset is repeatedly split into equally-sized training and test sets, with the former used for deriving multiple RELSs and the latter for evaluation purposes. RMSE values on the training set are within the pre-defined tolerance for all RELSs, while the testing RMSE is slightly outside the tolerance for some RELSs. Nevertheless, the variation in testing RMSE due to the specific split of the data is at least twice as high as the testing RMSE variation across RELSs within each repetition. Thus, the testing error introduced by the relaxation is usually smaller than the variance in performance that can be obtained simply by splitting the data into different training and test sets. Moreover, in both datasets we also observe significant variability in the number of solutions across repetitions, with the number of RELSs ranging from one up to several thousand depending on the specific split used. We remark here that studies based on bootstrapping [13], subsampling [14] or locally perturbing the support of the Lasso solution [15] propose to run the Lasso multiple times and establish the best (unique) solution. Their goal is to improve the ordinary Lasso solution in terms of either accuracy or stability. However, this differs from our goal, which is to determine the set of all equivalent Lasso solutions. The rest of the paper is organized as follows. Section 2 defines the notions of equivalence between Lasso solutions. Section 3 presents the mathematical formulation of the set of all SELSs as a convex polytope and then the enumeration of the polytope's vertices. Section 4 generalizes to the case where a tolerance on the statistical error is allowed, while Section 5 presents a categorization of the predictor variables according to their presence in all equivalent solutions. Finally, Section 6 demonstrates multiple solutions and their statistics on real datasets. 2 Definition of Lasso Solutions Equivalence We present two definitions of equivalence for the Lasso inference problem. The first is strong equivalence, which has been studied in [11], while the second is a relaxed equivalence in which statistical error up to a tolerance is allowed. 2.1 Strong equivalence Given $\lambda>0$, two vectors $\hat{\beta},\hat{\beta}^{\prime}\in\mathbb{R}^{p}$ are Lasso equivalent solutions in the strong sense if and only if $L(\hat{\beta})=L(\hat{\beta}^{\prime})$. The set of all SELSs is defined as $$K:=\{x\in\mathbb{R}^{p}:L(x)=\min_{\beta}L(\beta)\}\ ,$$ (2) and it will be further studied in Section 3. Additionally, two SELSs not only share the same cost function value but also predict the same values for the target variable [11] (i.e., $X\hat{\beta}=X\hat{\beta}^{\prime}$). Consequently, SELSs have the same $l_{1}$ norm, $||\hat{\beta}||_{1}=||\hat{\beta}^{\prime}||_{1}$. Another characteristic of all SELSs is that the non-zero coefficients are not allowed to flip their sign [11]. Hence, if $\hat{\beta}_{i}>0$ for the $i$-th coefficient of a Lasso solution, then $\hat{\beta}_{i}^{\prime}\geq 0$ for any SELS $\hat{\beta}^{\prime}$, and similarly for the coefficients with negative values. 2.2 RMSE equivalence A less restrictive equivalence is to require two solutions to have similar performance. We will search for solutions whose performance metric differs by a small tolerance value from that of a given solution. Moreover, in order to avoid handling absolute quantities, we suggest working with relative performance equivalence.
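Before the relaxed notion is formalized below, a small numerical illustration of the strong equivalence of Section 2.1: an exactly duplicated predictor makes the Lasso cost of Eq. (1) blind to how the weight is split between the two copies. The sketch uses illustrative values and hand-picked coefficient vectors, not actual minimizers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns 0 and 1 of X are identical, so the cost L(beta) of Eq. (1) cannot
# distinguish how the weight is split between them: if one split is optimal,
# every split with the same total weight (and the same signs) is optimal too.
n, lam = 30, 0.1
x1 = rng.standard_normal(n)
X = np.column_stack([x1, x1, rng.standard_normal(n)])
y = 2.0 * x1 + 0.1 * rng.standard_normal(n)

def lasso_cost(beta):
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.abs(beta).sum()

b1 = np.array([1.8, 0.0, 0.0])   # all weight on the first copy
b2 = np.array([0.9, 0.9, 0.0])   # weight split between the two copies
print(np.isclose(lasso_cost(b1), lasso_cost(b2)))  # True: identical fit and identical l1 norm
```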
Mathematically, denoting by $D(\cdot)$ the performance metric, $\hat{\beta}$ the given solution and $TOL$ a tolerance, we say that $\bar{\beta}$ is performance equivalent to $\hat{\beta}$ if the following relation on the relative performance metric is satisfied $$D(\bar{\beta})\leq(1+TOL)D(\hat{\beta})\ .$$ (3) Several performance measures such as the log-likelihood or the cost function which may depend on the inference problem at hand (classification, regression, survival, etc.) could participate in the definition of relaxed equivalence. However, controlling only the performance metric may add unnecessary redundancy to the relaxed solutions destroying the desired property of sparsity. Indeed, a potentially large number of irrelevant variables may satisfy the performance constraint for a given $TOL$, nonetheless, they should not belong to the set of equivalent solutions. To alleviate this issue, we propose to restrict the relaxed solution space by allowing only the non-zero coefficients of the given solution to vary. Proceeding, RMSE is a performance metric that can be directly applied for Lasso regression. Thus, given a Lasso solution, $\hat{\beta}$, whose support (or active set) is defined by $supp(\hat{\beta}):=\{i:\hat{\beta}\neq 0\}$ and a tolerance $TOL$, we define the set of RMSE-equivalent Lasso Solutions (RELSs) as $$\displaystyle K_{TOL}(\hat{\beta})$$ $$\displaystyle:=\{x\in\mathbb{R}^{p}:x_{-supp(\hat{\beta})}=0\ \ \&$$ (4) $$\displaystyle RMSE(x)\leq(1+TOL)RMSE(\hat{\beta})\}$$ where the root mean squared error metric is defined by $RMSE(\beta):=\frac{1}{\sqrt{n}}||y-X\beta||_{2}$ while $-A$ denotes the complement set of $A$. The following proposition is a direct consequence of the fact that the RMSE is a convex function with respect to $\beta$. Proposition 2.1. The set $K_{TOL}(\hat{\beta})$ of all RELSs is convex. Proof. Let $\bar{\beta}_{1},\bar{\beta}_{2}\in K_{TOL}(\hat{\beta})$ and $c_{1},c_{2}\geq 0$ such that $c_{1}+c_{2}=1$, then for $\bar{\beta}:=c_{1}\bar{\beta}_{1}+c_{2}\bar{\beta}_{2}$ we have $$\displaystyle||y-X\bar{\beta}||_{2}=||y-X(c_{1}\bar{\beta}_{1}+c_{2}\bar{\beta% }_{2})||_{2}$$ $$\displaystyle=||c_{1}(y-X\bar{\beta}_{1})+c_{2}(y-X\bar{\beta}_{2})||_{2}$$ $$\displaystyle\leq c_{1}||y-X\bar{\beta}_{1}||_{2}+c_{2}||y-X\bar{\beta}_{2}||_% {2}$$ $$\displaystyle\leq c_{1}(1+TOL)||y-X\hat{\beta}||_{2}+c_{2}(1+TOL)||y-X\hat{% \beta}||_{2}$$ $$\displaystyle=(1+TOL)||y-X\hat{\beta}||_{2}$$ Thus, $\bar{\beta}\in K_{TOL}(\hat{\beta})$ and convexity is proved. ∎ Remark. Other performance metrics can be utilized. Particularly, when the Lasso cost function is used as a performance metric, the support of $\hat{\beta}$ is maximal and $TOL$ is set to 0 then the strong equivalence is obtained. 3 Strongly-Equivalent Lasso Solutions Given a Lasso solution, $\hat{\beta}$, the so-called equicorrelation set $\mathcal{E}$ is defined as [11] $$\mathcal{E}:=\{i\in\{1,...,p\}:|X_{i}^{T}(y-X\hat{\beta})|=\lambda\}\ .$$ (5) The equicorrelation set is unique because all SELSs have equal fitted values. It holds that $\hat{\beta}_{-\mathcal{E}}=0$ for any SELS, thus, any SELS has active set that is a subset of $\mathcal{E}$. In other words, $\mathcal{E}$ is the largest active set. The equicorrelation sign vector, $s\in\mathbb{R}^{|\mathcal{E}|}$, is defined by $$s:=sign\big{(}X_{\mathcal{E}}^{T}(y-X\hat{\beta})\big{)}$$ (6) where $X_{\mathcal{E}}$ is the matrix that contains only the columns of $X$ that are indexed by the equicorrelation set $\mathcal{E}$. 
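In practice, $\mathcal{E}$ and $s$ can be read off a fitted solution directly from Eqs. (5)-(6); the following is a minimal sketch, where the relative tolerance anticipates the numerical issue discussed in Section 3.1 and the names are illustrative:

```python
import numpy as np

def equicorrelation_set(X, y, beta_hat, lam, rtol=1e-6):
    """Sketch of Eqs. (5)-(6): indices i with |X_i^T (y - X beta_hat)| = lambda,
    matched up to a relative tolerance because of numerical/statistical error,
    together with the equicorrelation sign vector s."""
    resid_corr = X.T @ (y - X @ beta_hat)
    E = np.where(np.abs(np.abs(resid_corr) - lam) <= rtol * lam)[0]
    s = np.sign(resid_corr[E]).astype(int)
    return E, s
```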
With a slight abuse of notation since we are restricting the solution space only to the variables with non-zero coefficients, the set of all SELSs can be rewritten as $$K=\{x\in\mathbb{R}^{|\mathcal{E}|}:X_{\mathcal{E}}x=X_{\mathcal{E}}\hat{\beta}% _{\mathcal{E}}\ \&\ Sx\geq 0\}$$ (7) where $\hat{\beta}$ is an arbitrary Lasso solution while $S:=diag(s)$ is the diagonal matrix with the signs. The first constraint ensures that all elements in $K$ will have the same fitted value while the second constraint ensures that coefficients’ sign will not flip. Another representation for the convex polytope $K$ is by observing that $X_{\mathcal{E}}x=X_{\mathcal{E}}\hat{\beta}_{\mathcal{E}}\Rightarrow X_{% \mathcal{E}}(x-\hat{\beta}_{\mathcal{E}})=0$. Hence, if $b:=x-\hat{\beta}_{\mathcal{E}}$ then $b$ belongs to the Null space of $X_{\mathcal{E}}$, and, $K$ can be rewritten as $K=\{b\in Null(X_{\mathcal{E}}):S(b+\hat{\beta}_{\mathcal{E}})\geq 0\}$. 3.1 Enumeration Algorithm A bounded polyhedron (i.e., a polytope) can be represented either as the convex hull of a finite set of vertices or by using a combination of linear constraint equalities and inequalities. In particular, the vertices of the convex polytope, $K$, defined above corresponds to the “extreme” SELSs. Thus, enumerating the vertices of $K$ from the set of equality and inequality constraints, we can enumerate all SELSs. There exist algorithms that enumerate the vertices defined by a set of inequality constraints (see Fukuda et al. [16] and the references therein) and can be extended to take into account equality constraints, too. In this paper, we employed the Matlab package by Jacobson [12] which contains tools for converting between the (in)equality and the vertices representations. As a benchmark example, we present a linear regression model with 1000 variables which are sampled identically and independently by a standard Gaussian distribution. We augment the feature space by $X_{1001}=\frac{1}{2}(X_{2}+X_{3})$, $X_{1002}=\frac{1}{2}(X_{4}+X_{5})$ and $X_{1003}=\frac{1}{4}(X_{2}+X_{3}+X_{4}+X_{5})$ while the target variable is defined as $y=-X_{1}+X_{2}+X_{3}+X_{4}+X_{5}$. For demonstration clarity, measurement noise was not added to the target variable. We set $\lambda=10^{-4}$ and compute a Lasso solution from $n=100$ samples using Matlab’s lasso.m function. Then, we compute the equicerrelation set, $\mathcal{E}$, and the sign vector $s$. Due to statistical inconsistencies, the exact equation constraint in the definition of $\mathcal{E}$ is not satisfied. Thus, we define a region around $\lambda$ where if the absolute correlation value is in this region, the corresponding variable is included in $\mathcal{E}$. Experiments not shown here revealed that for low sample sizes the computed correlations do not match their theoretic values. An alternative approach for estimating the equicorrelation set is to run elastic net since it has been shown in [11] that its active set is exactly $\mathcal{E}$ at the limit of zeroing the $l_{2}$ regularization term. We then run the vertex enumeration algorithm and extreme SELSs are reported in Table 1. All SELSs can be expressed as a linear combination of these five extremal points. Finally, we remark that the number of vertices, like the number of edges and faces, can grow exponentially fast with the dimension of the polytope making the enumeration algorithm impractical when $Null(X_{\mathcal{E}})$ has dimension larger than 20. In such cases, we cannot enumerate all the extreme SELSs. 
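For completeness, the constraint set of Eq. (7) can be assembled into a standard H-representation $Ax\leq b$ that a vertex-enumeration routine (such as the cdd/lrs-based tools cited above, or the Matlab package of [12]) accepts. The sketch below is written in Python; the small slack parameter is an implementation choice for numerical robustness, not part of the paper:

```python
import numpy as np

def sels_polytope_Hrep(X_E, beta_E, s, eps=1e-9):
    """Sketch of the H-representation of K in Eq. (7), written as A x <= b.
    The equality X_E x = X_E beta_hat_E is encoded as two opposite inequalities,
    and the sign constraint S x >= 0 as -S x <= 0."""
    S = np.diag(s)
    fit = X_E @ beta_E
    A = np.vstack([X_E, -X_E, -S])
    b = np.concatenate([fit + eps, -fit + eps, np.zeros(len(s))])
    return A, b   # the vertices of {x : A x <= b} are the extreme SELSs
```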
Nevertheless, it would be useful to know which of the variables participate in all SELSs and which are not. In the above benchmark example, $X_{1}$ participated in all SELSs while the other variables with at least one solution with non-zero coefficient participated in some of the solutions. In Section 5, a practical categorization of the variables into dispensable (participate in some solutions) and indispensable (participate in all solutions) which has been firstly introduced in [11] is presented and extended. 4 RMSE-equivalent Lasso Solutions Even though we limit the active set of equivalent solutions to a subset of the given solution’s active set, enumerating all RELSs is still hard. Indeed, the geometric shape of $K_{TOL}(\hat{\beta})$ is curved resulting in a infinite number of extreme RELSs. Hence, we restrict the solutions that we will eventually enumerate to a subspace of $K_{TOL}(\hat{\beta})$. Inspired by the representations of SELSs, we propose an approach that enumerates the vertices of the largest convex polytope of the RELSs space that produces RMSE less than $TOL$. The central idea is to relax the null space constraint while keeping intact the constraint on the sign of the coefficients. We rewrite the null space constraint using the singular value decomposition for $X_{\mathcal{E}}$. Let $X_{\mathcal{E}}$ be decomposed as $$X_{\mathcal{E}}=U_{\mathcal{E}}\Sigma_{\mathcal{E}}V_{\mathcal{E}}^{T}\ ,$$ (8) where $U_{\mathcal{E}}$ and $V_{\mathcal{E}}$ are orthogonal matrices with the left and right space eigenvectors, respectively, while $\Sigma_{\mathcal{E}}$ is a diagonal matrix with the ordered absolute eigenvalues. Assume that $\sigma_{i}\neq 0$ for $i=1,...,i^{+}$ and $\sigma_{i}=0$ for $i=i^{+}+1,...,|\mathcal{E}|$, then, the constraint $X_{\mathcal{E}}(x-\hat{\beta}_{\mathcal{E}})=0$ in (7) is equivalent to $V_{\mathcal{E}}^{+}(x-\hat{\beta}_{\mathcal{E}})=0$ where $V_{\mathcal{E}}^{+}=[v_{1}|...|v_{i^{+}}]$ corresponds to the matrix with the eigenvectors whose eigenvalues are non-zero. Inspired by the above representation, we propose to relax the null space constraint to $$V_{\mathcal{E}}^{*}x=V_{\mathcal{E}}^{*}\hat{\beta}_{\mathcal{E}}$$ (9) where $V_{\mathcal{E}}^{*}=[v_{1}|...|v_{i^{*}}]$ as above while $i^{*}$ is an integer between $1$ and $|\mathcal{E}|$ to be specified later. Thus, the convex polytope for the relaxed Lasso solutions is defined as $$K^{*}:=\{x\in\hat{\beta}_{\mathcal{E}}+[-l,l]^{|\mathcal{E}|}:V_{\mathcal{E}}^% {*}x=V_{\mathcal{E}}^{*}\hat{\beta}_{\mathcal{E}}\ \&\ Sx\geq 0\}\ .$$ (10) We further constrain the relaxed Lasso solutions to live in a box and have coefficients that are at most $l$ far away from the given Lasso solution, $\hat{\beta}$. This constraint is added so as to guarantee the boundedness of the polyhedron defined by the other two constraint. We choose to set the box size to be $l=||\hat{\beta}||_{\infty}=\max_{i}\hat{\beta}_{i}$ which allows any coefficient to vary till the zero value. Figure 1(a) depicts the RELS space and the relaxed polytope, $K^{*}$. Our goal is to choose appropriately $i^{*}$ so as $K^{*}$ is a maximal subset of the RELS space. Figure 1(b) demonstrates a two-dimensional example where the Lasso regression problem has a unique solution (red circle). We set $i^{*}=1$ hence the relaxed set of Lasso solutions constitutes the red line that connects the positive axes determined by the direction of $v_{2}$. 
The two vertex (solid red dots) are then further tested for ensuring that their relative RMSE is below the maximum tolerance. Before proceeding with an algorithm that enumerates a subset of the RELS space, a theoretical bound in terms of RMSE for the elements of $K^{*}$ is derived next. Theorem 4.1. Let $\hat{\beta}$ be a Lasso solution. For any $\bar{\beta}\in K^{*}$ it holds that $$RMSE(\bar{\beta})\leq RMSE(\hat{\beta})+\sqrt{\frac{|\mathcal{E}|}{n}}\sigma_{% i^{*}+1}2l\ .$$ (11) Proof. First, apply the Minkowski inequality as follows $$\displaystyle||y-X_{\mathcal{E}}\bar{\beta}_{\mathcal{E}}||_{2}$$ $$\displaystyle=||y-X_{\mathcal{E}}\hat{\beta}_{\mathcal{E}}+X_{\mathcal{E}}(% \hat{\beta}_{\mathcal{E}}-\bar{\beta}_{\mathcal{E}})||_{2}$$ $$\displaystyle\leq||y-X_{\mathcal{E}}\hat{\beta}_{\mathcal{E}}||_{2}+||X_{% \mathcal{E}}(\hat{\beta}_{\mathcal{E}}-\bar{\beta}_{\mathcal{E}})||_{2}$$ Next, define the matrices $\bar{V}_{\mathcal{E}}^{*}:=[v_{i^{*}+1}|...|v_{|\mathcal{E}|}]$, $$\Sigma_{\mathcal{E}}^{*}=\begin{bmatrix}diag(\sigma_{1},...,\sigma_{i^{*}})&0% \\ 0&0\end{bmatrix}$$ and $$\bar{\Sigma}_{\mathcal{E}}^{*}=\begin{bmatrix}0&0\\ 0&diag(\sigma_{i^{*}+1},...,\sigma_{|\mathcal{E}|})\end{bmatrix}\ .$$ It holds that $V_{\mathcal{E}}=[V_{\mathcal{E}}^{*}|\bar{V}_{\mathcal{E}}^{*}]$ as well as $\Sigma_{\mathcal{E}}=\Sigma_{\mathcal{E}}^{*}+\bar{\Sigma}_{\mathcal{E}}^{*}$. Using these matrices we write $$\displaystyle X_{\mathcal{E}}\bar{\beta}_{\mathcal{E}}$$ $$\displaystyle=U_{\mathcal{E}}\Sigma_{\mathcal{E}}V_{\mathcal{E}}^{T}\bar{\beta% }_{\mathcal{E}}=U_{\mathcal{E}}\Sigma_{\mathcal{E}}^{*}(V_{\mathcal{E}}^{*})^{% T}\bar{\beta}_{\mathcal{E}}+U_{\mathcal{E}}\bar{\Sigma}_{\mathcal{E}}^{*}(\bar% {V}_{\mathcal{E}}^{*})^{T}\bar{\beta}_{\mathcal{E}}$$ $$\displaystyle=U_{\mathcal{E}}\Sigma_{\mathcal{E}}^{*}(V_{\mathcal{E}}^{*})^{T}% \hat{\beta}_{\mathcal{E}}+U_{\mathcal{E}}\bar{\Sigma}_{\mathcal{E}}^{*}(\bar{V% }_{\mathcal{E}}^{*})^{T}\bar{\beta}_{\mathcal{E}}$$ $$\displaystyle=U_{\mathcal{E}}\Sigma_{\mathcal{E}}V_{\mathcal{E}}^{T}\hat{\beta% }_{\mathcal{E}}+U_{\mathcal{E}}\bar{\Sigma}_{\mathcal{E}}^{*}(\bar{V}_{% \mathcal{E}}^{*})^{T}(\bar{\beta}_{\mathcal{E}}-\hat{\beta}_{\mathcal{E}})$$ $$\displaystyle=X_{\mathcal{E}}\hat{\beta}_{\mathcal{E}}+U_{\mathcal{E}}\bar{% \Sigma}_{\mathcal{E}}^{*}(\bar{V}_{\mathcal{E}}^{*})^{T}(\bar{\beta}_{\mathcal% {E}}-\hat{\beta}_{\mathcal{E}})$$ Hence, we bound the error term in the inequality above by $$\displaystyle||X_{\mathcal{E}}(\hat{\beta}_{\mathcal{E}}-\bar{\beta}_{\mathcal% {E}})||_{2}$$ $$\displaystyle=||U_{\mathcal{E}}\bar{\Sigma}_{\mathcal{E}}^{*}(\bar{V}_{% \mathcal{E}}^{*})^{T}(\bar{\beta}_{\mathcal{E}}-\hat{\beta}_{\mathcal{E}})||_{2}$$ $$\displaystyle\leq||U_{\mathcal{E}}\bar{\Sigma}_{\mathcal{E}}^{*}(\bar{V}_{% \mathcal{E}}^{*})^{T}||_{2}||\bar{\beta}_{\mathcal{E}}-\hat{\beta}_{\mathcal{E% }}||_{2}$$ where we used the vector-induced matrix norm $||A||_{2}:=\sup_{x\neq 0}\frac{||Ax||_{2}}{||x||_{2}}$ which is also known as the spectral norm. 
It holds that $||A||_{2}=\sigma_{max}(A)$; thus the bound is rewritten as $$||X_{\mathcal{E}}(\hat{\beta}_{\mathcal{E}}-\bar{\beta}_{\mathcal{E}})||_{2}\leq\sigma_{i^{*}+1}||\bar{\beta}_{\mathcal{E}}-\hat{\beta}_{\mathcal{E}}||_{2}$$ The proof is completed by observing that the box constraint results in the estimate $$||\bar{\beta}_{\mathcal{E}}-\hat{\beta}_{\mathcal{E}}||_{2}\leq\sqrt{|\mathcal{E}|}2l\ .$$ ∎ 4.1 Enumeration algorithm An algorithm that enumerates a subset of the set of all RELSs is proposed in Algorithm 1. It takes as input a Lasso solution, a tolerance $TOL$ and the maximum dimension $d_{max}$ of the subspace that will be explored. In the for loop, the dimension of the subspace is increased starting from $i_{0}$ (the theoretical bound) until the maximum value is reached or there are vertices whose RMSE is above the tolerance. As the dimension of the exploration subspace increases, so does the chance of finding solutions whose RMSE exceeds the tolerance. If such solutions occur, we break out of the loop and discard the solutions that exceed the tolerance. Due to the convexity of RMSE, the polytope defined by the remaining solutions is also a subset of the RELS space. Thus, the algorithm is sound. Additionally, a variant of binary search could be utilized instead of the linear search for identifying the optimal $i^{*}$. However, the gain in computational time would be minimal, because the number of vertices grows exponentially fast and the computational cost of the last iteration is therefore comparable to the total cost of all previous iterations. Remark. We also tried alternative definitions of the subset $K^{*}$, in which different properties of the strongly-equivalent characterization are relaxed. For instance, we relaxed the condition $X(\hat{\beta}-\bar{\beta})=0$ to $||X(\hat{\beta}-\bar{\beta})||\leq r$ with $r>0$, resulting, however, in optimization problems that are algorithmically intractable. Overall, the choice of RMSE as the performance measure and of the particular subset $K^{*}$ in (10) was based on mathematical simplicity and ease of practical implementation, both of which are critical for the adoption of the less popular but important idea of searching for and enumerating multiple equivalent solutions. 5 Variable Categorization Due to the exponential growth of the polytope’s vertices, complete enumeration is not always feasible. Nevertheless, there is an alternative approach to qualitatively assess the predictor variables by exploiting the fact that the feasible range of a coefficient’s value over all Lasso solutions can be efficiently computed, as was shown in [11]. Generally, it is expected that the indispensable variables are more informative than the dispensable ones in terms of predictive performance; thus, we expect to be able to exploit this categorization in future work for improving the identification of equivalent Lasso solutions. We present the existing formulation for SELSs and then extend it to RELSs. 5.1 SELSs For each $i\in\mathcal{E}$, the $i$-th coefficient’s lower bound $\hat{\beta}_{i}^{l}$ and upper bound $\hat{\beta}_{i}^{u}$ are computed by solving the linear programs $$\hat{\beta}_{i}^{l}=\min_{x}x_{i}\ \text{subject to}\ X_{\mathcal{E}}x=X_{\mathcal{E}}\hat{\beta}_{\mathcal{E}}\ \&\ Sx\geq 0\ ,$$ and $$\hat{\beta}_{i}^{u}=\max_{x}x_{i}\ \text{subject to}\ X_{\mathcal{E}}x=X_{\mathcal{E}}\hat{\beta}_{\mathcal{E}}\ \&\ Sx\geq 0\ ,$$ respectively.
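As an illustration of how these bounds can be obtained in practice, the sketch below solves the two linear programs with scipy.optimize.linprog; the helper name and the choice of solver are ours and not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def coefficient_range(X_E, beta_hat_E, i):
    """Lower and upper bounds of the i-th active coefficient over all SELSs,
    i.e. the two linear programs stated above."""
    p = len(beta_hat_E)
    c = np.zeros(p)
    c[i] = 1.0
    A_eq, b_eq = X_E, X_E @ beta_hat_E          # X_E x = X_E beta_hat_E
    S = np.diag(np.sign(beta_hat_E))
    A_ub, b_ub = -S, np.zeros(p)                # S x >= 0 rewritten as -S x <= 0
    bounds = [(None, None)] * p                 # no further bounds on the coefficients
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
    return lo.x[i], hi.x[i]
```

The RELS variant of Section 5.2 is obtained by replacing the equality constraint with $V_{\mathcal{E}}^{*}x=V_{\mathcal{E}}^{*}\hat{\beta}_{\mathcal{E}}$ and adding the box constraint through the `bounds` argument.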
If $0$ is an element of the interval $[\hat{\beta}_{i}^{l},\hat{\beta}_{i}^{u}]$ then the $i$-th variable is called dispensable; otherwise it is called indispensable and it participates in all Lasso solutions. The fact that the sign of a coefficient remains the same in all SELSs implies that dispensable variables have either $\hat{\beta}_{i}^{l}=0$ or $\hat{\beta}_{i}^{u}=0$, while indispensable variables have either $\hat{\beta}_{i}^{l}>0$ or $\hat{\beta}_{i}^{u}<0$. From a practical perspective, the number of linear programs to be solved is $2|\mathcal{E}|$, which is feasible. 5.2 RELSs Similarly, we can relax the criterion for dispensable/indispensable variables. Because the linear programs involved impose no computational limitation, we can discard the control on the subspace dimension (i.e., there is no need for $d_{max}$). The linear programs for the lower and upper bounds for a given $i^{*}$ are $$\bar{\beta}_{i}^{l}=\min_{x}x_{i}\ \text{s.t.}\ V_{\mathcal{E}}^{*}x=V_{\mathcal{E}}^{*}\hat{\beta}_{\mathcal{E}},\ -l\leq x-\hat{\beta}_{\mathcal{E}}\leq l\ \&\ Sx\geq 0\ ,$$ and $$\bar{\beta}_{i}^{u}=\max_{x}x_{i}\ \text{s.t.}\ V_{\mathcal{E}}^{*}x=V_{\mathcal{E}}^{*}\hat{\beta}_{\mathcal{E}},\ -l\leq x-\hat{\beta}_{\mathcal{E}}\leq l\ \&\ Sx\geq 0\ ,$$ respectively. It is evident that as we increase $i^{*}$, the number of dispensable variables increases while the number of indispensable variables decreases. 6 Results Two real datasets are used for evaluating our approach. In particular, we are interested in assessing whether multiple Lasso solutions produce equivalent predictive performance on the test set, for which there is no theoretical guarantee. In order to perform statistical analysis on the obtained solutions, we split the samples into 50% training and 50% testing and repeat the splitting 100 times. We run Matlab’s Lasso implementation on the training set and use Algorithm 1 for determining the multiple Lasso solutions. The RELS solutions are then evaluated on the testing set for comparison purposes. In all analyses we set $TOL=0.01$ and $d_{max}=10$. We optimize the Lasso penalty parameter $\lambda$ using the Akaike information criterion estimated by 5-fold cross validation. The optimal value of $\lambda$ is 0.3 and 0.1 for the AquaticTox and Breast Cancer datasets, respectively. 6.1 AquaticTox dataset The AquaticTox dataset, which is taken from the package QSARdata [17], leads to a regression problem. The task is to predict the toxicity of 322 different compounds on the basis of a set of 6652 molecular descriptors produced by the software DRAGON (Talete Srl, Milano, Italy). After removing the duplicate predictor variables, the number of unique predictors dropped to 3825. The average standard deviation across repetitions of the training-set error, normalized by the mean error, is $0.0088$, which, as expected, is below the $1\%$ tolerance. The same quantity for the testing error is $0.0150$, which is approximately $70\%$ higher than that of the training error. Nevertheless, the uncertainty due to the sample splitting is even higher than the RMSE variability, since the relative standard deviations of the mean error for the training and testing sets are $0.0235$ and $0.0268$, respectively. Thus, we can argue that the testing error is within the uncertainty region stemming from the limited sample size.
Additionally, the number of equivalent solutions varies from iteration to iteration, with the lowest number being 6 and the largest in this experiment being 913. The left panel of Figure 2 reports the training (blue) and testing (green) RMSE values for all solutions obtained across 10 randomly chosen data splittings. Vertical bars separate solutions produced in different repetitions. In the right panel, variables are sorted according to their occurrence in the active set across all 100 iterations. Dark blue corresponds to how often a variable is present in all RELSs (i.e., the indispensable variables), while light blue corresponds to the dispensable variables. Clearly, there is a gap in the occurrence distribution, revealing that only three variables have a high occurrence probability. These predictors are moe2D_lip_acc, DragonX_MLOGP and DragonX_HATSp. 6.2 Breast Cancer dataset The BreastCancer dataset discriminates between estrogen-receptor positive (ER+) and estrogen-receptor negative (ER-) tumors using 22283 gene expression measures and is available in the package breastCancerVDX [18]. The results obtained in this analysis are similar to those obtained on the previous dataset: the average standard deviation of both the training and testing error within the equivalent Lasso solutions is, respectively, 6.5 and 3 times smaller than the relative standard deviation due to sample splitting (see Figure 3, left panel). The number of equivalent solutions ranges from 1 up to 3776, and it could become even higher if the value of the tolerance were increased. In the right panel of Figure 3, it is striking that one particular feature is indispensable in all repetitions. This feature corresponds to the 205225_at probe, which targets the estrogen receptor gene ESR1. Mutations of this gene have previously been linked to the survival of breast cancer patients [robinson2013activating, clatot2017esr1]. The next two most frequently occurring features are 209602_s_at and 209604_s_at, both corresponding to the GATA3 gene. Interestingly, more than half of the equivalent solutions contain only one of the two features, showing the capacity of the proposed approach to discriminate between highly correlated features. Finally, we remark that linear regression and RMSE are not the best options for comparison in a classification task. In future work, we plan to extend the multiple Lasso solution enumeration algorithm to Lasso logistic regression as well as to other metrics, such as the area under the ROC curve, for binary classification tasks. Acknowledgments The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617393. References [1] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. Journal of machine learning research, 3(Mar):1157–1182, 2003. [2] Vincenzo Lagani, Giorgos Athineou, Alessio Farcomeni, Michail Tsagris, and Ioannis Tsamardinos. Feature selection with the r package mxm: Discovering statistically-equivalent feature subsets. Journal of Statistical Software, 80(7), 2017. [3] Liat Ein-Dor, Itai Kela, Gad Getz, David Givol, and Eytan Domany. Outcome signature genes in breast cancer: is there a unique set? Bioinformatics, 21(2):171–178, 2004. [4] A. Statnikov and C. F. Aliferis. Analysis and Computational Dissection of Molecular Signature Multiplicity.
PLoS Computational Biology, 6(5):e1000790, May 2010. [5] Alexander Statnikov, Nikita I Lytkin, Jan Lemeire, and Constantin F Aliferis. Algorithms for discovery of multiple markov boundaries. Journal of Machine Learning Research, 14(Feb):499–566, 2013. [6] D R Cox and H S Battey. Large numbers of explanatory variables, a semi-descriptive analysis. Proceedings of the National Academy of Sciences of the USA, 114(32):8592–8595, aug 2017. [7] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58(1):267–288, 1996. [8] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004. [9] A. Beck and M. Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009. [10] J. Friedman, T. Hastie, and R. Tibshirani. Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of statistical software, 33(1):1–22, 2010. [11] Ryan J. Tibshirani. The lasso problem and uniqueness. Electronic Journal of Statistics, 7:1456–1490, 2013. [12] Matt Jacobson. Analyze N-dimensional Polyhedra in terms of Vertices or (In)Equalities. https://www.mathworks.com/matlabcentral/fileexchange/30892-analyze-n-dimensional-polyhedra-in-terms-of-vertices-or–in-equalities, 2015. [13] Francis R. Bach. Bolasso: model consistent Lasso estimation through the bootstrap. In ICML ’08, pages 33–40, New York, New York, USA, 2008. [14] Nicolai Meinshausen and Peter Bühlmann. Stability selection. Journal of the Royal Statistical Society: Series B, 72(4):417–473, jul 2010. [15] S. Hara and T. Maehara. Enumerate Lasso Solutions for Feature Selection. AAAI, 2017. [16] K. Fukuda, T. M. Liebling, and F. Margot. Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron. Computational Geometry, 8(1):1–12, jun 1997. [17] Max Kuhn. QSARdata: Quantitative Structure Activity Relationship (QSAR) Data Sets. R package version 1.3. http://CRAN.R-project.org/package=QSARdata, 2013. [18] M. Schroeder, B. Haibe-Kains, A. Culhane, C. Sotiriou, G. Bontempi, and J. Quackenbush. breastCancerVDX: Gene Expression Datasets Published by Wang et al. [2005] and Minn et al. [2007] (VDX). R package version 1.6.0. http://compbio.dfci.harvard.edu/, 2011.
The Fine Structure Lines of Hydrogen in HII Regions Brian Dennison Physics Department, and Pisgah Astronomical Research and Science Education Center, University of North Carolina at Asheville, Asheville, NC 28804 dennison@unca.edu B. E. Turner National Radio Astronomy Observatory, 520 Edgement Road, Charlottesville, VA 22903-2475 bturner@nrao.edu Anthony H. Minter National Radio Astronomy Observatory, P. O. Box 2, Green Bank, WV 24944 tminter@nrao.edu Abstract The $2s_{1/2}$ state of hydrogen is metastable and overpopulated in HII regions. In addition, the $2p$ states may be pumped by ambient Lyman-$\alpha$ radiation. Fine structure transitions between these states may be observable in HII regions at 1.1 GHz ($2s_{1/2}-2p_{1/2}$) and/or 9.9 GHz ($2s_{1/2}-2p_{3/2}$), although the details of absorption versus emission are determined by the relative populations of the $2s$ and $2p$ states. The $n=2$ level populations are solved with a parameterization that allows for Lyman-$\alpha$ pumping of the $2p$ states. The Lyman-$\alpha$ pumping rate has long been considered uncertain as it involves solution of the difficult Lyman-$\alpha$ transfer problem. The density of Lyman-$\alpha$ photons is set by their creation rate, easily determined from the recombination rate, and their removal rate. Here we suggest that the dominant removal mechanism of Lyman-$\alpha$ radiation in HII regions is absorption by dust. This circumvents the need to solve the Lyman-$\alpha$ transfer problem, and provides an upper limit to the rate at which the $2p$ states are populated by Lyman-$\alpha$ photons. In virtually all cases of interest, the $2p$ states are predominantly populated by recombination, rather than Lyman-$\alpha$ pumping. We then solve the radiative transfer problem for the fine structure lines in the presence of free-free radiation. In the likely absence of Lyman-$\alpha$ pumping, the $2s_{1/2}\rightarrow 2p_{1/2}$ lines will appear in stimulated emission and the $2s_{1/2}\rightarrow 2p_{3/2}$ lines in absorption. Because the final $2p$ states are short lived these lines are dominated by intrinsic linewidth (99.8 MHz). In addition, each fine structure line is a multiplet of three blended hyperfine transitions. Searching for the 9.9 GHz lines in high emission measure HII regions offers the best prospects for detection. The lines are predicted to be weak; in the best cases, line-to continuum ratios of several tenths of a percent might be expected with line strengths of tens to a hundred mK with the Green Bank Telescope. Predicted line strengths, at both 1.1 and 9.9 GHz, are given for a number of HII regions, high emission measure components, and planetary nebulae, based upon somewhat uncertain emission measures, sizes, and structures. The extraordinary width of these lines and their blended structure will complicate detection. ISM: HII regions — ISM: lines and bands — line: formation — planetary nebulae: general — radio lines: ISM 1 Introduction Transitions between fine-structure sublevels in atomic hydrogen have never been detected astronomically. With advances in radio astronomy, however, it now appears that transitions between the $2s_{1/2}$ and $2p_{1/2}$, and $2s_{1/2}$ and $2p_{3/2}$ levels may soon be observable in HII regions. Figure 1 shows the fine and hyperfine structure of the $n=2$ level of atomic hydrogen. The fine structure splitting into the $2s_{1/2}$, $2p_{1/2}$, and $2p_{3/2}$ levels is caused by a combination of spin orbit coupling, relativistic effects and the Lamb shift. 
Significantly, the $2s_{1/2}$ state is metastable owing to the transition rules for angular momentum, i.e. the $2s$–$1s$ transition is forbidden. Hydrogen atoms in the interstellar medium are removed from the $2s_{1/2}$ state principally through 2-photon emission to the ground state (Breit and Teller 1940) with decay rate per atom, $A_{2\gamma}=8.227\ {\rm sec}^{-1}$ (Spitzer and Greenstein 1951), resulting in optical continuum radiation. Because this rate is some eight orders of magnitude slower than Lyman decay, it may seem reasonable to expect that the $2s_{1/2}$ state will be overpopulated relative to other excited states under photoionization equilibrium. It must be noted, however, that trapped Lyman-$\alpha$ radiation in an HII region will pump the $2p$ states. In addition, in dense HII regions collisions, primarily with ions, will transfer hydrogen atoms between the $2s$ and $2p$ states. These effects must be taken into account in order to determine the expected strengths and signs (absorption versus stimulated emission) of the $2s_{1/2}$-$2p_{1/2}$ (1.1 GHz) and $2s_{1/2}$-$2p_{3/2}$ (9.9 GHz) lines. Apparently, Wild (1952) was one of the first to investigate the astrophysical implications of these lines, suggesting that the $2s_{1/2}$-$2p_{3/2}$ line might be observed in the sun. Purcell (1952), however, noted that collisions in the dense solar atmosphere would equilibrate the $2s$ and $2p$ populations making detection unlikely. Townes (1957) suggested that an overpopulation of the metastable $2s$ states may lead to detectable lines in the interstellar medium. Shklovski (1960) raised the possibility of Lyman-$\alpha$ pumping of the $2p$ states but discounted its sufficiency for producing observable $2p\rightarrow 2s$ transitions. Pottasch (1960) argued that the Lyman-$\alpha$ depth in HII regions is essentially infinite such that virtually all downward Lyman-$\alpha$ transitions are balanced by a Lyman-$\alpha$ absorption, resulting in an overpopulation of $2p$ states relative to $2s$ at densities below about $10^{4}$ cm${}^{-3}$. Field & Partridge (1961) further considered the case in which Lyman-$\alpha$ radiation is effectively destroyed by collisional conversion of $2p$ states to $2s$ (followed by 2-photon decay to the ground state). The consequent overpopulation of $2p$ states then led to the prediction that the 9.9 GHz lines would appear in stimulated emission in HII regions. (This also implied that the 1.1 GHz lines would appear in absorption.) Myers & Barrett (1972) used the Haystack 37-m radiotelescope with a 16-channel filter bank spanning 1 GHz in frequency to search for the 9.9 GHz lines, but did not find any lines in excess of 0.1 K antenna temperature. Ershov (1987) estimated the strengths of both sets of lines in Orion A and W3(OH) under the assumption that the $2p$ states are negligibly populated with respect to $2s_{1/2}$. The major variation in the predictions arises from the uncertain density of Lyman-$\alpha$ radiation in HII regions, which in turn is determined by the processes that destroy Lyman-$\alpha$ radiation. In addition to the conversion of $2p$ to $2s$ states via collisions, other processes are likely to be important. Resonantly scattered Lyman-$\alpha$ photons undergo a diffusion in frequency and may thus acquire a significant probability of escaping the HII region due the much lower optical depth in the line wings (Cox & Matthews, 1969; Spitzer, 1978). Systematic gas motions within the region (e.g. 
expansion) can also contribute to the migration of Lyman-$\alpha$ photons away from line center. The dominant removal mechanism, however, is absorption by dust within the HII region (Kaplan & Pikelner, 1970; Spitzer, 1978). This last fact greatly simplifies the estimation of the Lyman-$\alpha$ density. In this paper, we obtain formulae for the $2s$ and $2p$ populations in an HII region under a parameterization that characterizes the rate of $2p$ production by Lyman-$\alpha$ absorption in terms of the rate of $2p$ production through recombination (Section 2). Existing models for dust in HII regions and planetary nebulae are then used to place tight constraints on the Lyman-$\alpha$ density and thus the $2p$ production rate by Lyman-$\alpha$ photons (Section 3). The radiative transfer problem for the fine structure lines is then solved for a uniform region (Section 4). We obtain approximate predicted line strengths for various HII regions, compact components, and planetary nebulae under the justified assumption that Lyman-$\alpha$ pumping of the $2p$ states is negligible (Section 5). High emission measure HII regions and/or planetary nebulae are most likely to yield detectable lines. 2 Population of the $2s$ and $2p$ States Spitzer & Greenstein (1951) estimated that between 0.30 and 0.35 of all recombinations in an HII region reach the $2s$ state. Including the effects of collisions and the 2-photon decay rate, the rate equation for the $2s$ state is $$f\alpha n_{i}n_{e}+C_{ps}n_{i}n_{2p}=A_{2\gamma}n_{2s}+C_{sp}n_{i}n_{2s},$$ (1) where $f$ is the fraction of recombinations producing the $2s$ state and $\alpha$ is the recombination coefficient. The collision coefficients, $C_{ps}$ and $C_{sp}$, give the appropriate rates at which atoms are transferred from $2p$ to $2s$ and from $2s$ to $2p$, respectively. These rates are dominated by collisions with protons, but including electron collisions (and assuming that $n_{e}=n_{i}$ and $T=10^{4}$ K), we have $C_{sp}=5.31\times 10^{-4}$ cm${}^{3}$ s${}^{-1}$ (Seaton, 1955). Since $kT$ is much greater than the energy separations between the $2s$ and $2p$ states, the reverse rate is determined by just the ratio of statistical weights, i.e. $$C_{ps}={g_{2s}\over g_{2p}}C_{sp}={1\over 3}C_{sp}.$$ (2) Similar considerations apply to the $2p$ states. The fraction of recombinations reaching $2p$ is $(1-f)$. [It is worth noting that processes such as Lyman-$\beta$ absorption, which ultimately produce the $2s$ state through $1s\rightarrow 3p\rightarrow 2s$, have already been taken into account in the calculation of $f$ (Spitzer & Greenstein, 1951).] Radiative decay occurs through both spontaneous Lyman-$\alpha$ emission (with $A_{21}=6.25\times 10^{8}$ sec${}^{-1}$), as well as stimulated emission. We must also include radiative excitation from the ground state via Lyman-$\alpha$ absorption. Including collisions which couple to the $2s$ states, the rate equation for $2p$ then is $$\displaystyle(1-f)\alpha n_{i}n_{e}+C_{sp}n_{i}n_{2s}+c\int n_{\nu}^{(1s)}d\nu% \int B_{12}(\nu-\nu^{\prime})n_{\nu^{\prime}}d\nu^{\prime}$$ $$\displaystyle=A_{21}n_{2p}+C_{ps}n_{i}n_{2p}+c\int n_{\nu}^{(2p)}d\nu\int B_{2% 1}(\nu-\nu^{\prime})n_{\nu^{\prime}}d\nu^{\prime},$$ (3) where $n_{\nu}^{(1s)}$ and $n_{\nu}^{(2p)}$ are the densities of atoms in the $1s$ and $2p$ states per radial velocity interval, measured in frequency units; and $n_{\nu^{\prime}}$ is the photon density per frequency interval (in cm${}^{-3}$ Hz${}^{-1}$). 
Now, $$\displaystyle B_{21}(\nu-\nu^{\prime})={c^{2}\over{8\pi\nu_{L}^{2}}}A_{21}L(% \nu-\nu^{\prime})$$ (4) $$\displaystyle{\rm and}\ \ B_{12}(\nu-\nu^{\prime})=3B_{21}(\nu-\nu^{\prime}),$$ (5) where $\nu_{L}$ is the Lyman-$\alpha$ frequency, and $L(\nu-\nu^{\prime})$ is the Lorentz line profile. For a thermal gas, $$n_{\nu}^{(1s)}={n_{1s}\over{\sqrt{\pi}\Delta\nu_{D}}}e^{-{{(\nu-\nu_{0})^{2}}% \over{(\Delta\nu)_{D}^{2}}}},$$ (6) where $(\Delta\nu)_{D}$ is the Doppler width. For $T=10^{4}$ K, $(\Delta\nu)_{D}=1.29\times 10^{11}$ Hz. Clearly thermal widths will completely dominate the natural (Lorentz) line width of Lyman-$\alpha$, and thus we can replace $L(\nu-\nu^{\prime})$ by the Dirac delta function. We then obtain for the absorption term: $$c\int n_{\nu}^{(1s)}d\nu\int B_{12}(\nu-\nu^{\prime})n_{\nu^{\prime}}d\nu^{% \prime}={{3c^{3}}\over{8\pi\nu_{L}^{2}}}A_{21}{n_{1s}\over{\sqrt{\pi}(\Delta% \nu)_{D}}}\int n_{\nu}e^{-{{(\nu-\nu_{0})^{2}}\over{(\Delta\nu)_{D}^{2}}}}d\nu.$$ (7) Of course, a similar calculation can be carried out for the stimulated emission term. Because the nebula is optically thick in Lyman-$\alpha$, $n_{\nu}$ must be considered carefully. Were escape from the nebula the primary removal mechanism, then a steady state would result in which photons are created near line center, diffuse in frequency through resonant scattering and are effectively removed far out in the wings. The decrease in both creation rate and diffusion rate with frequency offset results in a nearly flat distribution ($n_{\nu}$) out to the approximate frequency at which photons freely escape, beyond which it drops sharply (Capriotti, 1966). We write this frequency offset as $w(\Delta\nu)_{D}$, where $w$ is a dimensionless parameter expressing the offset in terms of the Doppler width. We shall therefore assume that $n_{\nu}=n_{Ly\alpha}/[2w(\Delta\nu)_{D}]$ for $|\nu-\nu_{D}|<w(\Delta\nu)_{D}$, and $n_{\nu}=0$ for $|\nu-\nu_{D}|>w(\Delta\nu)_{D}$, where $n_{Ly\alpha}$ is the density of Lyman-$\alpha$ photons. The $2p$ rate equation then becomes $$(1-f)\alpha n_{i}n_{e}+C_{sp}n_{2s}n_{i}+{{3c^{3}}\over{16\pi\nu_{L}^{2}}}{A_{% 21}\over{(\Delta\nu)_{D}}}{{\rm erf}(w)\over w}n_{Ly\alpha}n_{1s}=A_{21}n_{2p}.$$ (8) Spontaneous emission completely dominates the depopulation of the $2p$ state and thus the other two terms that appeared on the right hand side of equation (3) have been dropped. Stimulated emission is negligible in comparison with spontaneous emission for any reasonable value of $n_{Ly\alpha}$. Indeed, equality of spontaneous and stimulated emission would imply a radiation pressure (due to Lyman-$\alpha$) many orders of magnitude in excess of the thermal gas pressure. Also, the $2p\rightarrow 2s$ collision rate is negligible for any reasonable value of $n_{i}$. [We include these collisions terms insofar as they populate the $2s$ state, however. See equation (1).] As we shall see, it is quite plausible that Lyman-$\alpha$ photons are absorbed by dust before significant frequency diffusion occurs. In this case $n_{\nu}$ will simply reflect the thermal distribution of atoms. The resulting $2p$ rate equation is identical to that given above provided we replace ${{\rm erf}(w)/w}$ with $2\sqrt{2/\pi}$. At low densities, Lyman-$\alpha$ photons are created at the rate at which recombinations lead to $2p$ states, i.e. $(1-f)\alpha n_{i}n_{e}$. 
At high densities ($n_{i}>10^{4}$ cm${}^{-3}$) $2s$ states may be collisionally converted to $2p$ states leading to additional Lyman-$\alpha$ photons, and thus the Lyman-$\alpha$ creation rate could be as large as $\alpha n_{i}n_{e}$, the total recombination rate. We therefore define the Lyman-$\alpha$ lifetime as $$t_{Ly\alpha}={n_{Ly\alpha}\over{r\alpha n_{i}n_{e}}},$$ (9) where $1-f\leq r\leq 1$. Since $2p$ states decay to the ground state much faster than they could be collisionally converted to $2s$, all recombinations to $2p$ are regarded as producing a Lyman-$\alpha$ photon. Thus, $r$ could never be smaller than $1-f$. We define $$S={{3c^{3}}\over{16\pi\nu_{L}^{2}}}{A_{21}\over{(\Delta\nu)_{D}}}{r\over{(1-f)% }}{{\rm erf}(w)\over w}\chi n_{H}t_{Ly\alpha},$$ (10) where $\chi=n_{1s}/n_{H}$, and $n_{H}$ is the total hydrogen density (atomic plus ionized). The rate equations [(1) and (8)] can then be solved for the $2s$ and $2p$ populations: $$n_{2s}={{\alpha n_{i}n_{e}\big{[}fA_{21}+(1-f)(1+S)C_{ps}n_{i}\big{]}}\over{A_% {21}(A_{2\gamma}+C_{sp}n_{i})-C_{sp}C_{ps}n_{i}^{2}}}$$ (11) $$n_{2p}={{\alpha n_{i}n_{e}\big{[}(1-f)(1+S)A_{2\gamma}+(1+S-fS)C_{sp}n_{i}\big% {]}}\over{A_{21}(A_{2\gamma}+C_{sp}n_{i})-C_{sp}C_{ps}n_{i}^{2}}}.$$ (12) The dimensionless parameter $S$ gives the rate at which $2p$ states are produced by captured Lyman-$\alpha$ radiation in terms of the $2p$ creation rate from recombination (at low densities). The ratio $n_{2p}/(3n_{2s})$ provides a determination of the relative importance of Lyman-$\alpha$ pumping of the $2p$ states and whether a fine structure line is expected to appear in absorption or (stimulated) emission. If $n_{2p}/(3n_{2s})>1$, then the 9.9 GHz, $2s_{1/2}$-$2p_{3/2}$ line will appear in emission, and the 1.1 GHz, $2s_{1/2}$-$2p_{1/2}$ line will appear in absorption. Of course, $n_{2p}/(3n_{2s})<1$ implies the opposite. (This assumes that the two $2p$ states are populated according to their relative statistical weights. To evaluate cases in which $n_{2p}/(3n_{2s})$ is of order unity, it would be necessary to separately account for the rates at which $2p_{1/2}$ and $2p_{3/2}$ states are created and destroyed, including the collisional rates coupling these states.) From equations (11) and (12) it follows that $n_{2p}/(3n_{2s})>1$ if $S>S_{crit}$ where, $$S_{crit}={{3f}\over{(1-f)}}{{A_{21}}\over{A_{2\gamma}}}=1.14\times 10^{8}.$$ (13) (The numerical value of $S_{crit}$ corresponds to $f=1/3$.) Because the $2p$ states naturally decay about $10^{8}$ times faster than the $2s$ states, population equality thus requires a pumping rate some 8 orders of magnitude faster than the approximate rate at which $2s$ and $2p$ states are formed through recombination. 3 The Lyman-$\alpha$ Density in HII Regions Determination of the Lyman-$\alpha$ density in HII regions is a complicated transfer problem. It is likely, however, that the dominant mechanism for removing Lyman-$\alpha$ photons is quite straightforward, i.e. absorption by dust (Kaplan & Pikelner, 1970; Spitzer, 1978). Thus, we eschew the noncoherent radiative transfer problem and find the upper limits to $t_{Ly\alpha}$ and $S$ set by absorption. Other competing removal processes would reduce the lifetime, and therefore density, of Lyman-$\alpha$ photons, resulting in a lower value for $S$. 
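Before estimating $S$ for specific dust models, it is useful to see how the level populations implied by equations (11)–(13) behave numerically. The short Python sketch below does this; the formulas are taken directly from Section 2, but the recombination coefficient and $f=1/3$ are illustrative assumed values (the paper quotes $f$ between 0.30 and 0.35).

```python
import numpy as np

# Rate coefficients quoted in Section 2 (T = 1e4 K assumed throughout).
A_2GAMMA = 8.227        # two-photon decay rate of 2s [s^-1]
A_21 = 6.25e8           # Lyman-alpha decay rate of 2p [s^-1]
C_SP = 5.31e-4          # 2s -> 2p collision coefficient [cm^3 s^-1]
C_PS = C_SP / 3.0       # 2p -> 2s, from the ratio of statistical weights, eq. (2)
F = 1.0 / 3.0           # fraction of recombinations reaching 2s (illustrative)
ALPHA = 2.6e-13         # recombination coefficient [cm^3 s^-1] (assumed case-B value)

def n2s_n2p(n_i, n_e, S=0.0):
    """Evaluate equations (11) and (12) for the 2s and 2p number densities."""
    denom = A_21 * (A_2GAMMA + C_SP * n_i) - C_SP * C_PS * n_i ** 2
    n2s = ALPHA * n_i * n_e * (F * A_21 + (1 - F) * (1 + S) * C_PS * n_i) / denom
    n2p = ALPHA * n_i * n_e * ((1 - F) * (1 + S) * A_2GAMMA
                               + (1 + S - F * S) * C_SP * n_i) / denom
    return n2s, n2p

s_crit = 3 * F / (1 - F) * A_21 / A_2GAMMA       # equation (13), ~1.14e8

# Example: a dense HII region with no Lyman-alpha pumping (S = 0).
n2s, n2p = n2s_n2p(n_i=1e4, n_e=1e4)
print(n2p / (3 * n2s))   # << 1: the 2s state is overpopulated relative to 2p
```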
Using a silicate-graphite model for dust in HII regions (Aannestad, 1989), with a dust-to-gas ratio of 0.009, the extinction at Lyman-$\alpha$ can be shown to be $N_{H}/(5.4\times 10^{20}{\rm cm}^{-2})$ mag, where $N_{H}$ is the column density of hydrogen. The albedo for this mixture is about 0.4 at Lyman-$\alpha$ (Draine & Lee, 1984). The lifetime of Lyman-$\alpha$ photons against absorption by dust can then be calculated as $t_{Ly\alpha}=(3.3\times 10^{10}\ {\rm cm}^{-3}\ {\rm s})/n_{H}$. We then find that $$S=4.2\times 10^{7}\chi{r\over{(1-f)}}{{\rm erf}(w)\over w}\ .$$ (14) Since $\chi<<1$ throughout most of the volume of an HII region (Osterbrock, 1989), we conclude that $S<<S_{crit}$ and that the $2s$ state is overpopulated relative to the $2p$ states. In the harsh environments of planetary nebulae dust might be destroyed by shocks or hard UV radiation, or possibly separated from the ionized gas by radiation pressure (Natta & Panagia, 1981; Pottasch, 1987). Abundance measurements indicate, however, that various heavy elements are depleted from the gas phase in the ionized regions of NGC 7027 (Kingdon & Ferland, 1997) and NGC 6445 (van Hoof et al., 2000), suggesting that dust has not been destroyed in significant quantities. Additionally, planetary nebulae frequently exhibit a mid-IR spectral component characteristic of warm dust heated by the intense radiation field within the ionized region (Kwok, 1980; Hoare, 1990; Hoare et al., 1992). Summarizing these results, Barlow (1993) has argued that dust is a common constituent of the ionized zones of planetary nebulae, albeit with dust-to-gas ratios about an order of magnitude or more below that of the general ISM. Middlemass (1990) has modeled NGC7027 and finds an extinction optical depth of about 0.17 at 500.7 nm. For a uniform model the column density of hydrogen in the ionized zone is about $3.5\times 10^{21}$ cm${}^{-2}$ (Thomasson & Davies, 1970), along the radius of the nebula. For the graphite dust model used by Middlemass (1990) the extinction at Lyman-$\alpha$ then is $N_{H}/(1.3\times 10^{22}{\rm cm}^{-2})$ mag; about a factor of 20 smaller than that in a general HII region described above, and roughly consistent with the smaller dust-to-gas ratio ($\approx 7\times 10^{-4}$) in this object (Barlow, 1993). For an albedo of 0.4, the Lyman-$\alpha$ lifetime is $t_{Ly\alpha}=(7.6\times 10^{11}{\rm cm}^{-3}\ {\rm s})/n_{H}$, and thus $$S\approx 9.8\times 10^{8}\chi{r\over{(1-f)}}{{{\rm erf}(w)}\over w}.$$ (15) If we assume that absorption by dust is the dominant process limiting the Lyman-$\alpha$ density, make the replacement ${\rm erf}(w)/w\rightarrow 2\sqrt{2/\pi}$, and let $r\approx 1$, we obtain $S=2.3\times 10^{9}\chi$. Since it is quite unlikely that $\chi$ is as large as 0.05, as required for $S\approx S_{crit}$, we conclude also in this case that the $2p$ states are underpopulated relative to the $2s$ states. Of course, the above estimates are upper limits to the Lyman-$\alpha$ lifetime, as other processes (described in Section 1) may contribute to the removal of these photons. Thus, values of $S$ estimated in this way are upper limits. In addition, if escape is important then the relevant value of ${\rm erf}(w)/w$ would be smaller than that substituted above. It should also be noted that Lyman-$\alpha$ radiation contributes to heating the dust in the ionized region. The energy absorbed is then reradiated in the IR. 
In the case of NGC 7027 a mid-IR spectral component is evidently due to dust with temperature $T_{d}\approx 230$ K comixed with the ionized gas (Kwok, 1980). If the heating is dominated by Lyman-$\alpha$ radiation, then $$n_{Ly\alpha}h\nu_{L}cQ_{abs}(Ly\alpha)=4\left<Q(a,T_{d})\right>\sigma T_{d}^{4},$$ (16) where $Q_{abs}(Ly\alpha)$ is the absorption efficiency at Lyman-$\alpha$ and $\left<Q(a,T_{d})\right>$ is the Planck-averaged emissivity (Draine & Lee, 1984), a function of grain size $a$ and temperature $T$. Large grains, being efficient emitters in the IR, will yield a larger estimate of $n_{Ly\alpha}$. Thus, we assume 1 $\mu$m graphite grains for which $Q_{abs}(Ly\alpha)\approx 1$ and $\left<Q(1\ \mu{\rm m},230\ {\rm K})\right>\approx 0.05$ (Draine & Lee, 1984). Then, $$n_{Ly\alpha}={{4\sigma T^{4}}\over{h\nu_{L}c}}{{\left<Q(1\ \mu{\rm m},230\ {% \rm K})\right>}\over{Q_{abs}}}=6.5\times 10^{4}\ {\rm cm}^{-3}.$$ (17) We then find $$S={{3c^{3}}\over{16\pi\nu_{L}^{2}}}{A_{21}\over{(\Delta\nu)_{D}}}{1\over{% \alpha(1-f)}}{{\rm erf}(w)\over w}{\chi\over{(1-\chi)}}{{n_{Ly\alpha}}\over{n_% {e}}}=2.6\times 10^{10}{{{\rm erf}(w)}\over w}{\chi\over{(1-\chi)}},$$ (18) where we have taken $n_{i}=(1-\chi)n_{H}$ (since the fraction of atoms in excited, bound states is negligible). To obtain the numerical value given above we used the recombination coefficient in the density bounded case with $T=10^{4}$ K (Osterbrock, 1989). Assuming Lyman-$\alpha$ radiation is primarily removed through absorption by dust (in which case ${\rm erf}(w)/w$ is replaced by $2\sqrt{2/\pi}$), then $S<S_{crit}$ unless $\chi$ exceeds $2.4\times 10^{-3}$, which is unlikely (Osterbrock, 1989). It should also be noted that other sources, such as continuum radiation, are likely important in heating the dust, thereby reducing further the implied value of $n_{Ly\alpha}$ (and therefore $S$). Finally, the $1\ \mu$m grain size assumed here is probably an overestimate implying that $n_{Ly\alpha}$ is also overestimated. 4 Radiative Transfer of the Fine Structure Lines Evidently, the $2s$ state is overpopulated relative to $2p$, thus the fine structure transitions will proceed from $2s_{1/2}$ to $2p_{3/2}$ via absorption and to $2p_{1/2}$ via stimulated emission. Although the $2p$ populations are probably negligible, they will be included in the radiative transfer calculation. The distribution of $2p$ states between $2p_{1/2}$ and $2p_{3/2}$ may deviate somewhat from the statistical weights ($1/3$ and $2/3$, respectively), in part because the separate collisional rates from $2s$ are not proportional to the statistical weights. Since the $2p$ population is most likely negligible, a detailed calculation of the distribution between $2p_{1/2}$ and $2p_{3/2}$ states will not be carried out here. Rather, the fractional populations of $2p_{1/2}$ and $2p_{3/2}$ will be parameterized as $\beta_{a}/3$ and $2\beta_{b}/3$, respectively. If $\beta_{a}=\beta_{b}=1$, then these states are populated according to their statistical weights. There is the obvious constraint that $\beta_{a}/3+2\beta_{b}/3=1$. The fine structure transitions are allowed electric dipole transitions and the corresponding rates may be computed in a straightforward manner (Bethe & Salpeter, 1957), giving $A_{a}=1.597\times 10^{-9}$ sec${}^{-1}$ ($2s_{1/2}$–$2p_{1/2}$) and $A_{b}=8.78\times 10^{-7}$ sec${}^{-1}$ ($2p_{3/2}$–$2s_{1/2}$). 
The absorption coefficient (valid for either transition) is $$\kappa_{\nu}=\pm{c^{2}\over{8\pi\nu^{2}}}{g\over g_{2s}}A_{\nu}\big{(}n_{2s}-{\beta\over 3}n_{2p}\big{)},$$ (19) where $g$ is the degeneracy of the final state (2 for $2p_{1/2}$, 4 for $2p_{3/2}$; $g_{2s}=2$). The $-$ sign corresponds to transitions to $2p_{1/2}$; the $+$ sign to transitions to $2p_{3/2}$, and $\beta$ is either $\beta_{a}$ or $\beta_{b}$, respectively. Either final state quickly decays to the ground state via Lyman-$\alpha$ with rate $A_{21}$. The natural line width of the $2s$–$2p$ transitions is therefore dominated by the rapid decay of the $2p$ state and is $\Gamma=A_{21}/2\pi=99.8$ MHz. For a Lorentzian profile, $$A_{\nu}={{A(\Gamma/2\pi)}\over{(\nu-\nu_{f})^{2}+(\Gamma/2)^{2}}},$$ (20) where $A$ is either $A_{a}$ or $A_{b}$ and $\nu_{f}$ is the frequency of the fine structure transition. From equations (11) and (12) we find $$n_{2s}-{\beta\over 3}n_{2p}={{f\alpha n_{i}n_{e}}\over{(A_{2\gamma}+C_{sp}n_{i})}}\big{(}1-{{\beta S}\over{S_{crit}}}\big{)},$$ (21) where we have assumed that the Lyman decay rate $A_{21}$ always dominates the collision rate $C_{sp}n_{i}$. The absorption coefficient at line center then is $$\kappa_{\nu}^{max}=\pm{c^{2}\over{8\pi\nu_{f}^{2}}}{g\over g_{2s}}{{2A}\over{\pi\Gamma}}{{f\alpha n_{i}n_{e}}\over{(A_{2\gamma}+C_{sp}n_{i})}}\big{(}1-{{\beta S}\over{S_{crit}}}\big{)}.$$ (22) This is conveniently compared with the free-free absorption coefficient (Altenhoff et al., 1960), $$\kappa_{\nu}^{ff}\approx{{0.212n_{i}n_{e}}\over{\nu^{2.1}T^{1.35}}}.$$ (23) The ratio of the line optical depth (at line center), $\tau_{\nu}^{max}$, to the free-free continuum optical depth, $\tau_{\nu}^{ff}$, then is $$R={{\tau_{\nu}^{max}}\over{\tau_{\nu}^{ff}}}={K\over{(A_{2\gamma}+C_{sp}n_{i})}}\big{(}1-{{\beta S}\over{S_{crit}}}\big{)},$$ (24) where $K=-3.0\times 10^{-4}$ sec${}^{-1}$ for the $2s_{1/2}\rightarrow 2p_{1/2}$ transition and $K=0.41$ sec${}^{-1}$ for the $2s_{1/2}\rightarrow 2p_{3/2}$ transition. We assumed $T=10^{4}$ K, and that all ionizing photons are captured by the HII region (Osterbrock, 1989). The equation of transfer is $${{dI_{\nu}}\over{dx}}=(-\kappa_{\nu}^{ff}-\kappa_{\nu})I_{\nu}+J_{\nu}^{ff}.$$ (25) Note that the line produces absorption or stimulated emission only (through $\kappa_{\nu}$), but does not produce significant spontaneous emission. Thus, the emissivity is completely dominated by the free-free emissivity, $J_{\nu}^{ff}$. For an HII region uniform along a ray path the solution is $$T_{b}=T{{1-e^{-(1+R)\tau_{\nu}^{ff}}}\over{1+R}},$$ (26) where the result has been expressed in terms of brightness temperature $T_{b}$. The line strength (in K) divided by the continuum brightness temperature is $${{\Delta T_{b}}\over T_{b,cont}}={1\over{1+R}}{{e^{\tau_{\nu}^{ff}}-e^{-R\tau_{\nu}^{ff}}}\over{e^{\tau_{\nu}^{ff}}-1}}-1.$$ (27) In the optically thin limit ($\tau_{\nu}^{ff}<<1$) expanding (to second order) gives $${{\Delta T_{b}}\over T_{b,cont}}=-{1\over 2}R\tau_{\nu}^{ff}=-{1\over 2}\tau_{\nu}^{max}.$$ (28) This result is in agreement with that obtained by Ershov (1987) for the optically thin limit. In the optically thick limit ($\tau_{\nu}^{ff}>>1$), $${{\Delta T_{b}}\over T_{b,cont}}={{-R}\over{1+R}}.$$ (29) The lines do not disappear in the optically thick limit as long as the $2s$ and $2p$ states are not in local thermodynamic equilibrium. (See also Ershov 1987.)
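A direct numerical evaluation of equation (27) is straightforward and provides a useful check of the two limits. The following short Python sketch is ours, with illustrative values of $R$ and $\tau_{\nu}^{ff}$.

```python
import numpy as np

def line_to_continuum(tau_ff, R):
    """Equation (27): fractional line depth (negative for absorption when R > 0)."""
    return ((np.exp(tau_ff) - np.exp(-R * tau_ff))
            / ((1 + R) * (np.exp(tau_ff) - 1)) - 1)

# Optically thin check against equation (28): ratio -> -R*tau/2.
print(line_to_continuum(1e-3, 0.2), -0.5 * 0.2 * 1e-3)
# Optically thick check against equation (29): ratio -> -R/(1+R).
print(line_to_continuum(50.0, 0.2), -0.2 / 1.2)
```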
The $10^{4}$ K microwave radiation field is not sufficient to overcome the rates described in Section 2 and thus it can not establish equilibrium. For example, in the likely case where $n_{2p}/(3n_{2s})<<1$, the $2p$ levels are drained by Lyman-$\alpha$ emission faster than the microwave radiation field can return these states to $2s$ via either absorption or stimulated emission. (See also Section 2.) Each fine structure level is split into two hyperfine levels as shown in Figure 1. The allowed transitions are also indicated. Both fine structure lines are split into three hyperfine components. The relative intensities of the components can be calculated from the appropriate sum rules (Sobelman, 1992). For the 910, 1088, and 1147 MHz lines the ratios are 1:2:1. The 9852, 9876, and 10030 MHz lines appear in the ratios 1:5:2. 5 Prospects for Detection The 9.9 GHz transitions are intrinsically about three orders of magnitude stronger than the 1.1 GHz transitions. The solution to the radiative transfer equation indicates that, in general, the line brightness temperature $\Delta T_{b}$ grows as the square of the free-free optical depth in the optically thin limit (equation 28). One factor comes from the growth of free-free emission, which effectively forms the “background”, and the other from the proportionate growth of the line absorption. In the optically thick limit, the growth in line temperature saturates (equation 29). Most HII regions of interest are optically thin at 9.9 GHz and optically thick at 1.1 GHz. In general, the higher optical depth at 1.1 GHz does not significantly offset the relative weakness of these transitions. We find, therefore, that the 9.9 GHz lines offer the better prospects for detection. Table 1 gives estimates of the line strengths for various HII regions and components, and planetary nebulae. It was assumed that $S=0$, in which case the 9.9 GHz lines would appear in absorption and the 1.1 GHz lines in stimulated emission. Most of the entries in Table 1 correspond to high emission measure components in HII regions, which are listed according to the nomenclature of the original references. The published emission measure values $E$ were used to calculate the free-free optical depth and the continuum brightness temperature (assuming in most cases $T_{e}\approx 10^{4}$ K). The line-to-continuum optical depth ratios $R$ are calculated from the published electron densities (equation 24). Notably, our predictions for both the 1.1 GHz and 9.9 GHz lines from Orion A (M42) are in good agreement with those of Ershov (1987). The estimates in Table 1 take into account the distribution of the line strength over three hyperfine components making up each fine structure line and the consequent line blending. At 9.9 GHz, the strongest line (9876 MHz) will be blended with the weakest line (9853 MHz). The peak temperature occurs at 9874 MHz and is 75% of that calculated using equation 26 for a single (fictitious) fine structure line. The 10030 MHz line will be somewhat distinct with a peak temperature equal to 32% of that predicted from equation 26, including contributions from the wings of the 9852 and 9876 MHz lines. The situation at 1.1 GHz is similar. The 1088 MHz line will appear blended with the 1147 MHz line, with a peak value of 63% at 1093 MHz. The 910 MHz line will remain distinct with a peak value of 30%, including line wing contributions from the other two lines. 
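The blending percentages quoted above follow from summing three Lorentzians of width $\Gamma$ weighted by the hyperfine intensity ratios and normalized to the peak of a single line carrying the full strength. A short Python sketch (ours, not the authors' code) reproduces the 9.9 GHz numbers:

```python
import numpy as np

GAMMA = 99.8  # natural line width (FWHM) in MHz

def blended_profile(nu, centers, weights):
    """Relative depth of the blended multiplet: each hyperfine component is a
    Lorentzian of FWHM GAMMA carrying its share of the total line strength,
    normalized so a single fictitious line of the full strength peaks at 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    hw2 = (GAMMA / 2.0) ** 2
    return sum(wi * hw2 / ((nu - c) ** 2 + hw2) for wi, c in zip(w, centers))

# 9.9 GHz multiplet: components at 9852, 9876 and 10030 MHz with intensities 1:5:2.
print(blended_profile(9874.0, [9852, 9876, 10030], [1, 5, 2]))   # ~0.75
print(blended_profile(10030.0, [9852, 9876, 10030], [1, 5, 2]))  # ~0.32
```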
The estimates in Table 1 give the line temperature at the peak of the brightest spectral feature in either the 9.9 GHz or 1.1 GHz blended multiplet. At both frequencies, the resulting profile of a strong blended feature and a distinct weaker line, combined with Lorentzian line shapes, will provide a unique detection signature. Figure 2 shows the predicted appearance of the blended multiplet around 9.9 GHz. The Green Bank Telescope (GBT) provides a realistic hope of detection of the 9.9 GHz lines. Having a clear aperture it should be relatively free of standing waves in the antenna structure. Thus, it should be possible to search for very broad spectral features over bandpasses as large as 800 MHz. Nevertheless, the high emission measure HII regions that are likely to show the lines are typically quite compact. Therefore, the estimated line antenna temperatures $(\Delta T_{a})_{9.9}$ include the effects of dilution in the $1^{\prime}.21$ GBT beam. The estimated values of $\Delta T_{a}$ at 1.1 GHz were calculated using either the $11^{\prime}$ GBT beam (subscript $G$), or where appropriate, the Arecibo $4^{\prime}.3$ beam (subscript $A$). Quite possibly the observational sensitivity will be limited by systematic effects in which the continuum antenna temperature is modulated by frequency dependent gain variations which are not entirely removed by calibration procedures. Thus the limiting factor may be the line-to-continuum ratio, $\Delta T/T$, which is also tabulated for both lines. The estimates given in Table 1 are only approximate. The underlying observations are biased in favor of component sizes to which various interferometer arrays are sensitive. In complex HII regions structures are present on a range of scales, down to very high emission measure arc-second scale components (Turner & Matthews, 1984). The components included in the table (typically a few tens of arc seconds in size) were selected because they contribute significantly to the total flux in the GBT beam. The more compact, arc-second scale components typically yield larger values of $\Delta T/T$ (despite densities $>10^{4}$ cm${}^{-3}$ and consequent collisional de-excitation), yet they tend to contribute relatively little to the total flux in the GBT beam. Conversely, extended, low emission measure components in the beam will add to the antenna temperature while contributing little line absorption. It should also be noted that the presence of structures having a wide range of densities implies that the uniform model considered in Section 4 is an oversimplification. Inhomogeneous structure, including clumping, in the emission regions would imply that the emission measure estimates in Table 1 represent an average over the surface of the source. In the optically thin case, i.e. most sources at 9.9 GHz, a redistribution of emission measure, and therefore optical depth, will tend to strengthen the overall estimated line strength $\Delta T_{b}$ due to its $\tau^{2}$-dependence (equation 28). Collisional de-excitation in denser regions of the source, however, will tend to reduce the line optical depth (equation 24). Because $R$ is density dependent, particularly for densities above about $1.5\times 10^{4}$ cm${}^{-3}$, the brightness temperature for a homogeneous source (equation 26) cannot be rescaled using some weighted optical depth; rather, detailed modeling would be required. Three high emission measure planetary nebulae are also included in the table (IC 418, NGC 7027, and NGC 6572). 
These objects tend to have somewhat simpler structure than the more complex HII regions. Nevertheless, NGC 7027 has a well documented shell structure. In this case, a number of emission line diagnostics indicate $n_{e}\approx 5\times 10^{4}$ cm${}^{-3}$ (Middlemass, 1990). Subarcsecond radio images show that the emission originates from a shell with a peak emission measure of about $2.7\times 10^{8}$ pc cm${}^{-6}$ (Bryce et al., 1997). Both results are consistent with an area filling factor of about 30%, with characteristic emission measure of about $1.2\times 10^{8}$ pc cm${}^{-6}$. The higher emission measure boosts the line-to-continuum ratio, whereas the higher density acts oppositely due to collisional de-excitation. The net result in this case is a somewhat stronger estimated line with $\Delta T_{b}/T_{b,cont}\approx-1.4\times 10^{-3}$. This case is worked out in Table 1 as {NGC 7027}${}_{S}$, and depicted in Figure 2. Generally, compact, high emission measure objects tend to give stronger line-to-continuum ratios, despite their higher densities (which result in collisional de-excitation). This is because HII regions tend to be loosely organized along domains of constant excitation parameter $U$ (Habing & Israel, 1979). Using size $d$ as a free parameter, then $E\propto U^{3}/d^{2}$. For $E<3.8\times 10^{8}$ pc cm${}^{-6}$, the HII region is optically thin at 9.9 GHz and, for $S=0$ $${{\Delta T_{b}}\over{T_{b,cont}}}\approx-10^{-2}{E_{8}\over{n_{4}+1.5}}$$ (30) where $E_{8}=E/(10^{8}\ {\rm pc\ cm}^{-6})$ and $n_{4}=n/(10^{4}\ {\rm cm}^{-3})$. Thus, in the low density limit ($n_{4}<<1.5$) the line-to-continuum ratio grows in direct proportion to $E$, and therefore to $d^{-2}$ for fixed $U$. Of course, the most compact objects have densities in excess of $1.5\times 10^{4}\ {\rm cm}^{-3}$, in which case the line-to-continuum ratio increases with $d^{-1/2}\propto E^{1/4}$ for fixed $U$, since $n_{e}\propto(U/d)^{3/2}$. For emission measures above $\approx 4\times 10^{8}$ pc cm${}^{-6}$ an HII region becomes optically thick at 9.9 GHz and the advantage of increased emission measure is lost. In such cases, higher densities will reduce the line-to-continuum ratio. As discussed above, the 1.1 GHz lines are considerably weaker. Under the most favorable conditions of high free-free optical depth and low density, we find $${{\Delta T_{b}}\over{T_{b,cont}}}\approx-R\approx 3.6\times 10^{-5}\ ,$$ (31) assuming, as discussed above, that $S=0$. These conditions are not uncommon and exceptionally high emission measures are not required to achieve high optical depth at 1.1 GHz. In the case of M 42 the optical depth at 1.1 GHz is 1.2 and the beam diluted line strength would be about 4 mK, versus a continuum antenna temperature of $\approx$ 400 K. 6 Conclusions The metastable $2s_{1/2}$ state of hydrogen is likely overpopulated in HII regions. Lyman-$\alpha$ pumping of the $2p$ states is expected to be negligible due to absorption of Lyman-$\alpha$ radiation by dust. Thus, the $2s_{1/2}\rightarrow 2p_{3/2}$ transitions (9.9 GHz) are predicted to appear in absorption and the $2s_{1/2}\rightarrow 2p_{1/2}$ transitions (1.1 GHz) in stimulated emission. Because of the short lifetime of the final $2p$ states, the width of the lines is dominated completely by intrinsic line width. In effect, then, the power is distributed over $\approx 100$ MHz of line width resulting in very weak lines. In addition, the power is distributed over three strongly blended hyperfine lines in each multiplet. 
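As a quick illustration of equation (30), the following sketch evaluates the expected 9.9 GHz line-to-continuum ratio for an optically thin region; the example values are illustrative rather than taken from Table 1.

```python
def line_to_continuum_9900(E, n_e):
    """Equation (30): approximate 9.9 GHz line-to-continuum ratio for an
    optically thin HII region with S = 0 (E in pc cm^-6, n_e in cm^-3)."""
    E8 = E / 1e8
    n4 = n_e / 1e4
    return -1e-2 * E8 / (n4 + 1.5)

# A high emission measure, moderate density region:
print(line_to_continuum_9900(1e8, 1e4))   # ~ -4e-3, i.e. a few tenths of a percent
```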
Searching for the 9.9 GHz lines in high emission measure HII regions offers the best prospects for detection. In the optically thin limit, the line strength varies as the square of the free-free optical depth. Predicted line-to-continuum ratios (in absorption) range up to several tenths of a percent in W58A, including the effects of line blending. With the Green Bank Telescope, the predicted peak absorption line strength may reach $\Delta T_{a}\approx-170$ mK in this case, allowing for the redistribution of line strength over the three hyperfine lines. Other high emission HII regions are expected to show somewhat weaker 9.9 GHz lines, for example, with line-to-continuum ratios of about 0.1 percent and line strengths of tens of mK with the Green Bank Telescope. These predictions are uncertain, however, owing to biases inherent in estimating emission measures as well as selection effects in various interferometric surveys of compact HII regions. These conclusions apply to thermal sources. An important extension of this work would consider the broad line regions of active galactic nuclei and quasars in which a strong nonthermal microwave radiation field could influence the populations of the $2s$ and $2p$ levels, as well as provide a background for line absorption or stimulated emission. In general, detection of the fine structure lines of hydrogen will be challenging due to the extraordinary line width and blended structure. The observations will require meticulous baseline calibration and subtraction. We thank Drs. R. Brown and J. Simonetti for useful discussions, and Dr. A. Ershov for bringing his work to our attention. Portions of this work were completed while one of the authors (B.D.) was a Visiting Scientist at the National Radio Astronomy Observatory (NRAO) in Green Bank, WV, and also a faculty member in the Department of Physics at Virginia Tech. This work was supported by the Glaxo-Wellcome Endowment at the University of North Carolina-Asheville and by National Science Foundation grant AST-0098487 to Virginia Tech. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. References Aannestad (1989) Aannestad, P. A. 1989, ApJ, 338, 162 Altenhoff et al. (1960) Altenhoff, W., Mezger, P. G., Wendker, H., & Westerhout, G. 1960, Veröff Univ. Sternwarte, Bonn, 59, 48 Barlow (1993) Barlow, M. J. 1993, in Planetary Nebulae, Proceedings of the 155 Symposium of the IAU, eds. R. Weinberger & A. Acker (Dordrecht: Kluwer), 163 Bethe & Salpeter (1957) Bethe, H. A. & Salpeter, E. E. 1957, Handbuch der Physik, Band 35, Atome 1, 1 Breit & Teller (1940) Breit, G. & Teller, E. 1940, ApJ, 91, 215 Bryce et al. (1997) Bryce,, M., Pedlar, A., Muxlow, T., Thomasson, P., & Mellema, G. 1997, MNRAS, 284, 815 Capriotti (1966) Capriotti, E. R. 1966, ApJ, 146, 709 Cox & Matthews (1969) Cox, D. P. & Matthews, W. G. 1969, ApJ, 155, 859 Draine & Lee (1984) Draine, B. T. & Lee H. M. 1984, ApJ, 285, 89 Ershov (1987) Ershov, A. A. 1987, Soviet Ast. Letters, 13, 115 Field & Partridge (1961) Field, G. B. & Partridge, R. B. 1961, ApJ, 134, 959 Habing & Israel (1979) Habing, H. J. & Isreal, F. P. 1979, ARA&A, 17, 345 Hoare (1990) Hoare, M. G. 1990, MNRAS, 244, 193 Hoare et al. (1992) Hoare, M. G., Roche, P. F., & Clegg, R. E. S. 1992, MNRAS, 258, 257 Kaplan & Pikelner (1970) Kaplan, S. A. & Pikelner, S. B. 1970, The Interstellar Medium (Cambridge: Harvard University Press) Kingdon & Ferland (1997) Kingdon, J. B. 
& Ferland, G. J. 1997, ApJ, 477, 732 Krassner et al. (1983) Krassner, J., Pipher, J. L., Savedoff, M. P., & Soifer, B. T. 1983, AJ, 88, 972 Kwok (1980) Kwok, S. 1980, ApJ, 236, 592 Middlemass (1990) Middlemass, D. 1990, MNRAS, 244, 294 Myers & Barrett (1972) Myers, P. C. & Barrett, A. H. 1972, ApJ, 176, 111 Natta & Panagia (1981) Natta, A. & Panagia, N. 1981, ApJ, 248, 189 Osterbrock (1989) Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Mill Valley: University Science Books) Pottasch (1960) Pottasch, S. R. 1960, ApJ, 131, 202 Pottasch (1987) Pottasch, S. R. 1986, in Late Stages of Stellar Evolution, eds. S. Kwok & S. R. Pottasch (Dordrecht: D. Reidel) Purcell (1952) Purcell, E. M. 1952, ApJ, 116, 457 Seaton (1955) Seaton, M. J. 1955, Proc Roy Soc London A, 68, 457 Schraml & Mezger (1969) Schraml, J. & Mezger, P. G. 1969, ApJ, 156, 269 Shklovski (1960) Shklovski, I. S. 1960, Cosmic Radio Waves (Cambridge: Harvard University Press), 255 Sobelman (1992) Sobelman, I. I. 1992, Atomic Spectra and Radiative Transitions (Berlin: Springer) Spitzer (1978) Spitzer, L. 1978, Physical Processes in the Interstellar Medium (New York: Wiley-Interscience) Spitzer & Greenstein (1951) Spitzer, L. & Greenstein, J. L. 1951, ApJ, 114, 407 Thomasson & Davies (1970) Thomasson, P. & Davies, J. G. 1970, MNRAS, 150, 359 Townes (1957) Townes, C. H. 1957, in Radio Astronomy, Proceedings of the 4th IAU Symposium, ed. H. C. Van de Hulst (Cambridge: Cambridge Univ. Press), 92 Turner & Matthews (1984) Turner, B. E. & Matthews, H. E. 1984, ApJ, 277, 164 van Gorkom et al. (1980) van Gorkom, J. H., Goss, W. M., Shaver, P. A., Schwartz, U. J., & Harten, R. H. 1980, A&A, 89, 150 van Hoof et al. (2000) van Hoof, P. A. M., Van de Steene, G. C., Beintema, D. A., Martin, P. G., Pottasch, S. R., & Ferland, G. J. 2000, ApJ, 532, 384 Webster & Altenhoff (1970) Webster, W. J. & Altenhoff, W. J. 1970, AJ, 75, 896 Wild (1952) Wild, J. P. 1952, ApJ, 115, 206 Wynn-Williams (1971) Wynn-Williams, C. G. 1971, MNRAS, 151, 397
Towards Lightweight Applications: Asymmetric Enroll-Verify Structure for Speaker Verification Abstract With the development of deep learning, automatic speaker verification has made considerable progress over the past few years. However, designing a lightweight and robust system under limited computational resources is still a challenging problem. Traditionally, a speaker verification system is symmetrical, meaning that the same embedding extraction model is applied for both enrollment and verification at inference time. In this paper, we propose an asymmetric structure, which uses the large-scale ECAPA-TDNN model for enrollment and the small-scale ECAPA-TDNNLite model for verification. As a symmetrical system, our proposed ECAPA-TDNNLite model achieves an EER of 3.07% on the Voxceleb1 original test set with only 11.6M FLOPS. Moreover, the asymmetric structure further reduces the EER to 2.31%, without increasing the computational cost of verification. Index Terms—  lightweight speaker verification, asymmetric enroll-verify structure, ECAPA-TDNNLite 1 Introduction Automatic speaker verification (ASV) refers to the process of verifying a user’s identity based on the voiceprint [1, 2]. Classification-based ASV systems generally consist of two stages: in the enrollment stage, the system extracts a fixed-dimensional speaker embedding from the user’s voice; then, in the verification stage, a speaker embedding is extracted from the unknown speech and compared with the enrolled one. A preset threshold determines the final decision to accept or reject the speech. In the past years, the performance of ASV has improved significantly owing to the successful application of deep neural networks (DNNs) [3, 4, 5, 6]. However, the computational complexity has also increased accordingly. For devices such as mobile phones and IoT terminals, it is important to develop low-latency models under limited resources, and this task has attracted much attention. For example, [7] and [8] halve the number of channels and apply strides in earlier layers to reduce the computational requirements of ResNet34. [9] and [10] develop lightweight models based on separable convolutions [11]. [12] applies binary neural networks to the task, while [13] utilizes knowledge distillation to guide the student model with the teacher model. All of these are symmetrical systems. In this paper, we propose an asymmetric structure at the system level, where models of different scales are employed separately in the enrollment and verification stages. Specifically, a large-scale model with higher accuracy and larger computational consumption is applied for enrollment, while a small-scale model balancing performance and inference latency is used during verification. As a result, the asymmetric structure achieves better performance than the small-scale model alone. We argue that it benefits lightweight applications for the following reasons: • The DNN-based embedding extraction model runs in the verification stage most of the time, since users usually enroll their voices only once. Employing a large-scale model during enrollment therefore does not significantly increase the overall computational complexity. • Users are less sensitive to the latency of enrollment. Besides, in some IoT application scenarios, feedback such as “enrollment success” can be presented immediately even while the device is still processing the enrolled speech, which improves the user experience.
• The asymmetric structure exactly matches scenarios where speakers enroll their voices on the server while verifying identities on devices. The server generally has more abundant resources and supports larger models. Symmetrical systems, however, limit the server side to small-scale models only, leading to poorer performance. Another highlight of our paper is ECAPA-TDNNLite, a small-scale model based on ECAPA-TDNN [14, 15]. ECAPA-TDNNLite reduces computational costs by squeezing feature map sizes during computation and employing separable convolutions instead of dilated ones, which strikes a balance between performance and inference latency. The rest of this paper is organized as follows. The next section introduces details of the asymmetric structure. Section 3 describes the experimental setup and evaluation protocol. Results and discussions are presented in Section 4, while conclusions are drawn in Section 5. 2 Asymmetric Structure 2.1 Overview The training process of the asymmetric structure is shown in Fig. 1. Input features such as MFCCs are fed into the large-scale model (L-Model) and the small-scale model (S-Model), respectively. Classification-based loss functions then compute the loss between the speaker embeddings and the corresponding ground-truth labels. Following popular configurations in ASV, we employ the additive angular margin softmax (AAM Softmax) loss [16]. In addition, an extra loss function is proposed to align the enrolled embedding and the verified embedding from the same input utterance. In inference, the L-Model computes the enrolled embedding from the target speaker’s voice, while the S-Model extracts the verified embedding from the unknown speech. It is worth noting that if the S-Model is used for both enrollment and verification, the whole system degenerates to an embedding-level knowledge distillation solution such as [13]. 2.2 Space Alignment A key problem in the asymmetric structure is that the enrolled embedding and the verified one may be derived from different speaker subspaces, leading to a mismatch in inference. Therefore, it is necessary to maximize the similarity of the two embeddings in the training process, namely space alignment. This is similar to metric learning in ASV, and thus we adapt the angular prototypical (AP) loss of [8, 17] to our system with a slight modification. Assume that a mini-batch contains $B$ samples. $\boldsymbol{e}_{1},\boldsymbol{e}_{2},...,\boldsymbol{e}_{B}$ are the enrolled embeddings extracted from the mini-batch, and $\boldsymbol{v}_{1},\boldsymbol{v}_{2},...,\boldsymbol{v}_{B}$ are the verified embeddings. The cosine similarity between $\boldsymbol{e}_{i}$ and $\boldsymbol{v}_{j}$ is $$\cos\theta_{i,j}=\frac{\langle\boldsymbol{e}_{i},\boldsymbol{v}_{j}\rangle}{\|\boldsymbol{e}_{i}\|\|\boldsymbol{v}_{j}\|},$$ (1) and the loss function is defined as $$L_{\text{AP}}=-\frac{1}{B}\sum_{i=1}^{B}\log\frac{e^{w\cos\theta_{i,i}+b}}{\sum_{j=1}^{B}e^{w\cos\theta_{i,j}+b}}.$$ (2) In the original settings, $w$ and $b$ are trainable parameters. Here we view $w$ as a hyperparameter and remove the bias $b$, since it cancels out in the fraction. The AP loss aims at maximizing the cosine similarity between $\boldsymbol{e}_{i}$ and $\boldsymbol{v}_{i}$ ($i=1,2,...,B$) while minimizing the similarity of the remaining pairs. Therefore, the utterances forming a mini-batch must come from different speakers.
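For concreteness, the space-alignment term of Eq. (2) (with the bias dropped) can be written in a few lines. The following is a minimal PyTorch sketch, not the authors' implementation; the function name and the assumption that the embeddings arrive as two (B, D) tensors are ours:

```python
import torch
import torch.nn.functional as F

def angular_prototypical_loss(enrolled, verified, w=32.0):
    """AP loss between enrolled and verified embeddings.

    enrolled, verified: (B, D) tensors; row i of each comes from the same
    utterance, and different rows come from different speakers. The scale w
    is treated as a fixed hyperparameter and the bias b is dropped, since it
    cancels in the softmax fraction of Eq. (2).
    """
    e = F.normalize(enrolled, dim=1)   # unit-norm rows
    v = F.normalize(verified, dim=1)
    cos = e @ v.t()                    # (B, B) matrix of cos(theta_{i,j})
    targets = torch.arange(e.size(0), device=e.device)
    # cross_entropy takes -log softmax at the matching index and averages
    # over the batch, which is exactly Eq. (2) with b = 0.
    return F.cross_entropy(w * cos, targets)
```

Writing the loss as a cross-entropy over the row-wise similarity matrix is only a compact way of expressing the fraction in Eq. (2).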
Let $L_{\text{L-AAM}}$ and $L_{\text{S-AAM}}$ be the two AAM Softmax loss functions; the overall loss is formulated as $$L=L_{\text{S-AAM}}+L_{\text{L-AAM}}+\lambda\cdot L_{\text{AP}},$$ (3) where $\lambda$ is a scale factor that balances the losses. 2.3 L-Model We employ ECAPA-TDNN as the L-Model, a recent variant of the TDNN architecture that has achieved notable success in ASV. Input MFCC features are fed into a Conv1D layer with stride $s=1$. It is followed by three stacked SE-Res2Blocks. Each block contains a preceding dense layer, dilated convolutions, a succeeding dense layer and a squeeze-and-excitation (SE) layer [18]. The whole block is wrapped by a skip connection. The outputs of the three SE-Res2Blocks are concatenated along the channel dimension and fed into another dense layer with 1536 units. The attentive statistics pooling (ASP) layer then calculates weighted statistics over the temporal dimension, converting frame-wise feature maps into an utterance-wise vector. The last dense layer reduces the vector dimension from 3072 to 192, generating the output speaker embeddings. More details are reported in [15]. 2.4 S-Model Considering the requirements of real-life applications, the S-Model is expected to run under critically resource-limited conditions with only around 10M floating-point operations (FLOPS). Therefore, we extend the ECAPA-TDNN network to ECAPA-TDNNLite with the following modifications: • Change the stride of the first Conv1D layer from 1 to 2. This halves the sequence length and thus cuts computation by roughly 50%, with a slight degradation in performance. • Replace the dilated convolutions in the SE-Res2Blocks with separable convolutions to further reduce the number of parameters while maintaining the same receptive field. • Sum the outputs of the three SE-Res2Blocks instead of concatenating them. Concatenation produces high-dimensional feature maps and relatively expensive computation in the following layers. The whole network topology of ECAPA-TDNNLite is shown in Fig. 2. 3 Experimental Setup 3.1 Dataset Experiments are carried out on the Voxceleb dataset [20, 21]. The development part of Voxceleb2 is employed for training, which contains 1,092,009 utterances from 5,994 speakers. We perform online data augmentation on the training utterances with the MUSAN [22] and RIR [23] datasets. There are six types of augmentation: music, babble, ambient noise, television, tempo and reverberation. The babble noise includes 3 to 8 speech files, and the television noise is a mixture of one speech file and one music file. The tempo augmentation speeds utterances up or down by factors of 1.1 or 0.9 without changing the speakers’ pitch. For reverberation, we only take the small and medium simulated room impulse responses. 3.2 Evaluation Protocol All systems are evaluated on the clean trials of the Voxceleb1 dataset, including Voxceleb1-O, Voxceleb1-E and Voxceleb1-H. Cosine similarity is calculated between embedding pairs. Evaluation metrics include the equal error rate (EER) and the minimum normalized detection cost (MinDCF). $P_{\text{target}}$ is set to 0.01 and $C_{\text{FA}}=C_{\text{Miss}}=1$ for MinDCF. 3.3 Training Details Inputs are 80-dimensional MFCC features computed with a 25 ms window and a 10 ms shift. The MFCCs are mean-normalized and no voice activity detection is applied. SpecAugment randomly masks 0 to 5 frames in both the time and frequency domains of the log mel spectrograms. Finally, the features are cropped into 2-second segments, and 256 segments form a mini-batch.
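As a concrete illustration of the segment cropping and mini-batch formation just described, a minimal sketch follows (our own, not the authors' code; 200 frames corresponds to 2 s at a 10 ms shift, and the helper names are hypothetical):

```python
import torch

def crop_segment(features, seg_len=200):
    """Randomly crop a (T, 80) feature matrix to seg_len frames (2 s at a 10 ms shift)."""
    T = features.size(0)
    if T <= seg_len:
        # pad short utterances by repetition before cropping
        reps = (seg_len + T - 1) // T
        return features.repeat(reps, 1)[:seg_len]
    start = torch.randint(0, T - seg_len + 1, (1,)).item()
    return features[start:start + seg_len]

def make_minibatch(utterances, batch_size=256):
    """Stack cropped segments into a (batch_size, 200, 80) mini-batch.

    `utterances` is assumed to be a list of (T_i, 80) MFCC tensors drawn from
    distinct speakers (one segment per speaker in the batch).
    """
    segments = [crop_segment(f) for f in utterances[:batch_size]]
    return torch.stack(segments, dim=0)
```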
Note that the segments in the same mini-batch must come from different speakers, to satisfy the requirements of the AP loss. We set the number of channels to $C=512$ for ECAPA-TDNN and $C=144$ for ECAPA-TDNNLite. The bottleneck dimension of the SE-Block and the ASP layer is 128, and the scale dimension in the SE-Res2Block equals 8. The size of the output speaker embedding is 192. The AAM Softmax loss functions use a margin of 0.2 and a scale of 32. The hyperparameter $w$ in the AP loss is also set to 32, and the scale factor $\lambda$ equals 10. During training, the model parameters are updated with the SGD optimizer. Following a warmup strategy, the learning rate is initialized to 0 and increases linearly to 0.1 over 5 epochs. It is then halved whenever the validation loss does not improve for more than 3 epochs. The training process terminates after 100 epochs. 4 Results 4.1 Performance of ECAPA-TDNNLite Table 1 lists recent works on lightweight ASV models. Most of them achieve EERs between 2% and 3%, with millions of parameters and up to 1G FLOPS. We argue that their computational costs are still high and thus mainly take Julien’s work [10] as the baseline, which reaches a 3.31% EER with only 238K parameters and 11.5M FLOPS. For a fair comparison, we reproduce that work with the same experimental setup as in Section 3. In our experiments, the individual ECAPA-TDNNLite model achieves a better EER of 3.07% on Voxceleb1-O. Although our model has 30% more parameters than Julien’s, the FLOPS are the same and our model even runs 4 times faster. This illustrates an interesting phenomenon: inference speed is not strictly proportional to the number of parameters or FLOPS. A possible reason in our case is that Julien’s model is deeper despite having fewer parameters, which reduces the degree of parallelism. More discussion can be found in [24]; we omit the topic here due to space limitations. 4.2 Performance of the Asymmetric Structure The performance of the asymmetric structure is shown in Table 2. Experiments 1 and 2 report the EERs and MinDCFs of the individual ECAPA-TDNN and ECAPA-TDNNLite models. For the remaining experiments, we jointly train the two models in the asymmetric structure. Experiment 3 corresponds to the knowledge distillation method. After joint training, only the small-scale model, i.e. the student model, is applied for both enrollment and verification. Performance improves by 2% relative to experiment 2. In the last experiment, where we employ ECAPA-TDNN for enrollment and ECAPA-TDNNLite for verification, the EER drops by 25% and the MinDCF by 15% to 23%, which demonstrates the effectiveness of the asymmetric structure compared with the symmetric one. The increased computational complexity lies only in the enrollment stage, which we argue is used infrequently and is less latency-sensitive for users in daily life. The asymmetric structure also offers a new way to improve performance on IoT devices. Besides increasing on-device computing capability or updating resource-limited models, it is also effective to employ a powerful model on the server for enrollment. To explore why the asymmetric structure is effective, we select two speakers from the Voxceleb1 test set and extract speaker embeddings with both the ECAPA-TDNN and ECAPA-TDNNLite models. The embeddings are then projected onto a 2D plane and normalized to different scales, as shown in Fig. 3.
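For reference, the two angular quantities discussed next can be computed directly from the embeddings, as in the following sketch (our own illustration; the tensor shapes and function name are assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def angular_stats(emb_a, emb_b):
    """Return (theta_intra, theta_inter) in radians for two speakers.

    emb_a, emb_b: (N, D) embeddings of speaker A and speaker B.
    theta_intra: largest angle between two embeddings of the same speaker.
    theta_inter: smallest angle between embeddings of different speakers.
    """
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)

    def angles(x, y):
        # clamp guards acos against tiny numerical overshoot beyond [-1, 1]
        return torch.acos((x @ y.t()).clamp(-1.0, 1.0))

    theta_intra = torch.maximum(angles(a, a).max(), angles(b, b).max())
    theta_inter = angles(a, b).min()
    return theta_intra, theta_inter
```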
$\theta_{\text{intra}}$ is the maximum intra-class distance of the ECAPA-TDNNLite embeddings, and $\theta_{\text{inter}}$ is the minimum inter-class distance. An ASV system is expected to have smaller $\theta_{\text{intra}}$ and larger $\theta_{\text{inter}}$. When we replace the ECAPA-TDNNLite embeddings with ECAPA-TDNN embeddings for enrollment, $\theta_{\text{intra}}$ narrows by $\Delta\theta$ while $\theta_{\text{inter}}$ widens by the same value, which eventually improves the performance. 5 Conclusions This paper proposes the asymmetric structure, where models of different scales are employed for enrollment and verification. ECAPA-TDNNLite is presented as the small-scale model, which achieves an EER of 3.07% with only 11.6M FLOPS. The asymmetric structure further improves the performance by 25% relatively, with no additional computational costs during verification. References [1] A.E. Rosenberg, “Automatic Speaker Verification: A Review,” Proc. IEEE, vol. 64, no. 4, pp. 475–487, 1976. [2] Arnab Poddar, Md Sahidullah, and Goutam Saha, “Speaker Verification with Short Utterances: A Review of Challenges, Trends and Opportunities,” IET Biometrics, vol. 7, no. 2, pp. 91–101, 2018. [3] David Snyder, Daniel Garcia-Romero, Gregory Sell, Daniel Povey, and Sanjeev Khudanpur, “X-Vectors: Robust DNN Embeddings for Speaker Recognition,” in ICASSP, 2018, pp. 5329–5333. [4] Dean Luo, Chunxiao Zhang, Linzhong Xia, and Lixin Wang, “Factorized Deep Neural Network Adaptation for Automatic Scoring of L2 Speech in English Speaking Tests,” in Proc. Interspeech, 2018, pp. 1656–1660. [5] Daniel Garcia-Romero, Greg Sell, and Alan Mccree, “MagNetO: X-vector Magnitude Estimation Network plus Offset for Improved Speaker Recognition,” in Proc. Odyssey, 2020, pp. 1–8. [6] Jenthe Thienpondt, Brecht Desplanques, and Kris Demuynck, “Integrating Frequency Translational Invariance in TDNNs and Frequency Positional Information in 2D ResNets to Enhance Speaker Verification,” arXiv preprint arXiv:2104.02370, 2021. [7] Weicheng Cai, Jinkun Chen, and Ming Li, “Exploring the Encoding Layer and Loss Function in End-to-End Speaker and Language Recognition System,” in Proc. Odyssey, 2018, pp. 74–81. [8] Joon Son Chung, Jaesung Huh, Seongkyu Mun, Minjae Lee, Hee-Soo Heo, Soyeon Choe, Chiheon Ham, Sunghwan Jung, Bong-Jin Lee, and Icksang Han, “In Defence of Metric Learning for Speaker Recognition,” in Proc. Interspeech, 2020, pp. 2977–2981. [9] Nithin Rao Koluguri, Jason Li, Vitaly Lavrukhin, and Boris Ginsburg, “SpeakerNet: 1D Depth-wise Separable Convolutional Network for Text-Independent Speaker Recognition and Verification,” arXiv preprint arXiv:2010.12653, 2020. [10] Julien Balian, Raffaele Tavarone, Mathieu Poumeyrol, and Alice Coucke, “Small Footprint Text-Independent Speaker Verification For Embedded Systems,” in ICASSP, 2021, pp. 6179–6183. [11] Francois Chollet, “Xception: Deep Learning With Depthwise Separable Convolutions,” in Proc. CVPR, 2017. [12] Tinglong Zhu, Xiaoyi Qin, and Ming Li, “Binary Neural Network for Speaker Verification,” in Proc. Interspeech, 2021, pp. 86–90. [13] Shuai Wang, Yexin Yang, Tianzhe Wang, Yanmin Qian, and Kai Yu, “Knowledge Distillation for Small Foot-print Deep Speaker Embedding,” in ICASSP, 2019, pp. 6021–6025. [14] Jenthe Thienpondt, Brecht Desplanques, and Kris Demuynck, “The Idlab Voxsrc-20 Submission: Large Margin Fine-Tuning and Quality-Aware Score Calibration in DNN Based Speaker Verification,” in ICASSP, 2021, pp. 5814–5818. 
[15] Brecht Desplanques, Jenthe Thienpondt, and Kris Demuynck, “ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification,” in Proc. Interspeech, 2020, pp. 3830–3834. [16] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou, “ArcFace: Additive Angular Margin Loss for Deep Face Recognition,” in Proc. CVPR, 2019. [17] Hee Soo Heo, Bong-Jin Lee, Jaesung Huh, and Joon Son Chung, “Clova Baseline System for the VoxCeleb Speaker Recognition Challenge 2020,” arXiv preprint arXiv:2009.14153, 2020. [18] Jie Hu, Li Shen, and Gang Sun, “Squeeze-and-Excitation Networks,” in Proc. CVPR, 2018. [19] Joon Son Chung, Jaesung Huh, and Seongkyu Mun, “Delving into VoxCeleb: Environment Invariant Speaker Recognition,” in Proc. Odyssey, 2020, pp. 349–356. [20] Arsha Nagrani, Joon Son Chung, and Andrew Zisserman, “Voxceleb: A Large-scale Speaker Identification Dataset,” arXiv preprint arXiv:1706.08612, 2017. [21] Joon Son Chung, Arsha Nagrani, and Andrew Zisserman, “VoxCeleb2: Deep Speaker Recognition,” in Proc. Interspeech, 2018, pp. 1086–1090. [22] David Snyder, Guoguo Chen, and Daniel Povey, “Musan: A Music, Speech, and Noise Corpus,” arXiv preprint arXiv:1510.08484, 2015. [23] Tom Ko, Vijayaditya Peddinti, Daniel Povey, Michael L. Seltzer, and Sanjeev Khudanpur, “A Study on Data Augmentation of Reverberant Speech for Robust Speech Recognition,” in ICASSP, 2017, pp. 5220–5224. [24] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun, “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,” in ECCV, 2018, pp. 122–138.
Integrals derived from the doubling method David Ginzburg School of Mathematical Sciences, Sackler Faculty of Exact Sciences, Tel-Aviv University, Israel 69978 ginzburg@post.tau.ac.il  and  David Soudry School of Mathematical Sciences, Sackler Faculty of Exact Sciences, Tel-Aviv University, Israel 69978 soudry@post.tau.ac.il Abstract. In this note, we use a basic identity, derived from the generalized doubling integrals of [C-F-G-K1], in order to explain the existence of various global Rankin-Selberg integrals for certain $L$-functions. To derive these global integrals, we use the identities relating Eisenstein series in [G-S], together with the process of exchanging roots. We concentrate on several well-known examples, and explain how to obtain them from the basic identity. Using these ideas, we also show how to derive a new global integral. Key words and phrases: $L$-functions, Doubling method, Eisenstein series, Cuspidal automorphic representations, Fourier coefficients 1991 Mathematics Subject Classification: Primary 11F70; Secondary 22E55 This research was supported by the ISRAEL SCIENCE FOUNDATION (grant No. 461/18). 1. Introduction The first construction of global Rankin-Selberg integrals using the doubling method is due to Piatetski-Shapiro and Rallis. See [PS-R1], where the authors introduced a global integral which represents the standard $L$-function attached to any irreducible, automorphic, cuspidal representation $\pi$ of $G(\bf A)$, where $G$ is a classical group defined over a number field $F$, and $\bf A$ is its ring of Adeles. We recall this briefly for the symplectic group $G=Sp_{2k}$. Let $\pi$ denote an irreducible, automorphic, cuspidal representation of $Sp_{2k}({\bf A})$. Form the Eisenstein series $E(f_{s})$ on $Sp_{4k}({\bf A})$, associated to a smooth, holomorphic section $f_{s}$ of the parabolic induction $Ind_{Q_{2k}({\bf A})}^{Sp_{4k}({\bf A})}|\det\cdot|^{s}$, where $Q_{2k}\subset Sp_{4k}$ is the Siegel parabolic subgroup. The doubling integral introduced in [PS-R1] is given by (1.1) $$\int\limits_{Sp_{2k}(F)\times Sp_{2k}(F)\backslash Sp_{2k}({\bf A})\times Sp_{2k}({\bf A})}\varphi_{1}(g)\overline{\varphi_{2}(h)}E(f_{s})(t(g,h))dgdh,$$ where $\varphi_{i}$, $i=1,2$, are cusp forms in the space of $\pi$, and $t(g,h)$ denotes the direct sum embedding of $Sp_{2k}({\bf A})\times Sp_{2k}({\bf A})$ inside $Sp_{4k}({\bf A})$. This integral represents $L(\pi,s+\frac{1}{2})$, after normalizing the Eisenstein series. A similar construction exists for general linear groups and for orthogonal groups. In recent years, new constructions of global integrals which use the doubling method were discovered. First, in [G-H], the authors constructed a doubling integral for the exceptional group $G_{2}$. This integral, which involves an Eisenstein series on the exceptional group $E_{8}$, represents the degree seven $L$-function. Recently, in [C-F-G-K1], the method of [PS-R1] was extended to a more general doubling integral, which represents the standard $L$-function $L(\pi\times\tau,s+\frac{1}{2})$, where $\pi$, $\tau$ are irreducible, automorphic, cuspidal representations of $G(\bf A)$, $GL_{n}({\bf A})$ respectively, and $G$ is a split classical group. As explained in [C-F-G-K2], this construction can be extended to represent $L$-functions $L(\tilde{\pi}\times\tilde{\tau},s+\frac{1}{2})$ for pairs of irreducible, automorphic, cuspidal representations of metaplectic covers of $G(\bf A)$ and $GL_{n}({\bf A})$.
All the above integrals have the same structure, in the sense that they have the form (1.2) $$\int\limits_{G(F)\times G(F)\backslash G({\bf A})\times G({\bf A})}\ \int\limits_{U(F)\backslash U({\bf A})}\varphi_{1}(g)\overline{\varphi_{2}(h)}E(f_{\mathcal{E}_{\tau},s})(ut(g,h))\psi^{-1}_{U}(u)dudgdh.$$ Here, $\varphi_{i}$, $i=1,2$, are cusp forms in the space of $\pi$; $E(f_{\mathcal{E}_{\tau},s})$ is a certain Eisenstein series on a certain group $H$, attached to a smooth, holomorphic section $f_{\mathcal{E}_{\tau},s}$ of a parabolic induction on $H(\bf A)$, with parabolic data $(P,\mathcal{E}_{\tau})$, where $\mathcal{E}_{\tau}$ is a certain automorphic representation of the Levi part of $P(\bf A)$, associated to $\tau$; $U$ is a certain unipotent subgroup of $H$ and $\psi_{U}$ is a character of $U(\bf A)$, trivial on $U(F)$. Finally, $t$ denotes an embedding of $G({\bf A})\times G({\bf A})$ inside $H(\bf A)$, so that conjugation by $t(g,h)$ preserves $U(\bf A)$ and $\psi_{U}$. For $G=Sp_{2k}$, integral (1.1) is obtained from integral (1.2) by taking $H=Sp_{4k}$, $P=Q_{2k}$, $U$ the trivial group, $\tau$ the trivial representation of $GL_{1}({\bf A})$, and $\mathcal{E}_{\tau}$ the trivial representation of $GL_{2k}({\bf A})$. In all known cases of the doubling method [PS-R1], [G-H], [C-F-G-K1], starting with integral (1.2), for $\text{Re}(s)$ large, after an unfolding process it is equal to (1.3) $$\int\limits_{G({\bf A})}\int\limits_{U_{0}({\bf A})}<\varphi_{1},\pi(h)\varphi_{2}>f_{W({\mathcal{E}}_{\tau}),s}(\delta u_{0}t(1,h))\psi^{-1}_{U}(u_{0})du_{0}dh.$$ Here, $<\varphi_{1},\pi(h)\varphi_{2}>$ denotes the standard $L^{2}$-inner product of the two cusp forms $\varphi_{1},\pi(h)\varphi_{2}$; $U_{0}$ is a certain unipotent subgroup of $U$; $\delta$ is a certain element in $H(F)$; $f_{W({\mathcal{E}}_{\tau}),s}$ is the composition of the section with a functional $W({\mathcal{E}}_{\tau})$ on $\mathcal{E}_{\tau}$ given by a certain Fourier coefficient. It defines a unique (up to scalars) functional, satisfying appropriate equivariance properties, on the space of ${\mathcal{E}}_{\tau}$. For example, in the cases considered in [PS-R1] and [G-H], ${\mathcal{E}}_{\tau}$ is the trivial representation. In the case of [C-F-G-K1], ${\mathcal{E}}_{\tau}$ is a Speh representation $\Delta(\tau,m)$ (of appropriate length $m$), associated with $\tau$. For example, for $G=Sp_{2k}$, $m=2k$, and $P=Q_{2nk}$ is the Siegel parabolic subgroup of $H=Sp_{4nk}$. It follows that integral (1.2) is Eulerian, and hence it represents the standard $L$-function for the pair $(\pi,\tau)$, after normalizing the Eisenstein series. The main advantage of the doubling construction is that it applies to any irreducible, automorphic, cuspidal representation of $G({\bf A})$. In other words, it is what we refer to as model free, meaning that for integral (1.2) to be nonzero (as data vary), one need not assume that $\pi$ has a certain nonzero Fourier coefficient, or any other type of model. In the last 30 years, many Rankin-Selberg integrals have been found which represent the above $L$-functions, given by the (generalized) doubling method. Most of them depend on a certain model. For example, for $G=Sp_{2k}$, a global integral for the tensor product $L$-function $L(\pi\times\tau,s)$ was introduced in [G-R-S1]. This construction assumes that an appropriate Whittaker coefficient of the cuspidal representation $\pi$ is not zero.
Another construction, this time using a Fourier-Jacobi model, also referred to as a Fourier-Jacobi mixed model, was introduced in [G-J-R-S]. These two constructions use a model which is proved to be unique. This property guarantees that the integrals are indeed Eulerian. There are also examples of integrals obtained by the so-called New Way method. A basic example of such integrals is introduced in [PS-R2]. They represent the standard $L$-function attached to the representation $\pi$. Here the integrals unfold to a certain model which is not unique. In spite of that, the global integrals are Eulerian. Many other examples of this type of integrals were constructed. See for example [B-F-G]. In this context, it is only natural to ask the following questions. First, why do so many different examples of global integrals, which represent the same $L$-function, exist? Second, how does one find these global integrals which turn out to be Eulerian? This issue is especially important when we deal with New Way global integrals. The existence of such integrals is still a mystery. In this note we suggest an answer to these questions. Our assertion is that all the integrals which represent the above $L$-functions (represented by the generalized doubling method) can be derived by a relatively simple procedure from one basic identity. This includes the above constructions of global integrals which use a unique model of any type, as well as the New Way integrals. In fact, our derivation explains why these types of integrals show up. Moreover, we predict that all integrals which represent these $L$-functions can be derived in the way we shall now explain. We do this for $(G,H,\pi,\tau)$ in the set up of [C-F-G-K1], where $G$ is a split classical group. Starting with integral (1.2), we define (1.4) $$\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})(g)=\int\limits_{G(F)\backslash G({% \bf A})}\int\limits_{U(F)\backslash U({\bf A})}\varphi_{\pi}(h)E(f_{\mathcal{E% }_{\tau},s})(ut(g,h),s)\psi^{-1}_{U}(u)dudh$$ Here, $\varphi_{\pi}$ is a cusp form in the space of $\pi$. In (1.2), we took the complex conjugate of $\pi$ instead of $\pi$, so that $\varphi_{\pi}$ is in place of $\bar{\varphi}_{2}$. We have Theorem 1. Let $f_{\mathcal{E}_{\tau},s}$ be a $K$-finite, holomorphic section. Then the function $\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})$ is meromorphic and takes values in a certain outer conjugate of $\pi$ by an element of order 2, $\pi^{\iota}$. Moreover, the right hand side of equation (1.4) is Eulerian, and represents the same partial $L$-function that integral (1.2) represents, after normalizing the Eisenstein series. The theorem follows from the Euler product expansion of integral (1.2), and we will prove it in Sec. 2.3. A similar theorem with the same proof holds in the set up of [PS-R1], [G-H]. We claim that all known constructions of global integrals, which represent the above $L$- functions, are derived from identity (1.4). The way to derive them is as follows. Suppose we are given a global integral which unfolds to a certain global model of $\pi$, via a Fourier coefficient, or a period integral etc. This model may or may not be unique. Then, to derive this global integral from identity (1.4), we apply this model to $\xi(\varphi_{\pi^{\iota}},f_{\mathcal{E}_{\tau},s})$, thinking of it as a cusp form in $\pi$. We compute it by means of Fourier expansions and the use of identities relating Eisenstein series as in [G-S]. 
Then, at the end of this computation, we obtain, as an inner integration, the same global integral with which we started. This is an important point, and we would like to emphasize it. Starting with the global model above on the function $\xi(\varphi_{\pi^{\iota}},f_{\mathcal{E}_{\tau},s})$, we can always first unfold the Eisenstein series which appears in identity (1.4). However, this is not what we do. Instead, by performing some Fourier expansions, and using the identities relating Eisenstein series as in [G-S], we can derive a “simpler” global integral, which we conjecture to be Eulerian and to represent the above $L$-function. Notice that this procedure implies that for every given global model defined on the space of $\pi$, one can construct an Eulerian integral which unfolds to this model. We also predict that all future constructions of global integrals which represent the above $L$-functions will also be derived from identity (1.4) by a similar procedure. Clearly, all the above refers to those cases for which an identity similar to identity (1.4), satisfying Theorem 1, exists. There are many other global integral constructions which unfold to $L$-functions not represented by the doubling method. In this paper, we give several examples of the procedure described above. Rather than concentrate on one general example, we decided to give several low-rank examples. We believe that by doing so it will be easier for the reader to see the general idea. Our first example is the famous Jacquet-Langlands integral construction for the standard $L$-function attached to an irreducible, automorphic, cuspidal representation of $GL_{2}(\bf A)$. See [J-L]. In Section 3, we show how to derive the Jacquet-Langlands integral from the doubling integral of Piatetski-Shapiro and Rallis in [PS-R1], representing the same $L$-function. In Section 4, we give two examples of global integrals involving New Way type integrals. The first is the famous integral of Piatetski-Shapiro and Rallis in [PS-R2]. It represents the standard $L$-function for an irreducible, automorphic, cuspidal representation $\pi$ of $Sp_{2k}(\bf A)$. We treat this example in general. In the second example, we show how one can use our procedure to construct new global integrals. Here we extend the work of [PS-R2] and construct a New Way type integral which unfolds to the same Fourier coefficient used in [PS-R2]. We then conjecture that this integral is Eulerian and represents the tensor product $L(\pi\times\tau,s)$, where $\tau$ is an irreducible, automorphic, cuspidal representation of $GL_{n}(\bf A)$. In the last section we give the example of the global integral construction of [G-J-R-S] for $\pi$ on the double cover of $Sp_{4}(\bf A)$ and $\tau$ on $GL_{2}(\bf A)$, involving a Fourier-Jacobi mixed model of $\pi$ with respect to an irreducible, automorphic, cuspidal representation of $SL_{2}(\bf A)$. We explain, through our procedure, how this integral shows up. 2. Notations and Preliminaries 2.1. Some General Notations We start with some general notations. Since most of the examples in this note are for the symplectic group, we fix some notations for this specific group. For a positive integer $k$, let $J_{k}$ denote the matrix of size $k$ which has ones on the anti-diagonal and zeros elsewhere.
We realize $Sp_{2k}$ as $$Sp_{2k}=\{g\in GL_{2k}\ :\ g^{t}\begin{pmatrix}&J_{k}\\ -J_{k}&\end{pmatrix}g=\begin{pmatrix}&J_{k}\\ -J_{k}&\end{pmatrix}\}.$$ Denote $$Mat_{k}^{0}=\{A\in Mat_{k}\ :\ A^{t}J_{k}=J_{k}A\}.$$ We denote by $Q_{k}$ the (standard) Siegel parabolic subgroup of $Sp_{2k}$. More generally, for a partition $k_{1},...,k_{r+1}$ of $k$, we denote by $Q_{k_{1},...,k_{r},2k}$ the standard parabolic subgroup of $Sp_{2k}$, whose Levi part is isomorphic to $GL_{k_{1}}\times\cdots\times GL_{k_{r}}\times Sp_{2k_{r+1}}$. We denote its unipotent radical by $U_{k_{1},...,k_{r},2k}$. For two positive integers $n_{1}$ and $n_{2}$, denote by $0_{n_{1}\times n_{2}}$ the zero matrix in $Mat_{n_{1}\times n_{2}}$. We shall write $0_{n}$ for $0_{1\times n}$. Let $e_{i,j}$ denote the matrix of size $2k$, which has one at the $(i,j)$ entry, and zero elsewhere. Let $$e_{i,j}^{\prime}=e_{i,j}-e_{2k-j+1,2k-i+1},\quad if\quad 1\leq i,j\leq k,$$ and $$e_{i,j}^{\prime}=e_{i,j}+e_{2k-j+1,2k-i+1},\quad if\quad 1\leq i\leq k,\quad j% >k.$$ Let $F$ be a number field and let ${\bf A}$ denote its ring of Adeles. Fix $\psi$, a non-trivial character of $F\backslash{\bf A}$. Let $Sp_{2k}^{(2)}({\bf A})$ denote the metaplectic double cover of $Sp_{2k}({\bf A})$. We denote a theta series on $Sp^{(2)}_{2k}({\bf A})$, corresponding to $\psi$ and a Schwartz function $\phi\in\mathcal{S}({\bf A}^{k})$, by $\theta_{\psi,Sp_{2k}}^{\phi}$, or shortly $\theta_{\psi,2k}^{\phi}$. We will rely on the well-known properties of this series. For notations and the action of the Weil representation we use the formulas as given in [G-R-S1], page 188. 2.2. Eisenstein Series In this sub-section we consider certain Eisenstein series which we will use later. We also state some relevant identities which they satisfy. All the results in this sub-section are proved in [G-S], or follow from it. For an irreducible, automorphic, cuspidal representation $\tau$ of $GL_{2}({\bf A})$, and a natural number $m$, let $\Delta(\tau,m)$ denote the Speh representation of $GL_{2m}({\bf A})$. This representation was studied in [J]. Let $Q_{2m}$ denote the Siegel parabolic subgroup of $Sp_{4m}$. Consider an Eisenstein series $E(f_{\Delta(\tau,m),s})$ on $Sp_{4m}({\bf A})$, corresponding to a smooth holomorphic section $f_{\Delta(\tau,m),s}$ of $Ind_{Q_{2m}({\bf A})}^{Sp_{4m}({\bf A})}\Delta(\tau,m)|\text{det}\cdot|^{s}$ (normalized induction). See [G-S], Sec. 2. Sometimes it will be convenient to drop the notation of the section and simply denote $E_{\tau,m}(\cdot,s)$. We will also need similar Eisenstein series $E^{(2)}(f_{\Delta(\tau,m)\gamma_{\psi},s})$ on $Sp_{4m}^{(2)}({\bf A})$. It corresponds to a smooth, holomorphic section of $Ind_{Q_{2m}^{(2)}({\bf A})}^{Sp_{4m}^{(2)}({\bf A})}\Delta(\tau,m)\gamma_{\psi% }|\text{det}\cdot|^{s}$. Here $\gamma_{\psi}$ is the Weil factor attached to the character $\psi$. See [G-S], Sec. 2. Again, sometimes it will be convenient to simply use the notation $E^{(2)}_{\tau,m}(\cdot,s)$. Consider the unipotent radical $U_{1^{2},4m}$. Let $\psi_{1}$ denote the following character of $U_{1^{2},4m}(\bf A)$ . For $u=(u_{i,j})\in U_{1^{2},4m}$, $$\psi_{1}(u)=\psi(u_{1,2}).$$ Let $U_{1^{2},4m}^{0}$ denote the subgroup of $U_{1^{2},4m}$ consisting of all matrices $u=(u_{i,j})\in U_{1^{2},4m}$, such that $u_{2,j}=0$ for all $3\leq j\leq 4m-2$. 
For $t\in F^{*}$, let $\psi_{U_{1^{2},4m}^{0},t}$ denote the following character of $U_{1^{2},4m}^{0}(\bf A)$, $$\psi_{U_{1^{2},4m}^{0},t}(u)=\psi(u_{1,2}+tu_{2,4m-1}).$$ There is a natural projection $j=j_{4m-3}$, from $U_{1^{2},4m}$ onto the Heisenberg group in $4m-3$ variables ${\mathcal{H}}_{4m-3}$. In coordinates, given $u=(u_{i,j})\in U_{1^{2},4m}$, $$j(u)=(u_{2,3},u_{2,4},\ldots,u_{2,4m-2},u_{2,4m-1}).$$ Here, for elements in ${\mathcal{H}}_{4m-3},$ we use the notations given in [G-R-S1], Sec. 1. Of course, we have a similar projection from $U_{1,2m}$ onto $\mathcal{H}_{2m-1}$. It is an isomorphism. We denote the inverse map by $i_{2m-1}$. From [G-S] Theorems 7.1, 8.1, for all $\tilde{h}=(h,\epsilon)\in Sp_{4(m-1)}^{(2)}({\bf A})$, we have (2.1) $$E_{\tau,m-1}^{(2)}(\tilde{h},s)=\int\limits_{U_{1^{2},4m}(F)\backslash U_{1^{2% },4m}({\bf A})}\theta_{\psi^{-1},4(m-1)}^{\phi}(j(u)\tilde{h})E_{\tau,m}(ut(h)% ,s)\psi_{1}^{-1}(u)du.$$ In a similar way, (2.2) $$E_{\tau,m-1}(h,s)=\int\limits_{U_{1^{2},4m}(F)\backslash U_{1^{2},4m}({\bf A})% }\theta_{\psi^{-1},4(m-1)}^{\phi}(j(u)\tilde{h})E_{\tau,m}^{(2)}(ut(\tilde{h})% ,s)\psi_{1}^{-1}(u)du.$$ Here, $t(h)=diag(I_{2},h,I_{2})$ and $t(\tilde{h})=(t(h),\epsilon)$. We used the shorthand notation for Eisenstein series in order to stress the fact that the r.h.s. of (2.1)(resp. (2.2)) is equal to an Eisenstein series on $Sp_{4(m-1)}^{(2)}({\bf A})$ (resp. $Sp_{4(m-1)}({\bf A})$), namely the Eisenstein series indicated on the l.h.s. of (2.1)(resp. (2.2)). A more precise statement of (2.1) is that given a smooth, holomorphic section $f_{\Delta(\tau,m),s}$ of $Ind_{Q_{2m}({\bf A})}^{Sp_{4m}({\bf A})}\Delta(\tau,m)|\text{det}\cdot|^{s}$, and a Schwartz function $\phi\in\mathcal{S}({\bf A}^{2m-2})$, there is a smooth, meromorphic section $\Lambda(f_{\Delta(\tau,m),s},\phi)$ of $Ind_{Q_{2m-2}^{(2)}({\bf A})}^{Sp_{4m-4}^{(2)}({\bf A})}\Delta(\tau,m-1)\gamma% _{\psi^{-1}}|\text{det}\cdot|^{s}$, such that, for all $\tilde{h}=(h,\epsilon)\in Sp_{4(m-1)}^{(2)}({\bf A})$, we have $$E^{(2)}(\Lambda(f_{\Delta(\tau,m),s}))(\tilde{h})=\int\limits_{U_{1^{2},4m}(F)% \backslash U_{1^{2},4m}({\bf A})}\theta_{\psi^{-1},4(m-1)}^{\phi}(j(u)\tilde{h% })E(f_{\Delta(\tau,m),s})(ut(h),s)\psi_{1}^{-1}(u)du.$$ The section $\Lambda(f_{\Delta(\tau,m),s},\phi)$ is given, for $Re(s)$ sufficiently large by an explicit Adelic unipotent integration, and continues to a meromorphic function. A similar theorem holds for (2.2). Moreover, in [G-S] the above identities are proved to hold for normalized Eisenstein series, as well. We will use another identity relating Eisenstein series. We first fix some more notations. Let $r$ and $k$ denote two positive integers such that $k\geq 3r$. Consider the unipotent radical $U_{2r,2k}$. It consists of all matrices of the form (2.3) $$u=\begin{pmatrix}I_{2r}&x_{1}&y&x_{2}&z\\ &I_{r}&&&\star\\ &&I_{2(k-3r)}&&\star\\ &&&I_{r}&\star\\ &&&&I_{2r}\end{pmatrix}\in Sp_{2k}.$$ Define a character $\psi_{U_{2r,2k}}$ of $U_{2r,2k}(\bf A)$, as follows. Let $u\in U_{2r,2k}(\bf A)$ of the form (2.3). For $i=1,2$, write $x_{i}=\begin{pmatrix}x_{i,1}\\ x_{i,2}\end{pmatrix}$, where $x_{i,j}\in Mat_{r}(\bf A)$. Then $$\psi_{U_{2r,2k}}(u)=\psi(\text{tr}(x_{1,1}+x_{2,2})).$$ This is a special case of the character defined in [G-S] in equation (1.2). 
The stabilizer of $\psi_{U_{2r,2k}}$ inside the Levi subgroup $GL_{2r}({\bf A})\times Sp_{2(k-2r)}(\bf A)$ is the group $Sp_{2r}({\bf A})\times Sp_{2(k-3r)}(\bf A)$, embedded in $Sp_{2k}(\bf A)$ as (2.4) $$t(g,h)=\text{diag}(g,j(g,h),g^{*}),\ \ j(g,h)=\begin{pmatrix}g_{1}&&g_{2}\\ &h&\\ g_{3}&&g_{4}\end{pmatrix},\ \ g=\begin{pmatrix}g_{1}&g_{2}\\ g_{3}&g_{4}\end{pmatrix}\in Sp_{2r}(\bf A).$$ To state the third identity that we need, let $\sigma$ denote an irreducible, automorphic, cuspidal representation of $SL_{2}({\bf A})$, and let $r=1$, $k=6$. For $\tau$ as above, form the Eisenstein series $E_{\tau,\sigma}(\cdot,s)$ on $Sp_{6}({\bf A})$ associated with $Ind_{Q_{2^{2},6}({\bf A})}^{Sp_{6}({\bf A})}(\tau\times\sigma)|\text{det}\cdot% |^{s}$. Then, it follows from [G-S], Theorem 1.1 that, for any cusp form $\varphi_{\sigma}$ in the space of $\sigma$, (2.5) $$E_{\tau,\sigma}(h,s)=\int\limits_{SL_{2}(F)\backslash SL_{2}({\bf A})}\int% \limits_{U_{2,12}(F)\backslash U_{2,12}({\bf A})}\varphi_{\sigma}(g^{\iota})E_% {\tau,3}(ut(g,h),s)\psi_{U_{2,12}}^{-1}(u)dudg.$$ Here, for a matrix $h\in SL_{2}$, $h^{\iota}=J_{2}hJ_{2}$, where we recall that $J_{2}=\begin{pmatrix}&1\\ 1&\end{pmatrix}$. Again, the identity (2.5) is written using a shorthand notation, dropping mention of the sections. The point is that the r.h.s. of (2.5) is equal to an Eisenstein series on $Sp_{6}(\bf A)$, namely an Eisenstein series corresponding to the parabolic induction $Ind_{Q_{2^{2},6}({\bf A})}^{Sp_{6}({\bf A})}(\tau\times\sigma)|\text{det}\cdot% |^{s}$. Note the form of the r.h.s. of (2.5). It is a kernel integral, where the kernel function on $SL_{2}({\bf A})\times Sp_{6}(\bf A)$ is the $\psi_{U_{2,12}}$-Fourier coefficient of the Eisenstein series $E_{\tau,3}$ on $Sp_{12}(\bf A)$. Then we integrate this kernel function against a cusp form on $SL_{2}(\bf A)$ to obtain an Eisenstein series $E_{\tau,\sigma}(\cdot,s)$ on $Sp_{6}(\bf A)$. The precise statement of (2.5) says that for a given smooth, holomorphic section $f_{\Delta(\tau,3),s}$ of $Ind_{Q_{6}({\bf A})}^{Sp_{12}({\bf A})}\Delta(\tau,3)|\text{det}\cdot|^{s}$, and a cusp form $\varphi_{\sigma}$ in the space of $\sigma$, there is a smooth, meromorphic section $\Lambda(f_{\Delta(\tau,3),s},\varphi_{\sigma})$ of $Ind_{Q_{2,2}({\bf A})}^{Sp_{6}({\bf A})}(\tau\times\sigma)|\text{det}\cdot|^{s}$, such that, for all $h\in Sp_{6}(\bf A)$, $$E(\Lambda(f_{\Delta(\tau,3),s},\varphi_{\sigma}))(h)=\int\limits_{SL_{2}(F)% \backslash SL_{2}({\bf A})}\int\limits_{U_{2,12}(F)\backslash U_{2,12}({\bf A}% )}\varphi_{\sigma}(g^{\iota})E(f_{\Delta(\tau,3),s})(ut(g,h),s)\psi_{U_{2,12}}% ^{-1}(u)dudg.$$ The section $\Lambda(f_{\Delta(\tau,3),s},\varphi_{\sigma})$ is given, for $Re(s)$ sufficiently large, by an explicit Adelic integration, and continues to a meromorphic function. Moreover, in [G-S] the above identity is proved to hold for normalized Eisenstein series, as well. 2.3. Proof of Theorem 1 We prove Theorem 1 for the set up of [C-F-G-K1]. For simplicity of notation we prove the theorem for $G=Sp_{2k}$. The proof for the other split classical groups is almost entirely the same, and similarly for the set up of [PS-R1], where the classical group need not be split. The proof for the case in [G-H] is also quite similar. We specify some of the data in (1.2) and (1.3). The group $H$ is $Sp_{4nk}$, and $U=U_{(2k)^{n-1},4nk}$. Recall that this is the unipotent radical of the standard parabolic subgroup of $H$, with Levi part isomorphic to $GL_{2k}^{n-1}\times Sp_{4k}$. 
The character $\psi_{U}$ is stabilized by $G({\bf A})\times G(\bf A)$, embedded in $H(\bf A)$ by $t(g,h)=\text{diag}(g^{\Delta_{n-1}},j(g,h),(g^{*})^{\Delta_{n-1}})$, where $g^{\Delta_{n-1}}=\text{diag}(g,...,g)$, $n-1$ times, and, for $g,h\in Sp_{2k}(\bf A)$, $j(g,h)$ has the same shape as (2.4). The representation $\mathcal{E}_{\tau}$ is the Speh representation $\Delta(\tau,2k)$, and $W(\Delta(\tau,2k))$ denotes the model referred to as a Whittaker-Speh-Shalika model in [C-F-G-K1]. It is obtained by applying a Fourier coefficient on $\Delta(\tau,2k)$, corresponding to $\psi$ and the partition $(n^{2k})$. Next, the element $\delta$ in (1.3) is $$\delta=\begin{pmatrix}0&I_{2k}&0&0\\ 0&0&0&I_{2k(n-1)}\\ -I_{2k(n-1)}&0&0&0\\ 0&I_{2k}&I_{2k}&0\end{pmatrix}.$$ Finally, the unipotent group $U_{0}$ is a subgroup of $U_{(2k)^{n-1},4nk}$, realizing the quotient $\delta^{-1}Q_{2nk}\delta\cap U_{(2k)^{n-1},4nk}\backslash U_{(2k)^{n-1},4nk}$. Denote, for $b\in G({\bf A})$, $b^{\iota}=\begin{pmatrix}&I_{k}\\ I_{k}\end{pmatrix}b\begin{pmatrix}&I_{k}\\ I_{k}\end{pmatrix}$. The proof is a simple consequence of the unfolding process of the global integrals (1.2) and (1.4). A careful inspection of the unfolding process implies the following two facts. First, as a function of $g,h\in G({\bf A})$, the integral (2.6) $$\int\limits_{U_{0}({\bf A})}f_{W({\mathcal{E}}_{\tau}),s}(\delta u_{0}t(g,h))% \psi^{-1}_{U}(u_{0})du_{0}$$ is left invariant by the following (almost) diagonal embedding of $G({\bf A})$. For all $b\in G({\bf A})$, (2.7) $$\int\limits_{U_{0}({\bf A})}f_{W({\mathcal{E}}_{\tau}),s}(\delta u_{0}t(bg,b^{% \iota}h))\psi^{-1}_{U}(u_{0})du_{0}=\int\limits_{U_{0}({\bf A})}f_{W({\mathcal% {E}}_{\tau}),s}(\delta u_{0}t(g,h))\psi^{-1}_{U}(u_{0})du_{0}.$$ The second fact is that carrying out the unfolding process for the integral on the right hand side of (1.4), we obtain for $\text{Re}(s)$ large, (2.8) $$\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})(g)=\int\limits_{G({\bf A})}\int% \limits_{U_{0}({\bf A})}\varphi_{\pi}(h)f_{W({\mathcal{E}}_{\tau}),s}(\delta u% _{0}t(g,h))\psi^{-1}_{U}(u_{0})du_{0}dh.$$ A similar calculation was also done in [G-S] Sections 2 an 3, and a similar identity is given in [G-S] equation (3.26). Using (2.7) in (2.8) and a simple change of variables, we get for $\text{Re}(s)$ large, (2.9) $$\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})(g)=\int\limits_{G({\bf A})}\int% \limits_{U_{0}({\bf A})}\varphi_{\pi}(g^{\iota}h)f_{W({\mathcal{E}}_{\tau}),s}% (\delta u_{0}t(1,h))\psi^{-1}_{U}(u_{0})du_{0}dh.$$ Hence, for $\text{Re}(s)$ large, (2.10) $$\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})=\int\limits_{G({\bf A})}(\int% \limits_{U_{0}({\bf A})}f_{W({\mathcal{E}}_{\tau}),s}(\delta u_{0}t(1,h))\psi^% {-1}_{U}(u_{0})du_{0})\iota(\pi(h)\varphi_{\pi})dh,$$ where for a cusp form $\varphi_{\pi}$, $\iota(\varphi_{\pi})(g)=\varphi_{\pi}(g^{\iota})$. Fix an isomorphism $\ell:\otimes_{v}\pi_{v}\mapsto\pi$, and assume that $\varphi_{\pi}$ is the image of a decomposable vector $\otimes_{v}\varphi_{\pi_{v}}$. Assume, also, that $f_{W({\mathcal{E}}_{\tau}),s}$ is a product of local sections $f_{W({\mathcal{E}}_{\tau_{v}}),s}$, where $W({\mathcal{E}}_{\tau_{v}})$ denotes the corresponding local unique model of $\mathcal{E}_{\tau_{v}}$. 
Let $S$ be a finite set of places, containing the Archimedean places, and outside which $\pi$, $\tau$ and $\psi$ are unramified, and for $v\notin S$, $\varphi_{\pi_{v}}=\varphi^{0}_{\pi_{v}}$ is a pre-chosen unramified vector, and $f_{W({\mathcal{E}}_{\tau_{v}}),s}=f^{0}_{W({\mathcal{E}}_{\tau_{v}}),s}$ is unramified and normalized, such that its value at the identity is the unique unramified element in $W({\mathcal{E}}_{\tau_{v}})$, which has the value 1 at $I$. For $\text{Re}(s)$ large, (2.11) $$\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})=(\iota\circ\ell)(\otimes_{v}\int% \limits_{G(F_{v})}(\int\limits_{U_{0}(F_{v})}f_{W({\mathcal{E}}_{\tau_{v}}),s}% (\delta u_{0}t(1,h))\psi^{-1}_{v,U}(u_{0})du_{0})\pi_{v}(h)\varphi_{\pi_{v}}dh).$$ Denote the factor at $v$, inside $\iota\circ\ell$, in (2.11), by $\xi(\varphi_{\pi_{v}},f_{W({\mathcal{E}}_{\tau_{v}}),s})$. Let us compute this factor at $v\notin S$. Let $K_{G,v}$, $K_{H,v}$ denote the standard maximal compact subgroups of $G(F_{v})$ and $H(F_{v})$ respectively. For $r\in K_{G,v}$, change variable $h\mapsto rh$ and integrate over $r\in K_{G,v}$, taking the measure of $K_{G,v}$ to be 1. We have $$f^{0}_{W({\mathcal{E}}_{\tau_{v}}),s}(\delta u_{0}t(1,rh))=f^{0}_{W({\mathcal{% E}}_{\tau_{v}}),s}(\delta u_{0}t(r^{\iota},rh)).$$ Using (2.7), (2.12) $$\xi(\varphi^{0}_{\pi_{v}},f^{0}_{W({\mathcal{E}}_{\tau_{v}}),s})=\int\limits_{% G(F_{v})}(\int\limits_{U_{0}(F_{v})}f^{0}_{W({\mathcal{E}}_{\tau_{v}}),s}(% \delta u_{0}t(1,h))\psi^{-1}_{v,U}(u_{0})du_{0})\int\limits_{K_{G,v}}\pi_{v}(% rh)\varphi^{0}_{\pi_{v}}drdh.$$ The $dr$-integration in (2.12) is equal to $\omega_{\pi_{v}}(h)\varphi^{0}_{\pi_{v}}$, where $\omega_{\pi_{v}}$ is the unique spherical function attached to $\pi_{v}$. Hence, for $v\notin S$, and $\text{Re}(s)$ large, (2.13) $$\xi(\varphi^{0}_{\pi_{v}},f^{0}_{W({\mathcal{E}}_{\tau_{v}}),s})=(\int\limits_% {G(F_{v})}\int\limits_{U_{0}(F_{v})}f^{0}_{W({\mathcal{E}}_{\tau_{v}}),s}(% \delta u_{0}t(1,h))\psi^{-1}_{v,U}(u_{0})\omega_{\pi_{v}}(h)du_{0}dh)\varphi^{% 0}_{\pi_{v}}.$$ The integral in (2.13) is equal to $\frac{L(\pi_{v}\times\tau_{v},s+\frac{1}{2})}{d_{v}(s)}$, where the denominator comes from the normalizing factor of the Eisenstein series in the global integral (1.2). This is the unramified computation carried in [C-F-G-K1] (and similarly in [PS-R1], [G-H].) Denote $\varphi_{\pi}^{S}=\otimes_{v\notin S}\varphi^{0}_{\pi_{v}}$. Then it follows that (2.14) $$\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})=\frac{L^{S}(\pi\times\tau,s+\frac{% 1}{2})}{d^{S}(s)}(\iota\circ\ell)(\otimes_{v\in S}\xi(\varphi_{\pi_{v}},f_{W({% \mathcal{E}}_{\tau_{v}}),s})\otimes\varphi_{\pi}^{S}).$$ For $v\in S$, a similar argument, using the $K_{H,v}$-finiteness of $f_{W({\mathcal{E}}_{\tau_{v}}),s}$, shows that $\xi(\varphi_{\pi_{v}},f_{W({\mathcal{E}}_{\tau_{v}}),s})$ is equal to a finite sum of elements of the form (2.15) $$(\int\limits_{G(F_{v})}\int\limits_{U_{0}(F_{v})}f^{\prime}_{W({\mathcal{E}}_{% \tau_{v}}),s}(\delta u_{0}t(1,h))\psi^{-1}_{v,U}(u_{0})<\pi_{v}(h)\varphi_{v},% \alpha_{\pi_{v}}>du_{0}dh)\beta_{\pi_{v}},$$ where $<\pi_{v}(h)\varphi_{\pi_{v}},\alpha_{v}>$ denotes a matrix coefficient of $\pi_{v}$ corresponding to $\varphi_{\pi_{v}}$ and a vector $\alpha_{v}$ in the dual of $\pi_{v}$; $\beta_{\pi_{v}}$ is a vector in the space of $\pi_{v}$. Denote the integral in (2.15) by $\mathcal{L}(\varphi_{\pi_{v}},\alpha_{\pi_{v}},f^{\prime}_{W({\mathcal{E}}_{% \tau_{v}}),s})$. 
This integral is absolutely convergent for $\text{Re}(s)$ large, and continues to a meromorphic function in the complex plane. This is the local integral of the doubling method. See [C-F-G-K1]. All in all, we can express the meromorphic function $\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})$ as a finite sum of elements of the form (2.16) $$\frac{L^{S}(\pi\times\tau,s+\frac{1}{2})}{d^{S}(s)}\prod_{v\in S}\mathcal{L}(% \varphi_{\pi_{v}},\alpha_{\pi_{v}},f^{\prime}_{W({\mathcal{E}}_{\tau_{v}}),s})% \cdot(\iota\circ\ell)(\otimes_{v\in S}\beta_{\pi_{v}}\otimes\varphi_{\pi}^{S})% ^{\iota}.$$ In particular, $\xi(\varphi_{\pi},f_{\mathcal{E}_{\tau},s})$ lies in the space of $\pi^{\iota}$. This completes the proof of Theorem 1. 2.4. The Lemma on Exchanging Roots We will need the lemma on exchanging roots, Lemma 7.1 in [G-R-S2]. Typically, we have $F$-unipotent subgroups $A,B,C,D,X,Y$, of $H$, such that $X,Y$ are abelian, intersect $C$ trivially, normalize $C$, and satisfy $[X,Y]\subset C$, $B=CY$, $D=CX$, $A=BX=DY$. We have a nontrivial character $\psi_{C}$ of $C(\bf A)$, trivial on $C(F)$, such that $X({\bf A}),Y(\bf A)$ preserve $\psi_{C}$, when acting by conjugation, and, finally, we assume that the form $(x,y)\mapsto\psi_{C}([x,y])$ defines a non-degenerate bi-multiplicative pairing on $X({\bf A})\times Y(\bf A)$. Extend the character $\psi_{C}$ to a character $\psi_{B}$ of $B(\bf A)$ and a character $\psi_{D}$ of $D(\bf A)$, by making $\psi_{B}$ trivial on $Y(\bf A)$ and $\psi_{D}$ trivial on $X(\bf A)$. Lemma 1. 1. Let $f$ be a smooth, automorphic function on $H(\bf A)$, of moderate growth. Then for every $h\in H(\bf A)$, (2.17) $$\int\limits_{B(F)\backslash B({\bf A})}f(vh)\psi^{-1}_{B}(v)dv=\int\limits_{Y(% {\bf A})}\int\limits_{D(F)\backslash D({\bf A})}f(uyh)\psi^{-1}_{D}(u)dudy.$$ 2. Let $\mathcal{A}$ be a space of smooth automorphic functions of moderate growth on $H(\bf A)$. Assume that $\mathcal{A}$ is invariant to right translations by elements of $H(\bf A)$, and that, as an $H(\bf A)$-module, it satisfies the Dixmier-Malliavin Lemma ([D-M]). Let $\Omega\subset H(\bf A)$ be the subset of elements $h$ which satisfy the following two properties. (1) $[Y({\bf A}),h]\subset D$ and $\psi_{D}([Y({\bf A}),h])=1$; (2) For every $x\in X(\bf A)$, $y\in Y(\bf A)$, $[yxy^{-1},h]\in D(\bf A)$ and $\psi_{D}([yxy^{-1},h])=1$. Then for each $f\in\mathcal{A}$, there exists $\varphi\in{\mathcal{A}}$ such that for all $h\in\Omega$, (2.18) $$\int\limits_{B(F)\backslash B({\bf A})}f(vh)\psi_{B}^{-1}(v)dv=\int\limits_{D(% F)\backslash D({\bf A})}\varphi(uh)\psi^{-1}_{D}(u)du.$$ 3. With the same assumptions as in the beginning of (2), let $\Omega^{\prime}\subset H({\bf A})$ be the subset of elements $h$ which satisfy the following three properties. (1) $h$ normalizes $Y({\bf A})$ and preserves the measure $dy$; (2) $h$ normalizes $X({\bf A})$; (3) For every $x\in X(\bf A)$, $y\in Y(\bf A)$, $[h,[y,x]]\in D(\bf A)$ and $\psi_{D}([h,[y,x]])=1$. Then the assertion of (2) holds for $h\in\Omega^{\prime}$. Proof. The first statement (2.17) is Lemma 7.1 in [G-R-S2]. The proof of the second statement follows by similar arguments as in [G-R-S2], Corollary 7.2. Let $f\in\mathcal{A}$. By the lemma of Dixmier and Malliavin, there exist $f_{1},...,f_{r}\in\mathcal{A}$, and $\xi_{1},...,\xi_{r}\in C_{c}^{\infty}(X({\bf A}))$, such that for all $z\in H(\bf A)$, (2.19) $$f(z)=\sum_{i=1}^{r}\int\limits_{X({\bf A})}\xi_{i}(x)f_{i}(zx)dx.$$ Plug this into the right hand side of equation (2.17). 
We obtain (2.20) $$\sum_{i=1}^{r}\ \int\limits_{Y({\bf A})}\int\limits_{D(F)\backslash D({\bf A})}\int\limits_{X({\bf A})}\xi_{i}(x)f_{i}(uyhx)\psi^{-1}_{D}(u)dxdudy.$$ Since $h\in\Omega$, the first property defining $\Omega$ shows that, for $y\in Y(\bf A)$, there is $u^{\prime}\in D(\bf A)$, such that $yh=u^{\prime}hy$ and $\psi_{D}(u^{\prime})=1$. Changing variable $uu^{\prime}\mapsto u$, we see that in the integral (2.20) we can replace $yh$ by $hy$, and get (2.21) $$\sum_{i=1}^{r}\ \int\limits_{Y({\bf A})}\int\limits_{D(F)\backslash D({\bf A})}\int\limits_{X({\bf A})}\xi_{i}(x)f_{i}(uhyx)\psi^{-1}_{D}(u)dxdudy.$$ By the second condition defining $\Omega$, we have in (2.21), $hyx=[y,x]u^{\prime}hy$, where $u^{\prime}\in D(\bf A)$ is such that $\psi_{D}(u^{\prime})=1$. Change variable $u[y,x]u^{\prime}\mapsto u$, to get (2.22) $$\sum_{i=1}^{r}\ \int\limits_{D(F)\backslash D({\bf A})}\int\limits_{Y({\bf A})}\hat{\xi}_{i}(y)f_{i}(uhy)\psi^{-1}_{D}(u)dydu,$$ where $$\hat{\xi}_{i}(y)=\int\limits_{X({\bf A})}\xi_{i}(x)\psi^{-1}_{D}([x,y])dx.$$ Choosing $$\varphi(z)=\sum_{i=1}^{r}\int\limits_{Y({\bf A})}\hat{\xi}_{i}(y)f_{i}(zy)dy,\quad z\in H(\bf A),$$ we get (2.18). The proof of the third part is similar. Here, for $h\in\Omega^{\prime}$, by the first property defining $\Omega^{\prime}$, the r.h.s. of (2.17) is equal to (2.23) $$\int\limits_{Y({\bf A})}\int\limits_{D(F)\backslash D({\bf A})}f(uhy)\psi^{-1}_{D}(u)dudy.$$ Write $f$ as in (2.19), and now (2.23) becomes (2.24) $$\sum_{i=1}^{r}\ \int\limits_{Y({\bf A})}\int\limits_{D(F)\backslash D({\bf A})}\int\limits_{X({\bf A})}\xi_{i}(x)f_{i}(uhyx)\psi^{-1}_{D}(u)dxdudy.$$ By the third property defining $\Omega^{\prime}$, (2.24) is equal to (2.25) $$\sum_{i=1}^{r}\ \int\limits_{Y({\bf A})}\int\limits_{X({\bf A})}\int\limits_{D(F)\backslash D({\bf A})}\xi_{i}(x)f_{i}(uhxy)\psi^{-1}([x,y])\psi^{-1}_{D}(u)dudxdy.$$ Since $h$ normalizes $X({\bf A})$, changing variable $uhxh^{-1}\mapsto u$, we get that (2.25) is equal to (2.22). ∎ 3. Example: the Jacquet-Langlands integral Let $\pi$ be an irreducible, automorphic, cuspidal representation of $GL_{n}({\bf A})$. We assume, for simplicity, that $\pi$ has a trivial central character. A doubling integral for the standard $L$-function of $\pi$ is given in [PS-R1], Section 3. Proposition 3.2 in [PS-R1] relates the resulting doubling integral to the global integral of Godement-Jacquet [G-J]. More precisely, by correctly choosing the section defining the Eisenstein series, this doubling integral represents $L(\pi,ns-\frac{1}{2}(n-1))$. This is the section denoted by $f^{\phi}(h;s)$ on p. 12 in [PS-R1]. When we take it to be decomposable, its local components almost everywhere are not normalized; the product of their values at $1$, at all places $v$ outside an appropriate finite set $S$ containing the Archimedean places, is $L^{S}(4s)$. In this case, (1.4) is given by (3.1) $$\xi(\varphi_{\pi},f_{s})(g)=\int\limits_{Z({\bf A})GL_{n}(F)\backslash GL_{n}({\bf A})}\varphi_{\pi}(h)E(f_{s})(t(g,h))dh.$$ The Eisenstein series is defined on the group $GL_{n^{2}}({\bf A})$, and it is attached to the section $f_{s}$ of the induced representation $Ind_{P({\bf A})}^{GL_{n^{2}}({\bf A})}\delta_{P}^{s-\frac{1}{2}}$ (normalized induction). Here $P$ is the maximal parabolic subgroup of $GL_{n^{2}}$, whose Levi part is isomorphic to $GL_{1}\times GL_{n^{2}-1}$. The embedding of $(g,h)\in GL_{n}({\bf A})\times GL_{n}({\bf A})$, $t(g,h)$, is given by the tensor product map.
Also, $Z$ is the center of $GL_{n}$. In the rest of this section we assume that $n=2$. Then the doubling integral mentioned above, and motivating the definition of (3.1), represents $\frac{L(\pi,2s-\frac{1}{2})}{L(4s)}$. (Now we assume that the data in (3.1) are decomposable and the section $f_{s}$ is normalized at almost all places.) Recall the well-known global integral of Jacquet and Langlands in [J-L], representing the standard $L$-function $L(\pi,s)$. This is the integral (3.2) $$\int\limits_{F^{*}\backslash{\bf A}^{*}}\varphi_{\pi}\begin{pmatrix}a&\\ &1\end{pmatrix}|a|^{s-1/2}d^{*}a.$$ This integral unfolds to an integral which involves the $\psi$-Whittaker coefficient of $\varphi_{\pi}$. Hence, we will now prove that when computing the $\psi$-Whittaker coefficient of $\xi(\varphi_{\pi},f_{s})$, we obtain the integral (3.2), with $s$ shifted, as an inner integration. The $\psi$-Whittaker coefficient of $\xi(\varphi_{\pi},f_{s})$ is the integral (3.3) $$\int\limits_{Z({\bf A})GL_{2}(F)\backslash GL_{2}({\bf A})}\int\limits_{F\backslash{\bf A}}\varphi_{\pi}(h)E(f_{s})\left(\begin{pmatrix}I_{2}&xI_{2}\\ &I_{2}\end{pmatrix}\begin{pmatrix}h&\\ &h\end{pmatrix},s\right)\psi^{-1}(x)dxdh.$$ We prove Theorem 2. For all $h\in GL_{2}(\bf A)$ the integral (3.4) $$\int\limits_{F^{*}\backslash{\bf A}^{*}}\varphi_{\pi}(\begin{pmatrix}a&\\ &1\end{pmatrix}h)|a|^{2s-1}d^{*}a$$ is an inner integration in integral (3.3). Proof. Let $U$ denote the unipotent radical of the standard parabolic subgroup of $GL_{4}$, whose Levi part is isomorphic to $GL_{2}\times GL_{2}$. Thus, $U({\bf A})$ consists of all matrices of the form $$u(y)=\begin{pmatrix}I_{2}&y\\ &I_{2}\end{pmatrix},\ \ \ \ \ \ y\in Mat_{2}({\bf A}).$$ Let $\psi_{U}$ be the character of $U(\bf A)$ defined by $\psi_{U}(u(y))=\psi(y_{2,1})$. Write the Fourier expansion of $E(f_{s})$ along $U(F)\backslash U({\bf A})$. Each Fourier coefficient corresponds to a matrix $\gamma\in Mat_{2}(F)$, which defines the character $\psi_{U,\gamma}(u(y))=\psi(\text{tr}(\gamma y))$, and the corresponding Fourier coefficient is $$E^{\psi_{U,\gamma}}(f_{s})(m)=\int\limits_{U(F)\backslash U({\bf A})}E(f_{s})(um)\psi^{-1}_{U,\gamma}(u)du.$$ Note that $\psi_{U}=\psi_{U,\scriptsize{\begin{pmatrix}0&1\\ 0&0\end{pmatrix}}}$. Denote $E^{U}(f_{s})=E^{\psi_{U,0}}(f_{s})$. This is the constant term of $E(f_{s})$ along $U(F)\backslash U({\bf A})$. If $\gamma$ is invertible, then $E^{\psi_{U,\gamma}}(f_{s})=0$. This follows from the smallness of the representation generated by our Eisenstein series. In the language of unipotent orbits, when $\gamma$ is invertible, the character $\psi_{U,\gamma}$ corresponds to the partition $(2^{2})$ of $4$. A similar proof to that of Prop. 5.2 in [G] shows that if a Fourier coefficient corresponds to a unipotent orbit attached to a given partition $\underline{p}$ of $4$, then we must have $\underline{p}\leq(2,1^{2})$. Now, our assertion follows since $(2^{2})>(2,1^{2})$. Thus, only $\gamma=0$ and $\gamma$ of rank one contribute to the Fourier expansion, and we have the identity (3.5) $$E(f_{s})(m)=E^{U}(f_{s})(m)+\sum_{(\gamma_{1},\gamma_{2})\in B(F)\times B(F)\backslash GL_{2}(F)\times GL_{2}(F)}\ \ \sum_{\alpha\in F^{*}}E^{\psi_{U}}(f_{s})\left(j(\alpha)\begin{pmatrix}\gamma_{1}&\\ &\gamma_{2}\end{pmatrix}m\right).$$ Here $j(\alpha)=\text{diag}(1,\alpha,1,1)$, and $B$ denotes the standard Borel subgroup of $GL_{2}$. Plug identity (3.5) into equation (3.3). The constant term $E^{U}$ contributes zero to the expansion.
Indeed, changing variables in $U$, we obtain as an inner integral $\int\psi^{-1}(x)dx$, along $F\backslash{\bf A}$, and this is zero. As for the contribution of the second term in equation (3.5), we first observe that the space of double cosets $B(F)\times B(F)\backslash GL_{2}(F)\times GL_{2}(F)/GL_{2}^{\Delta}(F)$ contains two elements. Here $GL_{2}^{\Delta}$ is the diagonal embedding of $GL_{2}$. As representatives we can choose $e=(I_{2},I_{2})$ and $w_{0}=\text{diag}(J_{2},I_{2})$. The matrix $J_{2}$ was defined in the beginning of Section 2. The contribution of $e$ is zero, again using the fact that we obtain $\int\psi(x)dx$ as an inner integration. As for $w_{0}$, we notice that the stabilizer of $w_{0}$ in $GL_{2}^{\Delta}(F)$ is $T^{\Delta}(F)$, where $T$ is the diagonal subgroup of $GL_{2}$. Hence, integral (3.3) is equal to (3.6) $$\int\limits_{Z({\bf A})T(F)\backslash GL_{2}({\bf A})}\sum_{\alpha\in F^{*}}\ % \int\limits_{F\backslash{\bf A}}\varphi_{\pi}(h)E^{\psi_{U}}(f_{s})\left(j(% \alpha)w_{0}\begin{pmatrix}I_{2}&xI_{2}\\ &I_{2}\end{pmatrix}\begin{pmatrix}h&\\ &h\end{pmatrix}\right)\psi^{-1}(x)dxdh.$$ Note that $$j(\alpha)w_{0}\begin{pmatrix}I_{2}&xI_{2}\\ &I_{2}\end{pmatrix}=u\left(\begin{pmatrix}0&x\\ \alpha x&0\end{pmatrix}\right)j(\alpha)w_{0};\quad\psi_{U}(u\left(\begin{% pmatrix}0&x\\ \alpha x&0\end{pmatrix}\right))=\psi(\alpha x).$$ Therefore conjugating in the integrand of (3.6) the unipotent matrix to the left, we obtain the integral $\int\psi^{-1}((1-\alpha)x)dx$ as inner integration. Thus, for all $\alpha\neq 1$ we get zero contribution. Factoring the measure, integral (3.6) is equal to $$\int\limits_{T({\bf A})\backslash GL_{2}({\bf A})}\int\limits_{F^{*}\backslash% {\bf A}^{*}}\varphi_{\pi}\left(\begin{pmatrix}a&\\ &1\end{pmatrix}h\right)E^{\psi_{U}}(f_{s})\left(\hat{a}w_{0}\begin{pmatrix}h&% \\ &h\end{pmatrix}\right)d^{*}adh.$$ Here $\hat{a}=\text{diag}(1,a,a,1)$. We claim that $E^{\psi_{U}}(f_{s})(\hat{a}m)=|a|^{2s-1}E^{\psi_{U}}(f_{s})(m)$. To prove this identity, let $V$ denote the maximal unipotent subgroup of $GL_{4}$, which contains $U$. Let $\psi_{V}$ denote the character of $V({\bf A})$ obtained by the trivial extension of $\psi_{U}$ from $U({\bf A})$ to $V({\bf A})$. Then $$E^{\psi_{U}}(f_{s})(m)=E^{\psi_{V}}(f_{s})(m)=\int\limits_{V(F)\backslash V({% \bf A})}E(f_{s})(vm)\psi^{-1}_{V}(v)dv.$$ This follows from the smallness of our Eisenstein series. To show this, put $$v(x,y)=\text{diag}(\begin{pmatrix}1&x\\ &1\end{pmatrix},\begin{pmatrix}1&y\\ &1\end{pmatrix}).$$ Consider the following function on $F\backslash{\bf A}\times F\backslash{\bf A}$, $(x,y)\mapsto E^{\psi_{U}}(f_{s})(v(x,y)m)$ and its Fourier expansion. Its Fourier coefficients have the form $$\int\limits_{(F\backslash{\bf A})^{2}}E^{\psi_{U}}(f_{s})(v(x,y)m)\psi^{-1}(% \alpha x+\beta y)dxdy,$$ where $\alpha,\beta\in F$. As before, if $(\alpha,\beta)\neq(0,0)$, this Fourier coefficient is zero. Only $\alpha=\beta=0$ contribute to the Fourier expansion. Unfolding the Eisenstein series, it is easy to prove that $E^{\psi_{V}}(f_{s})(\hat{a}m)=|a|^{2s-1}E^{\psi_{V}}(f_{s})(m)$. From this the claim follows. We conclude that integral (3.3) is equal to (3.7) $$\int\limits_{T({\bf A})\backslash GL_{2}({\bf A})}\left(\int\limits_{F^{*}% \backslash{\bf A}^{*}}\varphi_{\pi}\left(\begin{pmatrix}a&\\ &1\end{pmatrix}h\right)|a|^{2s-1}d^{*}a\right)E^{\psi_{V}}(f_{s})\left(w_{0}% \begin{pmatrix}h&\\ &h\end{pmatrix}\right)dh.$$ From this the theorem follows. 
∎ We remark that in (3.7) the Fourier coefficient $E^{\psi_{V}}(f_{s})$ is Eulerian. Unfolding the Eisenstein series, we get, for $Re(s)$ sufficiently large, (3.8) $$E^{\psi_{V}}(f_{s})\left(w_{0}\begin{pmatrix}h&\\ &h\end{pmatrix}\right)=\int\limits_{{\bf A}^{2}}f_{s}\left(\begin{pmatrix}1\\ x&1\\ y&0&1\\ 0&0&0&1\end{pmatrix}w^{0}\begin{pmatrix}h&\\ &h\end{pmatrix}\right)\psi^{-1}(y)dxdy,$$ where $w^{0}=\text{diag}(J_{3},1)$. Assume that $f_{s}$ is decomposable. Let $S$ be a finite set of places, containing the Archimedean places, outside which $f_{s}$ is spherical and normalized. Assume that $\varphi_{\pi}$ is right $GL_{2}(\mathcal{O}_{v})$-invariant, for all $v\notin S$. Thus, for such places, it is enough to take $h=\begin{pmatrix}1&z\\ &1\end{pmatrix}$ in the local integration over $T(F_{v})\backslash GL_{2}(F_{v})$ in (3.7). Denote the local factor of $f_{s}$, at $v\notin S$, by $f^{0}_{s,v}$. Assume also that, for such $v$, $\psi_{v}$ is normalized. Then it is not hard to see that the following local integral of (3.8), at $v$, (3.9) $$\int\limits_{F_{v}^{2}}f^{0}_{s,v}\left(\begin{pmatrix}1\\ x&1\\ y&0&1\\ 0&0&0&1\end{pmatrix}w^{0}\begin{pmatrix}1&z\\ &1\\ &&1&z\\ &&&1\end{pmatrix}\right)\psi_{v}^{-1}(y)dxdy,$$ is supported in $z\in\mathcal{O}_{v}$, and the integration is supported in $x\in\mathcal{O}_{v}$, so that for such $z$, (3.9) is equal to (3.10) $$\int\limits_{F_{v}}f^{0}_{s,v}\left(\begin{pmatrix}1\\ 0&1\\ y&0&1\\ 0&0&0&1\end{pmatrix}\right)\psi_{v}^{-1}(y)dy.$$ It is easy to compute (3.10). It is equal to $1-q_{v}^{-4s}$, where $q_{v}$ is the number of elements in the residue field of $F_{v}$. Thus, since each factor $1-q_{v}^{-4s}$ is the inverse of the local factor of $L(4s)$ at $v$, the product of the local integrals (3.10), over all $v\notin S$ (and $\text{Re}(s)>1$), is $(L^{S}(4s))^{-1}$. 4. Examples of New Way type integrals In this section we consider two examples. The first example is the famous construction of global integrals given by Piatetski-Shapiro and Rallis in [PS-R2]. This was the first example of the so-called New Way type integrals (after the title of the paper [PS-R2]). These integrals represent the standard $L$-function $L(\pi,s)$ of an irreducible, automorphic, cuspidal representation $\pi$ of $Sp_{2k}({\bf A})$. In the second example we extend the above construction and introduce a (new) New Way type of integrals which represent the standard $L$-function $L(\pi\times\tau,s)$. Here $\tau$ is an irreducible, automorphic, cuspidal representation of $GL_{n}({\bf A})$. We will do it by considering the case $k=n=2$. This will demonstrate how one can use the procedure described in the introduction to actually construct new global integrals which we conjecture to be Eulerian. 4.1. The New Way Construction of Piatetski-Shapiro and Rallis First, we describe the functional used in the integrals in [PS-R2]. Let $\pi$ denote an irreducible, automorphic, cuspidal representation of $Sp_{2k}({\bf A})$. Let $T_{0}$ denote a symmetric matrix in $GL_{k}(F)$. It is not essential, but we may assume that $T_{0}$ is diagonal. Denote $T=J_{k}T_{0}$. With these notations, the functional used in the New Way integral is given by (4.1) $$\varepsilon_{T}(\varphi_{\pi})=\int\limits_{Mat_{k}^{0}(F)\backslash Mat_{k}^{0}({\bf A})}\varphi_{\pi}\left(\begin{pmatrix}I_{k}&Z\\ &I_{k}\end{pmatrix}\right)\psi^{-1}(\text{tr}(TZ))dZ.$$ To describe the global integral, we assume that $k$ is even. This assumption is also made in [PS-R2] at the beginning of Section 2. It is made to avoid the use of Eisenstein series on metaplectic groups.
Denote by $\chi_{T}$ the quadratic character of ${\bf A}^{*}$, $\chi_{T}(x)=(x,\text{det}(T))$. Here, $(,)$ is the global Hilbert symbol. This is the quadratic character corresponding to $\text{disc}(T_{0})$. Let $SO_{T_{0}}$ denote the special orthogonal group in $k$ variables over $F$, corresponding to $T_{0}$. Consider the dual pair $SO_{T_{0}}\times Sp_{2k}$ inside $Sp_{2k^{2}}$. Since $k$ is even, $SO_{T_{0}}({\bf A})\times Sp_{2k}(\bf A)$ splits in $Sp^{(2)}_{2k^{2}}(\bf A)$. Denote by $\omega_{\psi_{T}}$ the restriction of the Weil representation $\omega_{\psi}$ of $Sp^{(2)}_{2k^{2}}(\bf A)$, corresponding to $\psi$, to the image of $Sp_{2k}(\bf A)$ under the splitting. It can be realized in $\mathcal{S}(Mat_{k}({\bf A}))$, such that for $\phi\in\mathcal{S}(Mat_{k}({\bf A}))$, $g\in Sp_{2k}(\bf A)$, $$\omega_{\psi_{T}}\left(\begin{pmatrix}I_{k}&Z\\ &I_{k}\end{pmatrix}g\right)\phi(I_{k})=\psi(\text{tr}(TZ))\omega_{\psi_{T}}(g)% (I_{k}).$$ See [PS-R2], p. 117. Denote by $\theta_{\psi_{T}}^{\phi}$ the corresponding theta series, restricted to $Sp_{2k}(\bf A)$. More generally, we have, for $\phi\in\mathcal{S}(Mat_{k}({\bf A}))$, the theta series $\theta^{\phi}_{\psi,2k^{2}}$ on $Sp^{(2)}_{2k^{2}}(\bf A)$ corresponding to $\phi$. Denote by $i_{T}$ the above splitting of $SO_{T_{0}}({\bf A})\times Sp_{2k}(\bf A)$. Then, for $(m,g)\in SO_{T_{0}}({\bf A})\times Sp_{2k}(\bf A)$, $\phi\in\mathcal{S}(Mat_{k}({\bf A}))$, $$\theta^{\phi}_{\psi,2k^{2}}(i_{T}(m,g))=\sum_{x\in Mat_{k}(F)}\omega_{\psi_{T}% }(g)\phi(m^{-1}x).$$ We should denote $i_{T,2k}$, but we will drop $2k$, as it will be clear from the context. We have, for $g\in Sp_{2k}({\bf A})$, $\theta_{\psi_{T}}^{\phi}(g)=\theta^{\phi}_{\psi,2k^{2}}(i_{T}(1,g))$. Let $E(\eta_{\chi_{T},s})(g)$ denote an Eisenstein series on $Sp_{2k}({\bf A})$, attached to a smooth, holomorphic section $\eta_{\chi_{T},s}$ of $Ind_{Q_{k}({\bf A})}^{Sp_{2k}({\bf A})}(\chi_{T}\circ\text{det})|\text{det}% \cdot|^{s}$. Then the global integral introduced in [PS-R2], equation (2.1), is given by (4.2) $$\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})}\varphi_{\pi}(g)\theta_{% \psi_{T}}^{\phi}(g)E(\eta_{\chi_{T},s})(g)dg.$$ Let $S$ be a finite set of places of $F$, containing the Archimedean places, such that $\pi_{v}$ is unramified for $v\notin S$, and the diagonal coordinates of $T_{0}$ are units outside $S$. Then the integral (4.2) represents outside $S$ the ratio (4.3) $$\frac{L^{S}(\pi,s+\frac{1}{2})}{d^{S}_{k}(\chi_{T},s)},$$ where (4.4) $$d^{S}_{k}(\chi_{T},s)=L^{S}(\chi_{T},s+\frac{k+1}{2})\prod_{i=0}^{\frac{k}{2}-% 1}L^{S}(2s+2i+1).$$ In [PS-R1], the authors construct a doubling integral, which represents outside $S$ the ratio (4.5) $$\frac{L^{S}(\pi,s+\frac{1}{2})}{d^{S}_{2k}(1,s)},$$ where $d^{S}_{2k}(1,s)$ is obtained from (4.4), with $2k$ instead of $k$ and the trivial character instead of $\chi_{T}$. This construction uses an Eisenstein series ${\mathcal{E}}(f_{s})$ on $Sp_{4k}({\bf A})$, corresponding to a smooth, holomorphic section $f_{s}$ of $Ind_{Q_{2k}({\bf A})}^{Sp_{4k}({\bf A})}|\text{det}\cdot|^{s}$. As explained in Section 2.3, for the case $n=1$, the function (4.6) $$\xi(\varphi_{\pi},f_{s})(g)=\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})% }\varphi_{\pi}(h){\mathcal{E}}(f_{s})\left(\begin{pmatrix}g_{1}&&g_{2}\\ &h&\\ g_{3}&&g_{4}\end{pmatrix}\right)dh$$ is a cusp form in the space of $\pi^{\iota}$. Here $g=\begin{pmatrix}g_{1}&g_{2}\\ g_{3}&g_{4}\end{pmatrix}\in Sp_{2k}(\bf A)$. 
Our goal in this example is to compute integral (4.1) with $(\varphi_{\pi})^{\iota}$ instead of $\varphi_{\pi}$, and then replace $(\varphi_{\pi})^{\iota}$ by $\xi(\varphi_{\pi},f_{s})$. In other words, we compute the integral (4.7) $$\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})}\int\limits_{Mat_{k}^{0}(F)\backslash Mat_{k}^{0}({\bf A})}\varphi_{\pi}(h){\mathcal{E}}(f_{s})\left(\begin{pmatrix}I_{k}&&Z\\ &h&\\ &&I_{k}\end{pmatrix}\right)\psi^{-1}(\text{tr}(TZ))dZdh.$$ We prove Theorem 3. Given $\phi\in\mathcal{S}(Mat_{k}({\bf A}))$, there are nontrivial choices of the sections $f_{s}$, $\eta_{\chi_{T},s}$, such that integral (4.7) is equal to integral (4.2), for any cusp form $\varphi_{\pi}$ in the space of $\pi$. The choice of sections is made explicit in the proof, and is such that $f_{s}$ is a certain convolution of any given section $f^{\prime}_{s}$ by a Schwartz function depending on $\phi$ and another given Schwartz function $\phi_{2}\in\mathcal{S}(Mat_{k}({\bf A}))$; $\eta_{\chi_{T},s}$ is given by an explicit (integral) formula $\eta_{\chi_{T},s}=\eta(f^{\prime}_{s},\phi_{2})$. Proof. Recall that we assume that $k$ is even. Consider the unipotent radical $U_{k,4k}$, which we denote for short by $U$. It consists of all matrices of the form (4.8) $$u=\begin{pmatrix}I_{k}&a&b&Z\\ &I_{k}&&\star\\ &&I_{k}&\star\\ &&&I_{k}\end{pmatrix}\in Sp_{4k}.$$ This group is a generalized Heisenberg group, as follows. Consider the Heisenberg group ${\mathcal{H}}_{2k^{2}+1}$ realized as $Mat_{k}\times Mat_{k}\times Mat_{1}$. Then there is a homomorphism from $U$ onto ${\mathcal{H}}_{2k^{2}+1}$. We choose the following homomorphism: $l_{T}(u)=(a,b,\text{tr}(TZ))$. Embed $SO_{T_{0}}\times Sp_{2k}$ inside $Sp_{4k}$ by $(m,h)\mapsto\text{diag}(m,h,m^{*})\in Sp_{4k}$. Note that the action by conjugation of $\text{diag}(m,h,m^{*})$ on the element (4.8) takes $(a,b,Z)$ to $((m^{-1}a,m^{-1}b)h,m^{-1}ZJ_{k}{}^{t}m^{-1}J_{k})$. Denote, for short, $(m,h)=\text{diag}(m,h,m^{*})$. Let $D$ denote the semi-direct product of $U$ and the image of $SO_{T_{0}}\times Sp_{2k}$ inside $Sp_{4k}$. Consider, for $d\in D(\bf A)$, the Fourier coefficient (4.9) $${\mathcal{E}}^{\psi_{T}}(f_{s})(d)=\int\limits_{Mat_{k}^{0}(F)\backslash Mat_{k}^{0}({\bf A})}{\mathcal{E}}(f_{s})\left(\begin{pmatrix}I_{k}&&Z\\ &I_{2k}&\\ &&I_{k}\end{pmatrix}d\right)\psi^{-1}(\text{tr}(TZ))dZ.$$ It defines a function of $d\in D(F)\backslash D({\bf A})$. Now we use a theorem of Ikeda on Fourier-Jacobi coefficients. See [I], Sec. 1. It says that the following set of functions of $d=v(m,h)$, $v\in U({\bf A})$, $m\in SO_{T_{0}}(\bf A)$, $h\in Sp_{2k}(\bf A)$, (4.10) $$\theta_{\psi,2k^{2}}^{\phi_{1}}(l_{T}(v)i_{T}(m,h))\int\limits_{U(F)\backslash U({\bf A})}\overline{\theta_{\psi,2k^{2}}^{\phi_{2}}(l_{T}(u)i_{T}(m,h))}{\mathcal{E}}(f_{s})(u(m,h))du$$ spans a dense subspace of the space of functions given by the integrals (4.9). In more detail, Ikeda introduced a family of functions $\varphi_{\phi_{1},\phi_{2}}$ on ${\mathcal{H}}_{2k^{2}+1}(\bf A)$, which transform by $\psi^{-1}$ under translations by the center, and such that $(x,y)\mapsto\varphi_{\phi_{1},\phi_{2}}(x,y,0)$ lies in $\mathcal{S}(Mat_{k}({\bf A})\times Mat_{k}({\bf A}))$. Here $\phi_{1},\phi_{2}\in\mathcal{S}(Mat_{k}({\bf A}))$. See [I], p. 621.
Let, for $v\in U(\bf A)$, $m\in SO_{T_{0}}(\bf A)$, $h\in Sp_{2k}(\bf A)$, (4.11) $$\rho(\varphi_{\phi_{1},\phi_{2}}){\mathcal{E}}^{\psi_{T}}(f_{s})(v(m,h))=\int% \limits_{C({\bf A})\backslash U({\bf A})}\varphi_{\phi_{1},\phi_{2}}(l_{T}(u))% {\mathcal{E}}^{\psi_{T}}(f_{s})(v(m,h)u)du,$$ where $C$ is the center of $U$. Then the functions (4.11) span a dense subspace of the closure in the Frechet topology of the functions $v(m,h)\mapsto{\mathcal{E}}^{\psi_{T}}(f_{s})(v(m,h))$. Here $s$ is fixed away from the set of poles of our Eisenstein series, and we let the section vary. The proof of Prop. 1.3 in [I] shows that the r.h.s. of (4.11) is equal to (4.10). Note that (4.7) can be rewritten as (4.12) $$\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})}\varphi_{\pi}(h){\mathcal{E% }}^{\psi_{T}}(f_{s})((1,h))dh.$$ Let us realize $C({\bf A})\backslash U({\bf A})$ as the subset which is the product of subgroups $U_{1}(\bf A)$, $U_{2}(\bf A)$ where $U_{1}$ (resp. $U_{2}$) is the subgroup of elements $u$ of the form (4.8), such that $Z=0$ and $b=0$ (resp. $a=0$). Then (4.13) $$f^{\phi_{1},\phi_{2}}_{s}=\int\limits_{U_{2}({\bf A})}\int\limits_{U_{1}({\bf A% })}\varphi_{\phi_{1},\phi_{2}}(l_{T}(u_{2})l_{T}(u_{1}))\rho(u_{2})\rho(u_{1})% f_{s}du_{1}du_{2}$$ is a smooth, holomorphic section of $Ind_{Q_{2k}({\bf A})}^{Sp_{4k}({\bf A})}|\text{det}\cdot|^{s}$; $\rho$ denotes a right translation. Now (4.11) reads as (4.14) $$\rho(\varphi_{\phi_{1},\phi_{2}}){\mathcal{E}}^{\psi_{T}}(f_{s})(v(m,h))={% \mathcal{E}}^{\psi_{T}}(f^{\phi_{1},\phi_{2}}_{s})(v(m,h)).$$ Thus, given the section $f_{s}$ and $\phi_{1},\phi_{2}\in\mathcal{S}(Mat_{k}({\bf A}))$, we construct the section (4.13), and substitute in the integral (4.7) (or in (4.12)) $f^{\phi_{1},\phi_{2}}_{s}$ instead of $f_{s}$. By (4.14) and what we explained before, we get (4.15) $$\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})}\varphi_{\pi}(h)\theta_{% \psi,2k^{2}}^{\phi_{1}}(i_{T}(1,h))\int\limits_{U(F)\backslash U({\bf A})}% \overline{\theta_{\psi,2k^{2}}^{\phi_{2}}(l_{T}(u)i_{T}(1,h))}{\mathcal{E}}(f_% {s})(u(1,h))dudh.$$ Note that $\theta_{\psi,2k^{2}}^{\phi_{1}}(i_{T}(1,h))=\theta_{T}^{\phi_{1}}(h)$. From [I], Theorem 3.2, the inner $du$-integral in (4.15) is an Eisenstein series $E(\eta(f_{s},\phi_{2}))(h)$ on $Sp_{2k}(\bf A)$, corresponding to an explicitly written section $\eta(f_{s},\phi_{2})$ of $Ind_{Q_{k}({\bf A})}^{Sp_{2k}({\bf A})}(\chi_{T}\circ\text{det})|\text{det}% \cdot|^{s}$. Thus (4.15) is equal to (4.16) $$\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})}\varphi_{\pi}(h)\theta_{T}^% {\phi_{1}}(h)E(\eta(f_{s},\phi_{2}))(h)dh,$$ which is an integral of the form (4.2). This completes the proof of the theorem. ∎ Let us examine in detail the equality of the last theorem, using the choice of data made in the proof. Assume that $\varphi_{\pi}$ corresponds to a decomposable vector, which is unramified outside $S$. Similarly, assume that $f_{s}=\prod_{v}f_{v,s}$ is a product of local sections, which are unramified and normalized outside $S$. Assume also that the Schwartz functions $\phi_{1}$, $\phi_{2}$ in the proof are decomposable $\phi_{i}=\prod_{v}\phi_{i,v}$, $i=1,2$, where, for $v\notin S$, $\phi_{i,v}=\phi^{0}$ -the characteristic function of $Mat_{k}(\mathcal{O}_{v})$. It is easy to check that $f^{\phi_{1},\phi_{2}}_{s}$ is decomposable as the product of the analogous local sections $f^{\phi_{1,v},\phi_{2,v}}_{v,s}$ , and that for $v\notin S$ the corresponding factor is the normalized unramified section. 
Using the notation of (2.14), the integral (4.7), with $f_{s}$ replaced by $f^{\phi_{1},\phi_{2}}_{s}$, is equal to (4.17) $$\frac{L^{S}(\pi\times\tau,s+\frac{1}{2})}{d_{2k}^{S}(1,s)}\varepsilon_{T}((% \iota\circ\ell)(\otimes_{v\in S}\xi(\varphi_{\pi_{v}},f^{\phi_{1,v},\phi_{2,v}% }_{v,s})\otimes\varphi_{\pi}^{S})).$$ This is a meromorphic function. Denote $\xi_{S}(\varphi_{\pi},f^{\phi_{1},\phi_{2}}_{s})=\otimes_{v\in S}\xi(\varphi_{% \pi_{v}},f^{\phi_{1,v},\phi_{2,v}}_{v,s})$. We proved that (4.16) is equal to (4.17), that is (4.18) $$\displaystyle\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})}\varphi_{\pi}(% h)\theta_{T}^{\phi_{1}}(h)E(\eta(f_{s},\phi_{2}))(h)dh=\\ \displaystyle\frac{L^{S}(\pi\times\tau,s+\frac{1}{2})}{d_{2k}^{S}(1,s)}% \varepsilon_{T}((\iota\circ\ell)(\xi_{S}(\varphi_{\pi},f^{\phi_{1},\phi_{2}}_{% s})\otimes\varphi_{\pi}^{S})).$$ Examining the section $\eta(f_{s},\phi_{2})$, we see that it is decomposable and has the form $$\frac{d^{S}_{k}(\chi_{T},s)}{d_{2k}^{S}(1,s)}(\otimes_{v\in S}\eta_{v}(f_{v,s}% ,\phi_{2,v})\otimes\eta^{S}_{\chi_{T},s}),$$ where $\eta_{v}(f_{v,s},\phi_{2,v})$ is the local section of $Ind_{Q_{k}(F_{v})}^{Sp_{2k}(F_{v})}(\chi_{T,v}\circ\text{det})|\text{det}\cdot% |^{s}$, which is the local analog at $v$ of $\eta(f_{s},\phi_{2})$ defined in [I], Theorem 3.2; $\eta^{S}_{\chi_{T},s}=\otimes_{v\notin S}\eta^{0}_{\chi_{T,v},s}$, where, for $v\notin S$, $\eta^{0}_{\chi_{T,v},s}$ is the normalized, spherical section of $Ind_{Q_{k}(F_{v})}^{Sp_{2k}(F_{v})}(\chi_{T,v}\circ\text{det})|\text{det}\cdot% |^{s}$. Denote $\eta_{S}(f_{s},\phi_{2})=\otimes_{v\in S}\eta_{v}(f_{v,s},\phi_{2,v})$. Then (4.18) can be rewritten as (4.19) $$\displaystyle\int\limits_{Sp_{2k}(F)\backslash Sp_{2k}({\bf A})}\varphi_{\pi}(% h)\theta_{T}^{\phi_{1}}(h)E(\eta_{S}(f_{s},\phi_{2})\otimes\eta^{S}_{\chi_{T},% s})(h)dh=\\ \displaystyle\frac{L^{S}(\pi\times\tau,s+\frac{1}{2})}{d_{2k}^{S}(1,s)}% \varepsilon_{T}((\iota\circ\ell)(\xi_{S}(\varphi_{\pi},f^{\phi_{1},\phi_{2}}_{% s})\otimes\varphi_{\pi}^{S})).$$ This explains the work in [PS-R2], namely the theorem that the global integral (4.2) represents (4.3). 4.2. Extending the Construction of Piatetski-Shapiro and Rallis We keep the notations of the previous sub-section. In this section we take $k=2$. Thus, $\pi$ is an irreducible, automorphic, cuspidal representation of $Sp_{4}({\bf A})$. Let $\tau$ denote an irreducible, automorphic, cuspidal representation of $GL_{2}({\bf A})$. Our goal is to introduce a new global integral which will unfold to an integral involving the Fourier coefficient given by integral (4.1). We then conjecture that this integral represents the standard $L$-function $L(\pi\times\tau,s)$. This is an example showing how to use our procedure to obtain new global integrals representing an $L$-function coming from doubling integrals. First we need some preliminaries. Consider the unipotent radical $U_{2^{2},16}$. It consists of all matrices of the form (4.20) $$\begin{pmatrix}I_{2}&a_{1}&a_{2}&a_{3}&a_{4}\\ &I_{2}&y&Z&a_{3}^{\prime}\\ &&I_{8}&y^{\prime}&a_{2}^{\prime}\\ &&&I_{2}&a_{1}^{\prime}\\ &&&&I_{2}\end{pmatrix}\in Sp_{16}.$$ Let $U^{0}_{2^{2},16}$ denote the subgroup of $U_{2^{2},16}$ consisting of all matrices of the form (4.20), such that $y=\begin{pmatrix}0_{2\times 6},&y_{0}\end{pmatrix}$, $y_{0}\in Mat_{2}$. Notice that $U_{2^{2},16}$ has a structure of a generalized Heisenberg group. Recall the matrix $T=J_{2}T_{0}$ from the previous sub-section. Write $T_{0}=diag(t_{1},t_{2})$. 
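As a small check of the notation (assuming, as appears to be the convention of Section 2, that $J_{2}$ denotes the antidiagonal matrix with unit entries), we have $$T=J_{2}T_{0}=\begin{pmatrix}&1\\ 1&\end{pmatrix}\begin{pmatrix}t_{1}&\\ &t_{2}\end{pmatrix}=\begin{pmatrix}&t_{2}\\ t_{1}&\end{pmatrix},\qquad \text{det}(T)=-t_{1}t_{2},$$ which is consistent with the formula $\chi_{T}(x)=(x,-t_{1}t_{2})$ recalled below.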
Define $l_{T}^{0}:U_{2^{2},16}\to{\mathcal{H}}_{17}$ by $l_{T}^{0}(u)=(y,\text{tr}(TZ))\in{\mathcal{H}}_{17}$, where $u$ is in the form (4.20). Consider the dual pair $SO_{T_{0}}\times Sp_{8}$ inside $Sp_{16}$. Its Adele points split in $Sp^{(2)}_{16}({\bf A})$. We may realize the Weil representation $\omega_{\psi,16}$ of $Sp^{(2)}_{16}({\bf A})$, corresponding to $\psi$, in $\mathcal{S}(Mat_{2\times 4}({\bf A}))$. Consider, for $\phi\in\mathcal{S}(Mat_{2\times 4}({\bf A}))$, the corresponding theta series $\theta_{\psi,16}^{\phi}$. We may take a splitting $i^{0}_{T}$ of $SO_{T_{0}}({\bf A})\times Sp_{8}({\bf A})$, such that $\omega_{\psi,16}(i^{0}_{T}(m,1))\phi(x)=\phi(m^{-1}x)$, for $m\in SO_{T_{0}}({\bf A})$, $x\in Mat_{2\times 4}({\bf A})$. Let $\psi_{U_{2^{2},16}}$ denote the character of $U_{2^{2},16}$ defined by $\psi_{U_{2^{2},16}}(u)=\psi(\text{tr}(a_{1}))$. Let $\psi_{U^{0}_{2^{2},16},T}$ denote the character of $U^{0}_{2^{2},16}({\bf A})$ defined by $\psi_{U^{0}_{2^{2},16},T}(u)=\psi(\text{tr}(a_{1}))\psi(\text{tr}(TZ))\psi^{-1% }(\text{tr}(y_{0}))$. Again, we wrote $u$ in the form (4.20). Let $f_{\Delta(\tau,4),s}$ be a smooth, holomorphic section of $Ind_{Q_{8}({\bf A})}^{Sp_{16}({\bf A})}\Delta(\tau,4)|\text{det}\cdot|^{s}$, and let $E(f_{\Delta(\tau,4),s})$ be the corresponding Eisenstein series on $Sp_{16}({\bf A})$. Consider the following Fourier-Jacobi coefficient of $E(f_{\Delta(\tau,4),s})$, (4.21) $$\int\limits_{U_{2^{2},16}(F)\backslash U_{2^{2},16}({\bf A})}\theta_{\psi,16}^% {\phi}(l_{T}^{0}(u)i^{0}_{T}(h_{0},g_{0}))E(f_{\Delta(\tau,4),s})(u{}^{d}(h_{0% },g_{0}))\psi^{-1}_{U_{2^{2},16}}(u)du.$$ Here, ${}^{d}(h_{0},g_{0})=\text{diag}(h_{0},h_{0},g_{0},h_{0}^{*},h_{0}^{*})$, $\phi\in\mathcal{S}(Mat_{2\times 4}({\bf A}))$, $h_{0}\in SO_{T_{0}}({\bf A})$ and $g_{0}\in Sp_{8}({\bf A})$. Recall the quadratic character $\chi_{T}$ of $F^{*}\backslash{\bf A}^{*}$ ; $\chi_{T}(x)=(x,\text{det}(T))=(x,-t_{1}t_{2})$. We have Lemma 2. Fix $h_{0}\in SO_{T_{0}}({\bf A})$. Then, as a function of $g_{0}\in Sp_{8}({\bf A})$, the integral (4.21) is an Eisenstein series $E_{\tau\otimes\chi_{T},2}(g_{0},s)$ as in the beginning of Sec. 2.2. More precisely, there is a smooth, meromorphic section $\lambda(f_{\Delta(\tau,4),s},\phi)$ of $Ind_{Q_{4}({\bf A})}^{Sp_{8}({\bf A})}\Delta(\tau\otimes\chi_{T},2)|\text{det}% \cdot|^{s}$, such that the integral (4.21) is equal to the Eisenstein series $E(\lambda(f_{\Delta(\tau,4),s},\phi))$. Proof. This lemma can be proved in two ways. The first proof uses the identities (2.1) and (2.2). Indeed, after some root exchange process, we can use Identity (2.1) to obtain the Eisenstein series $E_{\tau,3}^{(2)}(\cdot,s)$ defined on $Sp_{12}^{(2)}({\bf A})$. Then, using more root exchange, and Identity 2.2, the proof will follow. A second approach is to prove the lemma directly by unfolding the Eisenstein series. We will give some details for this approach, and to simplify the computations, we will assume that $SO_{T_{0}}({\bf A})$ is anisotropic. In other words, we assume that $-t_{1}t_{2}$ is not a square in $F$. Set $h_{0}=1$. We start by unfolding the Eisenstein series in (4.21), assuming that $Re(s)$ is sufficienly large. We consider the space of double cosets $Q_{8}(F)\backslash Sp_{16}(F)/U_{2^{2},16}(F){}^{d}(1,Sp_{8}(F))$. See Sec. 2.2 for notations. 
It follows from the Bruhat decomposition that all representatives can be chosen in the form (4.22) $$\gamma=wx_{\alpha_{1}}(c_{1})x_{\alpha_{3}}(c_{3}),\ \ \ \ \ \ \ \ w=\begin{pmatrix}\epsilon_{1}&&\epsilon_{2}\\ &I_{8}&\\ \epsilon_{3}&&\epsilon_{4}\end{pmatrix}.$$ Here, $x_{\alpha_{1}}(c_{1})=I_{16}+c_{1}e^{\prime}_{1,2}$, and $x_{\alpha_{3}}(c_{3})=I_{16}+c_{3}e^{\prime}_{3,4}$, where $c_{i}\in F$. Also, $w$ is a Weyl element in $Sp_{16}(F)$, which is assumed to be a permutation matrix with nonzero entries $\pm 1$. We can choose the matrix $\epsilon_{1}$ to be a diagonal matrix, and $\epsilon_{2}$ can be chosen so that all nonzero entries are on the other diagonal. Moreover, we can assume that all nonzero entries of $\epsilon_{1}$ and $\epsilon_{2}$ are ones. These conditions determine $w$ uniquely. Let $U_{8}$ denote the unipotent radical of $Q_{8}$. Let $\gamma$ denote a representative as in (4.22). If there is $u\in U_{2^{2},16}({\bf A})$, such that $l^{0}_{T}(u)$ lies in the center of $\mathcal{H}_{17}({\bf A})$, $\psi_{U^{0}_{2^{2},16},T}(u)\neq 1$, and $\gamma u\gamma^{-1}\in U_{8}({\bf A})$, then the double coset of $\gamma$ contributes zero to the integral (4.21). Note that $u$ as above lies in $U^{0}_{2^{2},16}({\bf A})$. For $1\leq i\leq 4$, let $\epsilon_{1}(i)$ denote the $(i,i)$ entry of $\epsilon_{1}$. Assume that $\epsilon_{1}(3)=1$. Thus, for $u=I_{16}+de_{3,14}$ we have $\psi_{U^{0}_{2^{2},16},T}(u)\neq 1$, and $\gamma u\gamma^{-1}\in U_{8}({\bf A})$. Hence, we may assume that $\epsilon_{1}(3)=0$, and that the $(2,3)$ entry of $\epsilon_{2}$ is one. Similarly, if $\epsilon_{1}(4)=1$, we can then choose a suitable matrix $u=I_{16}+d_{1}e_{4,13}+d_{2}e^{\prime}_{3,13}+d_{3}e_{3,14}$ such that $\psi_{U^{0}_{2^{2},16},T}(u)\neq 1$, and $\gamma u\gamma^{-1}\in U_{8}({\bf A})$. We mention that at this point we use the fact that $-t_{1}t_{2}$ is not a square. We conclude that $\epsilon_{1}(4)=0$. If $\epsilon_{1}(1)=1$, we use $u=I_{16}+de^{\prime}_{1,3}$, and if $\epsilon_{1}(2)=1$ we use $u=I_{16}+d_{1}e^{\prime}_{2,4}+d_{2}e^{\prime}_{1,4}$. We omit the details. Thus, we deduce that $\gamma$ is such that $\epsilon_{1}=0$. Matrix multiplication implies that we may assume that $\gamma$ in this case can be chosen so that $c_{1}=c_{3}=0$. Modifying the representative $\gamma$, we deduce that the only possible double coset which can contribute a nonzero term to integral (4.21) is the one containing the following representative $$w_{0}=\begin{pmatrix}&I_{4}&&\\ &&&I_{4}\\ -I_{4}&&&\\ &&I_{4}&\end{pmatrix}.$$ Computing the stabilizer of $w_{0}$, we deduce that, for $Re(s)$ large enough, the integral (4.21) is equal to (4.23) $$\displaystyle\sum_{\gamma\in Q_{4}(F)\backslash Sp_{8}(F)}\ \ \int\limits_{U^{1}({\bf A})}\int\theta_{\psi,16}^{\phi}((0_{2\times 4},b_{2},0)l_{T}^{0}(u_{1})i^{0}_{T}(1,\gamma g_{0}))\\ \displaystyle f_{\Delta(\tau,4),s}\left(\begin{pmatrix}I_{6}&x&&\\ &I_{2}&&\\ &&I_{2}&x^{\prime}\\ &&&I_{6}\end{pmatrix}w_{0}\hat{b}_{2}u_{1}{}^{d}(1,\gamma g_{0})\right)\widetilde{\psi}(x)\psi^{-1}_{U^{1},T}(u_{1})db_{2}dxdu_{1}.$$ We explain the notation. We start with the definition of the group $U^{1}$. It is the subgroup of $U_{2^{2},16}$ consisting of all matrices of the form (4.8) with $k=4$, such that $b=0$. Notice that the projection $l_{T}^{0}$ is non-trivial on $U^{1}$; the character $\psi_{U^{1},T}$ is defined as follows. Write the matrix $Z$ in (4.8) as $Z=\begin{pmatrix}*&*\\ Z_{1}&*\end{pmatrix}$, where $Z_{1}\in Mat_{2}({\bf A})$.
Then $\psi_{U^{1},T}(u_{1})=\psi(\text{tr}(TZ_{1}))$. The integration domain for the $x$ variable is $Mat_{6\times 2}(F)\backslash Mat_{6\times 2}({\bf A})$. The character $\widetilde{\psi}$ is defined as follows. Write $x=\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}$, where $x_{1}\in Mat_{4\times 2}({\bf A})$ and $x_{2}\in Mat_{2}({\bf A})$. Then $\widetilde{\psi}(x)=\psi^{-1}(\text{tr}(x_{2}))$. Finally, the variable $b_{2}$ is integrated over $Mat_{2\times 4}(F)\backslash Mat_{2\times 4}({\bf A})$. It is embedded in $Sp_{16}$ as all matrices in (4.8) with $a=0$, $Z=0$, and $b=\begin{pmatrix}0_{2\times 4}\\ b_{2}\end{pmatrix}$. We claim that for all $h\in Sp_{16}({\bf A})$, we have (4.24) $$\displaystyle\int f_{\Delta(\tau,4),s}\left(\begin{pmatrix}I_{6}&x&&\\ &I_{2}&&\\ &&I_{2}&x^{\prime}\\ &&&I_{6}\end{pmatrix}h\right)\widetilde{\psi}(x)dx=\\ \displaystyle=\int f_{\Delta(\tau,4),s}\left(\begin{pmatrix}I_{4}&x_{3}&x_{1}&% &&\\ &I_{2}&x_{2}&&&\\ &&I_{2}&&&\\ &&&I_{2}&x_{2}^{\prime}&\star\\ &&&&I_{2}&x^{\prime}_{3}\\ &&&&&I_{4}\end{pmatrix}h\right)\psi^{-1}(tr(x_{2}))dx_{i}.$$ Here, the integral on the left hand side is integrated as in integral (4.23). The integral on the right hand side has an extra integration over the variable $x_{3}$ which is integrated over $Mat_{4\times 2}(F)\backslash Mat_{4\times 2}({\bf A})$. The proof of this identity is exactly as the proof of Proposition 2.4 in [G-S]. For simplicity, we shall denote by $f^{\psi^{\prime}}_{\Delta(\tau,4),s}(h)$ the integral on the right hand side of equation (4.24). Plugging this identity in integral (4.23), we then conjugate the matrix $\hat{b}_{2}$ across $w_{0}$, and then we change variables in $x_{3}$. Thus, integral (4.21) is equal to $$\sum_{\gamma\in Q_{4}(F)\backslash Sp_{8}(F)}\ \ \int\limits_{U^{1}({\bf A})}% \int\theta_{\psi,16}^{\phi}((0_{2\times 4},b_{2},0)l_{T}^{0}(u_{1})i^{0}_{T}(1% ,\gamma g_{0}))db_{2}\\ f^{\psi^{\prime}}_{\Delta(\tau,4),s}\left(w_{0}u_{1}{}^{d}(1,\gamma g_{0})% \right)\psi^{-1}_{U^{1},T}(u_{1})du_{1}.$$ Unfolding the theta series and integrating over $b_{2}$, we deduce that integral (4.21) is equal to (for $Re(s)$ large) (4.25) $$\sum_{\gamma\in Q_{4}(F)\backslash Sp_{8}(F)}\ \ \int\limits_{U^{1}({\bf A})}% \omega_{\psi,16}(l_{T}^{0}(u_{1})i^{0}_{T}(1,\gamma g_{0}))\phi(0)f^{\psi^{% \prime}}_{\Delta(\tau,4),s}(w_{0}u_{1}{}^{d}(1,\gamma g_{0}))\psi^{-1}_{U^{1},% T}(u_{1})du_{1}.$$ We claim that the integral $$\lambda(f_{\Delta(\tau,4),s},\phi)=\int\limits_{U^{1}({\bf A})}\omega_{\psi,16% }(l_{T}^{0}(u_{1})i^{0}_{T}(1,g_{0}))\phi(0)f^{\psi^{\prime}}_{\Delta(\tau,4),% s}(w_{0}u_{1}{}^{d}(1,g_{0}))\psi^{-1}_{U^{1},T}(u_{1})du_{1}$$ is a section of the induced representation $Ind_{Q_{4}({\bf A})}^{Sp_{8}({\bf A})}\Delta(\tau\otimes\chi_{T},2)|\text{det}% \cdot|^{s}$. To prove this we consider a matrix of the form $g_{0}=\begin{pmatrix}A&B\\ &A^{*}\end{pmatrix}\in Q_{8}({\bf A})$ where $A\in GL_{4}({\bf A})$. It is easy to check that the integral is left invariant under $g=\begin{pmatrix}I_{4}&B\\ &I_{4}\end{pmatrix}$. Next plug $g_{0}=\begin{pmatrix}A&\\ &A^{*}\end{pmatrix}$, and conjugate it to the left. We obtain $\chi_{T}(\text{det}A)|\text{det}A|^{-4}$ from the change of variables in $u_{1}$. Then we obtain a factor of $|\text{det}A|$ from the action of the Weil representation, and another factor of $|\text{det}A|^{s}\delta_{Q_{8}}^{1/2}(\text{diag}(A,I_{8},A^{*}))=|\text{det}A% |^{s+9/2}$ from the section $f_{\Delta(\tau,4),s}$. 
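For the last equality, one may use the standard formula for the modulus character of the Siegel parabolic $Q_{8}$ of $Sp_{16}$ (a routine verification, assuming the usual normalization): $$\delta_{Q_{8}}\left(\begin{pmatrix}g&\\ &g^{*}\end{pmatrix}\right)=|\text{det}\,g|^{9},\ \ g\in GL_{8}({\bf A}),\qquad\text{so that}\qquad\delta_{Q_{8}}^{1/2}(\text{diag}(A,I_{8},A^{*}))=|\text{det}\,\text{diag}(A,I_{4})|^{9/2}=|\text{det}A|^{9/2}.$$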
Finally, notice that the integral $f^{\psi^{\prime}}_{\Delta(\tau,4),s}$ contains the constant term along the standard parabolic subgroup $L$ of $GL_{8}$ whose Levi part is isomorphic to $GL_{4}\times GL_{4}$. This constant term contributes the factor $\Delta(\tau,2)|\text{det}A|^{-1}\delta_{L}^{1/2}(\text{diag}(A,I)=\Delta(\tau,% 2)|\text{det}A|$. Combining all these factors we get that $g_{0}$ is transformed as $\Delta(\tau\otimes\chi_{T},2)|\cdot|^{s}\delta_{Q_{4}}^{1/2}$. ∎ To introduce the new integral construction for the tensor product $L$-function $L(\pi\times\tau,s)$, we start with the global doubling construction for this $L$-function given in [C-F-G-K1]. Then we write down the corresponding function $\xi(\varphi_{\pi},f_{\Delta(\tau,4),s})$ given by integral (1.4), and finally compute the Fourier coefficient of $\xi(\varphi_{\pi},f_{\Delta(\tau,4),s})$ given by integral (4.1). It follows from [C-F-G-K1] that in this case, the function $\xi(\varphi_{\pi},f_{\Delta(\tau,4),s})$ is given by (4.26) $$\xi(\varphi_{\pi},f_{\Delta(\tau,4),s})(g)=\int\limits_{Sp_{4}(F)\backslash Sp% _{4}({\bf A})}\int\limits_{U_{4,16}(F)\backslash U_{4,16}({\bf A})}\varphi_{% \pi}(h)E(f_{\Delta(\tau,4),s})(u{}t(g,h))\psi^{-1}_{U_{4,16}}(u)dudh.$$ As in equation (4.1), our goal is to compute the integral $$\int\limits_{Mat_{2}^{0}(F)\backslash Mat_{2}^{0}({\bf A})}\xi(\varphi_{\pi},f% _{\Delta(\tau,4),s})\left(\begin{pmatrix}I_{2}&Z\\ &I_{2}\end{pmatrix}\right)\psi^{-1}(\text{tr}(TZ))dZ.$$ It is equal to (4.27) $$\displaystyle\int\limits_{Mat_{2}^{0}(F)\backslash Mat_{2}^{0}({\bf A})}\int% \limits_{Sp_{4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{U_{4,16}(F)% \backslash U_{4,16}({\bf A})}\varphi_{\pi}(h)E(f_{\Delta(\tau,4),s})(u\cdot{}t% \left(\begin{pmatrix}I_{2}&Z\\ &I_{2}\end{pmatrix},h\right))\\ \displaystyle\psi^{-1}_{U_{4,16}}(u)\psi^{-1}(\text{tr}(TZ))dudhdZ.$$ As explained in the introduction, after the statement of Theorem 1, if we unfold this integral by first unfolding the Eisenstein series, we will obtain an integral involving the functional (4.1). We conjecture that this integral will be Eulerian even-though this functional is not unique (locally at each place). However, our goal is to apply Fourier expansions and identities between Eisenstein series to obtain a ”simpler” integral. More precisely, we will prove Theorem 4. Given $\phi\in\mathcal{S}(Mat_{2}({\bf A}))$, there are nontrivial choices of sections $f_{\Delta(\tau,4),s}$, $f^{\prime}_{\Delta(\tau\otimes\chi_{T},2),s}$ of $Ind_{Q_{8}({\bf A})}^{Sp_{16}({\bf A})}\Delta(\tau,4)|det\cdot|^{s}$, $Ind_{Q_{4}({\bf A})}^{Sp_{8}({\bf A})}\Delta(\tau\otimes\chi_{T},2)|det\cdot|^% {s}$, respectively, such that integral (4.27) is equal to the integral (4.28) $$\int\limits_{Sp_{4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{U_{2,8}(F)% \backslash U_{2,8}({\bf A})}\varphi_{\pi}(h)\theta_{\psi,8}^{\phi}(l_{T}(v)i_{% T}(1,h))E(f^{\prime}_{\Delta(\tau\otimes\chi_{T},2),s})(v(1,h))dvdh.$$ Proof. The first step is to perform certain root exchanges in the inner $du$-integration inside integral (4.27). The process of root exchange was defined in general in [G-R-S2] Section 7. 
In the notations of Lemma 1, let $B=U_{4,16}$; $Y$ is the subgroup of $B$ consisting of the elements (4.8), such that $b=Z=0$, and the coordinates of $a$ are zero, except those at the entries $(i,1)$, $(i,2)$, for $i=3,4$, which can be arbitrary; $C$ is the subgroup of $B$ generated by the root subgroups in $U_{4,16}$ which do not lie in $Y$; $X$ is generated by the root subgroups $I_{16}+x_{i,j}e^{\prime}_{i,j}$, for $i=1,2$, $j=3,4$. Then $D=CX$. One can check that the first part of Lemma 1 applies, and that the elements $t\left(\begin{pmatrix}I_{2}&Z\\ &I_{2}\end{pmatrix},h\right)$ in the integrand of (4.27) belong to the set $\Omega$ in Lemma 1. By the proof of the second part of Lemma 1, let $\tilde{f}_{\Delta(\tau,4),s}$ be a smooth, holomorphic section of $Ind_{Q_{8}({\bf A})}^{Sp_{16}({\bf A})}\Delta(\tau,4)|\text{det}\cdot|^{s}$, and let $\xi\in C_{c}^{\infty}(X({\bf A}))$. Take $$f_{\Delta(\tau,4),s}(g)=\int\limits_{X({\bf A})}\xi(x)\tilde{f}_{\Delta(\tau,4% ),s}(gx)dx,\ g\in Sp_{16}(\bf A).$$ Denote $$f^{\prime}_{\Delta(\tau,4),s}(g)=\int\limits_{Y_{1}({\bf A})}\int\limits_{Y({% \bf A})}\hat{\xi}(y)\tilde{f}_{\Delta(\tau,4),s}(gy)dy.$$ Then the proof of the second part of Lemma 1 shows that, for all $Z\in Mat_{2}^{0}(\bf A)$, $h\in Sp_{4}(\bf A)$, (4.29) $$\displaystyle\int\limits_{U_{4,16}(F)\backslash U_{4,16}({\bf A})}E(f_{\Delta(% \tau,4),s})(u\cdot t\left(\begin{pmatrix}I_{2}&Z\\ &I_{2}\end{pmatrix},h\right))\psi^{-1}_{U_{4,16}}(u)du=\\ \displaystyle\int\limits_{D(F)\backslash D({\bf A})}E(f^{\prime}_{\Delta(\tau,% 4),s})(u\cdot t\left(\begin{pmatrix}I_{2}&Z\\ &I_{2}\end{pmatrix},h\right))\psi^{-1}_{D}(u)du.$$ Let $w_{0}$ denote the Weyl element in $Sp_{16}(F)$ defined as $$w=\begin{pmatrix}I_{2}&&&\\ &&I_{2}&\\ &I_{2}&&\\ &&&I_{2}\end{pmatrix}\ \ \ \ \ \ \ w_{0}=\begin{pmatrix}w&\\ &w^{*}\end{pmatrix}.$$ On the r.h.s. of (4.29), we may replace in the Eisenstein series $u$ by $w_{0}u$. Now carry out the conjugation by $w_{0}$ in the integrand. Denote the right $w_{0}$-translate of $f^{\prime}_{\Delta(\tau,4),s}$ by $f^{\prime\prime}_{\Delta(\tau,4),s}$. Note that the subgroup of all elements $w_{0}ut\left(\begin{pmatrix}I_{2}&Z\\ &I_{2}\end{pmatrix},I_{4}\right)w_{0}^{-1}$, for $u\in D$, $Z\in Mat^{0}_{2}$, is equal to $U^{0}_{2^{2},16}\hat{U}_{2,8}$, where $\hat{U}_{2,8}$ is the image of $U_{2,8}$ inside $Sp_{16}$ under the embedding $v\mapsto diag(I_{4},v,I_{4})$. On Adele points, the character $\psi_{D}(u)\psi(tr(TZ))$ goes to the character of $U^{0}_{2^{2},16}({\bf A})\hat{U}_{2,8}({\bf A})$, which is trivial on $\hat{U}_{2,8}({\bf A})$ and on $U^{0}_{2^{2},16}({\bf A})$ is equal to the character $\psi_{U^{0}_{2^{2},16},T}$. Thus, substituting (4.29) in (4.27), we get (4.30) $$\int\limits_{Sp_{4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{U_{2,8}(F)% \backslash U_{2,8}({\bf A})}\int\limits_{U^{0}_{2^{2},16}(F)\backslash U^{0}_{% 2^{2},16}({\bf A})}\varphi_{\pi}(h)E(f^{\prime\prime}_{\Delta(\tau,4),s})(u% \hat{v}t(I_{4},h),s)\psi^{-1}_{U^{0}_{2^{2},16},T}(u)dudvdh.$$ Note that for $h\in Sp_{4}$, $t(I_{4},h)={}^{d}(I_{2},diag(I_{2},h,I_{2}))$. It is convenient to denote this by $\hat{h}$. We will use this notation also for $diag(I_{2},h,I_{2})$. Factor the group $U^{0}_{2^{2},16}$ as $U^{0}_{2^{2},16}=U_{0}Y_{0}$, where in the notation right after (4.20), $U_{0}$ is the subgroup of $U^{0}_{2^{2},16}$ such that $y_{0}=0$, and $Y_{0}$ is the subgroup of $U^{0}_{2^{2},16}$, such that in (4.20), $Z=0$, and for all $1\leq i\leq 4$, $a_{i}=0$. 
Denote by $\psi_{U_{0},T}$ the restriction of $\psi_{U^{0}_{2^{2},16},T}$ to $U_{0}$, and by $\psi_{Y_{0}}$ the restriction of $\psi_{U^{0}_{2^{2},16},T}$ to $Y_{0}$. Consider the integral (4.31) $$\int\limits_{U_{0}(F)\backslash U_{0}({\bf A})}E(f^{\prime\prime}_{\Delta(\tau% ,4),s})(u_{0}u{}^{d}(h_{0},g_{0}),s)\psi^{-1}_{U_{0},T}(u_{0})du_{0}$$ as a function of $u\in U_{2^{2},16}({\bf A})$ and of $(h_{0},g_{0})\in SO_{T_{0}}({\bf A})\times Sp_{8}({\bf A})$. Let us apply the theorem of Ikeda [I], as we did in the previous sub-section. The relevant (Adelic) Heisenberg group here is $\mathcal{H}_{17}({\bf A})$, realized as $Mat_{2\times 8}({\bf A})\times{\bf A}$. Let $\phi_{1},\phi_{2}\in\mathcal{S}(Mat_{2\times 4}{\bf A})$, and consider the function $\varphi_{\phi_{1},\phi_{2}}$ on $\mathcal{H}_{17}(\bf A)$ as in [I], p. 621. Let $\check{f}_{\Delta(\tau,4),s}$ be a smooth, holomorphic section of $Ind_{Q_{8}({\bf A})}^{Sp_{16}({\bf A})}\Delta(\tau,4)|\text{det}\cdot|^{s}$. Construct the section $\check{f}^{\phi_{1},\phi_{2}}_{\Delta(\tau,4),s}$ similar to (4.13). As in (4.11), Ikeda’s theorem tells us the following. Fix in (4.31) $u\in U_{2^{2},16}({\bf A})$, $h_{0}\in SO_{T_{0}}({\bf A})$, $g_{0}\in Sp_{8}({\bf A})$. Then (4.32) $$\displaystyle\int\limits_{U_{0}(F)\backslash U_{0}({\bf A})}E(\check{f}^{\phi_% {1},\phi_{2}}_{\Delta(\tau,4),s})(u_{0}u{}^{d}(h_{0},g_{0}),s)\psi^{-1}_{U_{0}% ,T}(u_{0})du_{0}=\\ \displaystyle\theta_{\psi,16}^{\phi_{1}}(l^{0}_{T}(u)i^{0}_{T}(h_{0},g_{0}))% \int\limits_{U_{2^{2},16}(F)\backslash U_{2^{2},16}({\bf A})}\overline{\theta_% {\psi,16}^{\phi_{2}}(l^{0}_{T}(u)i^{0}_{T}(h_{0},g_{0}))}E(\check{f}_{\Delta(% \tau,4),s})(u{}^{d}(h_{0},g_{0}))\psi^{-1}_{U_{2^{2},16}}(u)du.$$ It follows from Lemma 2, with $h_{0}=1$, that there is a smooth, meromorphic section $\lambda(\check{f}_{\Delta(\tau,4),s},\phi_{2})$ of $Ind_{Q_{4}({\bf A})}^{Sp_{8}({\bf A})}\Delta(\tau\otimes\chi_{T},2)|\text{det}% \cdot|^{s}$, such that the r.h.s. of (4.32) is equal (4.33) $$\theta_{\psi,16}^{\phi_{1}}(l^{0}_{T}(u)i^{0}_{T}(1,g_{0}))E(\lambda(f_{\Delta% (\tau,4),s},\phi_{2}))(g_{0}).$$ Since we don’t know that $f^{\prime\prime}_{\Delta(\tau,4),s}$ is a sum of sections of the form $\check{f}^{\phi_{1},\phi_{2}}_{\Delta(\tau,4),s}$, let us start with a smooth, holomorphic section $\check{f}_{\Delta(\tau,4),s}$. 
We can find Schwartz functions $\xi_{i}\in\mathcal{S}(X({\bf A}))$, $1\leq i\leq r$, and smooth, holomorphic sections $f^{(i)}_{\Delta(\tau,4),s}$, such that (4.34) $$\rho(w_{0})\check{f}^{\phi_{1},\phi_{2}}_{\Delta(\tau,4),s}=\sum_{i=1}^{r}\int% \limits_{Y({\bf A})}\hat{\xi}_{i}(y)\rho(y)f^{(i)}_{\Delta(\tau,4),s}dy.$$ Now define (4.35) $$f_{\Delta(\tau,4),s}=\sum_{i=1}^{r}\int\limits_{X({\bf A})}\xi_{i}(x)\rho(x)f^% {(i)}_{\Delta(\tau,4),s}dx.$$ For this choice of $f_{\Delta(\tau,4),s}$, the integral (4.27) is equal to (4.36) $$\displaystyle\int\limits_{Sp_{4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{U_{% 2,8}(F)\backslash U_{2,8}({\bf A})}\int\limits_{Y_{0}(F)\backslash Y_{0}({\bf A% })}\varphi_{\pi}(h)\theta_{\psi,16}^{\phi_{1}}(l^{0}_{T}(y)i^{0}_{T}(1,v\hat{h% }))\\ \displaystyle E(\lambda(\check{f}_{\Delta(\tau,4),s},\phi_{2}))(v\hat{h})\psi_% {Y_{0}}^{-1}(y)dydvdh.$$ Using the action of the Weil representation it is not hard to check that the integral over $Y_{0}(F)\backslash Y_{0}({\bf A})$ is equal to $\theta_{\psi,8}^{\phi^{\prime}_{1}}(l_{T}(v)i_{T}(1,h))$, where $\phi^{\prime}_{1}\in\mathcal{S}(Mat_{2}({\bf A}))$ is obtained from $\phi_{1}$ by $\phi^{\prime}_{1}(x)=\phi_{1}(J_{2}T_{0}^{-1},x)$, $x\in Mat_{2}({\bf A})$. From this the theorem follows. ∎ With this result we can state the following, Conjecture 1. The integral (4.28) is Eulerian, and represents the tensor product $L$-function $L(\pi\times\tau,s)$. 5. An Example with a Fourier-Jacobi mixed model for symplectic groups Denote, for an odd number $n$, $V_{n}=U_{1,n+1}$. This is the unipotent radical of the standard parabolic subgroup of $Sp_{n+1}$, whose Levi part is isomorphic to $GL_{1}\times Sp_{n-1}$. Thus, $V_{n}$ is isomorphic to the Heisenberg group ${\mathcal{H}}_{n}$. Recall that we denote this isomorphism by $i_{n}$. Let $\pi$ denote an irreducible, automorphic, cuspidal representation of $Sp_{4}^{(2)}({\bf A})$. Assume that there is an irreducible, automorphic, cuspidal representation $\sigma$ of $SL_{2}({\bf A})$, such that the integral (5.1) $$\int\limits_{SL_{2}(F)\backslash SL_{2}({\bf A})}\int\limits_{V_{3}(F)% \backslash V_{3}({\bf A})}\varphi_{\pi}^{(2)}(v_{3}\hat{g})\theta_{\psi^{-1},2% }^{\phi}(i_{3}(v_{3})g)\varphi_{\sigma}(g^{\iota})dv_{3}dg$$ is not zero for some choice of data. Here, $\hat{g}=diag(1,g,1)$. For the definition of $g^{\iota}$, see equation (2.5). Also, $\phi\in\mathcal{S}({\bf A})$. We say that $\pi$ has a (global) Fourier-Jacobi 9mixed) model with respect to $\sigma$ (and $\psi$). Let $\tau$ denote an irreducible, automorphic, cuspidal representation of $GL_{2}({\bf A})$. In [G-J-R-S], an integral representation is introduced which represents the (partial) standard $L$-function $L^{S}_{\psi}(\pi\times\tau,s)$. That integral unfolds to the Fourier-Jacobi coefficient given by integral (5.1). The integral introduced in [G-J-R-S] for this case is (5.2) $$\int\limits_{Sp_{4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{V_{5}(F)% \backslash V_{5}({\bf A})}\varphi_{\pi}^{(2)}(g)\theta_{\psi,4}^{\phi^{\prime}% }(i_{5}(v_{5})g)E_{\tau,\sigma}(v_{5}g,s)dv_{5}dg$$ The Eisenstein series $E_{\tau,\sigma}(\cdot,s)$ was defined in sub-section 2.3 right before equation (2.5). Also, $\phi^{\prime}\in\mathcal{S}({\bf A}^{2})$. The doubling integral which represents the above partial $L$-function was announced in [C-F-G-K2]. 
Thus, in this case, we take, for $g\in Sp^{(2)}_{4}({\bf A})$, (5.3) $$\xi^{(2)}(\varphi_{\pi},f_{\Delta(\tau,4)\gamma_{\psi},s})(g)=\int\limits_{Sp_% {4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{U_{4,16}(F)\backslash U_{4,16}({% \bf A})}\varphi^{(2)}_{\pi}(h)E^{(2)}(f_{\Delta(\tau,4)\gamma_{\psi},s})(ut(g,% h))\psi^{-1}_{U_{4,16}}(u)dudh.$$ Here, $E^{(2)}(f_{\Delta(\tau,4)\gamma_{\psi},s})$ is an Eisenstein series corresponding to a smooth, holomorphic section $f_{\Delta(\tau,4)\gamma_{\psi},s}$ of $Ind_{Q^{(2)}_{8}({\bf A})}^{Sp^{(2)}_{16}({\bf A})}\Delta(\tau,4)\gamma_{\psi}% |\text{det}\cdot|^{s}$. Therefore, we want to consider the integral (5.4) $$\int\limits_{SL_{2}(F)\backslash SL_{2}({\bf A})}\int\limits_{V_{3}(F)% \backslash V_{3}({\bf A})}\xi^{(2)}(\varphi_{\pi},f_{\Delta(\tau,4)\gamma_{% \psi},s})(v_{3}\hat{g})\theta_{\psi^{-1},2}^{\phi}(i_{3}(v_{3})g)\varphi_{% \sigma}(g^{\iota})dv_{3}dg,$$ and we prove Theorem 5. There is a nontrivial choice of data such that integral (5.4) is equal to integral (5.2). Proof. The proof of this theorem is very similar to the proof of Theorem 4. We start with the inner $dv_{3}$-integration of (5.4), that is we start with (5.5) $$\displaystyle\int\limits_{U_{4,16}(F)\backslash U_{4,16}({\bf A})}E^{(2)}(f_{% \Delta(\tau,4)\gamma_{\psi},s})(ut(v_{3}\hat{g},h))\psi^{-1}_{U_{4,16}}(u)du.$$ Here, we identify $\hat{g}$ with $(\hat{g},1)$ or any fixed element projecting to $\hat{g}$. Next we perform certain root exchanges in (5.5). We will do this three times. In the notations of Lemma 1, let $B_{1}=U_{4,16}$; $Y_{1}$ is the subgroup of $B_{1}$ consisting of the matrices of the form (4.8), such that all coordinates of $b,Z$ are zero, and all coordinates of $a$ are zero except $a_{4,1},a_{4,2}$; $C_{1}$ is the subgroup of $B_{1}$ generated by its root subgroups which do not lie in $Y_{1}$; $X_{1}$ is the subgroup of elements $I_{16}+x_{1}e^{\prime}_{1,4}+x_{2}e^{\prime}_{2,4}$. Let $D_{1}=C_{1}X_{1}$. With these definitions we can apply the first part of Lemma 1 (with $B=B_{1}$, $C=C_{1}$, etc.). Next, let $B_{2}=D_{1}$; $Y_{2}$ is the root subgroup of $B_{1}$ consisting of the elements $I_{16}+ye^{\prime}_{4,11}$. (In the notation of (4.8), $a=Z=0$, and all coordinates of $b$ are zero, except $b_{4,3}$). Let $C_{2}$ be the subgroup of $B_{2}$ generated by its root subgroups which do not lie in $Y_{2}$; $X_{2}$ is the root subgroup consisting of the elements $I_{16}+xe^{\prime}_{3,4}$. Let $D_{2}=C_{2}X_{2}$. With these definitions we can apply the first part of Lemma 1 (with $B=B_{2}$, $C=C_{2}$, etc.). Finally, let $B_{3}=D_{2}$; $Y_{3}$ is the subgroup of $B_{2}$ consisting of the matrices of the form (4.8), with $b=Z=0$, and all coordinates of $a$ are zero, except $a_{2,1},a_{3,1}$. Let $C_{3}$ be the subgroup of $B_{3}$ generated by its root subgroups which do not lie in $Y_{3}$; $X_{3}$ is the subgroup of elements $I_{16}+x_{1}e^{\prime}_{1,2}+x_{2}e^{\prime}_{1,3}$. Let $D_{3}=C_{3}X_{3}$. Again, we can apply the first part of Lemma 1 (with $B=B_{3}$, etc.). It follows that the integral (5.5) is equal to (5.6) $$\int\limits_{Y_{1}({\bf A})}\int\limits_{Y_{2}({\bf A})}\int\limits_{Y_{3}({% \bf A})}\int\limits_{D_{3}(F)\backslash D_{3}({\bf A})}E^{(2)}(f_{\Delta(\tau,% 4)\gamma_{\psi},s})(uy_{3}y_{2}y_{1}t(v_{3}\hat{g},h))\psi^{-1}_{D_{3}}(u)dudy% _{3}dy_{2}dy_{1}.$$ Let $Z_{4,1}$ the subgroup of $D_{2}$ consisting of the matrices (4.8) with $a=b=0$, and all coordinates of $Z$ are zero, except $Z_{4,1}$. 
Denote by $Y^{\prime}_{1},Y^{\prime\prime}_{1}$ the subgroup of $Y_{1}$ such that, in the notation above, $a_{4,2}=0$, or $a_{4,1}=0$, respectively. Then $Y^{\prime\prime}_{1}Y_{2}Z_{4,1}$ is group whose center is $Z_{4,1}$. Also $Y_{1}Y_{3}$ is a commutative subgroup, and the elements of $Y^{\prime}_{1}Y_{3}$ commute with the elements of $Y_{2}Z_{4,1}$. Denote by $Y$ the quotient of $Y_{1}Y_{2}Y_{3}Z_{4,1}$ by $Z_{4,1}$. Denote $X=X_{1}X_{2}X_{3}$. This is a group; $X_{1}X_{2}$ is abelian and is normalized by $X_{3}$ (which is also abelian). Note that in (5.6), $t(v_{3}\hat{g},h)$ normalizes $Y_{1}Y_{2}$ modulo $Z_{4,1}$, and normalizes $Y_{3}$ modulo $X_{2}$. Hence the integral (5.6) is equal to (5.7) $$\int\limits_{Y({\bf A})}\int\limits_{D_{3}(F)\backslash D_{3}({\bf A})}E^{(2)}% (f_{\Delta(\tau,4)\gamma_{\psi},s})(ut(v_{3}\hat{g},h)y)\psi^{-1}_{D_{3}}(u)dudy.$$ Now, we can apply the proof of the third part of Lemma 1 in two stages. The first is for $(D_{2},Z_{4,1}\backslash Y_{2}Y_{1}Z_{4,1},X_{1}X_{2})$. Here, $t(v_{3}\hat{g},h)$ satisfies the three properties defining $\Omega^{\prime}$ in Lemma 1, except that in the first property $t(v_{3}\hat{g},h)$ normalizes $Y_{2}({\bf A})Y_{1}({\bf A})$ only modulo $Z_{4,1}({\bf A})$, but this does not affect the argument. Similarly, for $(D_{3},Y_{3},X_{3})$, $t(v_{3}\hat{g},h)$ satisfies the three properties defining $\Omega^{\prime}$ in Lemma 1, except that in the first two properties $t(v_{3}\hat{g},h)$ normalizes $Y_{3}({\bf A})$, $X_{3}({\bf A})$ only modulo the root subgroup of elements $I_{16}+xe^{\prime}_{1,4}$, which is a subgroup of $X_{1}({\bf A})$. Again, this does not affect the argument. Denote $X_{1,2}=X_{1}X_{2}$, $Y_{1,2}=Z_{4,1}\backslash Y_{2}Y_{1}Z_{4,1}$. Thus, write $$f_{\Delta(\tau,4)\gamma_{\psi},s}=\sum_{i=1}^{r}\sum_{j=1}^{\ell}\int\limits_{% X_{1,2}({\bf A})}\int\limits_{X_{3}({\bf A})}\xi_{i}(x_{1,2})\eta_{j}(x_{3})% \rho(x_{1,2}x_{3})f^{(i,j)}_{\Delta(\tau,4)\gamma_{\psi},s}dx_{3}dx_{1,2},$$ where, $\rho$ denotes a right translation, and for $1\leq i\leq r$, $1\leq j\leq\ell$, $\xi_{i}\in C_{c}^{\infty}(X_{1,2}({\bf A}))$, $\eta_{j}\in C_{c}^{\infty}(X_{3}({\bf A}))$, and $f^{(i,j)}_{\Delta(\tau,4)\gamma_{\psi},s}$ are smooth, holomorphic sections of $Ind^{Sp_{16}^{(2)}({\bf A})}_{Q^{(2)}_{8}({\bf A})}\Delta(\tau,4)\gamma_{\psi}% |\text{det}\cdot|^{s}$. Then repeating the proof of the third part of Lemma 1 twice, we get that (5.7) and hence (5.5) is equal to (5.8) $$\int\limits_{D_{3}(F)\backslash D_{3}({\bf A})}E^{(2)}(f^{\prime}_{\Delta(\tau% ,4)\gamma_{\psi},s})(ut(v_{3}\hat{g},h))\psi^{-1}_{D_{3}}(u)du,$$ where (5.9) $$f^{\prime}_{\Delta(\tau,4)\gamma_{\psi},s})=\sum_{i=1}^{r}\sum_{j=1}^{\ell}% \int\limits_{Y_{1,2}({\bf A})}\int\limits_{Y_{3}({\bf A})}\hat{\xi}_{i}(y_{1,2% })\hat{\eta}_{j}(y_{3})\rho(y_{1,2}y_{3})f^{(i,j)}_{\Delta(\tau,4)\gamma_{\psi% },s}dy_{3}dy_{1,2}.$$ We now bring in the $dv_{3}$-integration, and so we consider (5.10) $$\int\limits_{D_{3}(F)\backslash D_{3}({\bf A})}\int\limits_{V_{3}(F)\backslash V% _{3}({\bf A})}\theta_{\psi^{-1},2}^{\phi}(i_{3}(v_{3})g)E^{(2)}(f^{\prime}_{% \Delta(\tau,4)\gamma_{\psi},s})(ut(v_{3}\hat{g},h))\psi^{-1}_{D_{3}}(u)dudv_{3}.$$ Consider the following Weyl element $w_{0}=\begin{pmatrix}w&\\ &w^{*}\end{pmatrix}$, where $w$ is a permutation matrix in $GL_{8}$ such that $w_{i,j}=1$ at the $(1,1),(2,5),(3,2),(4,3)$ $(5,6),(6,4),(7,7),(8,8)$ entries. 
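Written out explicitly, the permutation matrix determined by these entries is $$w=\begin{pmatrix}1&&&&&&&\\ &&&&1&&&\\ &1&&&&&&\\ &&1&&&&&\\ &&&&&1&&\\ &&&1&&&&\\ &&&&&&1&\\ &&&&&&&1\end{pmatrix}.$$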
Conjugate by $w_{0}$ inside the Eisenstein series in (5.10), that is write $$E^{(2)}(f^{\prime}_{\Delta(\tau,4)\gamma_{\psi},s})(ut(v_{3}\hat{g},h))=E^{(2)% }(f^{\prime}_{\Delta(\tau,4)\gamma_{\psi},s})(w_{0}ut(v_{3},1)w_{0}^{-1}t^{w_{% 0}}(\hat{g},h)w_{0}),$$ where $t^{w_{0}}(\hat{g},h)=w_{0}t(\hat{g},h)w_{0}^{-1}$. Carry out this conjugation in (5.10), change variables, and denote $f^{\prime\prime}_{\Delta(\tau,4)\gamma_{\psi},s}=\rho(w_{0})f^{\prime}_{\Delta% (\tau,4)\gamma_{\psi},s}$. Then (5.10) becomes (5.11) $$\displaystyle\int\limits_{v_{5}\in V_{5}(F)\backslash V_{5}({\bf A})}\int% \limits_{v\in U_{2,12}(F)\backslash U_{2,12}({\bf A})}\int\limits_{F^{5}% \backslash{\bf A}^{5}}\int\limits_{u\in U^{0}_{1^{2},16}(F)\backslash U^{0}_{1% ^{2},16}({\bf A})}\theta_{\psi^{-1},2}^{\phi}((x_{1},x_{2},0)g)\\ \displaystyle E^{(2)}(f^{\prime\prime}_{\Delta(\tau,4)\gamma_{\psi},s})(u\mu(x% _{1},x_{2},a_{1},a_{2},a_{3})\hat{v}i(g,v_{5}h))\psi^{-1}_{U^{0}_{1^{2},16},1}% (u)\psi^{-1}_{U_{2,12}}(v)\psi^{-1}(a_{1})dud(x,a)dvdv_{5}.$$ Here, $(x_{1},x_{2},0)$ in the theta series is an element of $\mathcal{H}_{3}({\bf A})$, with zero coordinate in the center. For $v\in U_{2,12}$, $\hat{v}=diag(I_{2},v,I_{2})$. The element $v_{5}h$ is thought of as an element of $Sp_{6}({\bf A})$, with $h$ embedded in $Sp_{6}({\bf A})$ as $diag(1,h,1)$. Now, for $h^{\prime}\in Sp_{6}$, and $g\in SL_{2}$, (5.12) $$i(g,h^{\prime})=diag(I_{2},g,\begin{pmatrix}g_{1,1}&&g_{1,2}\\ &I_{6}\\ g_{2,1}&&g_{2,2}\end{pmatrix},g^{*},I_{2}).$$ Put in (5.12), $i(g,h^{\prime})=diag(I_{2},t^{\prime}(g,h^{\prime}),I_{2})$. Note that $t^{\prime}(g,h)\in Sp_{12}$. The character $\psi_{U^{0}_{1^{2},16},1}$ is defined as in Sec. 2.2 right before (2.1), where we take $t=1$. Finally, $$\mu(x_{1},x_{2},a_{1},a_{2},a_{3})=x_{1}e_{2,5}^{\prime}+x_{2}e_{2,12}^{\prime% }+a_{1}e_{2,11}^{\prime}+a_{2}e_{2,13}^{\prime}+a_{3}e_{2,14}^{\prime}.$$ The group of matrices $\mu(x_{1},x_{2},a_{1},a_{2},a_{3})$ is a subgroup of $U_{1^{2},16}$, and the projection $j_{13}$ of $U_{1^{2},16}$ on ${\mathcal{H}}_{13}$ is injective on this subgroup. (See Sec. 2.1). We have $$j_{13}(\mu(x_{1},x_{2},a_{1},a_{2},a_{3}))=(0_{2},x_{1},0_{5},a_{1},x_{2},a_{3% },a_{4},0)\in{\mathcal{H}}_{13}.$$ Also, as mentioned above, we have $u_{2}t^{\prime}(v_{5}g,h)\in Sp_{12}$. In integral (5.11), consider the inner $du$- integration over $U_{1^{2},16}^{0}(F)\backslash U_{1^{2},16}^{0}({\bf A})$. Let us apply the theorem of Ikeda [I], as we did in the previous sub-sections. The relevant (Adelic) Heisenberg group here is $\mathcal{H}_{13}({\bf A})$. Let $\phi_{1},\phi_{2}\in\mathcal{S}({\bf A}^{6})$, and consider the function $\varphi_{\phi_{1},\phi_{2}}$ on $\mathcal{H}_{13}(\bf A)$ as in [I], p. 621. Let $\check{f}_{\Delta(\tau,4)\gamma_{\psi},s}$ be a smooth, holomorphic section of $Ind_{Q^{(2)}_{8}({\bf A})}^{Sp^{(2)}_{16}({\bf A})}\Delta(\tau,4)\gamma_{\psi}% |\text{det}\cdot|^{s}$. Construct the section $\check{f}^{\phi_{1},\phi_{2}}_{\Delta(\tau,4)\gamma_{\psi},s}$ similar to (4.13). 
By Ikeda’s theorem, the inner $du$-integral of (5.11), with $\check{f}^{\phi_{1},\phi_{2}}_{\Delta(\tau,4)\gamma_{\psi},s}$ replacing $f^{\prime\prime}_{\Delta(\tau,4)\gamma_{\psi},s}$, is equal to (5.13) $$\displaystyle\theta_{\psi,12}^{\phi_{1}}((0_{2},x_{1},0_{5},a_{1},x_{2},a_{3},% a_{4},0)vt^{\prime}(v_{5}g,h))\\ \displaystyle\int\limits_{U_{1^{2},16}(F)\backslash U_{1^{2},16}({\bf A})}% \overline{\theta_{\psi,12}^{\phi_{2}}(j_{13}(u)vt^{\prime}(g,v_{5}h))}E^{(2)}(% \check{f}_{\Delta(\tau,4)\gamma_{\psi},s})(u{}vt^{\prime}(g,v_{5}h))\psi^{-1}_% {1}(u)du.$$ Since we don’t know that $f^{\prime\prime}_{\Delta(\tau,4)\gamma_{\psi},s}$ is a sum of sections of the form $\check{f}^{\phi_{1},\phi_{2}}_{\Delta(\tau,4)\gamma_{\psi},s}$, we can use a similar argument as in the last sub-section and construct, for a given smooth, holomorphic section $\check{f}_{\Delta(\tau,4)\gamma_{\psi},s}$, another such section $f_{\Delta(\tau,4)\gamma_{\psi},s}$, similar to (4.35), such that the $du$-inner integral in (5.11) is equal to (5.13). By Identity (2.2), the integral in (5.13) is equal to $E(\Lambda(\check{f}_{\Delta(\tau,4)\gamma_{\psi},s},\phi_{2}))(vt^{\prime}(g,v% _{5}h))$, where $\Lambda(\check{f}_{\Delta(\tau,4)\gamma_{\psi},s},\phi_{2})$ is a smooth, meromorphic section of $Ind_{Q_{6}({\bf A})}^{Sp_{12}({\bf A})}\Delta(\tau,3)|\text{det}\cdot|^{s}$. We denote, for short, $f_{\Delta(\tau,3),s}=\Lambda(\check{f}_{\Delta(\tau,4)\gamma_{\psi},s},\phi_{2})$. Plugging this into integral (5.11) we obtain the integral (5.14) $$\displaystyle\int\limits_{v_{5}\in V_{5}(F)\backslash V_{5}({\bf A})}\int% \limits_{v\in U_{2,12}(F)\backslash U_{2,12}({\bf A})}\int\limits_{F^{5}% \backslash{\bf A}^{5}}\theta_{\psi^{-1},2}^{\phi}((x_{1},x_{2},0)g)\\ \displaystyle\theta_{\psi,12}^{\phi_{1}}((0_{2},x_{1},0_{5},a_{1},x_{2},a_{3},% a_{4},0)vt^{\prime}(g,v_{5}h))E(f_{\Delta(\tau,3),s})(vt^{\prime}(g,v_{5}h))% \psi^{-1}_{U_{2,12}}(v)\psi^{-1}(a_{1})d(x,a)dvdv_{5}.$$ Consider the inner integration over $a$. We have (5.15) $$\displaystyle\int\limits_{(F\backslash{\bf A})^{3}}\theta_{\psi,12}^{\phi_{1}}% ((0_{2},x_{1},0_{5},a_{1},x_{2},a_{3},a_{4},0)vt^{\prime}(g,v_{5}h))\psi^{-1}(% a_{1})da=\\ \displaystyle\sum_{\xi_{3},\xi_{5},\xi_{6}\in F}\omega_{\psi,12}((0_{2},x_{1},% 0_{6},x_{2},0_{3})vt^{\prime}(g,v_{5}h))\phi_{1}(0,0,\xi_{3},1,\xi_{5},\xi_{6}).$$ It follows from the action of the Weil representation that the sum on the r.h.s. of (5.15) is left invariant under all $v\in U_{2,12}({\bf A})$. Hence, this sum is equal to (5.16) $$\sum_{\xi_{3},\xi_{5},\xi_{6}\in F}\omega_{\psi,8}((x_{1},0_{6},x_{2},0)j(g,v_% {5}h))\phi^{\prime}_{1}(\xi_{3},1,\xi_{5},\xi_{6}).$$ Here, for $x\in{\bf A}^{4}$, $\phi_{1}^{\prime}(x)=\phi_{1}(0_{2},x)$. Notice that the elements $t^{\prime}(v_{5}g,h)$ in equation (5.15) are replaced by $j(v_{5}g,h)\in Sp_{8}({\bf A})$ in equation (5.16). See (2.4) for the definition of $j(v_{5}g,h)$. Also, in the summation (5.16), we have $(x_{1},0_{6},x_{2},0)\in{\mathcal{H}}_{9}(F)$. All this follows from the formulas of the action of the Weil representation. Take $\phi^{\prime}_{1}$ of the form $\phi^{\prime}_{1}(x_{1},...,x_{4})=\varphi_{1}(x_{1})\varphi_{2}^{\prime}(x_{2% },x_{3},x_{4})$, where $\varphi_{1},\varphi^{\prime}_{2}$ are Schwartz functions. 
Separating the summation in (5.16) over $\xi_{3}$ from the sum over $\xi_{5},\xi_{6}$, it follows from the factorization properties of theta series, that the sum (5.16) is equal to $$\theta_{\psi,2}^{\varphi_{1}}((x_{1},x_{2},0)g)\theta_{\psi,4}^{\varphi_{2}}(i% _{5}(v_{5})h),$$ where $\varphi_{2}(y_{1},y_{2})=\varphi_{2}^{\prime}(1,y_{1},y_{2})$. Thus, for the above choices of functions, integral (5.14) is equal to (5.17) $$\displaystyle\int\limits_{v_{5}\in V_{5}(F)\backslash V_{5}({\bf A})}\int% \limits_{v\in U_{2,12}(F)\backslash U_{2,12}({\bf A})}\int\limits_{(F% \backslash{\bf A})^{2}}\theta_{\psi^{-1},2}^{\phi}((x_{1},x_{2},0)g)\theta_{% \psi,2}^{\varphi_{1}}((x_{1},x_{2},0)g)\\ \displaystyle\theta_{\psi,4}^{\varphi_{2}}(i_{5}(v_{5})h)E(f_{\Delta(\tau,3),s% })(vt^{\prime}(g,v_{5}h))\psi^{-1}_{U_{2,12}}(v)dxdvdv_{5}.$$ Consider the function $\phi\otimes\varphi_{1}\in\mathcal{S}({\bf A}^{2})$, and view it as a function in the space of the Weil representation $\omega_{\psi,4}$ of $Sp^{(2)}_{4}({\bf A})\mathcal{H}_{5}({\bf A})$. Using the factorization properties of theta series, we can find a Weyl element $\gamma_{0}\in Sp_{4}(F)$, such that for the function $\Phi_{\phi,\varphi_{1}}=\omega_{\psi,4}(\gamma_{0})(\phi\otimes\varphi_{1})$ $$\int\limits_{(F\backslash{\bf A})^{2}}\theta_{\psi^{-1},2}^{\phi}((x_{1},x_{2}% ,0)g)\theta_{\psi,2}^{\varphi_{1}}((x_{1},x_{2},0)g)dx=\int\limits_{(F% \backslash{\bf A})^{2}}\theta_{\psi,4}^{\Phi_{\phi,\varphi_{1}}}\left((0_{2},x% _{1},x_{2},0)\begin{pmatrix}h&\\ &h^{*}\end{pmatrix}\right)dx.$$ Unfolding the theta series, the right hand side of this equality is equal to $\Phi_{\phi,\varphi_{1}}(0_{2})=\omega_{\psi,4}(\gamma_{0})(\phi\otimes\varphi_% {1})(0_{2})$. Thus, up to this value, the integral (5.17) is equal to (5.18) $$\displaystyle\int\limits_{v_{5}\in V_{5}(F)\backslash V_{5}({\bf A})}\int% \limits_{v\in U_{2,12}(F)\backslash U_{2,12}({\bf A})}\theta_{\psi,4}^{\varphi% _{2}}(i_{5}(v_{5})h)E(f_{\Delta(\tau,3),s})(vt^{\prime}(g,v_{5}h))\psi^{-1}_{U% _{2,12}}(v)dvdv_{5}.$$ To summarize, we proved that for the choice of data described above, the inner unipotent integration in integral (5.4) is equal (up to multiplication by $\omega_{\psi,4}(\gamma_{0})(\phi\otimes\varphi_{1})(0_{2})$) to integral (5.18), and then the integral (5.4) is equal (up to $\omega_{\psi,4}(\gamma_{0})(\phi\otimes\varphi_{1})(0_{2})$) to (5.19) $$\displaystyle\int\limits_{Sp_{4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{V_{% 5}(F)\backslash V_{5}({\bf A})}\varphi^{(2)}_{\pi}(h)\theta_{\psi,4}^{\varphi_% {2}}(i_{5}(v_{5})h)\\ \displaystyle\int\limits_{SL_{2}(F)\backslash SL_{2}({\bf A})}\int\limits_{U_{% 2,12}(F)\backslash U_{2,12}({\bf A})}\varphi_{\sigma}(g^{\iota})E(f_{\Delta(% \tau,3),s})(vt^{\prime}(g,v_{5}h))\psi^{-1}_{U_{2,12}}(v)dvdgdv_{5}dh.$$ Now we apply Identity (2.5), which tells us, that there is a smooth, meromorphic section of $Ind_{Q_{2,2}({\bf A})}^{Sp_{6}({\bf A})}\tau|\text{det}\cdot|^{s}\times\sigma$, $\Lambda(f_{\Delta(\tau,3),s},\varphi_{\sigma})$, such that the inner $dvdg$-integral in (5.19) is equal to $E(\Lambda(f_{\Delta(\tau,3),s},\varphi_{\sigma}))(v_{5}h)$. Thus, integral (5.19) is equal to $$\int\limits_{Sp_{4}(F)\backslash Sp_{4}({\bf A})}\int\limits_{V_{5}(F)% \backslash V_{5}({\bf A})}\varphi^{(2)}_{\pi}(h)\theta_{\psi,4}^{\varphi_{2}}(% i_{5}(v_{5})h)E(\Lambda(f_{\Delta(\tau,3),s},\varphi_{\sigma}))(v_{5}h)dv_{5}dh,$$ and this is an integral of the form (5.2). ∎ References [B-F-G] Bump, Daniel; Furusawa, Masaaki; Ginzburg, David. 
Non-unique models in the Rankin-Selberg method. J. Reine Angew. Math. 468 (1995), 77–111. [C-F-G-K1] Yuanqing Cai, Solomon Friedberg, David Ginzburg, Eyal Kaplan. Doubling Constructions and Tensor Product L-Functions: the linear case. In arXiv:1710.00905. [C-F-G-K2] Yuanqing Cai, Solomon Friedberg, David Ginzburg, Eyal Kaplan. Doubling Constructions for Covering Groups and Tensor Product L-Functions. In arXiv:1601.08240. [D-M] Dixmier, Jacques; Malliavin, Paul. Factorisations de fonctions et de vecteurs indefiniment differentiables. Bull. Sci. Math. (2) 102 (1978), no. 4, 307–330. [G] Ginzburg, David. Certain conjectures relating unipotent orbits and automorphic representations. Israel J. Math. 151 (2006), 323–355. [G-H] Ginzburg, David; Hundley, Joseph. A doubling integral for G2. Israel J. Math. 207 (2015), no. 2, 835–879. [G-J] Godement, Roger; Jacquet, Herve/ Zeta functions of simple algebras. Lecture Notes in Mathematics, Vol. 260. Springer-Verlag, Berlin-New York, 1972. ix+188 pp. [G-J-R-S] Ginzburg, David; Jiang, Dihua; Rallis, Stephen; Soudry, David/ L-functions for symplectic groups using Fourier-Jacobi models. Arithmetic geometry and automorphic forms, 183–207, Adv. Lect. Math. (ALM), 19, Int. Press, Somerville, MA, 2011. [G-R-S1] Ginzburg, David; Rallis, Stephen; Soudry, David. L-functions for symplectic groups. Bull. Soc. Math. France 126 (1998), no. 2, 181–244. [G-R-S2] Ginzburg, David; Rallis, Stephen; Soudry, David. The descent map from automorphic representations of GL(n) to classical groups. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2011. x+339 pp. ISBN: 978-981-4304-98-6; 981-4304-98-0. [G-S] Ginzburg, David; Soudry, David. arXiv 1808.01572. [I] Ikeda, Tamotsu. On the theory of Jacobi forms and Fourier-Jacobi coefficients of Eisenstein series. J. Math. Kyoto Univ. 34 (1994), no. 3, 615–636. [J] Jacquet, Herve. On the residual spectrum of $GL(n)$. Lie group representations, II (College Park, Md., 1982/1983), 185–208, Lecture Notes in Math., 1041, Springer, Berlin, 1984 [J-L] Jacquet, Herve.; Langlands, Robert. Automorphic forms on GL(2). Lecture Notes in Mathematics, Vol. 114. Springer-Verlag, Berlin-New York, 1970. vii+548 pp. [PS-R1] Piatetski-Shapiro, Ilya; Rallis, Stephen. $L$ functions for the classical groups. Explicit constructions of automorphic L-functions. Lecture Notes in Mathematics, 1254. Springer-Verlag, Berlin, 1987. vi+152 pp. ISBN: 3-540-17848-1 [PS-R2] Piatetski-Shapiro, Ilya.; Rallis, Stephen. A new way to get Euler products. J. Reine Angew. Math. 392 (1988), 110–124.
A note on Temperature Profiles of rich Clusters of Galaxies
Sabrina De Grandi${}^{1}$ and Silvano Molendi${}^{2}$
${}^{1}$ INAF OAB, via E. Bianchi 46, 23807 Merate (LC), Italy; email: degrandi@mi.astro.it
${}^{2}$ IASF, Sez. di Milano, Via Bassini 15, I-20133 Milano, Italy; email: silvano@mi.iasf.cnr.it
In: Outskirts of Galaxy Clusters: intense life in the suburbs (A. Diaferio, ed.), 2004.
Abstract We derive here the mean temperature profile for a sample of hot, medium distant clusters recently observed with XMM-Newton, whose profiles are available from the literature, and compare it with the mean temperature profile found from BeppoSAX data. The XMM-Newton and BeppoSAX profiles are in good agreement between 0.05 and 0.25 $r_{180}$. From 0.25 to about 0.5 $r_{180}$ both profiles decline; however, the BeppoSAX profile does so much more rapidly than the XMM-Newton profile.
1 Introduction Temperature profiles of galaxy clusters are of great importance for two main reasons: firstly, they allow us to measure the total mass of these systems through the hydrostatic equilibrium equation and, secondly, they provide information on the thermodynamic state of the Intra Cluster Medium (hereafter ICM). Measurements of temperature profiles have been performed with the first generation of X-ray satellites carrying telescopes operating in the medium energy band (2-10 keV), namely ASCA and BeppoSAX. A detailed description of results obtained with these experiments may be found in De Grandi & Molendi (2002) and refs. therein. In the last 3 years various authors have published XMM-Newton temperature profiles of individual clusters. Arnaud et al. (2004), with a sample of 7 objects comprising 5 clusters and 2 groups, find that the temperature profiles are essentially isothermal within $0.5~{}r_{200}$, and possibly declining at larger radii, where the statistics are rather limited. Recently, Zhang et al. (2004) have published temperature profiles for a sample of 9 clusters; in the outer regions they find both flat and strongly decreasing profiles, but they do not provide a mean temperature profile of their sample. In this note we derive the mean temperature profile for a sample of hot, medium distant clusters whose profiles are available from the literature and compare it with our mean BeppoSAX profile.
2 XMM-Newton sample selection from the literature We have selected from the literature all hot (i.e. kT$>3$ keV) clusters in the redshift range between 0.1 and 0.3 with an available projected radial temperature profile. The resulting sample comprises a total of 15 clusters: 9 REFLEX clusters at redshift $\sim 0.3$ (Zhang et al. 2004), A1413 at z=0.143 (Pratt & Arnaud 2002), A2163 at z=0.201 (Pratt et al. 2001), A1835 at z=0.250 (Majerowicz et al. 2002), PKS 0745$-$191 at z=0.1028 (Chen et al. 2003), ZW 3146 at z=0.291 and E1455$+$223 at z=0.258, both taken from Mushotzky (2003). The adopted redshift range allows us to compare the temperature profiles measured from XMM-Newton data with those derived from BeppoSAX observations. The profiles published for the clusters in the XMM-Newton sample extend out to $\sim 9^{\prime}-10^{\prime}$ from the core, corresponding to physical radii of $\sim 1.5-3$ Mpc (H${}_{0}=50$ km s${}^{-1}$ Mpc${}^{-1}$).
The same physical radii are reached by BeppoSAX observations of clusters lying in the $\sim 0.05-0.08$ redshift range and detected out to $\sim 20^{\prime}$ (see De Grandi & Molendi 2002). The average cluster temperature of the XMM-Newton sample is $\sim$ 7 keV, very similar to that of the BeppoSAX sample ($\sim 6$ keV). We have converted all temperature uncertainties at the 90% confidence level into errors at the 68% c.l. by dividing them by the scaling factor 1.65. For each cluster we have computed the virial radius $r_{vir}$ ($=r_{180}$) from the relation derived by Evrard et al. (1996): $r_{vir}=3.9~{}\sqrt{T\over{10~{}{\rm keV}}}~{}(1+z)^{-3/2}~{}~{}~{}{\rm Mpc},$ using published cluster mean temperatures and redshifts. Visual inspection of Fig. 1, where we report the individual XMM-Newton temperature profiles, shows that the profiles are about isothermal from 0.10 to $0.3-0.5~{}r_{180}$; beyond $0.3-0.5~{}r_{180}$ there seems to be a decline. It is also clear that points at $r_{180}>0.6-0.7$ fluctuate heavily and tend to have larger errors. We have modelled the XMM-Newton profiles in Fig. 1 with a power law and converted the slope into the polytropic gas index according to the calculation reported in the appendix of De Grandi & Molendi (2002). In the range $0.2<r_{180}<0.5$ we have obtained a polytropic index $\gamma=1.09\pm 0.03$, which is close to the isothermal value 1. For $0.5<$r${}_{180}<0.75$ we have found $\gamma=2.4\pm 0.5$, which is formally above the adiabatic limit of 5/3, although consistent at the 68% confidence level with values below 5/3. The drop in the profiles at $\sim$ 0.55 r${}_{180}$ (clearly visible in Fig. 2) highlights a possible problem in the XMM-Newton profiles.
3 The mean profile The binned error-weighted average temperature profile computed from the 15 clusters is shown in Fig. 2, where we also plot the BeppoSAX cool-core and non-cool-core cluster profiles (see De Grandi & Molendi 2002 for details). We have added the Ophiuchus cluster to the BeppoSAX cool-core clusters. The XMM-Newton and BeppoSAX profiles are in good agreement between 0.05 and 0.25 $r_{180}$. From 0.25 to about 0.5 $r_{180}$ both profiles decline; however, the BeppoSAX profile does so more rapidly than the XMM-Newton profile. The presence of a temperature jump, at the limit of convective stability, at the largest radii explored with XMM-Newton hints at a problem possibly related to the low signal-to-noise ratio of the outermost cluster regions and to the difficulties in performing a correct background subtraction (see Molendi, these proceedings, for a discussion of this issue).
References [] Arnaud, M., Pratt, G. W. & Pointecouteau, E. 2004, Mem. S.A.It., astro-ph/0312398. [] Chen, Y., Ikebe, Y. & Boehringer, H. 2003, A&A, 407, 41. [] De Grandi, S. & Molendi, S. 2002, ApJ, 567, 163. [] Majerowicz, S., Neumann, D. M. & Reiprich, T. H. 2002, A&A, 394, 77. [] Molendi, S. 2004, these proceedings. [] Mushotzky, R. F. 2003, astro-ph/0311105. [] Pratt, G. W. & Arnaud, M. 2002, A&A, 394, 375. [] Pratt, G. W. et al. 2001, Proc. of XXIth Moriond Astroph. Meeting, p. 38. [] Zhang, Y.-Y., Finoguenov, A., Boehringer, H., et al. 2004, A&A, 413, 49.
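For readers who wish to reproduce the simple processing steps quoted above (rescaling published 90% errors to 68% by the factor 1.65, converting mean temperatures and redshifts to $r_{180}$ with the Evrard et al. scaling, and forming a binned error-weighted mean profile), a minimal Python sketch is given below. It is an illustration only, not the authors' pipeline; the numbers in the example call are made-up placeholders, not data from this note.

```python
import numpy as np

def r180_mpc(kT_keV, z):
    """Virial radius from the Evrard et al. (1996) scaling:
    r_180 = 3.9 * sqrt(T / 10 keV) * (1 + z)^(-3/2)  [Mpc]."""
    return 3.9 * np.sqrt(kT_keV / 10.0) * (1.0 + z) ** -1.5

def errors_68(err_90):
    """Convert 90% confidence-level uncertainties to 68% c.l.
    by dividing by the Gaussian scaling factor 1.65."""
    return np.asarray(err_90) / 1.65

def binned_weighted_profile(r_over_r180, kT, sigma, bin_edges):
    """Error-weighted mean temperature per radial bin (weights 1/sigma^2)."""
    r, kT, sigma = map(np.asarray, (r_over_r180, kT, sigma))
    means, errs = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (r >= lo) & (r < hi)
        w = 1.0 / sigma[sel] ** 2
        means.append(np.sum(w * kT[sel]) / np.sum(w))
        errs.append(1.0 / np.sqrt(np.sum(w)))
    return np.array(means), np.array(errs)

# Illustrative (invented) points for one cluster with <kT> = 7 keV at z = 0.2
r180 = r180_mpc(7.0, 0.2)                      # ~2.5 Mpc for these numbers
radii_mpc = np.array([0.2, 0.5, 1.0, 1.5])
kT = np.array([7.5, 7.2, 6.8, 5.9])            # keV
sig = errors_68([0.5, 0.6, 0.9, 1.5])          # published 90% errors -> 68%
prof, prof_err = binned_weighted_profile(radii_mpc / r180, kT, sig,
                                         bin_edges=[0.0, 0.25, 0.5, 0.75])
```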
Going beyond GHZ paradox Dagomir Kaszlikowski${}^{1,2}$, Darwin Gosal${}^{1}$, E.J. Ling${}^{1}$, L.C. Kwek${}^{1,3}$, Marek Żukowski${}^{4}$, and C.H. Oh${}^{1}$ ${}^{1}$Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 ${}^{2}$Instytut Fizyki Doświadczalnej, Uniwersytet Gdański, PL-80-952, Gdańsk, Poland, ${}^{3}$National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 639798 ${}^{4}$Instytut Fizyki Teoretycznej i Astrofizyki, Uniwersytet Gdański, PL-80-952, Gdańsk, Poland. Abstract We present numerical data showing, that three qutrit correlations for a pure state, which is not maximally entangled, violate local realism more strongly than three-qubit correlations. The strength of violation is measured by the minimal amount of noise that must be admixed to the system so that the noisy correlations have a local and realistic model. The seminal paper of Greenberger, Horne, and Zeilinger GHZ has initiated a completely new phase in the discussions regarding the Bell theorem BELL . Einstein-Podolsky-Rosen EPR elements of reality were suddenly ridiculed by a straightforward argumentation. The physics community immediately noticed that the increasing complexity of entangled systems does not lead to a less pronounced disagreement with the classical views, but just the opposite! Moreover, the disagreement exponentially grew with the number of qubits involved in the GHZ-type entangled states. Indeed, prior to the publication of Ref. GHZ , it was commonly perceived that everything regarding the Bell theorem is known. However, the new insight has renewed the interest in the Bell theorem and its implications. Another widely shared perspective was that one cannot gain additional useful insight into the Bell Theorem by increasing the dimensionality of the entangled systems. Some papers even suggested that in $N$ dimensional systems, increasing the dimension $N$ effectively brings the system closer and closer to the classical realm. However, due to the fact that the $N>2$ dimensional systems can reveal the Kochen-Specker paradox KOCHEN-SPECKER , this view could be challenged. The advent of the quantum information theory created the awareness that such systems require much less entanglement to be non-separable than qubits SEP . Certain strange features like bound entanglement HORODECKI or inextensible product bases BENNET , suddenly emerged. Recently, it was shown that higher dimensional entangled systems indeed may lead to stronger violations of local realism, even in straightforward experimental situations involving only the von Neumann-type experiments (with no sequential measurements, etc). In the early nineties, the blueprints for straightforward Bell tests involving higher dimensional systems were given (for a summary see ZZH ). The idea was to use unbiased multiport beam splitters to define the local observables. Surprisingly, it turned out, that such observables suffice to reveal the fact that pair of entangled higher dimensional systems violate local realism more strongly than qubits KASZLIKOWSKI-PRL-2000 . This result was obtained numerically by employing the linear optimization procedures to search for underlying local realistic joint probability distribution that would reproduce the quantum prediction (with some noise admixture). The results were confirmed analytically in Refs. CHQUTRIT and COLLINS . Later in Ref. 
ACIN-GISIN , it was shown that in the case of pairs of entangled higher-dimensional systems, violations of local realism are even stronger for non-maximally entangled states. In a parallel research, it has been shown that higher dimensional systems can lead to the GHZ-like paradox without inequalities GHZKASZLIKOWSKI , MASSAR . In view of all these facts, it is tempting to test the strength of violation of local realism by triples of higher dimensional systems (starting of course with three qutrits), and that for non-maximally entangled states. Since Bell-type inequalities for three qutrit systems are unknown at the moment, it is necessary to invoke the numerical algorithm first presented in BATURO . As we shall see, some surprising results can be obtained in this way. We show below the result of our numerical analysis. It turns out that 1. There is a strong violation of local realism (for the standard von-Neumann type measurements) for three qutrit systems in the maximally entangled state, however it is not as strong as in the case of the three entangled qubits. 2. Allowing non-maximally entangled states, the situation changes. We find the three qutrit state which reveals correlations much much more resistant to noise, than those for entangled three qubits (maximally entangled three qubit states give maximal violation of local realism WERNER , BRUKNER , SCARANI ). In our numerical analysis, we consider a class of pure states of three qutrits in the form of $$|\psi\rangle=\sum_{g,i,j=1}^{3}d_{gij}|g\rangle|i\rangle|j\rangle$$ (1) with real coefficients $d_{gij}$. The kets $|g\rangle,|i\rangle,|j\rangle$ denote the orthonormal basis states for the first, second and the third qutrit respectively. Three spatially separated observers, Alice, Bob and Cecil, are allowed to perform the measurement of two alternative local noncommuting trichotomic observables on the state $|\psi\rangle$. We assume that they measure observables defined by unbiased symmetric three-port beamsplitters ZZH . In such a situation the kets in (1) represent spatial beams, in which the particles can propagate. The observers select the specific local observables by setting appropriate phase shifts in the beams leading to the entry ports of the beamsplitters. The overall unitary transformation performed by such a device is given by $$U_{j^{\prime}j}={1\over\sqrt{3}}\exp({i2\pi\over 3}j^{\prime}j)\exp(\phi_{j}),$$ (2) where $j$ denotes an input beam to the device, and $j^{\prime}$ an output one, and $\phi_{j}$ are the three phases that can be set by the local observer (for a more detailed description see ZZH ). Please note, that the actual physics of the device is irrelevant for our theoretical discussion here, thus it suffices just to assume that the observers perform their von Neumann measurements in the basis which is related to the “computational” basis of the initial state (1) by the transformation (2). It is interesting that the unitary transformation for all phase settings leads to a new basis for the local qutrit, which is unbiased with the respect to the “computational” one. Let us denote Alice’s local unitary transformations associated with her device by $U_{A}(\vec{\phi}_{0}),U_{A}(\vec{\phi}_{1})$, Bob’s by $U_{B}(\vec{\chi}_{0}),U_{B}(\vec{\chi}_{1})$ and Cecil’s by $U_{C}(\vec{\delta}_{0}),U_{C}(\vec{\delta}_{1})$, where the three component vectors $\vec{\phi}_{k},\vec{\chi}_{l},\vec{\delta}_{m}$ ($k,l,m=0,1$) denote the set of the phases defining the appropriate observables. 
The measurement of each observable can yield three possible results which we denote by $a$ for Alice, $b$ for Bob and $c$ for Cecil ($a,b,c=1,2,3$). The probability $P_{QM}(a_{k},b_{l},c_{m})$, that Alice, Bob and Cecil obtain the specific results after performing the unitary transformations $U_{A}(\vec{\phi}_{k})$, $U_{B}(\vec{\chi}_{l})$ and $U_{C}(\vec{\delta}_{m})$, respectively, is given by the following formula $$\displaystyle P_{QM}(a_{k},b_{l},c_{m})=|\langle a_{k}|\langle b_{l}|\langle c% _{m}|U_{A}(\vec{\phi}_{k})U_{B}(\vec{\chi}_{l})U_{C}(\vec{\delta}_{m})|\psi% \rangle|^{2}$$ (3) $$\displaystyle={1\over 27}+{1\over 27}\sum_{g^{\prime}i^{\prime}j^{\prime}\neq gij% }d_{g^{\prime}i^{\prime}j^{\prime}}d_{gij}$$ $$\displaystyle\times\cos({2\pi\over 3}(a_{k}(g-g^{\prime})+b_{l}(i-i^{\prime})+% c_{m}(j-j^{\prime}))+\phi_{k}^{g}-\phi_{k}^{g^{\prime}}+\chi_{l}^{i}-\chi_{l}^% {i^{\prime}}+\delta_{m}^{j}-\delta_{m}^{j^{\prime}})$$ $$\displaystyle,$$ where, for instance, $\phi_{k}^{g}$ denotes the $g$-th component of $\vec{\phi}_{k}$. In the presence of random noise, in order to describe the system one has to introduce the mixed state $\rho_{F}=(1-F)|\psi\rangle\langle\psi|+F\rho_{noise}$, where $\rho_{noise}=\frac{1}{27}I$, and $I$ is the identity operator. The non-negative parameter $F$ specifies the amount of noise present in the system. In such a case, the quantum probabilities read $$P_{QM}^{F}(a_{k},b_{l},c_{m})=(1-F)P_{QM}(a_{k},b_{l},c_{m})+{F\over 27}$$ . The hypothesis of local realism assumes that there exists some joint probability distribution $P_{LR}(a_{0},a_{1};b_{0},b_{1};c_{0},c_{1})$ that returns quantum probabilities $P_{QM}^{F}(a_{k},b_{l},c_{m})$ as marginals, e.g., $$\displaystyle P_{QM}^{F}(a_{0},b_{0},c_{0})$$ $$\displaystyle=\sum_{a_{1}=1}^{3}\sum_{b_{1}=1}^{3}\sum_{c_{1}=1}^{3}P_{LR}(a_{% 0},a_{1};b_{0},b_{1};c_{0},c_{1}).$$ (4) Please note, that a concise notation of the full set of such conditions can be given by $$\displaystyle P_{QM}^{F}(a_{k},b_{l},c_{m})$$ $$\displaystyle=\sum_{a_{k+1}=1}^{3}\sum_{b_{l+1}=1}^{3}\sum_{c_{m+1}=1}^{3}P_{% LR}(a_{0},a_{1};b_{0},b_{1};c_{0},c_{1}).$$ (5) where $k+1,l+1,m+1$ are understood as modulo $2$. For each pure state $|\psi\rangle$, one can find the threshold $F_{thr}$ (the minimal value of $F$) above which such a joint probability distribution satisfying (5) exists (obviously, for any separable state $F_{thr}=0$, however this may hold also for some non-separable states). There is a well defined mathematical procedure called linear programming that allows us to find the threshold $F_{thr}$ for the given state $|\psi\rangle$ and for the given set of observables. We should stress that $F_{thr}$ found in this way gives us sufficient and necessary conditions for violation of local realism. The procedure works as follows. The computation of the threshold $F_{thr}$ is equivalent to finding the joint probability distribution $P_{LR}(a_{0},a_{1};b_{0},b_{1};c_{0},c_{1})$, i.e., the set of $3^{6}$ of positive numbers summing up to one and fulfilling $8\times 27=216$ conditions given by (5) such that $F$ is minimal. Therefore, $F$ and $P_{LR}(a_{0},a_{1};b_{0},b_{1};c_{0},c_{1})$ can be treated as variables lying in a $3^{6}+1$-dimensional real space. The set of linear conditions (5) and the condition that $0\leq F\leq 1$ defines a convex set in this space. Next, we define a linear function, whose domain is the convex set defined above so that it returns the number $F$. 
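At this point the whole search can be written down in a few dozen lines. The sketch below is only an illustration of the procedure just described, not the authors' HOPDM-based code: it assumes Python with NumPy and SciPy's linprog, builds the three-port unitaries of Eq. (2) with the phase factor read as $e^{i\phi_{j}}$, evaluates the probabilities of Eq. (3), and minimises $F$ subject to the marginal constraints (5). Outcome labels are 0-based here, so the quoted optimal phase settings need not transfer literally to this convention.

```python
import numpy as np
from scipy.optimize import linprog

def beamsplitter_unitary(phases):
    """Unbiased symmetric three-port beamsplitter preceded by local phase
    shifts, cf. Eq. (2); the phase factor is taken as exp(i*phi_j)."""
    j = np.arange(3)
    dft = np.exp(2j * np.pi / 3 * np.outer(j, j)) / np.sqrt(3)
    return dft @ np.diag(np.exp(1j * np.asarray(phases, dtype=float)))

def quantum_probabilities(psi, phases_A, phases_B, phases_C):
    """P_QM(a_k, b_l, c_m) of Eq. (3), returned as P[k, l, m, a, b, c]."""
    psi = psi.reshape(3, 3, 3)
    P = np.empty((2, 2, 2, 3, 3, 3))
    for k in range(2):
        for l in range(2):
            for m in range(2):
                UA = beamsplitter_unitary(phases_A[k])
                UB = beamsplitter_unitary(phases_B[l])
                UC = beamsplitter_unitary(phases_C[m])
                amp = np.einsum('ag,bi,cj,gij->abc', UA, UB, UC, psi)
                P[k, l, m] = np.abs(amp) ** 2
    return P

def threshold_noise(P):
    """Smallest F for which (1-F)*P_QM + F/27 admits a joint distribution
    P_LR(a0,a1;b0,b1;c0,c1) reproducing the marginals of Eq. (5)."""
    n_lr = 3 ** 6                      # 729 joint probabilities, plus F
    A_eq, b_eq = [], []
    for k in range(2):
        for l in range(2):
            for m in range(2):
                for a in range(3):
                    for b in range(3):
                        for c in range(3):
                            row = np.zeros(n_lr + 1)
                            for a2 in range(3):
                                for b2 in range(3):
                                    for c2 in range(3):
                                        alice = (a, a2) if k == 0 else (a2, a)
                                        bob = (b, b2) if l == 0 else (b2, b)
                                        cecil = (c, c2) if m == 0 else (c2, c)
                                        idx = np.ravel_multi_index(
                                            alice + bob + cecil, (3,) * 6)
                                        row[idx] = 1.0
                            p = P[k, l, m, a, b, c]
                            # sum P_LR = (1-F)p + F/27  <=>  sum P_LR + F(p - 1/27) = p
                            row[-1] = p - 1.0 / 27.0
                            A_eq.append(row)
                            b_eq.append(p)
    cost = np.zeros(n_lr + 1)
    cost[-1] = 1.0                     # minimise F
    bounds = [(0, None)] * n_lr + [(0, 1)]
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=bounds, method="highs")
    return res.x[-1]

# Maximally entangled three-qutrit state (|111> + |222> + |333>)/sqrt(3)
psi = np.zeros(27, dtype=complex)
psi[[0, 13, 26]] = 1.0 / np.sqrt(3)
phases_A = [(0, 0, 2 * np.pi / 3), (0, 0, 0)]
phases_B = [(0, 0, np.pi), (0, 0, 5 * np.pi / 3)]
phases_C = [(0, np.pi / 3, 0), (0, np.pi, 0)]
F_thr = threshold_noise(quantum_probabilities(psi, phases_A, phases_B, phases_C))
print(F_thr)  # the text quotes 0.4 for its optimal settings; conventions here may differ
```

With only 730 variables and the 216 equality constraints of (5), this is a very small linear program, so any standard solver handles a single evaluation quickly; the expensive part of the full study is the outer optimization over states and phase settings.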
The task of finding $F_{thr}$ is then equivalent to the search for the minimum of this function. As the domain of the function is very complicated, the procedure can only be done numerically (we have used the numerical procedure HOPDM 2.30, see gondzio ). It is obvious that the $F_{thr}$ depends on the observables measured by Alice, Bob and Cecil (which in turn depend on the set of phases) as well as on the state $|\psi\rangle$ (indeed, for some unfortunate choices of observables, or the states or both, one can have $F_{thr}=0$). Let us clarify, that the task of the linear optimization procedure is each time to find the minimal $F$, for which the relation (5) can be satisfied by some positive probabilities on its right hand side. However, the left hand side of Eq. (5) depends on the chosen states and observables, and we are interested in the case when getting the local realistic model requires a maximal possible admixture of noise, therefore we search for such states and observables, for which the minimal $F_{thr}$ has the largest possible value. There are two possible interesting scenarios. We can fix the state $|\psi\rangle$ and maximize $F_{thr}$ over the observables. In this way we find the best violation of local realism for this given state. Alternatively, we can maximize $F_{thr}$ over the coefficients defining the state, as well as over the observables. This procedure allows us to find the optimal state, and optimal observables measured on this state, which can yield the best possible violation of local realism by the class of pure states with real coefficients (1). Of course, we do not have to limit ourselves to pure states with real coefficients, nor even to pure states but then in these cases the number of parameters over which we have to optimize becomes too large for our computers to handle. We have applied the procedure described above for the fixed state $|\psi\rangle$, which we have chosen to be a maximally entangled state, i.e., $|\psi\rangle={1\over\sqrt{3}}(|111\rangle+|222\rangle+|333\rangle$. Running the program we have found that the threshold amount of noise, that has to be admixed to the maximally entangled state, so that the correlations generated by it, for any sets of pairs of local settings of the phases, become describable in a local and realistic way, is $F_{thr}=0.4$. The optimal observables form the point of view of violations of local realism, i.e., exactly those for which the noise admixture must be maximal to get a local realistic model, are defined by the following sets of phases $\vec{\phi}_{0}=(0,0,{2\over 3}\pi),\vec{\phi}_{1}=(0,0,0);\vec{\chi}_{0}=(0,0,% \pi),\vec{\chi}_{1}=(0,0,{5\over 3}\pi);\vec{\delta}_{0}=(0,{1\over 3}\pi,0),% \vec{\delta}_{1}=(0,\pi,0)$. We can therefore say, that the violation of local realism in this case is stronger than for two maximally entangled qutrits, in which case the threshold amount of noise is only $0.304$. However, it is weaker than the violation by three entangled qubits, for which the threshold amount of noise is $0.5$. Naturally, one should check whether one can obtain better violations for non maximally entangled states. Therefore we have taken the predictions for (1), and used a procedure for the maximalization of $F_{thr}$ over the parameters $d_{gij}$ as well as the observables. We have found that the there exists a non-maximally entangled state, and a certain set of local observables, for which one requires $F_{thr}=0.571$ noise admixture for the correlations to have a local realistic description. 
The expansion coefficients of the state are given in the table below, whereas the phases defining the optimal observables will not be presented here, as they are not easily interpretable. However, for very close local settings given by: $\vec{\phi}_{0}=(0,{2\over 3}\pi,-{5\over 9}\pi),\vec{\phi}_{1}=(0,{2\over 3}% \pi,0);\vec{\chi}_{0}=(0,{17\over 18}\pi,-{1\over 18}\pi),\vec{\chi}_{1}=(0,0,% 0);\vec{\delta}_{0}=(0,\pi,{23\over 36}\pi),\vec{\delta}_{1}=(0,{7\over 36}\pi% ,-{2\over 3}\pi)$, there is a state for which the threshold noise equals $0.570$. Basis $$|000\rangle$$ $$|001\rangle$$ $$|002\rangle$$ $$|010\rangle$$ $$|011\rangle$$ $$|012\rangle$$ $$|020\rangle$$ $$|021\rangle$$ $$|022\rangle$$ Coeff $$+0.186$$ $$+0.076$$ $$+0.230$$ $$+0.218$$ $$+0.046$$ $$+0.112$$ $$+0.172$$ $$+0.033$$ $$+0.247$$ Basis $$|100\rangle$$ $$|101\rangle$$ $$|102\rangle$$ $$|110\rangle$$ $$|111\rangle$$ $$|112\rangle$$ $$|120\rangle$$ $$|121\rangle$$ $$|122\rangle$$ Coeff $$+0.216$$ $$+0.050$$ $$+0.110$$ $$+0.160$$ $$+0.049$$ $$+0.236$$ $$+0.204$$ $$+0.055$$ $$+0.235$$ Basis $$|200\rangle$$ $$|201\rangle$$ $$|202\rangle$$ $$|210\rangle$$ $$|211\rangle$$ $$|212\rangle$$ $$|220\rangle$$ $$|221\rangle$$ $$|222\rangle$$ Coeff $$-0.078$$ $$+0.406$$ $$-0.029$$ $$-0.023$$ $$+0.385$$ $$+0.035$$ $$-0.123$$ $$+0.393$$ $$-0.128$$ In summary, we have shown, that for the maximally entangled state three entangled qutrits violate local realism stronger than two entangled qutrits (the threshold amount of noise $0.304$, see KASZLIKOWSKI-PRL-2000 ). The threshold amount of noise to get local realistic correlations is $0.4$. This violation is not as strong as for three entangled qubits for which one has to admix $50\%$ of noise to make the system describable by local realistic theories. However, we can obtain much a stronger violation for the non-maximally entangled states. In this case there exists a non-maximally entangled state (see the table) for which $F_{thr}=0.57$, i.e., we have to add $57\%$ of noise before we enter the region in which the state admits local and realistic description. We must stress, that although for the state given in the table, the threshold amount of noise $F_{thr}=0.57$ gives the necessary and sufficient conditions for the existence of local realism, for the measurement of the observables given by unbiased symmetric three-port beamsplitters, it does not mean that with a different choice of observables, or by allowing complex coefficients in the state (1), one cannot increase $F_{thr}$. Moreover, it is reasonable to expect, that for four or higher number of entangled qutrits the difference between the robustness against noise (i.e., the resistance of quantum correlations to classical description) of maximally entangled states and non-maximally entangled ones will still increase. Note that, optimal non-maximally entangled state of two qutrits (for which the threshold amount of noise is 0.3139) is around $3\%$ more resistant to noise than the maximally entangled one (for which the threshold amount of noise is 0.3038). In the case of three entangled qutrits the difference between the threshold amount of noise for non-maximally entangled state (0.571) and for maximally entangled state (0.4) is about $40\%$! MZ thanks Nicolas Gisin for discussions on this topic. MZ and DK acknowledge the support of KBN, project No. 5 PO3B 088 20. DK, LCK and CHO would also like to acknowledge the support of A$\ast$Star Grant No: 012-104-0040. References (1) D. M. Greenberger, M. A. Horne and A. 
Zeilinger, in Bell’s theorem and the Conceptions of the Universe, edited by M. Kafatos (Kluwer Academic, Dordrecht, 1989). (2) J. Bell, Physics 1, 195 (1964). (3) A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. 47, 777 (1935). (4) S. Kochen and E. Specker, J. Math. Mech. 17, 59 (1967). (5) M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999). (6) M. Horodecki, P. Horodecki and R. Horodecki, Phys. Rev. Lett 80, 5239 (1998). (7) C. H. Bennet, D. DiVincenzo, T. Mor, P. Shor, J. Smolin and B. Terhal, Phys. Rev. Lett. 82, 5385 (1999). (8) M. Żukowski, A. Zeilinger, M. A. Horne, Phys. Rev. A 55, 2564 (1997). (9) D. Kaszlikowski, P. Gnaciński, M. Żukowski, W. Miklaszewski and A. Zeilinger, Phys. Rev. Lett. 85, 4418 (2000). (10) D. Kaszlikowski, L. C. Kwek, J.-L. Chen, M. Żukowski and C. H. Oh, Phys. Rev. A (in press, 2002), see also quant-ph//0106010. (11) D. Collins, N. Gisin, N. Linden, S. Massar, S. Popescu, Phys. Rev. Lett. 88, 040404 (2002). (12) A. Acin, T. Durt, N. Gisin, J. I. Latorre, quant-ph//0111143. (13) M. Żukowski, D. Kaszlikowski, Phys. Rev A 59, 3200 (1999); M. Żukowski, D. Kaszlikowski, Vienna Circle Yearbook, edited by D. Greenberger, W. L. Reiter, A. Zeilinger, vol. 7, Kluwer Academic Publishers, 1999; D. Kaszlikowski, M. Żukowski, quant-ph//0108097. (14) N. J. Cerf, S. Massa, S. Pironio, quant-ph/0107031. (15) M. Żukowski, D. Kaszlikowski, A. Baturo, J.-A. Larsson, quant-ph//9910058. (16) R. F. Werner, M. M. Wolf, Phys. Rev. A 64, 032112 (2001) (17) M. Żukowski, C. Brukner, quant-ph//0102039. (18) V. Scarani, N. Gisin, J.Phys. A 34 (2001) 6043. (19) J. Gondzio, Eur. J. Op. Res. 85 221 (1995); J. Gondzio, Comp. Opt. Appl. 6, 137 (1996).
Laguerre-Freud equations for Generalized Hahn polynomials of type I Diego Dominici Department of Mathematics State University of New York at New Paltz 1 Hawk Dr. New Paltz, NY 12561-2443 USA e-mail: dominicd@newpaltz.edu Abstract We derive a system of difference equations satisfied by the three-term recurrence coefficients of some families of discrete orthogonal polynomials. 1 Introduction Let $\left\{\mu_{n}\right\}$ be a sequence of complex numbers and $L:\mathbb{C}\left[x\right]\rightarrow\mathbb{C}$ be a linear functional defined by $$L\left[x^{n}\right]=\mu_{n},\quad n=0,1,\ldots.$$ Then, $L$ is called the moment functional determined by the formal moment sequence $\left\{\mu_{n}\right\}$. The number $\mu_{n}$ is called the moment of order $n$. A sequence $\left\{P_{n}\left(x\right)\right\}\subset\mathbb{C}\left[x\right],$ of monic polynomials with $\deg\left(P_{n}\right)=n$ is called an orthogonal polynomial sequence with respect to $L$ provided that [4] $$L\left[P_{n}P_{m}\right]=h_{n}\delta_{n,m},\quad n,m=0,1,\ldots,$$ where $h_{n}\neq 0$ and $\delta_{n,m}$ is Kronecker’s delta. Since $$L\left[xP_{n}P_{k}\right]=0,\quad k\notin\left\{n-1,n,n+1\right\},$$ the monic orthogonal polynomials $P_{n}\left(x\right)$ satisfy the three-term recurrence relation $$xP_{n}\left(x\right)=P_{n+1}\left(x\right)+\beta_{n}P_{n}\left(x\right)+\gamma% _{n}P_{n-1}\left(x\right),$$ (1) where $$\beta_{n}=\frac{1}{h_{n}}L\left[xP_{n}^{2}\right],\quad\gamma_{n}=\frac{1}{h_{% n-1}}L\left[xP_{n}P_{n-1}\right].$$ (2) If we define $P_{-1}\left(x\right)=0,$ $P_{0}\left(x\right)=1,$ we see that $$P_{1}\left(x\right)=x-\beta_{0},$$ (3) and $$P_{2}\left(x\right)=\left(x-\beta_{1}\right)\left(x-\beta_{0}\right)-\gamma_{1}.$$ (4) Because $$L\left[xP_{n}P_{n-1}\right]=L\left[P_{n}^{2}\right],$$ we have $$\gamma_{n}=\frac{h_{n}}{h_{n-1}},\quad n=1,2,\ldots,$$ (5) and we define $$\gamma_{0}=0.$$ (6) Note that from (2) we get $$\beta_{0}=\frac{1}{h_{0}}L\left[x\right]=\frac{\mu_{1}}{\mu_{0}}.$$ (7) If the coefficients $\beta_{n},\gamma_{n}$ are known, the recurrence (1) can be used to compute the polynomials $P_{n}\left(x\right).$ Stability problems and numerical aspects arising in the calculations have been studied by many authors [12], [14], [32], [43]. If explicit representations of the polynomials $P_{n}\left(x\right)$ are given, symbolic computation techniques can be applied to obtain recurrence relations and, in particular, to find expressions for the coefficients $\beta_{n},\gamma_{n}$ (see [5], [20], [35], [36], [44]). If, alas, the only knowledge we have is the linear functional $L,$ the computation of $\beta_{n}$ and $\gamma_{n}$ is a real challenge. One possibility is to use the Modified Chebyshev algorithm [13, 2.1.7]. Another is to obtain recurrences for $\beta_{n},\gamma_{n}$ of the form [2], [40] $$\displaystyle\gamma_{n+1}$$ $$\displaystyle=F_{1}\left(n,\gamma_{n},\gamma_{n-1},\ldots,\beta_{n},\beta_{n-1% },\ldots\right),$$ $$\displaystyle\beta_{n+1}$$ $$\displaystyle=F_{2}\left(n,\gamma_{n+1},\gamma_{n},\ldots,\beta_{n},\beta_{n-1% },\ldots\right),$$ for some functions $F_{1},F_{2}.$ This system of recurrences is known as the Laguerre-Freud equations [11], [22]. The name was coined by Alphonse Magnus as part of his work on Freud’s conjecture [23], [24], [25], [26]. In terms of performance, the Modified Chebyshev algorithm requires $O\left(n^{2}\right)$ operations, while the Laguerre-Freud equations require only $O\left(n\right)$ operations for the computation of $\beta_{n}$ and $\gamma_{n}$ [3]. 
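Once $\beta_{n}$ and $\gamma_{n}$ are available, evaluating the polynomials from the recurrence (1) is a short loop. The following sketch is a generic Python/NumPy helper, not taken from any of the cited packages; it returns $P_{0}(x),\ldots,P_{N}(x)$ for given coefficient arrays, and the example call uses the Meixner coefficients that are derived in Section 2.1 below.

```python
import numpy as np

def monic_polys(x, beta, gamma):
    """Evaluate P_0(x), ..., P_N(x) from the three-term recurrence (1),
    x P_n = P_{n+1} + beta_n P_n + gamma_n P_{n-1},
    with P_{-1} = 0, P_0 = 1 and gamma_0 = 0 as in (6)."""
    P = [1.0]
    P_prev, P_curr = 0.0, 1.0
    for n in range(len(beta) - 1):
        P_prev, P_curr = P_curr, (x - beta[n]) * P_curr - gamma[n] * P_prev
        P.append(P_curr)
    return np.array(P)

# Example: the Meixner coefficients of Section 2.1, with a = 2, z = 1/3, at x = 1.5
a, z = 2.0, 1.0 / 3.0
n = np.arange(6)
beta = (n + (n + a) * z) / (1 - z)
gamma = n * (n + a - 1) * z / (1 - z) ** 2
values = monic_polys(1.5, beta, gamma)   # P_0(1.5), ..., P_5(1.5)
```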
There are several papers on the Laguerre-Freud equations for different types of orthogonal polynomials including continuous [1], [30], [39], discrete [16], [17], [37], [41], $D_{\omega}$ polynomials [10], [29], Laguerre-Hahn [9], and $q$-polynomials [18]. Most of the known examples belong to the set of semiclassical orthogonal polynomials [27], where the linear functional satisfies an equation of the form $$L\left[\phi U\left(\pi\right)\right]=L\left[\lambda\pi\right],\quad\pi\in% \mathbb{C}\left[x\right],$$ called the Pearson equation [34], where $U:\mathbb{C}\left[x\right]\rightarrow\mathbb{C}\left[x\right]$ is a linear operator and $\phi\left(x\right),$ $\lambda\left(x\right)$ are fixed polynomials. The class of the semiclassical orthogonal polynomials is defined by $$c=\max\left\{\deg\left(\phi\right)-2,\ \deg\left(\phi-\lambda\right)-1\right\}.$$ In this paper, we focus our attention on linear functionals defined by $$L\left[f\right]={\displaystyle\sum\limits_{x=0}^{\infty}}f(x)\rho\left(x\right),$$ (8) where the weight function $\rho\left(x\right)$ is of the form $$\rho\left(x\right)=\frac{\left(a_{1}\right)_{x}\left(a_{2}\right)_{x}\cdots% \left(a_{p}\right)_{x}}{\left(b_{1}+1\right)_{x}\left(b_{2}+1\right)_{x}\cdots% \left(b_{q}+1\right)_{x}}\frac{z^{x}}{x!},$$ (9) and $\left(a\right)_{x}$ denotes the Pochhammer symbol (also called shifted or rising factorial) defined by [33, 5.2.4] $$\displaystyle\left(a\right)_{0}$$ $$\displaystyle=1$$ $$\displaystyle\left(a\right)_{x}$$ $$\displaystyle=a\left(a+1\right)\cdots\left(a+x-1\right),\quad x\in\mathbb{N},$$ or by $$\left(a\right)_{x}=\frac{\Gamma\left(a+x\right)}{\Gamma\left(a\right)},$$ where $\Gamma\left(z\right)$ is the Gamma function. Note that we have $$\frac{\rho\left(x+1\right)}{\rho\left(x\right)}=\frac{\lambda\left(x\right)}{% \phi\left(x+1\right)},$$ (10) with $$\displaystyle\lambda\left(x\right)$$ $$\displaystyle=z\left(x+a_{1}\right)\left(x+a_{2}\right)\cdots\left(x+a_{p}% \right),$$ (11) $$\displaystyle\phi\left(x\right)$$ $$\displaystyle=x\left(x+b_{1}\right)\left(x+b_{2}\right)\cdots\left(x+b_{q}% \right).$$ Hence, the weight function $\rho\left(x\right)$ satisfies an alternative form of the Pearson equation $$\Delta_{x}\left(\phi\rho\right)=\left(\lambda-\phi\right)\rho,$$ (12) where $$\Delta_{x}f(x)=f(x+1)-f(x)$$ is the forward difference operator. Using (10) in (8), we get the Pearson equation $$L\left[\lambda\left(x\right)\pi\left(x\right)\right]=L\left[\phi\left(x\right)% \pi\left(x-1\right)\right],\quad\pi\in\mathbb{C}\left[x\right].$$ (13) The rest of the paper is organized as follows: in Section 2 we use (13) and obtain two difference equations satisfied by the discrete semiclassical orthogonal polynomials. As an example, we apply the method to obtain the recurrence coefficients of the Meixner polynomials. In Section 3, we derive the Laguerre-Freud equations for the Generalized Hahn polynomials of type I, introduced in [7] as part of the classification of discrete semiclassical orthogonal polynomials of class one. Specializing one of the parameters in the polynomials, we obtain the recurrence coefficients of the Hahn polynomials. We finish the paper with some remarks and future directions. 
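Because everything that follows rests on the ratio relation (10) between consecutive weight values, it is worth noting that the relation is trivial to check numerically. The sketch below is an illustration only (Python, assuming scipy.special.poch for the Pochhammer symbol); the parameter values in the final loop are arbitrary test values, not taken from the paper.

```python
import math
import numpy as np
from scipy.special import poch   # Pochhammer symbol (a)_x = Gamma(a+x)/Gamma(a)

def rho(x, a_list, b_list, z):
    """Hypergeometric-type weight (9)."""
    num = np.prod([poch(a, x) for a in a_list])
    den = np.prod([poch(b + 1.0, x) for b in b_list])
    return num / den * z ** x / math.factorial(x)

def lam(x, a_list, z):
    """lambda(x) = z * prod_i (x + a_i), cf. (11)."""
    return z * np.prod([x + a for a in a_list])

def phi(x, b_list):
    """phi(x) = x * prod_j (x + b_j), cf. (11)."""
    return x * np.prod([x + b for b in b_list])

# Check rho(x+1)/rho(x) = lambda(x)/phi(x+1), Eq. (10), for a test weight
a_list, b_list, z = [1.5, 2.0], [3.0], 0.4
for x in range(6):
    lhs = rho(x + 1, a_list, b_list, z) / rho(x, a_list, b_list, z)
    rhs = lam(x, a_list, z) / phi(x + 1, b_list)
    assert abs(lhs - rhs) < 1e-12
```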
2 Laguerre-Feud equations As Maroni remarks at the beginning of [28], “the history of finite-type relations is as old as the history of orthogonality since $$r(x)P_{n}(x)={\displaystyle\sum\limits_{k=n-t}^{n+t}}\lambda_{n,k}P_{k}(x),$$ when $P_{n}(x)$ is a sequence of orthogonal polynomials and $r(x)$ is a polynomial with $\deg\left(r\right)=t.$” The three-term recurrence relation (1) is the most used example, with $r(x)=x.$ We now derive difference equations for orthogonal polynomials whose linear functional satisfies (13). We follow an approach similar to the one used in [38] to find the Laguerre-Freud equations for the generalized Charlier polynomials. Another method used in many articles is to use ladder operators [19]. Proposition 1 Let $\left\{P_{n}(x)\right\}$ be a family of orthogonal polynomials with respect to a linear functional satisfying (13). Then, we have $$\lambda\left(x\right)P_{n}\left(x+1\right)={\displaystyle\sum\limits_{k=-q-1}^% {p}}A_{k}\left(n\right)P_{n+k}\left(x\right)$$ (14) and $$\phi\left(x\right)P_{n}\left(x-1\right)={\displaystyle\sum\limits_{k=-p}^{q+1}% }B_{k}\left(n\right)P_{n+k}\left(x\right),$$ (15) for some coefficients $A_{k}\left(n\right),$ $B_{k}\left(n\right).$ Proof. Since $\deg\lambda\left(x\right)P_{n}\left(x+1\right)=n+p,$ we can write $$\lambda\left(x\right)P_{n}\left(x+1\right)={\displaystyle\sum\limits_{k=-n}^{p% }}A_{k}\left(n\right)P_{n+k}\left(x\right).$$ Using orthogonality and (13), we have $$\displaystyle h_{n+k}A_{k}\left(n\right)$$ $$\displaystyle=L\left[\lambda\left(x\right)P_{n}\left(x+1\right)P_{n+k}\left(x% \right)\right]$$ $$\displaystyle=L\left[\phi\left(x\right)P_{n}\left(x\right)P_{n+k}\left(x-1% \right)\right]=0,\quad k<-q-1.$$ Similarly, writing $$\phi\left(x\right)P_{n}\left(x-1\right)={\displaystyle\sum\limits_{k=-n}^{q+1}% }B_{k}\left(n\right)P_{n+k}\left(x\right),$$ we get $$\displaystyle h_{n+k}B_{k}\left(n\right)$$ $$\displaystyle=L\left[\phi\left(x\right)P_{n}\left(x-1\right)P_{n+k}\left(x% \right)\right]$$ $$\displaystyle=L\left[\lambda\left(x\right)P_{n}\left(x\right)P_{n+k}\left(x+1% \right)\right]=0,\quad k<-p.$$   The coefficients $A_{k}\left(n\right)$ and $B_{k}\left(n\right)$ are not independent of each other. Corollary 2 $$A_{k}\left(n\right)=\frac{h_{n}}{h_{n+k}}B_{-k}\left(n+k\right),\quad-q-1\leq k% \leq p.$$ (16) Proof. If $-q-1\leq k\leq p,$ then $$\displaystyle A_{k}\left(n\right)$$ $$\displaystyle=\frac{1}{h_{n+k}}L\left[\phi\left(x\right)P_{n}\left(x\right)P_{% n+k}\left(x-1\right)\right]$$ $$\displaystyle=\frac{1}{h_{n+k}}L\left[P_{n}\left(x\right){\displaystyle\sum% \limits_{j=-p}^{q+1}}B_{j}\left(n+k\right)P_{n+k+j}\left(x\right)\right]$$ $$\displaystyle=\frac{1}{h_{n+k}}{\displaystyle\sum\limits_{j=-p}^{q+1}}B_{j}% \left(n+k\right)L\left[P_{n}\left(x\right)P_{n+k+j}\left(x\right)\right]$$ $$\displaystyle=\frac{h_{n}}{h_{n+k}}B_{-k}\left(n+k\right).$$   We can now state our main result. Theorem 3 For $-q-1\leq k\leq p,$ we have $$\displaystyle\gamma_{n+k+1}A_{k+1}\left(n\right)-\gamma_{n}A_{k+1}\left(n-1% \right)+A_{k-1}\left(n\right)-A_{k-1}\left(n+1\right)$$ (17) $$\displaystyle=\left(\beta_{n}-\beta_{n+k}-1\right)A_{k}\left(n\right),$$ with $$A_{p}\left(n\right)=z,$$ (18) $$A_{-q-1}\left(n\right)=\gamma_{n}\gamma_{n-1}\cdots\gamma_{n-q},$$ (19) and $$A_{p+1}\left(n\right)=0=A_{-q-2}\left(n\right).$$ Proof. 
Using (1), we have $$\displaystyle\lambda\left(x\right)\left(x+1\right)P_{n}\left(x+1\right)=% \lambda\left(x\right)P_{n+1}\left(x+1\right)$$ $$\displaystyle+\beta_{n}\lambda\left(x\right)P_{n}\left(x+1\right)+\gamma_{n}% \lambda\left(x\right)P_{n-1}\left(x+1\right),$$ and from (14) $$\displaystyle\lambda\left(x\right)\left(x+1\right)P_{n}\left(x+1\right)={% \displaystyle\sum\limits_{k=-q}^{p+1}}A_{k-1}\left(n+1\right)P_{n+k}\left(x\right)$$ (20) $$\displaystyle+{\displaystyle\sum\limits_{k=-q-1}^{p}}\beta_{n}A_{k}\left(n% \right)P_{n+k}\left(x\right)+{\displaystyle\sum\limits_{k=-q-2}^{p-1}}\gamma_{% n}A_{k+1}\left(n-1\right)P_{n+k}\left(x\right).$$ On the other hand, if we multiply (14) by $x,$ we get $$\lambda\left(x\right)xP_{n}\left(x+1\right)={\displaystyle\sum\limits_{k=-q-1}% ^{p}}A_{k}\left(n\right)xP_{n+k}\left(x\right),$$ and using (1) we obtain $$\displaystyle\lambda\left(x\right)xP_{n}\left(x+1\right)={\displaystyle\sum% \limits_{k=-q}^{p+1}}A_{k-1}\left(n\right)P_{n+k}\left(x\right)$$ (21) $$\displaystyle+{\displaystyle\sum\limits_{k=-q-1}^{p}}\beta_{n+k}A_{k}\left(n% \right)P_{n+k}\left(x\right)+{\displaystyle\sum\limits_{k=-q-2}^{p-1}}\gamma_{% n+k+1}A_{k+1}\left(n\right)P_{n+k}\left(x\right).$$ Using (14), (20) and (21) in the identity $$\lambda\left(x\right)P_{n}\left(x+1\right)=\left(x+1\right)\lambda\left(x% \right)P_{n}\left(x+1\right)-x\lambda\left(x\right)P_{n}\left(x+1\right),$$ we have $$\displaystyle{\displaystyle\sum\limits_{k=-q-1}^{p}}A_{k}\left(n\right)P_{n+k}% \left(x\right)$$ $$\displaystyle={\displaystyle\sum\limits_{k=-q}^{p+1}}\left[A_{k-1}\left(n+1% \right)-A_{k-1}\left(n\right)\right]P_{n+k}\left(x\right)$$ $$\displaystyle+{\displaystyle\sum\limits_{k=-q-1}^{p}}\left(\beta_{n}-\beta_{n+% k}\right)A_{k}\left(n\right)P_{n+k}\left(x\right)$$ $$\displaystyle+{\displaystyle\sum\limits_{k=-q-2}^{p-1}}\left[\gamma_{n}A_{k+1}% \left(n-1\right)-\gamma_{n+k+1}A_{k+1}\left(n\right)\right]P_{n+k}\left(x% \right).$$ Since the polynomials $P_{n}\left(x\right)$ are linearly independent, we get: $$k=p+1:\quad A_{p}\left(n+1\right)-A_{p}\left(n\right)=0,$$ (22) $$k=-q-2:\quad\gamma_{n}A_{-q-1}\left(n-1\right)-\gamma_{n-q-1}A_{-q-1}\left(n% \right)=0,$$ (23) and for $-q-1\leq k\leq p,$ $$\displaystyle\left(1+\beta_{n+k}-\beta_{n}\right)A_{k}\left(n\right)$$ $$\displaystyle=A_{k-1}\left(n+1\right)-A_{k-1}\left(n\right)$$ $$\displaystyle+\gamma_{n}A_{k+1}\left(n-1\right)-\gamma_{n+k+1}A_{k+1}\left(n% \right).$$ Comparing leading coefficients in (14) we obtain $$A_{p}\left(n\right)=z,$$ in agreement with (22). Rewriting (23) as $$\frac{A_{-q-1}\left(n\right)}{A_{-q-1}\left(n-1\right)}=\frac{\gamma_{n}}{% \gamma_{n-q-1}},$$ we see that $$\frac{A_{-q-1}\left(n\right)}{A_{-q-1}\left(q+1\right)}=\frac{\gamma_{n}\gamma% _{n-1}\cdots\gamma_{n-q}}{\gamma_{1}\gamma_{2}\cdots\gamma_{q+1}}.$$ From (16) we have $$A_{-q-1}\left(q+1\right)=\frac{h_{q+1}}{h_{0}}B_{q+1}\left(0\right).$$ Since $\phi\left(x\right)P_{n}\left(x-1\right)$ is a monic polynomial, (15) gives $$B_{q+1}\left(n\right)=1,$$ (24) and using (5) we get $$\frac{h_{q+1}}{h_{0}}B_{q+1}\left(0\right)=\gamma_{1}\gamma_{2}\cdots\gamma_{q% +1},$$ proving (19).    2.1 Meixner polynomials To illustrate the use of Theorem 3, we consider the family of Meixner polynomials introduced by Josef Meixner in [31]. 
These polynomials are orthogonal with respect to the weight function $$\rho\left(x\right)=\left(a\right)_{x}\frac{z^{x}}{x!},$$ and using (11) we have $$\lambda\left(x\right)=z\left(x+a\right),\quad\phi\left(x\right)=x,$$ and $p=1,\quad q=0.$ From (18) and (19) we get $$A_{1}\left(n\right)=z,\quad A_{-1}\left(n\right)=\gamma_{n},$$ (25) while (17) gives: $$k=1:\quad\left(1+\beta_{n+1}-\beta_{n}\right)A_{1}\left(n\right)=A_{0}\left(n+% 1\right)-A_{0}\left(n\right),$$ $$k=0:\quad A_{0}\left(n\right)=A_{-1}\left(n+1\right)-A_{-1}\left(n\right)+% \gamma_{n}A_{1}\left(n-1\right)-\gamma_{n+1}A_{1}\left(n\right),$$ and $$k=-1:\quad\left(1+\beta_{n-1}-\beta_{n}\right)A_{-1}\left(n\right)=\gamma_{n}A% _{0}\left(n-1\right)-\gamma_{n}A_{0}\left(n\right).$$ Using (25) we obtain $$z\left(1+\beta_{n+1}-\beta_{n}\right)=A_{0}\left(n+1\right)-A_{0}\left(n\right),$$ (26) $$A_{0}\left(n\right)=\gamma_{n+1}-\gamma_{n}+z\left(\gamma_{n}-\gamma_{n+1}% \right)=\left(1-z\right)\left(\gamma_{n+1}-\gamma_{n}\right),$$ (27) and $$1+\beta_{n-1}-\beta_{n}=A_{0}\left(n-1\right)-A_{0}\left(n\right).$$ (28) Summing (26) from $n=0$ and (28) from $n=1,$ we get $$\displaystyle z\left(\beta_{n}-\beta_{0}+n\right)$$ $$\displaystyle=A_{0}\left(n\right)-A_{0}\left(0\right),$$ $$\displaystyle\beta_{n}-\beta_{0}-n$$ $$\displaystyle=A_{0}\left(n\right)-A_{0}\left(0\right).$$ Using (27) and (6), gives $$\beta_{n}-\beta_{0}-n=z\left(\beta_{n}-\beta_{0}+n\right)=\left(1-z\right)% \left(\gamma_{n+1}-\gamma_{n}-\gamma_{1}\right).$$ Therefore, $$\beta_{n}=\beta_{0}+\frac{1+z}{1-z}n,$$ and $$\gamma_{n+1}-\gamma_{n}-\gamma_{1}=\frac{2nz}{\left(1-z\right)^{2}}.$$ (29) Summing (29) from $n=0,$ we conclude that $$\gamma_{n}=n\gamma_{1}+\frac{n\left(n-1\right)z}{\left(1-z\right)^{2}}.$$ If we use (25) and (27) in (14), we get $$\displaystyle z\left(x+a\right)P_{n}\left(x+1\right)=\gamma_{n}P_{n-1}\left(x\right)$$ (30) $$\displaystyle+\left(1-z\right)\left(\gamma_{n+1}-\gamma_{n}\right)P_{n}\left(x% \right)+zP_{n+1}\left(x\right),$$ and using (16), $$\displaystyle B_{1}\left(n\right)$$ $$\displaystyle=\frac{h_{n}}{h_{n+1}}A_{-1}\left(n+1\right)=\frac{A_{-1}\left(n+% 1\right)}{\gamma_{n+1}}=1,$$ $$\displaystyle B_{0}\left(n\right)$$ $$\displaystyle=A_{0}\left(n\right)=\left(1-z\right)\left(\gamma_{n+1}-\gamma_{n% }\right),$$ $$\displaystyle B_{-1}\left(n\right)$$ $$\displaystyle=\frac{h_{n}}{h_{n-1}}A_{1}\left(n-1\right)=\gamma_{n}z.$$ Hence, from (15) we obtain $$xP_{n}\left(x-1\right)=z\gamma_{n}P_{n-1}\left(x\right)+\left(1-z\right)\left(% \gamma_{n+1}-\gamma_{n}\right)P_{n}\left(x\right)+P_{n+1}\left(x\right).$$ (31) Setting $n=0$ in (30) and (31) gives $$\displaystyle z\left(x+a\right)$$ $$\displaystyle=\left(1-z\right)\gamma_{1}+z\left(x-\beta_{0}\right),$$ $$\displaystyle x$$ $$\displaystyle=\left(1-z\right)\gamma_{1}+x-\beta_{0},$$ from which we find $$\left(1-z\right)\gamma_{1}=\beta_{0}=-a+\frac{1-z}{z}\gamma_{1},$$ and therefore $$\beta_{0}=\frac{az}{1-z},\quad\gamma_{1}=\frac{az}{\left(1-z\right)^{2}}.$$ Thus, we recover the well known coefficients [33, 18.22.2 ] $$\beta_{n}=\frac{n+\left(n+a\right)z}{1-z},\quad\gamma_{n}=\frac{n\left(n+a-1% \right)z}{\left(1-z\right)^{2}}.$$ (32) Using the hypergeometric representation [33, 18.20.7 ] $$P_{n}\left(x\right)=\left(a\right)_{n}\left(1-\frac{1}{z}\right)^{-n}\ _{2}F_{% 1}\left[\begin{array}[c]{c}-n,\ -x\\ a\end{array};1-\frac{1}{z}\right],$$ one can easily verify (or re-derive) (32) using (for instance) the Mathematica package HolonomicFunctions [21]. 
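The closed forms (32) can also be checked numerically without symbolic machinery. The sketch below is an independent illustration in Python/NumPy (it is not the HolonomicFunctions verification mentioned above): it generates the Meixner weight from the ratio (10), computes $\beta_{n}$ and $\gamma_{n}$ directly from the definitions (2) and (5) by a discrete Stieltjes procedure on a truncated lattice, and compares them with (32). The truncation point and parameter values are arbitrary test choices.

```python
import numpy as np

def meixner_weight(a, z, xmax):
    """rho(x) = (a)_x z^x / x! on x = 0..xmax, generated from the ratio
    rho(x+1)/rho(x) = z (x + a) / (x + 1), cf. Eq. (10)."""
    w = np.empty(xmax + 1)
    w[0] = 1.0
    for x in range(xmax):
        w[x + 1] = w[x] * z * (x + a) / (x + 1)
    return w

def stieltjes_coefficients(x, w, N):
    """beta_n and gamma_n (n = 0..N) computed directly from the definitions
    (2) and (5) on the lattice x with weights w (discrete Stieltjes procedure)."""
    beta, gamma = np.zeros(N + 1), np.zeros(N + 1)
    P_prev, P_curr = np.zeros_like(x), np.ones_like(x)
    h_prev = None
    for n in range(N + 1):
        h = np.sum(w * P_curr ** 2)
        beta[n] = np.sum(w * x * P_curr ** 2) / h
        if n > 0:
            gamma[n] = h / h_prev                 # Eq. (5)
        h_prev = h
        # three-term recurrence (1) with the coefficients found so far
        P_prev, P_curr = P_curr, (x - beta[n]) * P_curr - gamma[n] * P_prev
    return beta, gamma

a, z, N = 2.5, 0.3, 5
x = np.arange(81, dtype=float)                    # truncated lattice 0..80
beta_num, gamma_num = stieltjes_coefficients(x, meixner_weight(a, z, 80), N)

n = np.arange(N + 1)
beta_exact = (n + (n + a) * z) / (1 - z)          # Eq. (32)
gamma_exact = n * (n + a - 1) * z / (1 - z) ** 2
assert np.allclose(beta_num, beta_exact) and np.allclose(gamma_num, gamma_exact)
```

The same direct computation can in principle be applied to any of the weights (9), which makes it a convenient cross-check on Laguerre-Freud recurrences such as (34)-(35) below, as long as the truncated sum captures essentially all of the weight.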
3 Generalized Hahn polynomials of type I The Generalized Hahn polynomials of type I were introduced in [7]. They are orthogonal with respect to the weight function $$\rho\left(x\right)=\frac{\left(a_{1}\right)_{x}\left(a_{2}\right)_{x}}{\left(b% +1\right)_{x}}\frac{z^{x}}{x!},\quad\left|z\right|<1,\quad b\neq-1,-2,\ldots.$$ The first moments are given by $$\displaystyle\mu_{0}$$ $$\displaystyle=\ _{2}F_{1}\left[\begin{array}[c]{c}a_{1},\ a_{2}\\ b+1\end{array};z\right],$$ (33) $$\displaystyle\mu_{1}$$ $$\displaystyle=z\frac{a_{1}a_{2}}{b+1}\ _{2}F_{1}\left[\begin{array}[c]{c}a_{1}% +1,\ a_{2}+1\\ b+2\end{array};z\right].$$ Since $$\frac{\rho\left(x+1\right)}{\rho\left(x\right)}=\frac{z\left(x+a_{1}\right)% \left(x+a_{2}\right)}{\left(x+1\right)\left(x+b+1\right)},$$ we have $$\lambda\left(x\right)=z\left(x+a_{1}\right)\left(x+a_{2}\right),\quad\phi\left% (x\right)=x\left(x+b\right),$$ and $p=2,\quad q=1.$ We can now derive the Laguerre-Freud equations for the Generalized Hahn polynomials of type I. Theorem 4 The recurrence coefficients of the Generalized Hahn polynomials of type I satisfy the Laguerre-Freud equations $$\left(1-z\right)\nabla_{n}\left(\gamma_{n+1}+\gamma_{n}\right)=zv_{n}\nabla_{n% }\left(\beta_{n}+n\right)-u_{n}\nabla_{n}\left(\beta{}_{n}-n\right),$$ (34) $$\Delta_{n}\nabla_{n}\left[\left(u_{n}-zv_{n}\right)\gamma_{n}\right]=u_{n}% \nabla_{n}\left(\beta_{n}-n\right)+\nabla_{n}\left(\gamma_{n+1}+\gamma_{n}% \right).$$ (35) with initial conditions $\beta_{0}=\frac{\mu_{1}}{\mu_{0}}$ and $$\gamma_{1}=\frac{\left(a_{1}+a_{2}-b\right)\beta_{0}+a_{1}a_{2}}{1-z}-\left(% \beta_{0}+a_{1}\right)\left(\beta_{0}+a_{2}\right),$$ (36) where $$\displaystyle u_{n}$$ $$\displaystyle=\beta_{n}+\beta_{n-1}-n+b+1,$$ $$\displaystyle v_{n}$$ $$\displaystyle=\beta_{n}+\beta_{n-1}+n-1+a_{1}+a_{2},$$ and $$\nabla_{x}f(x)=f(x)-f(x-1).$$ Proof. 
From (18) and (19), we get $$A_{2}\left(n\right)=z,\quad A_{-2}\left(n\right)=\gamma_{n}\gamma_{n-1},$$ (37) while (17) gives: $$k=2:\quad A_{1}\left(n+1\right)-A_{1}\left(n\right)=z\left(1+\beta_{n+2}-\beta% _{n}\right),$$ (38) $$\begin{array}[c]{cc}k=1:&A_{0}\left(n+1\right)-A_{0}\left(n\right)=A_{1}\left(% n\right)\left(1+\beta_{n+1}-\beta_{n}\right)+z\left(\gamma_{n+2}-\gamma_{n}% \right),\\ k=0:&A_{-1}\left(n+1\right)-A_{-1}\left(n\right)=A_{0}\left(n\right)+A_{1}% \left(n\right)\gamma_{n+1}-A_{1}\left(n-1\right)\gamma_{n},\\ k=-1:&A_{-2}\left(n+1\right)-A_{-2}\left(n\right)\\ &=A_{-1}\left(n\right)\left(1+\beta_{n-1}-\beta_{n}\right)+\gamma_{n}\left[A_{% 0}\left(n\right)-A_{0}\left(n-1\right)\right],\end{array}$$ (39) and $$k=-2:\quad A_{-2}\left(n\right)\left(1+\beta_{n-2}-\beta_{n}\right)=A_{-1}% \left(n-1\right)\gamma_{n}-A_{-1}\left(n\right)\gamma_{n-1}.$$ (40) Solving (38) we get $$A_{1}\left(n\right)=A_{1}\left(0\right)+z\left(\beta_{n+1}+\beta_{n}+n-\beta_{% 0}-\beta_{1}\right).$$ (41) Setting $n=0$ in (14) we have $$z\left(x+a_{1}\right)\left(x+a_{2}\right)=A_{0}\left(0\right)+A_{1}\left(0% \right)P_{1}\left(x\right)+zP_{2}\left(x\right),$$ and using (3)-(4), we get $$A_{0}\left(0\right)=z\left[a_{1}a_{2}+\gamma_{1}+\left(a_{1}+a_{2}\right)\beta% _{0}+\beta_{0}^{2}\right],$$ (42) and $$A_{1}\left(0\right)=z\left(a_{1}+a_{2}+\beta_{0}+\beta_{1}\right).$$ (43) Using (43) in (41), we obtain $$A_{1}\left(n\right)=z\left(\beta_{n+1}+\beta_{n}+n+a_{1}+a_{2}\right).$$ (44) If we use (37) in (40), we get $$1+\beta_{n-2}-\beta_{n}=\frac{A_{-1}\left(n-1\right)}{\gamma_{n-1}}-\frac{A_{-% 1}\left(n\right)}{\gamma_{n}},$$ and summing from $n=2$ we see that $$n-1+\beta_{0}+\beta_{1}-\beta_{n-1}-\beta_{n}=\frac{A_{-1}\left(1\right)}{% \gamma_{1}}-\frac{A_{-1}\left(n\right)}{\gamma_{n}}.$$ (45) Setting $n=0$ in (15), we have $$x\left(x+b\right)=\left(x-\beta_{1}\right)\left(x-\beta_{0}\right)-\gamma_{1}+% B_{1}\left(0\right)\left(x-\beta_{0}\right)+B_{0}\left(0\right)$$ and hence $$B_{1}\left(0\right)=\beta_{0}+\beta_{1}+b,$$ (46) $$B_{0}\left(0\right)=\beta_{0}^{2}+b\beta_{0}+\gamma_{1}.$$ (47) Using (16) with $k=-1$ and (46), we obtain $$A_{-1}\left(1\right)=\gamma_{1}B_{1}\left(0\right)=\gamma_{1}\left(\beta_{0}+% \beta_{1}+b\right).$$ (48) Combining (45) and (48), we conclude that $$A_{-1}\left(n\right)=\gamma_{n}\left(\beta_{n}+\beta_{n-1}-n+b+1\right).$$ (49) If we introduce the functions $$\displaystyle u_{n}$$ $$\displaystyle=\frac{A_{-1}\left(n\right)}{\gamma_{n}}=\beta_{n}+\beta_{n-1}-n+% b+1,$$ $$\displaystyle v_{n}$$ $$\displaystyle=\frac{A_{1}\left(n-1\right)}{z}=\beta_{n}+\beta_{n-1}+n-1+a_{1}+% a_{2},$$ and use (44),(49) in (39), we get $$\displaystyle\nabla_{n}A_{0}$$ $$\displaystyle=zv_{n}\nabla_{n}\left(\beta_{n}+n\right)+z\nabla_{n}\left(\gamma% _{n+1}+\gamma_{n}\right),$$ $$\displaystyle A_{0}$$ $$\displaystyle=\Delta_{n}\left[\left(u_{n}-zv_{n}\right)\gamma_{n}\right],$$ (50) $$\displaystyle\nabla_{n}A_{0}$$ $$\displaystyle=u_{n}\nabla_{n}\left(\beta_{n}-n\right)+\nabla_{n}\left(\gamma_{% n+1}+\gamma_{n}\right).$$ Using (16) with $k=0$ and (47), we obtain $$A_{0}\left(0\right)=B_{0}\left(0\right)=\beta_{0}^{2}+b\beta_{0}+\gamma_{1}.$$ (51) From (42) and (51) we have $$\left(1-z\right)\left[\gamma_{1}+\left(\beta_{0}+a_{1}\right)\left(\beta_{0}+a% _{2}\right)\right]=\left(a_{1}+a_{2}-b\right)\beta_{0}+a_{1}a_{2}.$$ (52) Finally, if we eliminate $A_{0}$ from (50), we conclude that $$zv_{n}\nabla_{n}\left(\beta_{n}+n\right)+z\nabla_{n}\left(\gamma_{n+1}+\gamma_% 
{n}\right)=u_{n}\nabla_{n}\left(\beta_{n}-n\right)+\nabla_{n}\left(\gamma_{n+1% }+\gamma_{n}\right)$$ and $$\displaystyle\Delta_{n}\left[\left(u_{n}-zv_{n}\right)\gamma_{n}\right]-\Delta% _{n}\left[\left(u_{n-1}-zv_{n-1}\right)\gamma_{n-1}\right]$$ $$\displaystyle=u_{n}\nabla_{n}\left(\beta_{n}-n\right)+\nabla_{n}\left(\gamma_{% n+1}+\gamma_{n}\right)$$ or $$\Delta_{n}\nabla_{n}\left[\left(u_{n}-zv_{n}\right)\gamma_{n}\right]=u_{n}% \nabla_{n}\left(\beta_{n}-n\right)+\nabla_{n}\left(\gamma_{n+1}+\gamma_{n}% \right).$$   3.1 Hahn polynomials We now consider the case $z=1.$ Under the assumptions $$\operatorname{Re}\left(b-a_{1}-a_{2}\right)>0,\quad b-a_{1}-a_{2}\neq 1,2,\ldots,$$ the first two moments (33) are given by [33, 15.4(ii)] $$\displaystyle\mu_{0}$$ $$\displaystyle=\frac{\Gamma\left(b+1\right)\Gamma\left(b+1-a_{1}-a_{2}\right)}{% \Gamma\left(b+1-a_{1}\right)\Gamma\left(b+1-a_{2}\right)},$$ $$\displaystyle\mu_{1}$$ $$\displaystyle=\frac{a_{1}a_{2}}{b+1}\frac{\Gamma\left(b+2\right)\Gamma\left(b-% a_{1}-a_{2}\right)}{\Gamma\left(b-a_{1}\right)\Gamma\left(b-a_{2}\right)}.$$ Hence, $$\beta_{0}=\frac{\mu_{1}}{\mu_{0}}=\frac{a_{1}a_{2}}{b-a_{1}-a_{2}}.$$ (53) Note that we get the same result if we set $z=1$ in (52). Taking limits in (36) as $z\rightarrow 1^{-},$ we obtain $$\gamma_{1}=\frac{a_{1}a_{2}\left(b-a_{1}\right)\left(b-a_{2}\right)}{\left(b-a% _{1}-a_{2}\right)\left(b-1-a_{1}-a_{2}\right)}-a_{1}\frac{b-a_{1}}{b-a_{1}-a_{% 2}}a_{2}\frac{b-a_{2}}{b-a_{1}-a_{2}},$$ or $$\gamma_{1}=\frac{a_{1}a_{2}\left(b-a_{1}\right)\left(b-a_{2}\right)}{\left(b-a% _{1}-a_{2}\right)^{2}\left(b-a_{1}-a_{2}-1\right)},$$ (54) where we have used the formula [33, 15.5.1 ] $$\frac{d}{dz}\ _{2}F_{1}\left[\begin{array}[c]{c}a,\ b\\ c\end{array};z\right]=\frac{ab}{c}\ _{2}F_{1}\left[\begin{array}[c]{c}a+1,\ b+% 1\\ c+1\end{array};z\right].$$ When $z=1,$ the Laguerre-Freud equations (34)-(35) decouple, and we get $$u_{n}\nabla_{n}\left(\beta_{n}-n\right)=v_{n}\nabla_{n}\left(\beta_{n}+n\right),$$ (55) $$\Delta_{n}\nabla_{n}\left[\left(b-a_{1}-a_{2}+2-2n\right)\gamma_{n}\right]-% \nabla_{n}\left(\gamma_{n+1}+\gamma_{n}\right)=u_{n}\nabla_{n}\left(\beta_{n}-% n\right),$$ (56) since in this case $$u_{n}-v_{n}=b-a_{1}-a_{2}+2-2n.$$ Solving for $\beta_{n}$ in (55), we have $$\beta_{n}=\frac{2n+a_{1}+a_{2}-b-4}{2n+a_{1}+a_{2}-b}\beta_{n-1}-\allowbreak% \frac{a_{1}+a_{2}+b}{2n+a_{1}+a_{2}-b}.$$ (57) As it is well known, the general solution of the initial value problem $$y_{n+1}=c_{n}y_{n}+g_{n},\quad y_{n_{0}}=y_{0},$$ is [8, 1.2.4] $$y_{n}=y_{0}{\displaystyle\prod\limits_{j=n_{0}}^{n-1}}c_{j}+{\displaystyle\sum% \limits_{k=n_{0}}^{n-1}}\left(g_{k}{\displaystyle\prod\limits_{j=k+1}^{n-1}}c_% {j}\right).$$ Thus, the solution of (57) is given by $$\displaystyle\beta_{n}$$ $$\displaystyle=\frac{\left(a_{1}+a_{2}-b\right)\left(a_{1}+a_{2}-b-2\right)}{% \left(2n+a_{1}+a_{2}-b\right)\left(2n+a_{1}+a_{2}-b-2\right)}\beta_{0}$$ $$\displaystyle-\frac{\left(a_{1}+a_{2}+b\right)\left(a_{1}+a_{2}-b+n-1\right)}{% \left(2n+a_{1}+a_{2}-b\right)\left(2n+a_{1}+a_{2}-b-2\right)}n,$$ where we have used the identity $${\displaystyle\prod\limits_{k=n_{0}}^{n_{1}}}\frac{2n+K-2}{2n+K+2}=\frac{\left% (2n_{0}+K\right)\left(2n_{0}+K-2\right)}{\left(2n_{1}+K\right)\left(2n_{1}+K+2% \right)}.$$ If we use the initial condition (53), we conclude that $$\beta_{n}=\frac{(b+2-a_{1}-a_{2})a_{1}a_{2}-n\left(a_{1}+a_{2}+b\right)\left(n% +a_{1}+a_{2}-b-1\right)}{\left(2n+a_{1}+a_{2}-b\right)(2n+a_{1}+a_{2}-b-2)}\allowbreak.$$ Re-writing (56), we have 
$$\displaystyle\left(b-a_{1}-a_{2}-2n-1\right)\gamma_{n+1}-2\left(b-a_{1}-a_{2}-% 2n+2\right)\gamma_{n}$$ $$\displaystyle+\left(b-a_{1}-a_{2}-2n+5\right)\gamma_{n-1}=u_{n}\nabla_{n}\left% (\beta_{n}-n\right).$$ Summing from $n=1,$ we get $$\displaystyle\left(b-a_{1}-a_{2}-2n-1\right)\gamma_{n+1}+\left(a_{1}+a_{2}-b+2% n-3\right)\gamma_{n}$$ $$\displaystyle+\left(a_{1}+a_{2}-b+1\right)\gamma_{1}=-{\displaystyle\sum% \limits_{k=0}^{n-1}}\beta_{k}+\beta_{n}^{2}-\beta_{0}^{2}$$ $$\displaystyle+b\left(\beta_{n}-\beta_{0}-n\right)-n\beta_{n}+\frac{n\left(n-1% \right)}{2}.$$ The solution of this difference equation with initial condition (54) is $$\displaystyle\gamma_{n}$$ $$\displaystyle=-n\frac{(n+a_{1}-1)(n+a_{2}-1)(n+a_{1}-b-1)}{(2n+a_{1}+a_{2}-b-1% )(2n+a_{1}+a_{2}-b-3)}$$ $$\displaystyle\times\frac{(n+a_{2}-b-1)(n+a_{1}+a_{2}-b-2)}{(2n+a_{1}+a_{2}-b-2% )^{2}}.$$ We summarize the results in the following proposition. Proposition 5 The recurrence coefficients of the Hahn polynomials, orthogonal with respect to the weight function $$\rho\left(x\right)=\frac{\left(a_{1}\right)_{x}\left(a_{2}\right)_{x}}{x!\left% (b+1\right)_{x}},$$ with $$\operatorname{Re}\left(b-a_{1}-a_{2}\right)>0,\quad b-a_{1}-a_{2}\neq 1,2,\ldots,$$ are given by $$\beta_{n}=\frac{(b+2-a_{1}-a_{2})a_{1}a_{2}-n\left(a_{1}+a_{2}+b\right)\left(n% +a_{1}+a_{2}-b-1\right)}{\left(2n+a_{1}+a_{2}-b\right)(2n+a_{1}+a_{2}-b-2)}\allowbreak,$$ (58) and $$\displaystyle\gamma_{n}$$ $$\displaystyle=-n\frac{(n+a_{1}-1)(n+a_{2}-1)(n+a_{1}-b-1)}{(2n+a_{1}+a_{2}-b-1% )(2n+a_{1}+a_{2}-b-3)}$$ (59) $$\displaystyle\times\frac{(n+a_{2}-b-1)(n+a_{1}+a_{2}-b-2)}{(2n+a_{1}+a_{2}-b-2% )^{2}}.$$ This family of orthogonal polynomials was introduced by Hahn in [15]. They have the hypergeometric representation [42] $$P_{n}\left(x\right)=\frac{\left(a_{1}\right)_{n}\left(a_{2}\right)_{n}}{\left(% n+a_{1}+a_{2}-b-1\right)_{n}}\ _{3}F_{2}\left[\begin{array}[c]{c}-n,\ -x,\ n+a% _{1}+a_{2}-b-1\\ a_{1},\ a_{2}\end{array};1\right],$$ from which (58) and (59) can be obtained using HolonomicFunctions. As we observed in [6], the finite family of polynomials that are usually called “Hahn polynomials” in the literature [33, 18.19] correspond to the special case $$a_{1}=\alpha+1,\quad a_{2}=-N,\quad b=-N-1-\beta.$$ 4 Conclusions We have presented a method that allows the computation of the recurrence coefficients of discrete orthogonal polynomials. In some cases, a closed-form expression can be given. We plan to extend the results to include other families of polynomials. References [1] M. J. Atia, F. Marcellán, and I. A. Rocha. On semiclassical orthogonal polynomials: a quasi-definite functional of class 1. Facta Univ. Ser. Math. Inform. (17), 13–34 (2002). [2] E. Azatassou, M. N. Hounkonnou, and A. Ronveaux. Laguerre-Freud equations for semi-classical operators. In “Contemporary problems in mathematical physics (Cotonou, 1999)”, pp. 336–346. World Sci. Publ., River Edge, NJ (2000). [3] S. Belmehdi and A. Ronveaux. Laguerre-Freud’s equations for the recurrence coefficients of semi-classical orthogonal polynomials. J. Approx. Theory 76(3), 351–368 (1994). [4] T. S. Chihara. “An introduction to orthogonal polynomials”. Gordon and Breach Science Publishers, New York-London-Paris (1978). [5] F. Chyzak. An extension of Zeilberger’s fast algorithm to general holonomic functions. Discrete Math. 217(1-3), 115–134 (2000). [6] D. Dominici. Polynomial sequences associated with the moments of hypergeometric weights. SIGMA Symmetry Integrability Geom. Methods Appl. 12, Paper No. 
044, 18 (2016). [7] D. Dominici and F. Marcellán. Discrete semiclassical orthogonal polynomials of class one. Pacific J. Math. 268(2), 389–411 (2014). [8] S. Elaydi. “An introduction to difference equations”. Undergraduate Texts in Mathematics. Springer, New York, third ed. (2005). [9] G. Filipuk and M. N. Rebocho. Discrete Painlevé equations for recurrence coefficients of Laguerre-Hahn orthogonal polynomials of class one. Integral Transforms Spec. Funct. 27(7), 548–565 (2016). [10] M. Foupouagnigni, M. N. Hounkonnou, and A. Ronveaux. Laguerre-Freud equations for the recurrence coefficients of $D_{\omega}$ semi-classical orthogonal polynomials of class one. In “Proceedings of the VIIIth Symposium on Orthogonal Polynomials and Their Applications (Seville, 1997)”, vol. 99, pp. 143–154 (1998). [11] G. Freud. On the coefficients in the recursion formulae of orthogonal polynomials. Proc. Roy. Irish Acad. Sect. A 76(1), 1–6 (1976). [12] W. Gautschi. Computational aspects of three-term recurrence relations. SIAM Rev. 9, 24–82 (1967). [13] W. Gautschi. “Orthogonal polynomials: computation and approximation”. Numerical Mathematics and Scientific Computation. Oxford University Press, New York (2004). [14] A. Gil, J. Segura, and N. M. Temme. “Numerical methods for special functions”. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (2007). [15] W. Hahn. über Orthogonalpolynome, die $q$-Differenzengleichungen genügen. Math. Nachr. 2, 4–34 (1949). [16] C. Hounga, M. N. Hounkonnou, and A. Ronveaux. Laguerre-Freud equations for the recurrence coefficients of some discrete semi-classical orthogonal polynomials of class two. In “Contemporary problems in mathematical physics”, pp. 412–419. World Sci. Publ., Hackensack, NJ (2006). [17] M. N. Hounkonnou, C. Hounga, and A. Ronveaux. Discrete semi-classical orthogonal polynomials: generalized Charlier. J. Comput. Appl. Math. 114(2), 361–366 (2000). [18] M. E. H. Ismail, S. J. Johnston, and Z. S. Mansour. Structure relations for $q$-polynomials and some applications. Appl. Anal. 90(3-4), 747–767 (2011). [19] M. E. H. Ismail and P. Simeonov. Nonlinear equations for the recurrence coefficients of discrete orthogonal polynomials. J. Math. Anal. Appl. 376(1), 259–274 (2011). [20] M. Kauers and P. Paule. “The concrete tetrahedron”. Texts and Monographs in Symbolic Computation. SpringerWienNewYork, Vienna (2011). [21] C. Koutschan. “Advanced Applications of the Holonomic Systems Approach”. ProQuest LLC, Ann Arbor, MI (2009). Thesis (Ph.D.)–Research Institute for Symbolic Computation, Johannes Kepler University Linz. [22] E. Laguerre. Sur la réduction en fractions continues d’une fraction qui satisfait à une équation différentialle linéaire du premier ordre dont les coefficients sont rationnels. J. Math. Pures Appl. (4) 1, 135–165 (1885). [23] A. P. Magnus. A proof of Freud’s conjecture about the orthogonal polynomials related to $|x|^{\rho}\mathrm{exp}(-x^{2m})$, for integer $m$. In “Orthogonal polynomials and applications (Bar-le-Duc, 1984)”, vol. 1171 of “Lecture Notes in Math.”, pp. 362–372. Springer, Berlin (1985). [24] A. P. Magnus. On Freud’s equations for exponential weights. J. Approx. Theory 46(1), 65–99 (1986). [25] A. P. Magnus. Painlevé-type differential equations for the recurrence coefficients of semi-classical orthogonal polynomials. In “Proceedings of the Fourth International Symposium on Orthogonal Polynomials and their Applications (Evian-Les-Bains, 1992)”, vol. 57, pp. 215–237 (1995). [26] A. P. Magnus. 
Freud’s equations for orthogonal polynomials as discrete Painlevé equations. In “Symmetries and integrability of difference equations (Canterbury, 1996)”, vol. 255 of “London Math. Soc. Lecture Note Ser.”, pp. 228–243. Cambridge Univ. Press, Cambridge (1999). [27] P. Maroni. Une théorie algébrique des polynômes orthogonaux. Application aux polynômes orthogonaux semi-classiques. In “Orthogonal polynomials and their applications (Erice, 1990)”, vol. 9 of “IMACS Ann. Comput. Appl. Math.”, pp. 95–130. Baltzer, Basel (1991). [28] P. Maroni. Semi-classical character and finite-type relations between polynomial sequences. Appl. Numer. Math. 31(3), 295–330 (1999). [29] P. Maroni and M. Mejri. The symmetric $D_{\omega}$-semi-classical orthogonal polynomials of class one. Numer. Algorithms 49(1-4), 251–282 (2008). [30] P. Maroni and M. Mejri. Some semiclassical orthogonal polynomials of class one. Eurasian Math. J. 2(2), 108–128 (2011). [31] J. Meixner. Orthogonale Polynomsysteme Mit Einer Besonderen Gestalt Der Erzeugenden Funktion. J. London Math. Soc. S1-9(1), 6 (1934). [32] F. W. J. Olver. Numerical solution of second-order linear difference equations. J. Res. Nat. Bur. Standards Sect. B 71B, 111–129 (1967). [33] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. “NIST handbook of mathematical functions”. U.S. Department of Commerce National Institute of Standards and Technology, Washington, DC (2010). [34] K. Pearson. Contributions to the Mathematical Theory of Evolution. II. skew Variation in Homogeneous Material. Philos. Trans. Roy. Soc. London Ser. A 186, 343–414 (1895). [35] M. Petkovˇsek, H. S. Wilf, and D. Zeilberger. “$A=B$”. A K Peters, Ltd., Wellesley, MA (1996). [36] E. D. Rainville. “Special functions”. The Macmillan Co., New York (1960). [37] A. Ronveaux. Discrete semiclassical orthogonal polynomials: generalized Meixner. J. Approx. Theory 46(4), 403–407 (1986). [38] C. Smet and W. Van Assche. Orthogonal polynomials on a bi-lattice. Constr. Approx. 36(2), 215–242 (2012). [39] P. E. Spicer and F. W. Nijhoff. Semi-classical Laguerre polynomials and a third-order discrete integrable equation. J. Phys. A 42(45), 454019, 9 (2009). [40] W. Van Assche. Discrete Painlevé equations for recurrence coefficients of orthogonal polynomials. In “Difference equations, special functions and orthogonal polynomials”, pp. 687–725. World Sci. Publ., Hackensack, NJ (2007). [41] W. Van Assche and M. Foupouagnigni. Analysis of non-linear recurrence relations for the recurrence coefficients of generalized Charlier polynomials. J. Nonlinear Math. Phys. 10(suppl. 2), 231–237 (2003). [42] M. Weber and A. Erdélyi. On the finite difference analogue of Rodrigues’ formula. Amer. Math. Monthly 59, 163–168 (1952). [43] J. Wimp. “Computation with recurrence relations”. Applicable Mathematics Series. Pitman (Advanced Publishing Program), Boston, MA (1984). [44] D. Zeilberger. A holonomic systems approach to special functions identities. J. Comput. Appl. Math. 32(3), 321–368 (1990).
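A short symbolic check of Proposition 5 above. Assuming the standard monic normalization, in which the polynomials satisfy $P_{n+1}(x)=(x-\beta_{n})P_{n}(x)-\gamma_{n}P_{n-1}(x)$ (this convention is not restated in the excerpt above, so it is taken here as an assumption), the coefficients (58)–(59) can be tested directly against the hypergeometric representation. A minimal sketch in Python with sympy, for arbitrarily chosen admissible parameter values:

import sympy as sp

x = sp.Symbol('x')
# Arbitrary sample parameters; any values avoiding the excluded cases will do.
a1, a2, b = sp.Rational(1, 2), sp.Rational(3, 4), sp.Rational(7, 2)

def P(n):
    """Monic polynomial from the terminating 3F2 representation quoted above."""
    pref = sp.rf(a1, n) * sp.rf(a2, n) / sp.rf(n + a1 + a2 - b - 1, n)
    terms = sum(sp.rf(-n, k) * sp.rf(-x, k) * sp.rf(n + a1 + a2 - b - 1, k)
                / (sp.rf(a1, k) * sp.rf(a2, k) * sp.factorial(k))
                for k in range(n + 1))
    return sp.expand(pref * terms)

def beta(n):
    """Eq. (58)."""
    return ((b + 2 - a1 - a2) * a1 * a2
            - n * (a1 + a2 + b) * (n + a1 + a2 - b - 1)) / \
           ((2 * n + a1 + a2 - b) * (2 * n + a1 + a2 - b - 2))

def gamma(n):
    """Eq. (59)."""
    return -n * (n + a1 - 1) * (n + a2 - 1) * (n + a1 - b - 1) \
              * (n + a2 - b - 1) * (n + a1 + a2 - b - 2) / \
           ((2 * n + a1 + a2 - b - 1) * (2 * n + a1 + a2 - b - 3)
            * (2 * n + a1 + a2 - b - 2) ** 2)

# Residual of the assumed monic recurrence x P_n = P_{n+1} + beta_n P_n + gamma_n P_{n-1}.
for n in range(1, 5):
    residual = sp.expand(x * P(n) - P(n + 1) - beta(n) * P(n) - gamma(n) * P(n - 1))
    print(n, residual)   # each residual should reduce to 0

With these parameter values each residual expands to zero; other admissible parameter choices can be substituted in the same way.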
Einstein’s Approach to Statistical Mechanics: The 1902–04 Papers Luca Peliti (M. A. and H. Chooljan Member, Simons Center for Systems Biology, Institute for Advanced Study, Einstein Drive, Princeton NJ 08540, USA; email: luca@peliti.org) and Raúl Rechtman (Instituto de Energías Renovables, Universidad Nacional Autónoma de México, Priv. Xochicalco S/N, Temixco, Morelos 62580, México; email: rrs@ier.unam.mx) (Received: date / Accepted: date) Abstract We summarize the papers published by Einstein in the Annalen der Physik in the years 1902–04 on the derivation of the properties of thermal equilibrium on the basis of the mechanical equations of motion and of the calculus of probabilities. We point out the line of thought that led Einstein to an especially economical foundation of the discipline, and to focus on fluctuations of the energy as a possible tool for establishing the validity of this foundation. We also sketch a comparison of Einstein’s approach with that of Gibbs, suggesting that although they obtained similar results, they had different motivations and interpreted them in very different ways. Keywords: Foundations of statistical mechanics, ensemble theory, thermodynamics, fluctuations, Einstein. pacs: 01.65.+g 05.20.Gg journal: Journal of Statistical Physics 1 Introduction By the end of June 1902, just after being accepted as Technical Assistant level III at the Federal Patent Office in Bern, Albert Einstein, 23, sent to the renowned journal Annalen der Physik a manuscript with the bold title “Kinetic Theory of Thermal Equilibrium and of the Second Law of Thermodynamics” Einstein02 . In the introduction, he explains that he wishes to fill a gap in the foundations of the general theory of heat, “for one has not yet succeeded in deriving the laws of thermal equilibrium and the second law of thermodynamics using only the equations of mechanics and the probability calculus”. He also announces “an extension of the second law that is of importance for the application of thermodynamics”. Finally, he will provide “the mathematical expression of the entropy from the standpoint of mechanics”. Einstein’s papers and their translations are available on the Princeton University Press site Princeton . In the following two years Einstein followed this line of research, publishing a paper each year Einstein03 ; Einstein04 . The third one, entitled “On the general molecular theory of heat”, submitted on March 27, 1904, opened a new path, by tacitly extending the results obtained for a general mechanical system (with a large, but finite, number of degrees of freedom) to the case of black-body radiation. In pursuing this line of research Einstein found an unexpected result that pointed to an inconsistency between the current understanding of the processes of light emission and absorption and the statistical approach. To resolve this inconsistency, in the first paper Einstein05a of his “Annus Mirabilis” 1905, Einstein renounced the detailed picture of light emission and absorption provided by Maxwell’s equations, maintaining his statistical approach, in particular the statistical interpretation of entropy.
He introduced therefore the concept of light quanta, presented as a “heuristic point of view”. The importance of the 1902–04 papers on the molecular theory of heat in Einstein’s intellectual development and in the advance of physics has been stressed by Kuhn (Kuhn, p. 171), when he states that What brought Einstein to the blackbody problem in 1904 and to Planck in 1906 was the coherent development of a research program begun in 1902, a program so nearly independent of Planck’s that it would almost certainly have led to the blackbody law even if Planck had never lived. In spite of their importance, the 1902–04 papers have received comparatively little attention. One of the reasons was the publication in 1902 of Gibbs’ Elementary Principles in Statistical Mechanics Gibbs . This book is considered, especially since the publication of the influential book by R. C. Tolman Tolman , as the founding text of the discipline. Einstein himself contributed to the neglect of the 1902–04 papers. In his answer to Paul Hertz’ criticism of his derivation of the second principle Einstein11 , he says I only wish to add that the road taken by Gibbs in his book, which consists in one’s starting directly from the canonical ensemble, is in our opinion preferable to the road I took. If I had known Gibbs’ book at that time, I would have not published these papers at all, but I would have limited myself to the treatment of a few points. In his scientific autobiography (Einstein49, p. 47) Einstein returned to this point, saying Not acquainted with the earlier investigations by Boltzmann and Gibbs, which had appeared earlier and actually exhausted the subject, I developed the statistical mechanics and molecular-kinetic theory of thermodynamics which was based on the former. My major aim in this was to find facts which would guarantee as much as possible the existence of atoms of definite size. The last sentence of this quotation highlights the different attitude of Einstein with respect to Gibbs. Einstein aims at using the statistical approach to establish the reality of atoms, while Gibbs aims at a rational foundation of thermodynamics, and consequently focuses on the regularities which emerge in systems with many degrees of freedom. Einstein’s papers contain a more direct and fundamental approach to the statistical mechanics of equilibrium, and could actually suggest a didactically effective path to the introduction of the fundamental ideas of the field. We shall therefore attempt to ease their reading by summarizing them, pointing out in particular the differences between Einstein’s and Gibbs’ points of view. We shall not try to discuss all the detailed analyses of the papers which have appeared in the literature (beyond Kuhn’s work Kuhn , one can also read Mehra75 ; BaraccaRechtman ; Gearhart ; Navarro ; Uffink ; Inaba ), but shall only refer to the more interesting observations. 2 Kinetic theory of thermal equilibrium and of the second principle of thermodynamics The first two papers Einstein02 ; Einstein03 have a very similar structure. The second paper aims to widen the scope of the first, by attempting to consider “general” dynamical systems and irreversible processes. We shall follow the first paper, and we shall then briefly review the points in which the second paper differs. We adapt Einstein’s discussion to modern notation.
Einstein begins by considering a general physical system as represented by a mechanical system with many coordinates $q=(q_{1},\ldots,q_{n})$ and the corresponding momenta $p=(p_{1},\ldots,p_{n})$, obeying the canonical equations of motion with a time-independent Hamiltonian that is the sum of a potential energy (function of the $q$’s alone) and of a kinetic energy that is a quadratic function of the $p$’s, whose coefficients are arbitrary functions of the $q$’s (and is implicitly supposed to be positive definite). Following Gibbs, we shall call the $p$’s and $q$’s collectively the phase variables, and the space they span the phase space. Einstein then considers a very large number $N$ of such systems, with the same Hamiltonian, whose energies $E$ lie between two very close values $\overline{E}$ and $\overline{E}+\delta E$. He then looks for the stationary distribution of these systems in phase space. Here Einstein introduces a strong mechanical hypothesis by assuming that, apart from the energy, there is no other function defined on the phase space that is constant in time. (This is the fundamental hypothesis linking the mechanical and the statistical aspects of the problem. It is probably inspired by the consideration of monocyclic systems, introduced by Helmholtz Helmholtz and discussed by Boltzmann in BoltzmannMonocyclic . Cf. KleinMonocyclic and GallavottiErgodic .) He argues that this condition is equivalent to the requirement that the stationary distribution of the systems in phase space depends only on the value of the energy. He proves indeed that if there are other functions $\phi(q,p)$ that are constants of the motion, the stationary distribution is not uniquely identified by the value of the energy, but does not attempt to prove the converse. He then shows that Liouville’s theorem implies that the local density of systems in phase space is constant in time and therefore, by the mentioned hypothesis, must be a function of the energy alone. Since the energies of all $N$ systems are infinitely close to one another, this density must be uniform on the region of phase space defined by the corresponding value of the Hamiltonian. In this way Einstein has defined what is now called the microcanonical ensemble, i.e., the distribution in phase space which is uniform when the energy of the system lies between two closely lying values, and vanishes otherwise. Einstein now turns to the consideration of thermal equilibrium between one system $\mathcal{S}$ and a considerably larger one, $\Sigma$. (Einstein actually considers two systems with the same number of degrees of freedom, but where the energy contained in $\Sigma$ is considerably larger. Apparently the equipartition theorem, which he derives in § 6 of the paper, led him to realize the awkwardness of this restriction, and he drops it in the second paper.) The second system acts as a thermal reservoir, and the first one as a thermometer. He assumes that the total energy $\mathcal{E}$ of the global system $\mathcal{S}\cup\Sigma$ can be written as $$\mathcal{E}=E+H,$$ (1) up to negligible terms, where $E$ pertains to $\mathcal{S}$ and $H$ to $\Sigma$. Let the phase variables of $\mathcal{S}$ be denoted by $(p,q)$ and those of $\Sigma$ by $(\pi,\chi)$. The question is now to find the distribution of the phase variables of $\mathcal{S}$ when the energy of the global system lies between $\mathcal{E}_{0}$ and $\mathcal{E}_{0}+\delta\mathcal{E}$, while the phase variables of $\Sigma$ can take on any values.
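Before following Einstein’s derivation, the result he is after can be illustrated numerically in a toy case (this example is ours, not the paper’s): for a purely kinetic Hamiltonian, sampling the composite system uniformly on its energy shell amounts to sampling uniformly on a sphere in momentum space, and the energy of a small group of momenta, playing the role of $\mathcal{S}$, is then distributed essentially canonically. A minimal sketch in Python:

import numpy as np

rng = np.random.default_rng(0)

# Toy version of Einstein's setup: a composite system of n momentum-like
# degrees of freedom with purely quadratic energy E = (1/2) sum_i p_i^2,
# sampled uniformly on the shell E = E_tot (microcanonical ensemble).
n = 500                 # total number of degrees of freedom (reservoir + subsystem)
beta = 2.0              # target inverse temperature (arbitrary units)
E_tot = n / (2 * beta)  # total energy chosen so that <E> per dof is 1/(2 beta)
n_sub = 3               # the small subsystem S: its first 3 momenta
samples = 20_000

# Uniform sampling on the sphere p.p = 2*E_tot: normalize Gaussian vectors.
p = rng.standard_normal((samples, n))
p *= np.sqrt(2 * E_tot) / np.linalg.norm(p, axis=1, keepdims=True)

# Energy of the subsystem in each microcanonical sample.
u = 0.5 * np.sum(p[:, :n_sub] ** 2, axis=1)

# Canonical predictions for 3 quadratic degrees of freedom at temperature 1/beta:
# <u> = 3/(2 beta), Var(u) = 3/(2 beta^2).
print("mean u :", u.mean(), " canonical:", n_sub / (2 * beta))
print("var  u :", u.var(),  " canonical:", n_sub / (2 * beta**2))

For large $n$ the printed sample mean and variance approach the canonical values $3/(2\beta)$ and $3/(2\beta^{2})$, consistent with the distribution that Einstein derives next.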
As pointed out by Uffink Uffink , this problem was considered several times by Boltzmann, who almost always solved it by taking an ideal gas for $\Sigma$ and explicitly evaluating the resulting phase-space integral. Einstein instead introduces an elegant trick which leads directly to the desired result. Let us consider an infinitesimally small domain $g$ in the phase space of the global system $\mathcal{S}\cup\Sigma$, with energy $\mathcal{E}$ between $\mathcal{E}_{0}$ and $\mathcal{E}_{0}+\delta\mathcal{E}$. Then the number $dN$ of systems of the ensemble which are found in $g$ is $$dN=A\int_{g}dp\,dq\;d\pi\,d\chi,$$ (2) where $A$ is a constant. Actually one can choose instead of $A$ any function of the total energy $\mathcal{E}$ which takes the value $A$ for $\mathcal{E}=\mathcal{E}_{0}$. Let us thus set (Einstein actually uses the notation $2h$ instead of the now-traditional $\beta$) $$A=A^{\prime}\,e^{-\beta\,\mathcal{E}_{0}}=A^{\prime}\,e^{-\beta\,E}e^{-\beta\,H},$$ (3) where $\beta$ is a constant. Thus the number $dN^{\prime}$ of systems such that the phase variables of $\mathcal{S}$ lie in a region of volume $dp\;dq$ around the point $(p,q)$, while the variables of $\Sigma$ can have any value, as long as $\mathcal{E}$ lies between $\mathcal{E}_{0}$ and $\mathcal{E}_{0}+\delta\mathcal{E}$, is given by $$dN^{\prime}=A^{\prime}e^{-\beta E}\,dp\,dq\int e^{-\beta H}\,d\pi\,d\chi,$$ (4) where the integral runs over all values of the phase variables of $\Sigma$ such that the values of its Hamiltonian $H$ lie between $H_{0}$ and $H_{0}+\delta\mathcal{E}$, and $$H_{0}=\mathcal{E}_{0}-E.$$ (5) The value of the constant $\beta$ can be fixed by requiring that the integral appearing on the right-hand side of equation (4) be independent of $E$. Indeed, once $\delta\mathcal{E}$ is fixed, the integral can be considered as a function $\Phi(H)$ of $H$ alone. Thus, since $E\ll\mathcal{E}_{0}$, we have $$\Phi(H_{0})=\Phi(\mathcal{E}_{0}-E)\simeq\Phi(\mathcal{E}_{0})-E\,\Phi^{\prime}(\mathcal{E}_{0}),$$ (6) where $\Phi^{\prime}$ is the derivative of $\Phi$ with respect to its argument. Independence of $E$ thus requires $\Phi^{\prime}(\mathcal{E}_{0})=0$. We can write however $$\Phi(H)=e^{-\beta H}\cdot\omega(H),$$ (7) where $\omega(H)=\int d\pi_{1}\cdots d\chi_{n}$, with the integral extended to the region in phase space such that the energy of $\Sigma$ lies between $H$ and $H+\delta\mathcal{E}$. The condition now reads $$e^{-\beta\mathcal{E}_{0}}\omega(\mathcal{E}_{0})\left[-\beta+\frac{\omega^{\prime}(\mathcal{E}_{0})}{\omega(\mathcal{E}_{0})}\right]=0,$$ (8) where $\omega^{\prime}$ is the derivative of $\omega$ with respect to its argument. We therefore obtain the required condition for $\beta$ in the form $$\beta=\frac{\omega^{\prime}(\mathcal{E}_{0})}{\omega(\mathcal{E}_{0})}.$$ (9) Einstein now turns to show that the quantity $\beta$ is always positive. He first derives a lemma, by considering a general (positive definite) quadratic function $\varphi(x_{1},\ldots,x_{n})$ of $n$ variables (where $n$ is large enough), and defining the function $z(y)$ by the integral $$z(y)=\int dx_{1}\cdots dx_{n},$$ (10) where the integral is extended to all points for which $\varphi$ lies between $y$ and $y+\Delta$, where $\Delta$ is fixed. He then easily shows that, for $n\geq 3$, $z(y)$ is an increasing function of $y$. Let us now denote by $\Gamma(H)$ the phase space available to the larger system $\Sigma$ when the values of its Hamiltonian lie between $H$ and $H+\delta\mathcal{E}$.
The Hamiltonian of $\Sigma$ is given by the sum of the potential energy, which depends only on the coordinates, and of the kinetic energy, which is a quadratic form in the momenta, whose coefficients depend only on the coordinates. Let $H_{0}$ and $H_{1}$ be two values of $H$, with $H_{1}>H_{0}$, and let $\Gamma(H_{0})$ and $\Gamma(H_{1})$ be the corresponding available space regions. Let $Q(H_{0})$ be the region of coordinate space such that the potential energy of the system is smaller than $H_{0}$. Thus if the point $(\pi,\chi)$ belongs to $\Gamma(H_{0})$, the point $(\chi)$ belongs to $Q(H_{0})$. Within $\Gamma(H_{1})$ let us identify the region $\Gamma^{\prime}(H_{1})$ where the coordinates $\chi$ belong to $Q(H_{0})$. Thus, for each such value of the coordinates, since the total energy is larger than $H_{0}$, the kinetic energy must be larger than the corresponding one in $\Gamma(H_{0})$. Therefore, by the lemma on the monotonic increase of $z(y)$ with $y$, for each such point in coordinate space, the volume available to the momenta is larger for $\Gamma^{\prime}(H_{1})$ than for $\Gamma(H_{0})$. Integrating over the coordinates we obtain that the volume of $\Gamma^{\prime}(H_{1})$ must be larger than that of $\Gamma(H_{0})$. Since the volume of the region of $\Gamma(H_{1})$ that does not belong to $\Gamma^{\prime}(H_{1})$ cannot be negative, the volume of $\Gamma(H_{1})$ must be larger than that of $\Gamma(H_{0})$, i.e., the function $\omega(H)$ increases with $H$, and $\beta$ given by the above expression must be positive. Einstein now derives what is today known as the zeroth law of thermodynamics. Since $\beta$ depends only on the state of $\Sigma$, but determines the distribution of $\mathcal{S}$ in state space, independently of how $\Sigma$ and $\mathcal{S}$ interact, it follows that if a given system $\Sigma$ interacts with two small systems $\mathcal{S}$ and $\mathcal{S}^{\prime}$ and is in equilibrium with them, $\mathcal{S}$ and $\mathcal{S}^{\prime}$ must have the same value of $\beta$. In particular, if $\mathcal{S}$ and $\mathcal{S}^{\prime}$ are mechanically identical, the average value of any arbitrary observable function $A(p,q)$ must be equal in $\mathcal{S}$ and $\mathcal{S}^{\prime}$. Einstein then calls $\mathcal{S}$ and $\mathcal{S}^{\prime}$ thermometers, $\beta$ the temperature function and the average of $A$ the temperature measure. Then Einstein goes on to prove the converse result, namely that if two systems that have the same values of $\beta$ are put in contact, they will be in thermal equilibrium. He considers two systems, $\Sigma_{1}$ and $\Sigma_{2}$, weakly interacting. Let each of them be in contact with an (infinitesimally) small thermometer $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$. The temperature measures $A_{1}$ and $A_{2}$ in each thermometer will be the same, since we are in fact dealing with a single interacting system in thermal equilibrium, and therefore also the corresponding temperature functions $\beta_{1}$ and $\beta_{2}$ will be equal. Let the interaction terms between $\Sigma_{1}$ and $\Sigma_{2}$ be slowly brought to zero. Then the readings of the thermometers will remain equal, but now the reading of $\mathcal{S}_{1}$ deals only with $\Sigma_{1}$ and that of $\mathcal{S}_{2}$ only with $\Sigma_{2}$. The process is reversible, since we are dealing with a sequence of thermal equilibrium states. Thus, by reversing it, we obtain the required result. As an immediate consequence, we obtain that if $\Sigma_{1}$ and $\Sigma_{2}$ are in thermal equilibrium, and so are $\Sigma_{2}$ and $\Sigma_{3}$, then $\Sigma_{1}$ and $\Sigma_{3}$ are in thermal equilibrium, since they share the same value of $\beta$.
Einstein concludes this section with the intriguing remark: I would like to note here that until now we have made use of the assumption that our systems are mechanical only inasmuch as we have applied Liouville’s theorem and the energy principle. Probably the basic laws of the theory of heat can be developed for systems that are defined in a much more general way. We will not attempt to do this here, but will rely on the equations of mechanics. We will not deal here with the important question as to how far the train of thought can be separated from the model employed and generalized. Uffink Uffink has remarked that “this quote indicates (with hindsight) a remarkable underestimation of the logical dependence of [Einstein’s] approach on the ergodic hypothesis.” But the passage shows, as also stressed by Uffink, that already in 1902 Einstein was considering the need to extend the statistical approach beyond its application to mechanical systems, no matter how general they can be conceived. A simple calculation allows Einstein to derive the equipartition theorem in the following form. Let the kinetic energy of a system be represented by a quadratic expression of the form $$K=\frac{1}{2}\left(\alpha_{1}p_{1}^{2}+\cdots+\alpha_{n}p_{n}^{2}\right),$$ (11) where $\alpha_{i}$, $i=1,\ldots,n$, are positive constants or functions of the coordinates $q$. This form can always be reached from a general quadratic expression by a suitable canonical transformation. The $p$ variables had been denoted as “momentoids” by Boltzmann. Then the average of $K$ at equilibrium is given by $$\left<{K}\right>=\frac{n}{2\beta}.$$ (12) In particular, this result implies that the kinetic energy of a single molecule in an ideal gas is equal to $3/(2\beta)$ on average. Kinetic theory teaches us that this quantity is proportional to the product of the pressure and the volume per particle in an ideal gas. Since this is proportional to the absolute temperature $T$, we obtain $$\frac{1}{\beta}=k_{\mathrm{B}}T=\frac{\omega(H)}{\omega^{\prime}(H)},$$ (13) where $k_{\mathrm{B}}$ is Boltzmann’s constant and $\omega(H)$ is the volume of phase space contained by the equal-energy surfaces of $\Sigma$ corresponding to the values $H$ and $H+\delta\mathcal{E}$. Having found the relation between $\beta$ and the temperature, Einstein proceeds to the derivation of the second law of thermodynamics, which he here limits to the statement of the integrability of heat divided by the absolute temperature. He switches back to a Lagrangian setting, describing the system’s state by the coordinates $q$ and their time derivatives $\dot{q}$, and introduces externally applied forces. These forces are split into ones derived from a potential depending on the $q$’s, and others that allow for heat transfer. The first ones are assumed to vary slowly with time, while the second ones change very rapidly. The infinitesimal heat $\delta Q$ is defined as the work of the second type of forces. Then a reversible transformation is one in which the system is led from an equilibrium state with given values of $\beta$ and of the volume $V$ to one with the values $\beta+\delta\beta$ and $V+\delta V$. Here Einstein tacitly assumes that the time average of the relevant quantities in a slow transformation can be obtained by averaging the same quantity over the distribution of the $N$ systems in phase space. 
He thus finds that $$\frac{\delta Q}{T}=d\left(\frac{\left<{E}\right>-F}{T}\right),$$ (14) where $\left<{E}\right>$ is the average total energy of the system, and $F$ is a constant introduced so that the distribution $P(p,q)=e^{-\beta(E(p,q)-F)}$ is normalized. Einstein remarks that this expression contains the total energy, and is independent of its splitting into kinetic and potential terms. (This will be the starting point of his 1903 paper.) One can readily integrate this expression, obtaining an explicit form of the entropy $S$: $$S=\frac{\left<{E}\right>-F}{T}=\frac{\left<{E}\right>}{T}+k_{\mathrm{B}}\log\int e^{-\beta E(p,q)}\,dp\,dq+\text{const.}$$ (15) Now Einstein states the announced generalization of the second principle. It is worth quoting this short paragraph in its entirety. Einstein denotes by $V_{a}$ the potential of the conservative forces performing the reversible transformation. He then states No assumptions had to be made about the nature of the forces that correspond to the potential $V_{a}$ [the conservative ones], not even that such forces occur in nature. Thus, the mechanical theory of heat requires that we arrive at correct results if we apply Carnot’s principle to ideal processes, which can be produced from the observed processes by introducing arbitrarily chosen $V_{a}$’s. Of course, the results obtained from the theoretical consideration of those processes can have real meaning only when the ideal auxiliary forces $V_{a}$ no longer appear in them. Thus the strategy which led to the establishment of the Einstein relation in Brownian motion, in the 1905 paper, is already sketched in this one. 3 A theory of the foundations of thermodynamics In his 1903 memoir, entitled “A theory of the foundations of thermodynamics” Einstein03 , Einstein asks whether kinetic theory is essential for the derivation of the postulates of thermal equilibrium and of the entropy concept, or whether “assumptions of a more general nature” could be sufficient. He goes on therefore to consider a general dynamical system whose state is identified by a collection $p$ of variables $p=(p_{1},\ldots,p_{n})$, which correspond to both coordinates and momenta for a mechanical system, and evolve by a general system of equations of motion of the kind $$\frac{dp_{i}}{dt}=\varphi_{i}(p_{1},\ldots,p_{n});\qquad i=1,\ldots,n.$$ (16) Assuming that the system allows for a unique integral of motion, the energy $E(p)$, he then introduces the equilibrium postulate, according to which a “physical system” eventually reaches a time-independent macroscopic state, in which any “perceptible quantity” assumes a time-independent value. Einstein then looks for the stationary distribution of a collection of $N$ systems, with $N$ large. Each system evolves according to equations (16) and has an energy between $E$ and $E+\delta E$. He claims that the equilibrium postulate, along with the absence of any integral of motion beyond the energy, implies the existence of a well-defined probability distribution in $p$-space. Einstein’s argument reads Starting at an arbitrary point of time and throughout time $\mathcal{T}$, we consider a physical system which is represented by the equations (16) and has the energy $E$.
If we imagine having chosen some arbitrary region $\Gamma$ of the state variables $p_{1}\ldots p_{n}$, then at a given instant of time $\mathcal{T}$ the values of the variables $p_{1}\ldots p_{n}$ will lie within the chosen region $\Gamma$ or outside it; hence, during a fraction of the time $\mathcal{T}$, which we will call $\tau$, they will lie in the chosen region $\Gamma$. Our condition then reads as follows: If the $p_{1}\ldots p_{n}$ are state variables of a physical system, i.e., of a system that assumes a stationary state, then for each region $\Gamma$ the quantity $\tau/\mathcal{T}$ has a definite limiting value for $\mathcal{T}=\infty$. For each infinitesimally small region this value is infinitesimally small. Thus the stationary distribution is identified by a function $\epsilon(p_{1},\ldots,p_{n})$ such that the number $dN$ of systems which at any given instant in time are found in the infinitesimal region $g$ located around $(p_{1},\ldots,p_{n})$ is given by $$dN=\epsilon(p_{1},\ldots,p_{n})\,dp_{1}\cdots dp_{n}.$$ (17) If this is true at a given instant $t$, then at a close instant $t+dt$ one has $$dN_{t+dt}=dN_{t}-\left(\sum_{\nu=1}^{n}\frac{\partial(\epsilon\varphi_{\nu})}{% \partial p_{\nu}}\right)dp_{1}\cdots dp_{n}.$$ (18) Since $dN_{t+dt}=dN_{t}$, by the stationarity of the distribution, one must have $$\sum_{\nu=1}^{n}\frac{\partial(\epsilon\varphi_{\nu})}{\partial p_{\nu}}=0.$$ (19) Then $$-\sum_{\nu=1}^{n}\frac{\partial\varphi_{\nu}}{\partial p_{\nu}}=\sum_{\nu=1}^{% n}\frac{\partial\log\epsilon}{\partial p_{\nu}}\varphi_{\nu}=\frac{d\log% \epsilon}{dt}.$$ (20) The solution of equation (20) is $$\epsilon=\exp\left[-\int dt\;\sum_{\nu=1}^{n}\frac{\partial\varphi_{\nu}}{% \partial p_{\nu}}+\psi(E)\right],$$ (21) where $\psi(E)$ is a time-independent integration constant that, by the previous hypotheses, can only depend on the $p$’s via the energy $E$. One thus obtains $$\epsilon=\text{const.}\times\exp\left[-\int dt\;\sum_{\nu=1}^{n}\frac{\partial% \varphi_{\nu}}{\partial p_{\nu}}\right]=\text{const.}\;e^{-m},$$ (22) where $m$ is given by $$m=\int dt\;\sum_{\nu=1}^{n}\frac{\partial\varphi_{\nu}}{\partial p_{\nu}}.$$ (23) Einstein now assumes that it is possible to introduce new state variables, denoted by $\pi_{1},\ldots,\pi_{n}$, such that the factor $e^{-m}$ is cancelled by the Jacobian of the transformation. With this transformation, one obtains a uniform stationary distribution in phase space. However it is clear that this transformation cannot be performed unless $m$ is time-independent, which implies $d(\log\epsilon)/dt=0$ throughout, i.e., a form of Liouville’s theorem. The oversight was realized by Einstein in March 1903, as witnessed by a letter to Michele Besso, (CPAE, , Vol. 5, Doc. 7) quoted by Uffink Uffink : If you look at my paper more closely, you will find that the assumption of the energy principle & of the fundamental atomistic idea alone does not suffice for an explanation of the second law; instead, coordinates $p$ must exist for the representation of things, such that for every conceivable total system $\sum\partial\phi_{\nu}/\partial p_{\nu}=0$. […] If that is true, then the entire generalization attained in my last paper consists in the elimination of the concept of force as well as in the fact that $E$ can possess an arbitrary form (not completely)? 
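The condition $\sum_{\nu}\partial\varphi_{\nu}/\partial p_{\nu}=0$ singled out in the letter is automatically satisfied by canonical (Hamiltonian) equations of motion. A minimal symbolic check of this fact (our illustration, with an arbitrarily chosen two-degree-of-freedom Hamiltonian):

import sympy as sp

# Check that Hamiltonian equations of motion are divergence-free, i.e. that
# sum_nu d(phi_nu)/d(p_nu) = 0, the condition quoted in the letter to Besso.
q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
H = (p1**2 + p2**2) / 2 + q1**4 + q1**2 * q2**2 + sp.cos(q2)   # arbitrary example

state = [q1, q2, p1, p2]
# phi = (dq1/dt, dq2/dt, dp1/dt, dp2/dt) from Hamilton's equations
phi = [sp.diff(H, p1), sp.diff(H, p2), -sp.diff(H, q1), -sp.diff(H, q2)]

divergence = sum(sp.diff(f, z) for f, z in zip(phi, state))
print(sp.simplify(divergence))   # prints 0: the flow preserves phase-space volume

The mixed second derivatives of $H$ cancel pairwise, which is Liouville’s theorem in precisely the form that Einstein’s argument requires.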
The sections that immediately follow, on the distribution of a system in contact with a reservoir, on the absolute temperature and thermal equilibrium, and on the definition of “infinitely slow” (quasistationary) processes, are not fundamentally different from the corresponding sections of the 1902 memoir. The derivation of the mechanical expression of the entropy is however slightly different, in particular because the possibility of resorting to the Lagrangian formulation is no longer available. Einstein considers a situation in which the functions $\varphi_{\nu}$ which appear on the right-hand side of the equations (16) depend not only on the coordinates $p_{\nu}$, but also on some parameters $\lambda$. He then considers an infinitely-slow infinitesimal transformation, subdividing it into an isopycnic process, in which the $\lambda$’s are kept constant, but the system is put in thermal contact with a system at a different temperature, and an adiabatic process, in which the system is isolated, but the $\lambda$’s are allowed to vary. The energy change $dE$ is given in general by $$dE=\sum\frac{\partial E}{\partial\lambda}d\lambda+\sum_{\nu}\frac{\partial E}{\partial p_{\nu}}dp_{\nu}.$$ (24) In an isopycnic process the first term on the right-hand side of this equation vanishes, but the second term can be different from zero, since the equations of motion (16), which conserve $E$, do not hold when the system is not isolated. In an adiabatic process, on the other hand, the second term vanishes, since the equations of motion (16) satisfy energy conservation, but at the same time one has $dQ=0$. One can therefore write in general $$dQ=\sum_{\nu}\frac{\partial E}{\partial p_{\nu}}\,dp_{\nu}.$$ (25) Therefore, in the expression for the change of energy in an infinitely slow process given in equation (24), one can identify the second term on the right-hand side with the infinitesimal heat exchange $dQ$, and the first one, accordingly, with the infinitesimal work. Einstein has thus obtained a mechanical expression of the first principle of thermodynamics. Let us now denote by $W(p_{1},\ldots,p_{n})$ the probability distribution in phase space of the system when it is in equilibrium with an external body with a temperature function given by $\beta$. As derived by Einstein in § 3 of the paper, along the lines of the 1902 paper, it is given by $$dW=e^{c-\beta E}\,dp_{1}\cdots dp_{n},$$ (26) where the constant $c$ is defined by the normalization condition $$\int dW=\int e^{c-\beta E}\,dp_{1}\cdots dp_{n}=1.$$ (27) Let us assume that after the transformation, the system is in equilibrium with a body with temperature function $\beta+d\beta$, while the parameters $\lambda$ assume the values $\lambda+d\lambda$.
Then the normalization condition assumes the form $$\int\exp\left[c+dc-(\beta+d\beta)\left(E+\sum\frac{\partial E}{\partial\lambda}d\lambda\right)\right]\,dp_{1}\cdots dp_{n}=1.$$ (28) One thus obtains, to first order, $$\int\left(dc-E\,d\beta-\beta\sum\frac{\partial E}{\partial\lambda}d\lambda\right)\,e^{c-\beta E}\,dp_{1}\cdots dp_{n}=0.$$ (29) Einstein now argues that the expression in parentheses can be considered as a constant, “because the system’s energy $E$ never differs markedly from a fixed average before and after the process”, and thus obtains $$dc-E\,d\beta-\beta\sum\frac{\partial E}{\partial\lambda}d\lambda=0.$$ (30) Since $$E\,d\beta+\beta\sum\frac{\partial E}{\partial\lambda}d\lambda=d\left(\beta E\right)-\beta\sum_{\nu}\frac{\partial E}{\partial p_{\nu}}dp_{\nu}=d\left(\beta E\right)-\beta\,dQ,$$ (31) where equation (25) has been substituted, Einstein obtains the relation $$\beta\,dQ=d(\beta E-c),$$ (32) and thus, since $1/\beta=k_{\mathrm{B}}T$, $$\frac{dQ}{T}=d\left(\frac{E}{T}-k_{\mathrm{B}}c\right)=dS,$$ (33) from which he obtains the expression of the entropy $$S=\frac{E}{T}-k_{\mathrm{B}}c=\frac{E}{T}+k_{\mathrm{B}}\log\int e^{-E/k_{\mathrm{B}}T}\,dp_{1}\cdots dp_{n}.$$ (34) It is interesting to remark that in the 1902 paper Einstein had derived a similar expression of the heat exchanged $dQ$ involving the average values of the kinetic and potential energies, while here Einstein states that the values of the energy $E$ which matter are not very different from their mean value. This assumption is unnecessary because the relation (30) holds if $E$ is understood as the mean value of the energy, which is enough to reach Einstein’s goals. Moreover, Einstein has not yet derived this property of the energy distribution. We shall see that this assumption also leads Einstein to a quite dubious result in the next discussion, where he attempts to establish the property of entropy increase. In our opinion, Einstein later reconsidered this argument and was led therefore to investigate the fluctuations of energy, which he discusses in his next paper. Einstein now attempts to prove that the entropy does not decrease in transformations involving an adiabatically isolated system. He goes on from the probability distribution of a single system in its phase space, when the value of its energy is fixed, to the distribution of a collection of a very large number $N$ of such systems with the same value of the energy. Dividing the phase space into $\ell$ regions $g_{i}$, $i=1,\ldots,\ell$ of equal volume, Einstein looks for the probability that $n_{1}$ systems fall in $g_{1}$, …, $n_{\ell}$ systems fall in $g_{\ell}$. The result is obviously $$W=\left(\frac{1}{\ell}\right)^{N}\frac{N!}{n_{1}!\cdots n_{\ell}!}.$$ (35) One then has, by Stirling’s formula, $$\log W=\text{const.}-\sum_{i}n_{i}\log n_{i}\simeq\text{const.}-\int\rho\log\rho\;dp_{1}\cdots dp_{n},$$ (36) where $\rho$ is the density of systems in the $p$-space, when $\ell\to\infty$. It would have been a simple step to connect this expression explicitly to the entropy by means of Boltzmann’s formula, but Einstein does not do it. He instead uses it first to show that this expression reaches a maximum when $\rho$ is constant on the whole region of phase space in which the energy has the assigned value.
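That the multinomial probability (35) is largest for the uniform occupation of the cells is easy to verify numerically. A small sketch (ours; the occupation numbers are arbitrary examples):

import math

def log_W(n):
    """log of Eq. (35) up to the common (1/l)^N factor: log N! - sum_i log n_i!"""
    N = sum(n)
    return math.lgamma(N + 1) - sum(math.lgamma(ni + 1) for ni in n)

N, ell = 1200, 4
uniform      = [N // ell] * ell        # 300 systems in each of the 4 cells
skewed       = [600, 300, 200, 100]
very_skewed  = [1199, 1, 0, 0]

for occ in (uniform, skewed, very_skewed):
    print(occ, "log W =", round(log_W(occ), 2))
# The uniform occupation gives the largest log W, as Einstein argues.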
He then argues that if the density $\rho$ differs noticeably from a constant (for states of a given value of the energy), it will be possible to find distributions with a larger value of $W$. In this case, if we follow the ensemble in time, the distribution will change, for “we will have to assume that always more probable distributions will follow upon improbable ones, i.e., that $W$ increases until the distribution of states has become constant and $W$ a maximum”. Thus, if the distribution changes from $\rho$ to $\rho^{\prime}$ as time goes by, and the probability correspondingly increases from $W$ to $W^{\prime}$, the integral on the right-hand side of equation (36) decreases. He then argues that if the values of $\log\rho$ (when $\rho$ does not essentially vanish) are close to uniform, and the probability increases, one obtains the relation $$-\log\rho^{\prime}\geq-\log\rho.$$ (37) This equation cannot be true without qualification, due to the normalization condition, and is however unnecessary for Einstein’s argument in the immediately following section. See, e.g., the discussion in (Uffink, §2.2). This is probably one of the points which led Einstein, in retrospect, to reconsider the assumption that the values of the energy which have non-vanishing probability are close to constant, and to evaluate the energy fluctuations. Einstein then takes advantage of this result to obtain the law of entropy increase in the following way. He considers a finite number of systems $\sigma_{1},\ldots,\sigma_{\nu},\ldots$, that together form an isolated system with state variables $p^{(1)}_{1},\ldots,p^{(1)}_{n_{1}},\ldots,p_{1}^{(\nu)},\ldots,p_{n_{\nu}}^{(\nu)},\ldots$, such that $n=\sum_{\nu}n_{\nu}$. System $\sigma_{\nu}$ is initially in equilibrium at a temperature $T_{\nu}=1/(k_{\mathrm{B}}\beta_{\nu})$, and is therefore described by the distribution $$dw_{\nu}=e^{c_{\nu}-\beta_{\nu}E_{\nu}}\,dp_{1}^{(\nu)}\cdots dp_{n_{\nu}}^{(\nu)}.$$ (38) Then the distribution of the global system is given by $$dw=\prod_{\nu}dw_{\nu}=e^{\sum(c_{\nu}-\beta_{\nu}E_{\nu})}\,dp_{1}\cdots dp_{n}.$$ (39) Let us assume that the systems are now allowed to interact among themselves, and that at the end of the process a new equilibrium is reached, characterized by the temperature parameters $\beta^{\prime}_{\nu}$, etc. We then have, at the end of the process, $$dw^{\prime}=\prod_{\nu}dw^{\prime}_{\nu}=e^{\sum(c^{\prime}_{\nu}-\beta^{\prime}_{\nu}E^{\prime}_{\nu})}\,dp_{1}\cdots dp_{n}.$$ (40) Einstein now introduces an ensemble of a very large number $N$ of global systems $\Sigma$ to argue that, since $W$ always increases, the distributions $$\rho=N\,e^{\sum(c_{\nu}-\beta_{\nu}E_{\nu})};$$ (41a) $$\rho^{\prime}=N\,e^{\sum(c^{\prime}_{\nu}-\beta^{\prime}_{\nu}E^{\prime}_{\nu})};$$ (41b) satisfy equation (37), i.e., $$\sum\left(c^{\prime}_{\nu}-\beta^{\prime}_{\nu}E^{\prime}_{\nu}\right)\geq\sum(c_{\nu}-\beta_{\nu}E_{\nu}).$$ (42) But this implies, by equation (34), $$\sum S^{\prime}_{\nu}\geq\sum S_{\nu}.$$ (43) Again, the detour by equation (37) is disputable and unnecessary. Indeed, it is sufficient to use equation (35) to obtain equation (42) where $E$ is now taken as the mean value of the energy, and the result would follow. The observations made after equation (30) also apply here. However, the main weakness of the argument lies in the petitio principii that the probability $W$ of the ensemble distribution should always increase.
This objection was raised by Paul Hertz in 1910 Hertz , and Einstein soon acknowledged Einstein11 that the objection was “fully founded”. In the closing section of this paper Einstein applies these results to a simple description of a thermal engine connected in turn to several heat reservoirs to derive the second principle in the form of Clausius. 4 On the general molecular theory of heat A change of pace is easily noticed already in the first lines of the 1904 paper, entitled “On the general molecular theory of heat” Einstein04 . Here he refers to the theory developed in his previous papers, where he had spoken of the “kinetic theory of heat” as laying the foundations of thermodynamics, by the less specific expression “molecular theory of heat”. The paper contains several results worth mentioning, as announced at the end of the introduction: First, I derive an expression for the entropy of a system, which is completely analogous to the expression found by Boltzmann for ideal gases and assumed by Planck in his theory of radiation. Then I give a simple derivation of the second law. After that I examine the meaning of a universal constant, which plays an important role in the general molecular theory of heat. I conclude with an application of the theory to black-body radiation, which yields a most interesting relationship between the above-mentioned universal constant, which is determined by the magnitudes of the elementary quanta of matter and electricity, and the order of magnitude of the radiation wave-lengths, without recourse to special hypotheses. These results are obtained as independent developments of the theory reported in the previous two papers. In the previous papers he had derived the canonical expression of entropy, namely $$S=\frac{E}{T}+k_{\mathrm{B}}\log\int e^{-E/k_{\mathrm{B}}T}\,dp_{1}\cdots dp_{n},$$ (44) where $(p_{1},\ldots,p_{n})$ are the general state variables of the system, and $E$ is the value of the internal energy. In § 1 of this paper Einstein derives the expression we now call microcanonical, which is related to the density of states of energy $E$, $\omega(E)$, by the relation $$S=k_{\mathrm{B}}\log[\omega(E)].$$ (45) He obtains this result by integrating the relation between the temperature and $\omega(E)$ previously derived: $$\frac{1}{k_{\mathrm{B}}T}=\frac{\omega^{\prime}(E)}{\omega(E)},$$ (46) where one assumes that the system’s energy lies between $E$ and $E+\delta E$. Note, however, that in the previous papers $\omega(E)$ was the density of states of the thermal reservoir, while this relation is tacitly applied here to that of the system. Interestingly, in this paper Einstein defines for the first time the density of states $\omega(E)$ in the now customary way, by $$\omega(E)\,\delta E=\int_{E}^{E+\delta E}dp_{1}\cdots dp_{n},$$ (47) while in the previous papers he kept including the $\delta E$ factor in its definition. The “derivation” of the second law in § 2 suffers again, as in the 1903 paper, from the petitio principii of the assumption that more improbable states never follow more probable ones. The calculation is now simpler, but the result is also more restricted. First Einstein formulates the zeroth law of thermodynamics by assuming that if a system is in contact with an environment at temperature $T_{0}$ it acquires the temperature $T_{0}$ and keeps it from then on. However, according to the molecular theory of heat, this is not absolutely true, but true only with some approximation.
In particular the probability $W\,\delta E$ that the energy of such a system has a value lying between $E$ and $E+\delta E$ at an arbitrary point in time is given by $$W\,\delta E=C\,e^{-E/k_{\mathrm{B}}T_{0}}\,\omega(E)\,\delta E,$$ (48) where $C$ is a constant. Einstein argues that this distribution is very sharply peaked and that, because of the previous result, it can also be written in the form $$W\,\delta E=C\,\exp\left[\frac{1}{k_{\mathrm{B}}}\left(S-\frac{E}{T_{0}}\right)\right]\,\delta E,$$ (49) where $S=S(E)$ is the value of the entropy pertaining to the value $E$ of the internal energy. Note that here again the property of the distribution of being sharply peaked is not needed, and anyway has not yet been derived. More interestingly, as far as we know, this is the first statement of Einstein’s principle of fluctuations, which relates the probability of an energy fluctuation in a thermodynamic system to the difference in the expression $\mathcal{F}(E,T)=E-TS(E)$, which is now known as the availability. Now Einstein considers a system made of several such subsystems, all in contact with a large similar system at the temperature $T_{0}$. The probability $\mathfrak{W}$ of a given distribution $(E_{1},\ldots,E_{\ell})$ of the energy among these subsystems is given by $$\mathfrak{W}\propto\exp\left[\frac{1}{k_{\mathrm{B}}}\left(\sum_{i=1}^{\ell}S_{i}-\frac{1}{T_{0}}\sum_{i=1}^{\ell}E_{i}\right)\right].$$ (50) Let the reservoirs exchange energy, possibly with the assistance of cyclic machines, reaching an energy distribution $(E^{\prime}_{1},\ldots,E^{\prime}_{\ell})$. The corresponding probability is given by $$\mathfrak{W}^{\prime}\propto\exp\left[\frac{1}{k_{\mathrm{B}}}\left(\sum_{i=1}^{\ell}S^{\prime}_{i}-\frac{1}{T_{0}}\sum_{i=1}^{\ell}E^{\prime}_{i}\right)\right].$$ (51) Assuming again that less probable states are followed by more probable ones, one must have $$\mathfrak{W}^{\prime}\geq\mathfrak{W}.$$ (52) Since $\sum_{i}E_{i}$ is conserved, this equation implies $$\sum_{i=1}^{\ell}S^{\prime}_{i}\geq\sum_{i=1}^{\ell}S_{i}.$$ (53) It is hard for us to make sense of this derivation. The result seems restricted to systems in contact with a reservoir at a given temperature $T_{0}$, and therefore is by no means general. In particular the inequality among the $\mathfrak{W}$’s cannot be absolutely satisfied without violating the normalization of probabilities, just as in the case of equation (37). The most interesting part is the way in which Einstein treats the distribution of energies among the subsystems as a collective state of a system made of several subsystems and, at the same time, as one possible macroscopic state of a system governed by a canonical distribution at the temperature $T_{0}$. This device will be put to use in the 1910 work on critical fluctuations Einstein1910 . The physical interpretation of the constant $\kappa=k_{\mathrm{B}}/2$ is obtained by Einstein in § 3 by evaluating, via his equipartition theorem, the kinetic energy of a mechanical system of $n$ particles, and by relating the resulting expression to the one obtained by the kinetic theory for the ideal gas. He thus obtains an explicit estimate of $\kappa=6.5\cdot 10^{-17}\mathrm{erg\,K^{-1}}$, corresponding to $k_{\mathrm{B}}=1.3\cdot 10^{-23}\mathrm{J\,K^{-1}}$. The discrepancy with modern values is due to the use of the value $N_{\mathrm{A}}=6.4\cdot 10^{23}\mathrm{mol^{-1}}$ for Avogadro’s number, which Einstein found in O. E. Meyer’s book Meyer .
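The estimate quoted above is easy to reproduce (our check, using the modern value of the gas constant $R$): with $N_{\mathrm{A}}=6.4\cdot 10^{23}\,\mathrm{mol^{-1}}$ one obtains $k_{\mathrm{B}}=R/N_{\mathrm{A}}\simeq 1.3\cdot 10^{-23}\,\mathrm{J\,K^{-1}}$ and $\kappa=k_{\mathrm{B}}/2\simeq 6.5\cdot 10^{-17}\,\mathrm{erg\,K^{-1}}$, as quoted. A minimal sketch in Python:

# Quick arithmetic check of the numbers quoted above (not from the paper).
R = 8.314               # gas constant, J / (mol K), modern value
N_A_einstein = 6.4e23   # mol^-1, the value Einstein took from O. E. Meyer
N_A_modern = 6.022e23

for label, N_A in (("Einstein 1904", N_A_einstein), ("modern", N_A_modern)):
    k_B = R / N_A        # Boltzmann constant in J/K
    kappa = k_B / 2      # the constant discussed in Einstein's section 3
    print(f"{label:13s}  k_B = {k_B:.2e} J/K   kappa = {kappa * 1e7:.2e} erg/K")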
In § 4, under the title “General meaning of the constant $\kappa$” Einstein discusses the fluctuations of the energy in the canonical ensemble, deriving the relation between the specific heat and the amplitude of energy fluctuations as $$\left<{E^{2}}\right>-\left<{E}\right>^{2}=k_{\mathrm{B}}T^{2}\frac{d\left<{E}\right>}{dT},$$ (54) where $\left<{\ldots}\right>$ denotes the canonical average. Gibbs had obtained the same expression in (Gibbs, eq. (205), p. 72), but pointed out almost immediately that these fluctuations were not observable. With $\epsilon$, $\epsilon_{p}$ and $\epsilon_{q}$ the total, kinetic and potential energies respectively, and denoting averages by a bar, he states (Gibbs, p. 74f) It follows that to human experience and observation with respect to such an ensemble as we are considering, or with respect to systems which may be regarded as taken at random from such an ensemble, when the number of degrees of freedom is of such order of magnitude as the number of molecules in the bodies subject to our observation and experiment, $\epsilon-\bar{\epsilon}$, $\epsilon_{p}-\bar{\epsilon}_{p}$, $\epsilon_{q}-\bar{\epsilon}_{q}$ would be in general vanishing quantities, since such experience would not be wide enough to embrace the more considerable divergencies from the mean values, and such observation not nice enough to distinguish the ordinary divergencies. In other words, such ensembles would appear to human observation as ensembles of uniform energy, and in which the potential and kinetic energies (supposing that there were means of measuring these quantities separately) had each separately uniform values. Characteristically, Einstein instead immediately looks for a system in which these fluctuations could be observed, and he finds that the blackbody radiation could provide such a system. It is worth quoting his reasoning (Einstein04, § 5) If the linear dimensions of a space filled with temperature radiation are very large in comparison with the wavelength corresponding to the maximum energy of the radiation at the temperature in question, then the mean energy fluctuation will obviously be very small in comparison with the mean radiation energy of that space. In contrast, if the radiation space is of the same order of magnitude as that wavelength, then the energy fluctuation will be of the same order of magnitude as the energy of the radiation of the radiation space. Einstein pauses only one moment before proceeding to the application of his molecular theory of heat to black-body radiation (Einstein04, § 5) Of course, one can object that we are not permitted to assert that a radiation space should be viewed as a system of the kind we have assumed, not even if the applicability of the general molecular theory is conceded. Perhaps one would have to assume, for example, that the boundaries of the space vary with its electromagnetic state. However, these circumstances need not be considered, as we are dealing with orders of magnitude only. Einstein can thus evaluate the size $\left<{\epsilon^{2}}\right>$ of the energy fluctuations $\epsilon=E-\left<{E}\right>$ from equation (54) and from the Stefan-Boltzmann law $$\left<{E}\right>=a\,v\,T^{4},$$ (55) where $a=7.06\cdot 10^{-15}\,\mathrm{erg\,cm^{-3}\,K^{-4}}$ is the radiation constant, $T$ is the absolute temperature, and $v$ is the cavity volume.
Then, the linear dimensions of a cavity for which $\left<{\epsilon^{2}}\right>\simeq\left<{E}\right>$ are given by $$\sqrt[3]{v}=\frac{1}{T}\sqrt[3]{\frac{4k_{\mathrm{B}}}{a}}=\frac{0.42}{T},$$ (56) which compares well (in order of magnitude) with the expression $\lambda_{\max}=0.293/T$ obtained from Planck’s law (both lengths are expressed in cm, and $T$ is expressed in Kelvin). However, in the following months, trying to explicitly apply his theory to that system, he will encounter a paradox, which he will brilliantly overcome by renouncing the classical picture of the emission and absorption of light, based on Maxwell’s equations, and by introducing the concept of the light quanta Einstein05a . But that is another story, which has already been told many times. 5 Einstein and Gibbs One usually takes for granted that the research projects pursued by Einstein in these three papers, and by Gibbs in his 1902 book Gibbs , were equivalent, and that the more mathematically refined argumentation contained in the latter made Einstein’s approach redundant. A closer scrutiny shows however fundamental differences in their approaches, and makes Einstein’s approach more attractive to present-day physicists. Gibbs’ program focuses on understanding the properties of ensembles of mechanical systems, i.e., of systems whose dynamical equations are given, but whose initial conditions are specified only through a probability distribution. He gives this discipline the name of “statistical mechanics”. He stresses that its relevance goes beyond establishing a foundation of thermodynamics (Gibbs, Preface, p. viii) But although, as a matter of history, statistical mechanics owes its origin to investigations in thermodynamics, it seems eminently worthy of an independent development, both on account of the elegance and simplicity of its principles, and because it yields new results and places old truths in a new light in departments quite outside of thermodynamics. Indeed, the laws of statistical mechanics are more general than those of thermodynamics (Gibbs, p. ix) The laws of thermodynamics, as empirically determined, express the approximate and probable behavior of systems of a great number of particles, or, more precisely, they express the laws of mechanics for such systems as they appear to beings who have not the fineness of perception to enable them to appreciate quantities of the order of magnitude of those which relate to single particles, and who cannot repeat their experiments often enough to obtain any but the most probable results. The laws of statistical mechanics apply to conservative systems of any number of degrees of freedom, and are exact. On the other hand, according to Gibbs, our ignorance of the basic constitution of material bodies makes our inferences based on supposed models of matter unreliable, even when they are derived by the methods of statistical mechanics (Gibbs, p. ix–x) In the present state of science, it seems hardly possible to frame a dynamic theory of molecular action which shall embrace the phenomena of thermodynamics, of radiation, and of the electrical manifestations which accompany the union of atoms. […] Even if we confine our attention to the phenomena distinctively thermodynamic, we do not escape difficulties in as simple a matter as the number of degrees of freedom of a diatomic gas. It is well known that while theory would assign to the gas six degrees of freedom per molecule, in our experiments on specific heat we cannot account for more than five.
Certainly, one is building on an insecure foundation, who rests his work on hypotheses concerning the constitution of matter. Gibbs therefore attempts to reduce his goal to a purely mathematical treatment (Gibbs, p. x) Difficulties of this kind have deterred the author from attempting to explain the mysteries of nature, and have forced him to be contented with the more modest aim of deducing some of the more obvious propositions relating to the statistical branch of mechanics. Here, there can be no mistake in regard to the agreement of the hypotheses with the facts of nature, for nothing is assumed in that respect. The only error into which one can fall, is the want of agreement between the premises and the conclusions, and this, with care, one may hope, in the main, to avoid. One can therefore only hope to establish analogies between quantities which are defined within statistical mechanics, and those which are empirically encountered in thermodynamics (Gibbs, p. x) We meet with other quantities, in the development of the subject, which, when the number of degrees of freedom is very great, coincide sensibly with the modulus, and with the average index of probability, taken negatively, in a canonical ensemble, and which, therefore, may also be regarded as corresponding to temperature and entropy. The relation of the laws of statistical mechanics to thermodynamics is further discussed in (Gibbs, Ch. XIV, p. 166) A very little study of the statistical properties of conservative systems of a finite number of degrees of freedom is sufficient to make it appear, more or less distinctly, that the general laws of thermodynamics are the limit toward which the exact laws of such systems approximate, when their number of degrees of freedom is indefinitely increased. And the problem of finding the exact relations, as distinguished from the approximate, for systems of a great number of degrees of freedom, is practically the same as that of finding the relations which hold for any number of degrees of freedom, as distinguished from those which have been established on an empirical basis for systems of a great number of degrees of freedom. The enunciation and proof of these exact laws, for systems of any finite number of degrees of freedom, has been a principal object of the preceding discussion. But it should be distinctly stated that, if the results obtained when the numbers of degrees of freedom are enormous coincide sensibly with the general laws of thermodynamics, however interesting and significant this coincidence may be, we are still far from having explained the phenomena of nature with respect to these laws. For, as compared with the case of nature, the systems which we have considered are of an ideal simplicity. […] The phenomena of radiant heat, which certainly should not be neglected in any complete system of thermodynamics, and the electrical phenomena associated with the combination of atoms, seem to show that the hypothesis of systems of a finite number of degrees of freedom is inadequate for the explanation of the properties of bodies. In Gibbs’ approach, the probability distribution is a datum of the problem, while in Einstein’s it is one of the unknowns. The greatest difference is that Gibbs starts from the equal a priori probability postulate, while for Einstein what is important is to evaluate time averages and these are replaced by phase space averages through an ergodic hypothesis. 
Thus Gibbs is allowed to introduce the canonical distribution a priori, as a particularly simple one, endowed with interesting properties, in particular because it factorizes when one considers the collection of two or more mechanically independent systems (Gibbs, Ch. IV, p. 33) The distribution […] seems to represent the most simple case conceivable, since it has the property that when the system consists of parts with separate energies, the laws of the distribution in phase of the separate parts are of the same nature, a property which enormously simplifies the discussion, and is the foundation of extremely important relations to thermodynamics. On the contrary, for Einstein, the canonical distribution is the distribution which describes the mechanical state of a system in contact with a thermal reservoir at a given temperature, while the “simplest” distribution is rather the microcanonical, which represents the state of an isolated system at equilibrium. And the former is derived from the latter. Einstein’s 1910 lecture notes on the Kinetic Theory of Heat at the University of Zurich show, in Navarro’s words (Navarro, § 6.2), how his approach allowed him to proceed to the systematic application of statistical mechanics, once the canonical distribution is attained, to a large variety of fields. This is a sample list of the applications presented in the lecture notes: paramagnetism, Brownian motion, magnetic properties of solids, electron theory of metals, thermoelectricity, particle suspensions and viscosity. Gibbs invented, instead, a method for which he could find no direct physical application other than the detection of the already mentioned thermodynamic analogies. Had Gibbs lived longer (he died the year after the publication of Elementary Principles), this might have changed. But, given his rigorous and extremely cautious attitude, any assumption on the issue is enormously risky. Even more strikingly, in Einstein’s hands, deviations from the expected behavior become a tool for the investigation of the microscopic dynamics. This difference in attitude was already highlighted above, in the discussion of energy fluctuations, but the clearest example is the 1905 paper on light emission and absorption (Einstein05a), where one notably reads This relation (the relation now known as Jeans’ radiation law), found as a condition for the dynamical equilibrium, not only fails to agree with the experiments, but also intimates that in our model a well-defined distribution of the energy between ether and matter is out of the question. […] In the following, we shall treat the “black-body radiation” in connection with the experiments, without establishing it on any model of the production or propagation of the radiation. Thus Einstein brackets the contemporary models of light absorption and propagation, but maintains the statistical interpretation of entropy. He then evaluates the radiation entropy from the empirical distribution law and interprets it in terms of the statistical approach as describing the coexistence of point-like particles in a given volume (cf. Norton). This paper was soon followed by the equally bold application of Planck’s radiation theory to the specific heats of solids (Einstein07). 6 Concluding remarks We presented Einstein’s approach to statistical mechanics in contrast to the one taken by Gibbs. The results are equivalent since both are based on Boltzmann’s contributions. 
Gibbs’ starting point is the equal a priori probability hypothesis in phase space that leads to the microcanonical probability density for an ensemble (of representative systems, according to Tolman (Tolman)). Einstein, on the other hand, starts by stating that what is important is the evaluation of time averages of appropriate quantities. These can be replaced by averages of the same quantities over an unknown density function over the phase space, with the help of an ergodic hypothesis. Einstein introduces the assumption that the energy is the only conserved quantity to play the role of the ergodic hypothesis. Using this assumption and Liouville’s theorem, Einstein shows that the unknown density function mentioned before must be constant on the energy shell, that is, it must be the microcanonical distribution. From there, the interpretation of the canonical distribution is different: for Gibbs, it is the simplest distribution, which leads one to describe as statistically independent those systems which are physically independent, while for Einstein it is the distribution which describes the state of a system in contact with a reservoir. Thus the index of the canonical distribution (as defined by Gibbs) is analogous to the temperature for Gibbs, but can be identified with the temperature for Einstein. It is also interesting to remark that at several points Einstein states (without proof) that the distribution of energy values in the canonical ensemble is sharply peaked, and deduces from this some dubious inequalities for the probability density itself. Only in the 1904 paper does he explicitly evaluate the size of fluctuations, obtaining a result already derived by Gibbs. Then, while Gibbs had stressed the non-observability of energy fluctuations in macroscopic systems (thus contributing to the “rational foundation of thermodynamics”), Einstein points to the use of fluctuations as a tool for investigating microscopic dynamics (as he did, in particular, in (Einstein09), where he hinted at the dual wave-particle nature of radiation by interpreting the two terms appearing in the expression of the energy fluctuations). What interest can a present-day reader find in these papers? We think that they sketch a very neat road map for the introduction of the basic concepts of statistical mechanics, focusing on their heuristic value. One first focuses on isolated systems and identifies the microcanonical ensemble as the equilibrium distribution by means of the thermal equilibrium principle. For this step, Einstein’s reasoning given above, based on the postulate of the absence of integrals of motion beyond the energy, is excellent. Then, one looks at a small part of such an isolated system, and one shows that the corresponding distribution is the canonical one. Finally, one identifies the mechanical expressions of temperature, infinitesimal heat and, by integration, of entropy. All these steps can be tersely traced by following, more or less closely, Einstein’s path. At this point, the focus can be shifted to the evaluation of fluctuations, which allow one, on the one hand, to recover the equivalence of ensembles for large enough systems and, by the same token, to identify situations in which the underlying molecular reality shows up in the behavior of macroscopic systems (like, e.g., in Brownian motion). This road map has been more or less followed by several modern textbooks on statistical mechanics, but we think that it would be fair to stress that it had first been sketched in the papers we described. 
In any case, we will be satisfied if the present note encourages some colleagues to have a look at these papers, in which the first steps in the making of a giant are recorded. Acknowledgments LP was introduced to critical phenomena by Leo’s lectures in the 1971 Varenna School, and RR fondly remembers Leo’s course in the Escuela Mexicana de Física Estadística, which had a great influence on the Statistical Physics group at UNAM. Both authors dedicate this work to Leo’s memory. LP is grateful to Jeferson Arenzon for encouraging him to present his ideas on Einstein’s 1902–04 works. References (1) A. Einstein, Kinetische Theorie der Wärmegleichgewichtes und des zweiten Hauptsatzes der Thermodynamik, Annalen der Physik 9 417–433 (1902). (2) http://einsteinpapers.press.princeton.edu. (3) A. Einstein, Eine Theorie der Grundlagen der Thermodynamik, Annalen der Physik 11 170–187 (1903). (4) A. Einstein, Zur allgemeinen molekularen Theorie der Wärme, Annalen der Physik 14 354–362 (1904). (5) A. Einstein, Über einen die Erzeugung und Verhandlung des Lichtes betreffenden heuristisches Gesichtspunkt, Annalen der Physik 17 132–148 (1905). (6) T. S. Kuhn, Black-Body Theory and the Quantum Discontinuity (Oxford: Oxford U. P., 1978). (7) J. W. Gibbs, Elementary Principles in Statistical Mechanics, developed with special reference to the rational foundation of thermodynamics (New York: Charles Scribner’s Sons, 1902). Online on https://en.wikisource.org/wiki/Elementary_Principles_in_Statistical_Mechanics. (8) R. C. Tolman, The Principles of Statistical Mechanics (Oxford: Clarendon Press, 1938). (9) A. Einstein, Bemerkungen zu den P. Hertzschen Arbeiten: “Über die mechanischen Grundlagen der Thermodynamik”, Annalen der Physik 34 175–176 (1911). (10) A. Einstein, Autobiographical Notes, in: P. A. Schilpp (ed.), Albert Einstein: Philosopher-Scientist (New York: Library of Living Philosophers, Inc., 1949). (11) J. Mehra, Einstein and the Foundation of Statistical Mechanics, Physica 79A 447–477 (1975). (12) A. Baracca and R. Rechtman S., Einstein’s Statistical Mechanics, Revista Mexicana de Física 31 695–722 (1985). (13) C. A. Gearhart, Einstein before 1905: The early papers on statistical mechanics, Am. J. Phys. 58 468–480 (1990). (14) L. Navarro, Gibbs, Einstein and the Foundations of Statistical Mechanics, Arch. Hist. Exact Sci. 53 147–180 (1998). (15) J. Uffink, Insuperable difficulties: Einstein’s statistical road to molecular physics, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 37 36–70 (2006). (16) H. Inaba, The development of ensemble theory, Eur. J. Phys. H 40 489–526 (2015). (17) H. von Helmholtz, Studien zur Statik monocykliker Systeme, Sitzungsberichte der Kgl. Preuss. Akad. der Wissensch. Berlin (1884) 159–177, 311–318; (1895) 119–141, 163–172. (18) L. Boltzmann, Über die Eigenschaften monozyklischer und anderer damit verwandter Systeme, Journal für die reine und angewandte Mathematik 98 68 (1884); Über einige Fälle, wo die lebendige Kraft nicht integrierender Nenner des Differentials der zugeführten Energie ist, Wiener Berichte 92 853 (1885); Neuer Beweis eines von Helmholtz aufgestellten Theorems betreffend die Eigenschaften monozyklischer Systeme, Göttinger Nachrichten (1886) 209. (19) M. J. Klein, Boltzmann, Monocycles and Mechanical Explanation, in R. J. Seeger and R. S. Cohn (eds.) Philosophical Foundations of Science (Dordrecht: Reidel, 1974) 155–175. (20) G. 
Gallavotti, Ergodicity, Ensembles, Irreversibility in Boltzmann and Beyond, J. Stat. Phys. 78 1571–1589 (1995). (21) J. Stachel et al. (eds.), The Collected Papers of Albert Einstein (Princeton: Princeton University Press, 1987–2004). (22) P. Hertz, Über die mechanischen Grundlagen der Thermodynamik, Annalen der Physik 32 225–274 (1910); ibid. 33 537–552 (1910). (23) A. Einstein, Theorie der Opaleszenz von homogenen Flüssigkeiten und Flüssigkeitsgemischen in der Nähe des kritischen Zustandes, Annalen der Physik 33 1275–1298 (1910). (24) O. E. Meyer, Kinetische Theorie der Gase (Breslau, 1877) (2. ed., 1899). Online at Google Books. (25) A. Einstein, Lecture Notes for the Course on the Kinetic Theory of Heat at the University of Zurich, Summer Semester 1910. http://einsteinpapers.press.princeton.edu/vol3-doc/217 (26) J. D. Norton, Atoms, Entropy, Quanta: Einstein’s Miraculous Argument of 1905, Studies in History and Philosophy of Modern Physics 37 71–100 (2006). (27) A. Einstein, Die Plancksche Theorie der Strahlung und die Theorie der Spezifischen Wärme, Annalen der Physik 22 180–190 (1907). (28) A. Einstein, Zum gegenwärtigen Stand des Strahlungsproblems, Physikalische Zeitschrift 10 185–193 (1910).
Excitons dressed by a sea of excitons M. Combescot and O. Betbeder-Matibet GPS, Université Pierre et Marie Curie and Université Denis Diderot, CNRS, Campus Boucicaut, 140 rue de Lourmel, 75015 Paris, France Abstract We here consider an exciton $i$ embedded in a sea of $N$ identical excitons $0$. If the excitons are bosonized, a bosonic enhancement factor, proportional to $N$, is found for $i=0$. If the exciton composite nature is kept, this enhancement not only exists for $i=0$, but also for any exciton having a center of mass momentum equal to the sea exciton momentum. This physically comes from the fact that an exciton with such a momentum can be transformed into a sea exciton by “Pauli scattering”, i. e., carrier exchange with the sea, making this $i$ exciton not so much different from a $0$ exciton. This possible scattering, directly linked to the composite nature of the excitons, is irretrievably lost when the excitons are bosonized. This work in fact deals with the quite tricky scalar products of $N$-exciton states. It actually constitutes a crucial piece of our new many-body theory for interacting composite bosons, because all physical effects involving these composite bosons ultimately end by calculating such scalar products. The “Pauli diagrams” we here introduce to represent them allow one to visualize many-body effects linked to carrier exchange in an easy way. They are conceptually different from Feynman diagrams, because of the special feature of the “Pauli scatterings”: These scatterings, which originate from the departure from boson statistics, do not have their equivalent in Feynman diagrams, the commutation rules for exact bosons (or fermions) being included in the first line of the usual many-body theories. PACS.: 71.35.-y Excitons and related phenomena 1 Introduction We are presently developing a new many-body theory [1-7] able to handle interactions between composite bosons — like the semiconductor excitons. The development of such a theory is in fact highly desirable, because, in the low density limit, electron-hole pairs are known to form bound excitons, so that, in this limit, to manipulate excitons is surely a better idea than to manipulate free carriers. However, the interaction between excitons is not an easy concept due to carrier indistinguishability: Indeed, the excitons, being made of two charged particles, of course interact through Coulomb interactions. However, this Coulomb interaction can be $(V_{ee^{\prime}}+V_{hh^{\prime}}-V_{eh^{\prime}}-V_{e^{\prime}h})$ or $(V_{ee^{\prime}}+V_{hh^{\prime}}-V_{eh}-V_{e^{\prime}h^{\prime}})$ depending on whether we see the excitons as $(e,h)$ and $(e^{\prime},h^{\prime})$, or $(e,h^{\prime})$ and $(e^{\prime},h)$. In addition, excitons interact in a far more subtle manner through Pauli exclusion between their indistinguishable components, in the absence of any Coulomb process. This “Pauli interaction” is actually the novel and interesting part of our new many-body theory for composite bosons. It basically comes from the departure from boson statistics; all previous theories, designed for true bosons or true fermions, have the corresponding commutation rules set up from the first line [8]. In our theory, the fact that the excitons are not exact bosons appears through “Pauli scatterings” $\lambda_{mnij}$ between the “in” excitons $(i,j)$ and the “out” excitons $(m,n)$. Their link to the departure from boson statistics is obvious from their definition. 
Indeed, these Pauli scatterings appear through [2,3] $$[B_{m},B_{i}^{\dagger}]=\delta_{mi}-D_{mi}\ ,$$ (1) $$[D_{mi},B_{j}^{\dagger}]=2\sum_{n}\lambda_{mnij}\,B_{n}^{\dagger}\ ,$$ (2) $B_{i}^{\dagger}$ being the $i$ exciton creation operator. The Pauli scattering $\lambda_{mnij}$ precisely reads in terms of the exciton wave function $\phi_{i}(\mathbf{r}_{e},\mathbf{r}_{h})=\langle\mathbf{r}_{e},\mathbf{r}_{h}|B_{i}^{\dagger}|v\rangle$ as $$\lambda_{mnij}=\frac{1}{2}\int d\mathbf{r}_{e}\,d\mathbf{r}_{e^{\prime}}\,d\mathbf{r}_{h}\,d\mathbf{r}_{h^{\prime}}\,\phi_{m}^{\ast}(\mathbf{r}_{e},\mathbf{r}_{h})\,\phi_{n}^{\ast}(\mathbf{r}_{e^{\prime}},\mathbf{r}_{h^{\prime}})\,\phi_{i}(\mathbf{r}_{e},\mathbf{r}_{h^{\prime}})\,\phi_{j}(\mathbf{r}_{e^{\prime}},\mathbf{r}_{h})\ +\ (m\leftrightarrow n)\ .$$ (3) The above expression makes clear the fact that $\lambda_{mnij}$ just corresponds to a carrier exchange between two excitons (see fig. 1a) without any Coulomb process, so that $\lambda_{mnij}$ is actually a dimensionless “scattering”. It is possible to show that for bound states, $\lambda_{mnij}$ is of the order of $\mathcal{V}_{X}/\mathcal{V}$, with $\mathcal{V}_{X}$ being the exciton volume and $\mathcal{V}$ the sample volume [6]. All physical quantities involving excitons can be written as matrix elements of a Hamiltonian-dependent operator $f(H)$ between $N$-exciton states, with usually most of them in the ground state $0$. These matrix elements formally read $$\langle v|B_{m_{1}}\cdots B_{m_{n}}B_{0}^{N-n}\,f(H)\,B_{0}^{{\dagger}N-n^{\prime}}B_{i_{1}}^{\dagger}\cdots B_{i_{n^{\prime}}}^{\dagger}|v\rangle\ .$$ (4) They can be calculated by “pushing” $f(H)$ to the right in order to end with $f(H)|v\rangle$, which is just $f(0)|v\rangle$ if the vacuum is taken as the energy origin. This push is done through a set of commutations. In the simplest case, $f(H)=H$, we have $$HB_{i}^{\dagger}=B_{i}^{\dagger}(H+E_{i})+V_{i}^{\dagger}\ ,$$ (5) which just results from [2,3] $$[H,B_{i}^{\dagger}]=E_{i}B_{i}^{\dagger}+V_{i}^{\dagger}\ .$$ (6) We then push $V_{i}^{\dagger}$ to the right according to $$[V_{i}^{\dagger},B_{j}^{\dagger}]=\sum_{mn}\xi_{mnij}^{\mathrm{dir}}\,B_{m}^{\dagger}B_{n}^{\dagger}\ ,$$ (7) to end with $V_{i}^{\dagger}|v\rangle$ which is just $0$ due to eq. (6) applied to $|v\rangle$. Equations (6,7), along with eqs. (1,2), form the four key equations of our many-body theory for interacting composite excitons. $\xi_{mnij}^{\mathrm{dir}}$ is the second scattering of this theory. It transforms the $(i,j)$ excitons into $(m,n)$ states, due to Coulomb processes between them, as obvious from its explicit expression: $$\displaystyle\xi_{mnij}^{\mathrm{dir}}=\frac{1}{2}\int d\mathbf{r}_{e}\,d\mathbf{r}_{e^{\prime}}\,d\mathbf{r}_{h}\,d\mathbf{r}_{h^{\prime}}\,\phi_{m}^{\ast}(\mathbf{r}_{e},\mathbf{r}_{h})\,\phi_{n}^{\ast}(\mathbf{r}_{e^{\prime}},\mathbf{r}_{h^{\prime}})$$ $$\displaystyle\times(V_{ee^{\prime}}+V_{hh^{\prime}}-V_{eh^{\prime}}-V_{e^{\prime}h})\,\phi_{i}(\mathbf{r}_{e},\mathbf{r}_{h})\,\phi_{j}(\mathbf{r}_{e^{\prime}},\mathbf{r}_{h^{\prime}})\ +(m\leftrightarrow n)\ .$$ (8) Note that, in $\xi_{mnij}^{\mathrm{dir}}$, the “in” and “out” excitons are made with the same pairs, while, in $\lambda_{mnij}$, they have exchanged their carriers. Due to dimensional arguments, these $\xi_{mnij}^{\mathrm{dir}}$ for bound states are of the order of $R_{X}\mathcal{V}_{X}/\mathcal{V}$, with $R_{X}$ being the exciton Rydberg [6]. 
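As an elementary illustration of this pushing procedure, let $f(H)=H$ act on a two-exciton state. Using eqs. (5–7), together with $H|v\rangle=0$ (the vacuum being taken as the energy origin) and $V_{j}^{\dagger}|v\rangle=0$, one readily finds $$H\,B_{i}^{\dagger}B_{j}^{\dagger}|v\rangle=(E_{i}+E_{j})\,B_{i}^{\dagger}B_{j}^{\dagger}|v\rangle+\sum_{mn}\xi_{mnij}^{\mathrm{dir}}\,B_{m}^{\dagger}B_{n}^{\dagger}|v\rangle\ ,$$ the first term being the contribution of non-interacting excitons, while the second one contains one Coulomb scattering between them. 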
Another $f(H)$ of interest can be $1/(a-H)$, with $a$ possibly equal to $(\omega+i\eta)$ as in problems involving photons. In order to push $1/(a-H)$ to the right, we can use [4] $$\frac{1}{a-H}\,B_{i}^{\dagger}=B_{i}^{\dagger}\,\frac{1}{a-H-E_{i}}+\frac{1}{a-H}\,V_{i}^{\dagger}\,\frac{1}{a-H-E_{i}}\ ,$$ (9) which follows from eq. (5). In pushing $1/(a-H)$ to the right, we generate Coulomb terms through the $V_{i}^{\dagger}$ part of eq. (9). Due to dimensional arguments, these terms ultimately read as an expansion in $\xi_{mnij}^{\mathrm{dir}}$ over an energy denominator which can be either a detuning or just a difference between exciton energies, depending on the problem at hand. A last $f(H)$ of interest is $e^{-iHt}$ which appears in problems involving time evolution. In order to push $e^{-iHt}$ to the right, we can use [7] $$e^{-iHt}\,B_{i}^{\dagger}=B_{i}^{\dagger}\,e^{-i(H+E_{i})t}+W_{i}^{\dagger}(t)\ ,$$ (10) $$W_{i}^{\dagger}(t)=-\int_{-\infty}^{+\infty}\frac{dx}{2i\pi}\,\frac{e^{-i(x+i\eta)t}}{x-H+i\eta}\,V_{i}^{\dagger}\,\frac{1}{x-H-E_{i}+i\eta}\ .$$ (11) Equations (10,11) result from the integral representation of the exponential, namely $$e^{-iHt}=-\int_{-\infty}^{+\infty}\frac{dx}{2i\pi}\,\frac{e^{-i(x+i\eta)t}}{x-H+i\eta}\ ,$$ (12) valid for $t$ and $\eta$ positive, combined with eq. (9). Again additional Coulomb terms appear in passing $e^{-iHt}$ over $B_{i}^{\dagger}$. By comparing eqs. (5,9,10), we see that, when we pass $f(H)$ over $B_{i}^{\dagger}$, we essentially replace it by $f(H+E_{i})$, as if the $i$ exciton were not interacting with the other excitons, up to a Coulomb term which takes care of these interactions, $f(H+E_{i})$ being in some sense the zero order contribution of $f(H)$. Once we have pushed all the $H$’s up to $|v\rangle$ and generated very many Coulomb scatterings $\xi_{mnij}^{\mathrm{dir}}$, we end with scalar products of $N$-exciton states which look like eq. (4) with $f(H)=1$. Then, we start to push the $B$’s to the right according to eqs. (1,2), to end with $B|v\rangle$ which is just zero. This set of pushes now brings in the Pauli scatterings $\lambda_{mnij}$. In the case of $N=2$, eqs. (1,2) readily give the scalar product of two-exciton states as [2] $$\langle v|B_{m}\,B_{n}\,B_{i}^{\dagger}\,B_{j}^{\dagger}|v\rangle=\delta_{mi}\,\delta_{nj}+\delta_{mj}\,\delta_{ni}-2\lambda_{mnij}\ .$$ (13) For large $N$, the calculation of similar scalar products is actually very tricky. We expect them to depend on $N$ and to contain many $\lambda_{mnij}$’s. The $N$ dependence of these scalar products is in fact crucial because physical quantities must ultimately depend on $\eta=N\mathcal{V}_{X}/\mathcal{V}$, with $\mathcal{V}_{X}/\mathcal{V}$ coming either from Pauli scatterings or from Coulomb scatterings — with possibly an additional factor $N$, if we look for something extensive. However, as the scalar products of $N$-exciton states are not physical quantities, they can very well contain superextensive terms in $N^{p}\eta^{n}$ which ultimately disappear from the final expressions of the physical quantities. To handle these factors $N$ properly — and their possible cancellations — is thus crucial. In previous works [1,5], we have calculated the simplest of these scalar products of $N$-exciton states, namely $\langle v|B_{0}^{N}B_{0}^{{\dagger}N}|v\rangle$. 
We found it equal to $N!$, as for exact bosons, multiplied by a corrective factor $F_{N}$ which comes from the fact that excitons are composite bosons, $$\langle v|B_{0}^{N}\,B_{0}^{{\dagger}N}|v\rangle=N!\,F_{N}\ .$$ (14) This $F_{N}$ factor is actually superextensive, since it behaves as $e^{-N\,O(\eta)}$ (see ref. [5]). In large enough samples, $N\eta$ can be extremely large even for $\eta$ small, which makes $F_{N}$ exponentially small. In physical quantities however, $F_{N}$ never appears alone, but through ratios like $F_{N-p}/F_{N}$ which actually read as $1+O(\eta)$ for $p\ll N$. This restores the expected $\eta$ dependence of these physical quantities. The present paper in fact deals with determining the interplay between the possible factors $N$ and the various $\lambda$’s which appear in scalar products of $N$-exciton states. These Pauli scatterings $\lambda$ being the original part of our many-body theory for interacting composite bosons, the understanding of this interplay is actually fundamental for mastering many-body effects between excitons at any order in $\eta=N\mathcal{V}_{X}/\mathcal{V}$. This will in particular allow us to cleanly show the cancellation of superextensive terms which can possibly appear in the intermediate stages of the theory [7]. This paper can appear as somewhat formal. It however constitutes one very important piece of this new many-body theory, because all physical effects between $N$ interacting excitons ultimately end with calculating such scalar products. Problems involving two excitons only [7] are in fact rather simple to solve because they only need the scalar product of two-exciton states given in eq. (13). The real challenging difficulty which actually remains in order to handle many-body effects between $N$ excitons at any order in $\eta$ is to produce the equivalent of eq. (13) for large $N$. In usual many-body effects, Feynman diagrams [8] have proved to be quite convenient for understanding the physics of interacting fermions or bosons. We can expect the introduction of diagrams to also be quite convenient for understanding the physics of interacting composite bosons. It is however clear that diagrams representing carrier exchange between $N$ excitons have to be conceptually new: into them must enter the Pauli scatterings which take care of the departure from boson statistics. As the fermion or boson statistics is included in the first lines of the usual many-body theories, the Pauli scatterings do not have their equivalent in Feynman diagrams. Another important part of the present paper is thus to present these new “Pauli diagrams” and to derive some of their specific rules. As we will show, these Pauli diagrams are in fact rather tricky because they can look rather different although they represent exactly the same quantity. To understand why these different diagrams are indeed equivalent is actually crucial for mastering these Pauli diagrams. This is the subject of the last section of this paper. It goes through the introduction of “exchange skeletons” which are the basic quantities for carrier exchanges between more than two excitons. Their appearance is physically reasonable because Pauli exclusion is $N$-body “at once”, so that, when it plays a role, it in fact correlates all the carriers of the involved excitons through a unique process, even if this process can be decomposed into exchanges between two excitons only, as in the Pauli scatterings $\lambda_{mnij}$. 
From a technical point of view, it is of course possible to calculate the scalar products of $N$-exciton states, just through blind algebra based on eqs. (1,2), and to get the right answer. However, in order to understand the appearance of the extra factors $N$ which go in front of the ones in $N\lambda\simeq N\mathcal{V}_{X}/\mathcal{V}=\eta$ and which are crucial for ultimately removing superextensive terms from physical quantities, it is in fact convenient to introduce the concept of “excitons dressed by a sea of excitons”, because these extra factors $N$ are physically linked to the underlying bosonic character of the excitons which is enhanced by the presence of an exciton sea. We will show that these extra factors $N$ are linked to the topology of the diagrammatic representation of these scalar products, which appears as “disconnected” when these extra $N$’s exist. This is after all not very astonishing because disconnected Feynman diagrams are known to also generate superextensive terms. Let us introduce the exciton $i$ dressed by a sea of $N$ excitons $0$ as $$|\psi_{i}^{(N)}\rangle=\frac{B_{0}^{N}B_{0}^{{\dagger}N}}{\langle v|B_{0}^{N}B_{0}^{{\dagger}N}|v\rangle}\ B_{i}^{\dagger}|v\rangle\ .$$ (15) The denominator $\langle v|B_{0}^{N}B_{0}^{{\dagger}N}|v\rangle$ is a normalization factor which makes the operator in front of $B_{i}^{\dagger}$ appear as an identity in the absence of Pauli interactions with the exciton sea. Indeed, we can check that the vacuum state, dressed in the same way as $$|\psi^{(N)}\rangle=\frac{B_{0}^{N}B_{0}^{{\dagger}N}}{\langle v|B_{0}^{N}B_{0}^{{\dagger}N}|v\rangle}\ |v\rangle\ ,$$ (16) is just $|v\rangle$, as expected because no interaction can exist with the vacuum: $B_{0}^{N}B_{0}^{{\dagger}N}|v\rangle$, having as many destruction as creation operators, is necessarily proportional to $|v\rangle$, with a coefficient precisely equal to $\langle v|B_{0}^{N}B_{0}^{{\dagger}N}|v\rangle$. By contrast, subtle Pauli interactions take place between the exciton sea and an additional exciton $i$. As the dressed exciton $i$ contains one more $B^{\dagger}$ than the number of $B$’s, it is essentially a one-pair state. It can be written either in terms of free electrons and holes, or better in terms of one-exciton states. Since these states are the one-pair eigenstates of the semiconductor Hamiltonian $H$, they obey the closure relation $1=\sum_{m}B_{m}^{\dagger}|v\rangle\langle v|B_{m}$, so that $|\psi_{i}^{(N)}\rangle$ can be written as $$|\psi_{i}^{(N)}\rangle=\sum_{m}A_{N}(m,i)\,B_{m}^{\dagger}|v\rangle\ ,$$ (17) $$A_{N}(m,i)=\frac{\langle v|B_{m}\,B_{0}^{N}\,B_{0}^{{\dagger}N}\,B_{i}^{\dagger}|v\rangle}{\langle v|B_{0}^{N}\,B_{0}^{{\dagger}N}|v\rangle}\ .$$ (18) This decomposition of $|\psi_{i}^{(N)}\rangle$ over one-exciton states brings in the scalar product of $(N+1)$-exciton states with $N$ of them in the exciton sea. As the physics which controls the extra factors $N$ in these scalar products is actually linked to the underlying bosonic character of the excitons, let us first consider boson-excitons in order to see how a sea of $N$ boson-excitons $0$ affects them. 2 Boson-excitons dressed by a sea of excitons Instead of eq. (1), the commutation rule for boson-excitons is $[\bar{B}_{m},\bar{B}_{i}^{\dagger}]=\delta_{mi}$, so that the deviation-from-boson operator $D_{mi}$ for boson-excitons is zero, as are the Pauli scatterings $\lambda_{mnij}$. 
From this boson commutator, we get by induction $$[\bar{B}_{0}^{N},\bar{B}_{i}^{\dagger}]=\bar{B}_{0}^{N-1}[\bar{B}_{0},\bar{B}_{i}^{\dagger}]+[\bar{B}_{0}^{N-1},\bar{B}_{i}^{\dagger}]\bar{B}_{0}=N\delta_{0i}\,\bar{B}_{0}^{N-1}\ .$$ (19) It follows that $\bar{B}_{0}^{N}\bar{B}_{0}^{{\dagger}N}|v\rangle=N!\,|v\rangle$, which shows that the normalization factor $F_{N}$ is just 1 for boson-excitons, while $|\bar{\psi}_{0}^{(N)}\rangle=(N+1)\bar{B}_{0}^{\dagger}|v\rangle$ and $|\bar{\psi}_{i\neq 0}^{(N)}\rangle=\bar{B}_{i}^{\dagger}|v\rangle$. We thus do have $$|\bar{\psi}_{i}^{(N)}\rangle=(N\delta_{i0}+1)\,\bar{B}_{i}^{\dagger}|v\rangle\ .$$ (20) The factor $N$ which appears in this equation is physically linked to the well known bosonic enhancement [9]. The memory of such an effect must a priori exist for composite bosons, such as the excitons. However, subtle changes are expected due to their underlying fermionic character. Let us now see how this bosonic enhancement, obvious for boson-excitons, does appear for exact excitons. 3 Exact excitons dressed by a sea of excitons The commutation rule for exact excitons is given in eq. (1). By taking another commutation, we generate eq. (2) which defines the Pauli scattering $\lambda_{mnij}$. From it, we easily get by induction $$[D_{mi},B_{0}^{{\dagger}N}]=2N\sum_{n}\lambda_{mn0i}\,B_{n}^{\dagger}B_{0}^{{\dagger}N-1}\ ,$$ (21) its conjugate leading to $$[B_{0}^{N},D_{mi}]=2N\sum_{j}\lambda_{m0ji}\,B_{0}^{N-1}B_{j}\ ,$$ (22) since $D_{mi}^{\dagger}=D_{im}$, while $\lambda_{mnij}^{\ast}=\lambda_{ijmn}$. Equation (21) allows us to generalize eq. (1) as $$[B_{m},B_{0}^{{\dagger}N}]=NB_{0}^{{\dagger}N-1}(\delta_{m0}-D_{m0})-N(N-1)\sum_{n}\lambda_{mn00}B_{n}^{\dagger}B_{0}^{{\dagger}N-2}\ ,$$ (23) its conjugate leading to $$[B_{0}^{N},B_{i}^{\dagger}]=N(\delta_{0i}-D_{0i})B_{0}^{N-1}-N(N-1)\sum_{j}\lambda_{00ij}B_{j}B_{0}^{N-2}\ .$$ (24) (We can note that eq. (19) for bosons just follows from eq. (24), since the deviation-from-boson operator $D_{mi}$ and the Pauli scattering $\lambda_{mnij}$ are equal to zero in the case of boson-excitons). In order to grasp the bosonic enhancement for exact excitons, let us start with the “best case” for such an enhancement, namely an exciton $0$ dressed by a sea of $N$ excitons $0$. 3.1 Exciton $0$ dressed by $N$ excitons $0$ According to eq. (17), this dressed exciton can be written as $$|\psi_{0}^{(N)}\rangle=(N+1)\sum_{m}\zeta_{N}(m)B_{m}^{\dagger}|v\rangle\ ,$$ (25) in which we have set $$\zeta_{N}(m)=\frac{A_{N}(m,0)}{N+1}=\frac{\langle v|B_{0}^{N}B_{m}B_{0}^{{\dagger}N+1}|v\rangle}{(N+1)!F_{N}}\ .$$ (26) This $\zeta_{N}(m)$, which is just $F_{N+1}/F_{N}\simeq 1+O(\eta)$ for $m=0$, will appear to be a quite useful quantity in the following. To calculate it, we rewrite $B_{m}B_{0}^{{\dagger}N+1}$ according to eq. (23). Since $D_{m0}|v\rangle=0$, which follows from eq. (1) applied to $|v\rangle$, this readily gives the recursion relation between the $\zeta_{N}(m)$’s as $$\zeta_{N}(m)=\delta_{m0}-\frac{F_{N-1}}{F_{N}}\,N\sum_{n}\lambda_{mn00}\,\zeta_{N-1}^{\ast}(n)\ .$$ (27) Its diagrammatic representation is shown in fig. 2a as well as its iteration (fig. 2b). 
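As a simple check on eqs. (26,27), one can work out the smallest nontrivial case by hand. Eq. (13) gives $\langle v|B_{0}^{2}B_{0}^{{\dagger}2}|v\rangle=2(1-\lambda_{0000})$, i.e., $F_{2}=1-\lambda_{0000}$ while $F_{0}=F_{1}=1$, and also $\langle v|B_{0}B_{m}B_{0}^{{\dagger}2}|v\rangle=2(\delta_{m0}-\lambda_{m000})$, so that eq. (26) leads to $$\zeta_{1}(m)=\delta_{m0}-\lambda_{m000}\ ,$$ in agreement with the recursion relation (27) taken for $N=1$, since $\zeta_{0}(n)=\delta_{n0}$. 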
Its solution is $$\zeta_{N}(m)=\sum_{p=0}^{N}(-1)^{p}\frac{F_{N-p}}{F_{N}}\,\frac{N!}{(N-p)!}\,z^{(p)}(m,0)\ ,$$ (28) with $z^{(0)}(m,0)=\delta_{m0}$, while $$z^{(p)}(m,0)=\sum_{n}\lambda_{mn00}\,\left[z^{(p-1)}(n,0)\right]^{\ast}\ .$$ (29) $z^{(p)}(m,0)$ can be represented by a diagram with $(p+1)$ lines, the lowest one being $(m,0)$, while the $p$ other lines are $(0,0)$. These lines are connected by $p$ Pauli scatterings which are in zigzag, alternatively right, left, right…(see fig. 2b). For $p=1$, $z^{(p)}(m,0)$ is just $\lambda_{m000}$, while for $p=2$, it reads $\sum_{n}\lambda_{mn00}\lambda_{00n0}$ and so on… Fig. 2c also shows $\zeta_{N}^{\ast}(i)$, easy to obtain from $\zeta_{N}(m)$ by noting that $\lambda_{mnij}^{\ast}=\lambda_{ijmn}$, so that $\zeta_{N}^{\ast}(i)$ is just obtained from $\zeta_{N}(i)$ by a right-left symmetry: In $\zeta_{N}^{\ast}(i)$, the zigzag in fact appears as left, right, left,… Since we are ultimately interested in possible extra factors $N$, it can be of interest to understand the appearance of $N$’s in $\zeta_{N}(m)$. If we forget about the composite nature of the exciton, i. e., if we drop all carrier exchanges with the exciton sea, the electron and hole of the exciton $0$ are tied forever, as for boson-excitons, so that we should have for $|\psi_{0}^{(N)}\rangle$ the same result as the one for boson-excitons, namely $|\psi_{0}^{(N)}\rangle\simeq(N+1)B_{0}^{\dagger}|v\rangle$. This leads to $\zeta_{N}(m)\simeq\delta_{m0}$ at lowest order in $\lambda$. The composite exciton $0$ can however exchange its electron or its hole with one sea exciton to become an $m$ exciton. Since there are $N$ possible excitons in the sea for such an exchange, the first order term in exchange scattering must appear with a factor $N$. Another exciton, among the $(N-1)$ left in the sea, can also participate in these carrier exchanges, so that the second order term in Pauli scattering must appear with an $N(N-1)$ prefactor; and so on… From this iteration, we thus conclude that $\zeta_{N}(m)$ contains the same number of factors $N$ as the number of $\lambda$’s. Since, for $0$ and $m$ being bound states, these $\lambda$’s are in $\mathcal{V}_{X}/\mathcal{V}$, while $F_{N-p}/F_{N}$ reads as an expansion in $\eta$ (see ref. [5]), we thus find that, in the large $N$ limit, $\zeta_{N}(m)$ can be written as an expansion in $\eta$, without any extra factor $N$ in front. This thus shows that $|\psi_{0}^{(N)}\rangle$ contains the same bosonic enhancement factor $(N+1)$ as the one of dressed boson-excitons $|\bar{\psi}_{0}^{(N)}\rangle$. Let us however stress that the relative weight of the $B_{0}^{\dagger}|v\rangle$ state in $|\psi_{0}^{(N)}\rangle$, namely $\zeta_{N}(0)$, which is exactly 1 in the case of boson-excitons, is somewhat smaller than 1 due to possible carrier exchanges with the exciton sea. From the iteration of $\zeta_{N}(0)$, we find that this weight reads $\zeta_{N}(0)=1-(F_{N-1}/F_{N})N\lambda_{0000}+\cdots$, which is nothing but $F_{N+1}/F_{N}$ as can be directly seen from eq. (26). To compensate for this decrease of the $B_{0}^{\dagger}|v\rangle$ state weight, $|\psi_{0}^{(N)}\rangle$ has non-zero components on the other exciton states $B_{m\neq 0}^{\dagger}|v\rangle$, in contrast with the boson-exciton case. 
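To make this explicit, eqs. (25–28) give, at first order in the Pauli scatterings, $$|\psi_{0}^{(N)}\rangle\simeq(N+1)\left[\frac{F_{N+1}}{F_{N}}\,B_{0}^{\dagger}|v\rangle-\frac{F_{N-1}}{F_{N}}\,N\sum_{m\neq 0}\lambda_{m000}\,B_{m}^{\dagger}|v\rangle\right]\ ,$$ the $m\neq 0$ excitons entering with one factor $N$ for one factor $\lambda$, i.e., at order $\eta$. 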
We can however note that, since $\zeta_{N}(m)=0$ for $\mathbf{Q}_{m}\neq\mathbf{Q}_{0}$, due to momentum conservation in Pauli scatterings, the other exciton states making $|\psi_{0}^{(N)}\rangle$ must have the same momentum $\mathbf{Q}_{0}$ as the one of the $0$ excitons. We thus conclude that one exciton $0$ dressed by a sea of $N$ excitons $0$ exhibits the same enhancement factor $(N+1)$ as the one which appears for boson-excitons. This dressed exciton however has additional components on other exciton states which have a momentum equal to the $0$ exciton momentum $\mathbf{Q}_{0}$. The existence of such a bosonic enhancement for the exciton $0$ can appear as somewhat normal because excitons are, after all, not so far from real bosons. We will now show that a similar enhancement, i. e., an additional prefactor $N$, also exists for excitons different from $0$ but having a center of mass momentum equal to $\mathbf{Q}_{0}$. Before showing it from hard algebra, let us physically explain why this has to be expected: From the two possible ways to form two excitons out of two electron-hole pairs, we have shown that $$B_{i}^{\dagger}B_{j}^{\dagger}=-\sum_{mn}\lambda_{mnij}\,B_{m}^{\dagger}B_{n}^{\dagger}\ .$$ (30) $B_{i\neq 0}^{\dagger}B_{0}^{\dagger}$ can thus be written as a sum of an exciton $m$ and an exciton $n$ with $\mathbf{Q}_{m}+\mathbf{Q}_{n}=\mathbf{Q}_{i}+\mathbf{Q}_{0}$ due to momentum conservation included in the Pauli scatterings $\lambda_{mnij}$. This shows that, for an exciton $i$ with $\mathbf{Q}_{i}=\mathbf{Q}_{0}$, $B_{i\neq 0}^{\dagger}B_{0}^{\dagger}$ has a non-zero contribution on $B_{0}^{{\dagger}2}$, so that this $i\neq 0$ exciton, in the presence of other excitons $0$, is partly a $0$ exciton. A bosonic enhancement has thus to exist for any exciton $i$ with $\mathbf{Q}_{i}=\mathbf{Q}_{0}$. 3.2 Exciton $i$ dressed by $N$ excitons $0$ Let us now consider an exciton with arbitrary $i$. We will show that there are essentially two kinds of such excitons, the ones with $\mathbf{Q}_{i}=\mathbf{Q}_{0}$ and the ones with $\mathbf{Q}_{i}\neq\mathbf{Q}_{0}$: Since a $\mathbf{Q}_{i}=\mathbf{Q}_{0}$ exciton can be transformed into an $i=0$ exciton by carrier exchange with the exciton sea, it is clear that the excitons with $\mathbf{Q}_{i}\neq\mathbf{Q}_{0}$ are in fact the only ones definitely different from $0$ excitons; this is why they should be dressed differently. The exciton $i$ dressed by $N$ excitons $0$ reads as eq. (17). From eq. (26), we already know that $A_{N}(0,i)$ is just $(N+1)\zeta_{N}(i)$, so that we are left with determining the scalar product $A_{N}(m,i)$ for $(m,i)\neq 0$. There are many ways to calculate $A_{N}(m,i)$: We can for example start with $[B_{m},B_{i}^{\dagger}]$ given in eq. (1), or with $[B_{m},B_{0}^{{\dagger}N}]$ given in eq. (23), or even with $[B_{0}^{N},B_{i}^{\dagger}]$ given in eq. (24). While these last two commutators lead to essentially equivalent calculations, the first one may appear somewhat better at first, because it does not destroy the intrinsic $(m,i)$ symmetry of the $A_{N}(m,i)$ matrix element. These various ways to calculate $A_{N}(m,i)$ must of course end by giving exactly the same result. However, it turns out that the diagrammatic representations of $A_{N}(m,i)$ these various ways generate look at first rather different. We will, in this section, present the calculation of $A_{N}(m,i)$ which leads to the “nicest” diagrams, i. e., the ones which are the easiest to memorize. 
We leave the discussion of the other diagrammatic representations of $A_{N}(m,i)$ and their equivalences for the last part of this work. 3.2.1 Recursion relation between $A_{N}(m,i)$ and $A_{N-2}(n,i)$ We start with $[B_{m},B_{i}^{\dagger}]$ given in eq. (1). This leads us to write $A_{N}(m,i)$ as $$A_{N}(m,i)=a_{N}(m,i)+\hat{A}_{N}(i,m)\ ,$$ (31) in which we have set $$a_{N}(m,i)=\delta_{mi}-\langle v|B_{0}^{N}\,D_{mi}\,B_{0}^{{\dagger}N}|v\rangle/N!F_{N}\ ,$$ (32) $$\hat{A}_{N}(i,m)=\langle v|B_{0}^{N}B_{i}^{\dagger}B_{m}B_{0}^{{\dagger}N}|v\rangle/N!F_{N}\ .$$ (33) To calculate the matrix element appearing in $a_{N}(m,i)$, we can either use $[D_{mi},B_{0}^{{\dagger}N}]$ given in eq. (21), or $[B_{0}^{N},D_{mi}]$ given in eq. (22). With the first choice, we find $$a_{N}(m,i)=\delta_{mi}-2\frac{F_{N-1}}{F_{N}}\,N\sum_{j}\lambda_{mj0i}\,\zeta_{N-1}^{\ast}(j)\ .$$ (34) Fig. 3a shows the diagrammatic representation of eq. (34), while fig. 3b shows the corresponding expansion of $a_{N}(m,i)$ deduced from the diagrammatic expansion of $\zeta_{N}^{\ast}(i)$ given in fig. 2c. By injecting eq. (28) giving $\zeta_{N}(m)$ into eq. (34), we find $$a_{N}(m,i)=\delta_{mi}+2\sum_{p=1}^{N}(-1)^{p}\,\frac{F_{N-p}}{F_{N}}\,\frac{N!}{(N-p)!}\,z^{(p)}(m,i)\ ,$$ (35) where $z^{(p)}(m,i)$ is such that $$z^{(p)}(m,i)=\sum_{j}\lambda_{mj0i}\,z^{(p-1)\ast}(j,0)\ .$$ (36) As shown in fig. 3b, $z^{(p)}(m,i)$ is a zigzag diagram like $z^{(p)}(m,0)$, with the lowest line $(m,0)$ replaced by $(m,i)$. If we now turn to $\hat{A}_{N}(i,m)$, there are a priori two ways to calculate it: Either we use $[B_{m},B_{0}^{{\dagger}N}]$, or we use $[B_{0}^{N},B_{i}^{\dagger}]$. However, if we want to write $\hat{A}_{N}(i,m)$ in terms of $A_{N-2}(n,i)$, we must keep $B_{i}^{\dagger}$ so that $[B_{0}^{N},B_{i}^{\dagger}]$ is not appropriate. Equation (23) then leads to $$\hat{A}_{N}(i,m)=\frac{F_{N-1}}{F_{N}}\,N\,\delta_{m0}\,\zeta_{N-1}^{\ast}(i)-\frac{N(N-1)}{N!F_{N}}\sum_{j}\lambda_{mj00}\,\langle v|B_{0}^{N}B_{i}^{\dagger}B_{j}^{\dagger}B_{0}^{{\dagger}N-2}|v\rangle\ .$$ (37) The above matrix element can be calculated either with $[B_{0}^{N},B_{j}^{\dagger}]$ or with $[B_{0}^{N},B_{i}^{\dagger}]$. From the first commutator — which is the one which allows us to keep $B_{i}^{\dagger}$ — we get $$\hat{A}_{N}(i,m)=N\,b_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,N(N-1)\sum_{nj}\lambda_{mj00}\lambda_{00nj}\,A_{N-2}(n,i)\ ,$$ (38) in which we have set $$b_{N}(m,i)=\frac{F_{N-1}}{F_{N}}\,\delta_{m0}\,\zeta_{N-1}^{\ast}(i)-\frac{F_{N-2}}{F_{N}}\,(N-1)\,\lambda_{m000}\,\zeta_{N-2}^{\ast}(i)\ .$$ (39) By using the expansion of $\zeta_{N}(m)$ given in eq. (28), this equation leads us to write $b_{N}(m,i)$ as $$\displaystyle b_{N}(m,i)=\frac{F_{N-1}}{F_{N}}\,\delta_{m0}\,\delta_{0i}+\sum_{p=1}^{N-1}(-1)^{p}\,\frac{(N-1)!}{(N-1-p)!}\,\frac{F_{N-1-p}}{F_{N}}$$ $$\displaystyle\times\left[z^{(0)}(m,0)\,z^{(p)\ast}(i,0)+z^{(1)}(m,0)\,z^{(p-1)\ast}(i,0)\right]\ .$$ (40) The diagrammatic representation of eq. (39) is shown in fig. 4a. From it and the diagrams of fig. 2c for $\zeta_{N}^{\ast}(i)$, we obtain the Pauli expansion of $b_{N}(m,i)$ shown in fig. 4b. It is just the diagrammatic representation of eq. (40). We see that $b_{N}(m,i)$ is made of diagrams which can be cut into two pieces. 
We also see that, while $a_{N}(m,i)$ differs from 0 for $\mathbf{Q}_{m}=\mathbf{Q}_{i}$ only, due to momentum conservation included in the Pauli scatterings, we must have $\mathbf{Q}_{m}=\mathbf{Q}_{i}=\mathbf{Q}_{0}$ to have $b_{N}(m,i)\neq 0$, since $\zeta_{N}(m)$ is 0 for $\mathbf{Q}_{m}\neq\mathbf{Q}_{0}$, as previously shown. From eqs. (31) and (38), we thus find that $A_{N}(m,i)$ obeys the recursion relation $$A_{N}(m,i)=a_{N}(m,i)+N\,b_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,N(N-1)\sum_{nj}\lambda_{mj00}\lambda_{00nj}\,A_{N-2}(n,i)\ ,$$ (41) with $b_{N}(m,i)=0$ if $\mathbf{Q}_{m}=\mathbf{Q}_{i}\neq\mathbf{Q}_{0}$. 3.2.2 Determination of $A_{N}(m,i)$ using $A_{N-2}(n,i)$ From the fact that we just need to have $\mathbf{Q}_{m}=\mathbf{Q}_{i}$ for $a_{N}(m,i)\neq 0$, while $b_{N}(m,i)\neq 0$ imposes $\mathbf{Q}_{m}=\mathbf{Q}_{i}=\mathbf{Q}_{0}$, we are led to divide $A_{N}(m,i)$ into a contribution which exists whatever the $i$ exciton momentum is and a contribution which only exists when $\mathbf{Q}_{i}$ is equal to the sea exciton momentum $\mathbf{Q}_{0}$. This gives $$A_{N}(m,i)=\alpha_{N}(m,i)+N\,\beta_{N}(m,i)\ ,$$ (42) where $\alpha_{N}(m,i)$ and $\beta_{N}(m,i)$ obey the two recursion relations $$\alpha_{N}(m,i)=a_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,N(N-1)\sum_{nj}\lambda_{mj00}\lambda_{00nj}\,\alpha_{N-2}(n,i)\ ,$$ (43) $$\beta_{N}(m,i)=b_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,(N-1)(N-2)\sum_{nj}\lambda_{mj00}\lambda_{00nj}\,\beta_{N-2}(n,i)\ .$$ (44) a) Part of $A_{N}(m,i)$ which exists whatever $\mathbf{Q}_{i}(=\mathbf{Q}_{m})$ is The part of $A_{N}(m,i)$ which exists for any exciton $i$ is $\alpha_{N}(m,i)$. The diagrammatic representation of its recursion relation (43) is shown in fig. 5a, as well as its iteration (fig. 5b). If, in it, we insert the diagrammatic representation of $a_{N}(m,i)$ given in fig. 3b, we end with the diagrammatic representation of $\alpha_{N}(m,i)$ shown in fig. 5c. Note that we have used $\lambda_{mn0i}=\lambda_{mni0}$ in order to get rid of the factor 2 appearing in $a_{N}(m,i)$. Using eq. (35) for $a_{N}(m,i)$, it is easy to check that the solution of eq. (43) reads $$\alpha_{N}(m,i)=\sum_{p=0}^{N}(-1)^{p}\,\frac{F_{N-p}}{F_{N}}\,\frac{N!}{(N-p)!}\,Z^{(p)}(m,i)\ ,$$ (45) where $Z^{(p)}(m,i)$ obeys the recursion relation $$Z^{(p)}(m,i)=\hat{z}^{(p)}(m,i)+\sum_{nj}\lambda_{mj00}\,\lambda_{00nj}\,Z^{(p-2)}(n,i)\ ,$$ (46) with $\hat{z}^{(0)}(m,i)=\delta_{mi}$, while $\hat{z}^{(p\neq 0)}(m,i)=2z^{(p)}(m,i)$. In agreement with fig. 5c, this leads us to represent $Z^{(p)}(m,i)$ as a sum of zigzag diagrams with $p$ Pauli scatterings, located alternatively right, left, right,…, the index $m$ being always at the left bottom, while $i$ can be at all possible places on the right. $Z^{(p)}(m,i)$ thus contains $(p+1)$ diagrams which reduce to one, namely $Z^{(0)}(m,i)=\delta_{mi}$, when $p=0$. From fig. 5c, we also see that $\alpha_{N}(m,i)$ contains as many $N$’s as $\lambda$’s so that it ultimately depends on $(N,\lambda)$’s through $\eta$. b) Part of $A_{N}(m,i)$ which exists for $\mathbf{Q}_{i}(=\mathbf{Q}_{m})=\mathbf{Q}_{0}$ only The part of $A_{N}(m,i)$ which only exists when the $i$ and $m$ excitons have the same momentum as the sea excitons is $N\,\beta_{N}(m,i)$. The diagrammatic representation of the recursion relation (44) for $\beta_{N}$ is shown in fig. 6a, as well as its iteration (fig. 6b). Using eq. (40) for $b_{N}(m,i)$ and eq. 
(29), it is easy to check that the solution of the recursion relation (44) reads $$\beta_{N}(m,i)=\sum_{p=0}^{N-1}(-1)^{p}\,\frac{F_{N-1-p}}{F_{N}}\,\frac{(N-1)!}{(N-1-p)!}\,\sum_{q=0}^{p}z^{(q)}(m,0)\,z^{(p-q)\ast}(i,0)\ .$$ (47) This is exactly what we get if, in the expansion of $\beta_{N}(m,i)$ in terms of $b_{N}(n,i)$ shown in fig. 6b, we insert the expansion of $b_{N}(n,i)$ in terms of Pauli scatterings shown in fig. 4b (see fig. 6c): The diagrams making $\beta_{N}(m,i)$ are thus made of two pieces, in agreement with eq. (47). We also see that $\beta_{N}(m,i)$ contains as many $N$’s as $\lambda$’s so that $\beta_{N}(m,i)$, like $\alpha_{N}(m,i)$, is an $\eta$ function. c) $N$ dependence of $A_{N}(m,i)$ If we now come back to the expression (41) for $A_{N}(m,i)$, we see that when $\beta_{N}(m,i)=0$, i. e., when $\mathbf{Q}_{m}=\mathbf{Q}_{i}\neq\mathbf{Q}_{0}$, the $N$’s in $A_{N}(m,i)$ simply appear through products $N\lambda$. By contrast, $A_{N}(m,i)$ contains an extra prefactor $N$ when $\beta_{N}(m,i)\neq 0$, i. e., when $\mathbf{Q}_{m}=\mathbf{Q}_{i}=\mathbf{Q}_{0}$: This extra $N$ is the memory of the bosonic enhancement found for the dressed exciton $i=0$. As already explained above, this bosonic enhancement exists not only for the exciton $i=0$, but also for any exciton which can be transformed into a $0$ exciton by Pauli scatterings with the sea excitons. From a mathematical point of view, this extra $N$ is linked to the topology of the diagrams representing $A_{N}(m,i)$. As in the case of the well known Feynman diagrams for which superextensive terms are linked to disconnected diagrams, we here see that an extra factor $N$ appears in the part of $A_{N}(m,i)$ corresponding to diagrams which are made of two pieces. To conclude, we can say that the procedure we have used to calculate $A_{N}(m,i)$ led us to represent this scalar product by Pauli diagrams which are actually quite simple: The part of $A_{N}(m,i)$ which exists for any $\mathbf{Q}_{m}=\mathbf{Q}_{i}$ is made of all connected diagrams with $m$ at the left bottom and $i$ at all possible places on the right, the exciton lines being connected by Pauli scatterings put in zigzag right, left, right…(see fig. 5c). $A_{N}(m,i)$ has an additional part when the $m$ and $i$ excitons have a momentum equal to the sea exciton momentum $\mathbf{Q}_{0}$. This additional part is made of all possible Pauli diagrams which can be cut into two pieces, $m$ staying at the left bottom of one piece, while $i$ stays at the right bottom of the other piece, the exciton lines being connected by Pauli scatterings in zigzag right, left, right…for the $m$ piece, and left, right, left…for the $i$ piece (see fig. 6c). As a direct consequence of the topology of these disconnected diagrams, an extra factor $N$ appears in this part of $A_{N}(m,i)$. This factor $N$ is physically linked to the well known bosonic enhancement which, for composite excitons, exists not only for an exciton identical to a sea exciton, but also for any exciton which can be transformed into a sea exciton by Pauli scatterings with the sea. Although this result for the scalar product of $(N+1)$-exciton states, with $N$ of them in the same state $0$, is nicely simple at any order in Pauli interaction, it does not leave us completely happy. 
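A simple limiting case may help in checking eqs. (42,45,47): if all the Pauli scatterings are set equal to zero, so that $F_{N}=1$ and $z^{(p\neq 0)}=0$, only the $p=0$ terms survive, giving $\alpha_{N}(m,i)=\delta_{mi}$ and $\beta_{N}(m,i)=\delta_{m0}\,\delta_{i0}$, i.e., $$A_{N}(m,i)=\delta_{mi}+N\,\delta_{m0}\,\delta_{i0}\ ,$$ which, once inserted into eq. (17), gives back the boson-exciton result (20). 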
Indeed, while in the diagrams which exist for $\mathbf{Q}_{i}=\mathbf{Q}_{m}=\mathbf{Q}_{0}$, the $m$ and $i$ indices play similar roles, their roles in the diagrams which exist even if $\mathbf{Q}_{i}\neq\mathbf{Q}_{0}$ are dissymmetric, which is not at all satisfactory. This dissymmetry can be traced back to the way we calculated $A_{N}(m,i)$. It is clear that equivalences between Pauli diagrams have to exist in order to restore the intrinsic $(m,i)$ symmetry of $A_{N}(m,i)$. In the last part of this work, we are going to discuss some of these equivalences between Pauli diagrams. However, the reader, not as picky as us, may just drop this last part since, after all, the quite simple, although dissymmetric, Pauli diagrams obtained above are enough to get the correct answer for $A_{N}(m,i)$ at any order in the Pauli interactions. 4 Equivalence between Pauli diagrams In order to have some ideas about which kinds of Pauli diagrams can be equivalent, let us first derive the other possible diagrammatic representations of $A_{N}(m,i)$. They use the recursion relations between $A_{N}(m,i)$ and $A_{N-2}(m,j)$ or $A_{N-2}(n,j)$, instead of $A_{N-2}(n,i)$. 4.1 Pauli diagrams for $A_{N}(m,i)$ using $A_{N-2}(m,j)$ To get this recursion relation, we must keep $B_{m}$ in the calculation of $\hat{A}_{N}(m,i)$ defined in eq. (33). This leads us to use $[B_{0}^{N},B_{i}^{\dagger}]$ instead of $[B_{m},B_{0}^{{\dagger}N}]$; equation (38) is then replaced by $$\hat{A}_{N}(m,i)=N\,c_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,N(N-1)\,\sum_{nj}\lambda_{nj00}\,\lambda_{00in}\,A_{N-2}(m,j)\ ,$$ (48) in which we have set $$c_{N}(m,i)=\frac{F_{N-1}}{F_{N}}\,\delta_{0i}\,\zeta_{N-1}(m)-\frac{F_{N-2}}{F_{N}}\,(N-1)\lambda_{000i}\,\zeta_{N-2}(m)\ .$$ (49) Using fig. 2b for $\zeta_{N}(m)$, we easily obtain the diagrams for $c_{N}(m,i)$ shown in fig. 7. When compared to $b_{N}(m,i)$, we see that the roles played by $m$ and $i$ are exchanged as well as the relative position of the crosses. Equation (48) leads us to write $A_{N}(m,i)$ as $$A_{N}(m,i)=\overline{\alpha}_{N}(m,i)+N\,\overline{\beta}_{N}(m,i)\ ,$$ (50) where $\overline{\alpha}_{N}(m,i)$ and $\overline{\beta}_{N}(m,i)$ obey the recursion relations $$\overline{\alpha}_{N}(m,i)=a_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,N(N-1)\,\sum_{nj}\lambda_{nj00}\,\lambda_{00in}\,\overline{\alpha}_{N-2}(m,j)\ ,$$ (51) $$\overline{\beta}_{N}(m,i)=c_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,(N-1)(N-2)\,\sum_{nj}\lambda_{nj00}\,\lambda_{00in}\,\overline{\beta}_{N-2}(m,j)\ .$$ (52) Let us first consider $\overline{\beta}_{N}(m,i)$. Like $\beta_{N}(m,i)$, it differs from zero for $\mathbf{Q}_{m}=\mathbf{Q}_{i}=\mathbf{Q}_{0}$ only. Its recursion relation leads us to expand it in terms of $c$’s as shown in fig. 8. If we now replace the $c$’s by their expansion shown in fig. 7, we immediately find that $\overline{\beta}_{N}(m,i)$ is represented by the same Pauli diagrams as the ones for $\beta_{N}(m,i)$, so that $\overline{\beta}_{N}(m,i)=\beta_{N}(m,i)$. This is after all not surprising because, in them, the roles played by $m$ and $i$ are symmetrical. From this result, we immediately conclude that the parts of $A_{N}(m,i)$ which exist even if $\mathbf{Q}_{i}=\mathbf{Q}_{m}\neq\mathbf{Q}_{0}$ have also to be equal, i. e., we must have $\overline{\alpha}_{N}(m,i)=\alpha_{N}(m,i)$. Let us now see how this $\overline{\alpha}_{N}(m,i)$ appears, using eq. (51). 
If we calculate $a_{N}(m,i)$ not with $[D_{mi},B_{0}^{{\dagger}N}]$ but with $[B_{0}^{N},D_{mi}]$, we find that $a_{N}(m,i)$ can be represented, not only by the diagrams of fig. 3b, but also by those of fig. 3c. These diagrams look very similar, except that the crosses are now in zigzag left, right, left… Since these two sets of diagrams (3b) and (3c) represent the same $a_{N}(m,i)$, while they have to be valid for $N=2,3,\cdots$, the relative positions of the crosses have to be unimportant in these Pauli diagrams. We will come back to this equivalence at the end of this part. The iteration of the recursion relation for $\overline{\alpha}_{N}(m,i)$ leads to the diagrams of fig. 9a. If, in them, we insert the diagrams of fig. 3c for $a_{N}(m,i)$, we get the diagrams of fig. 9b. They look like the ones for $\alpha_{N}(m,i)$, except that $i$ now stays at the right bottom while $m$ moves to all possible positions on the left, the zigzag for the Pauli scatterings being now left, right, left… This leads us to write $\overline{\alpha}_{N}(m,i)$ as $$\overline{\alpha}_{N}(m,i)=\sum_{p=0}^{N}(-1)^{p}\,\frac{F_{N-p}}{F_{N}}\,\frac{N!}{(N-p)!}\,\overline{Z}^{(p)}(m,i)\ ,$$ (53) where $\overline{Z}^{(p)}(m,i)$ represents the set of zigzag diagrams of fig. 9b, with $p$ crosses. Since $\overline{\alpha}_{N}(m,i)=\alpha_{N}(m,i)$, we conclude from the validity of their expansions for $N=2,3,\cdots$, that the zigzag diagrams $Z^{(p)}(m,i)$ and $\overline{Z}^{(p)}(m,i)$ must correspond to identical quantities, which is not obvious at first. 4.2 Pauli diagrams for $A_{N}(m,i)$ using $A_{N-2}(n,j)$ To get this recursion relation, we start as for the one between $A_{N}(m,i)$ and $A_{N-2}(n,i)$, but we use $[B_{0}^{N},B_{i}^{\dagger}]$ instead of $[B_{0}^{N},B_{j}^{\dagger}]$ to calculate the matrix element appearing in eq. (37). This leads to $$\hat{A}_{N}(m,i)=N\,d_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,N(N-1)\,\sum_{nj}\lambda_{mj00}\,\lambda_{00ni}\,A_{N-2}(n,j)\ ,$$ (54) in which we have set $$d_{N}(m,i)=\frac{F_{N-1}}{F_{N}}\,\delta_{m0}\,\zeta_{N-1}^{\ast}(i)-\frac{F_{N-2}}{F_{N}}\,(N-1)\,\delta_{0i}\,\sum_{j}\lambda_{mj00}\,\zeta_{N-2}^{\ast}(j)\ .$$ (55) Using the diagrams of fig. 2c for $\zeta_{N}^{\ast}(i)$, it is easy to show that $d_{N}(m,i)$ is represented by the diagrams of fig. 10. Note that they are different from the ones for $b_{N}(m,i)$ and $c_{N}(m,i)$ shown in fig. 4b and fig. 7. This is actually normal because, as seen from eq. (55), $d_{N}(m,i)$ is equal to zero when both $m\neq 0$ and $i\neq 0$, while $b_{N}(m,i)$ and $c_{N}(m,i)$ differ from zero provided that $\mathbf{Q}_{m}(=\mathbf{Q}_{i})$ is equal to $\mathbf{Q}_{0}$. Equation (54) leads us to write $A_{N}(m,i)$ as $$A_{N}(m,i)=\overline{\overline{\alpha}}_{N}(m,i)+N\,\overline{\overline{\beta}}_{N}(m,i)\ ,$$ (56) where $\overline{\overline{\alpha}}_{N}(m,i)$ and $\overline{\overline{\beta}}_{N}(m,i)$ now obey $$\overline{\overline{\alpha}}_{N}(m,i)=a_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,N(N-1)\,\sum_{nj}\lambda_{mj00}\,\lambda_{00ni}\,\overline{\overline{\alpha}}_{N-2}(n,j)\ ,$$ (57) $$\overline{\overline{\beta}}_{N}(m,i)=d_{N}(m,i)+\frac{F_{N-2}}{F_{N}}\,(N-1)(N-2)\,\sum_{nj}\lambda_{mj00}\,\lambda_{00ni}\,\overline{\overline{\beta}}_{N-2}(n,j)\ .$$ (58) Let us start with $\overline{\overline{\beta}}_{N}(m,i)$. Its recursion relation is shown in fig. 11a, as well as its iteration (fig. 11b). (In it, we have used the two equivalent forms of this recursion relation given in fig. 11a). 
If, in these diagrams, we now insert the diagrammatic representation of $d_{N}(m,i)$ shown in fig. 10, with the $m$ part alternatively below and above the $i$ part, we find that $\overline{\overline{\beta}}_{N}(m,i)$ is represented by exactly the same diagrams as the ones for $\beta_{N}(m,i)$, so that $\overline{\overline{\beta}}_{N}(m,i)=\beta_{N}(m,i)$. As a consequence, we must have $\overline{\overline{\alpha}}_{N}(m,i)=\alpha_{N}(m,i)$. Let us now consider the recursion relation between $\overline{\overline{\alpha}}_{N}(m,i)$ and $\overline{\overline{\alpha}}_{N-2}(n,j)$. The iteration of this recursion relation leads to the diagrams of fig. 12a. If in it, we insert the diagrammatic representation of $a_{N}(m,i)$ shown in fig. 3b, we get the diagrams of fig. 12b. We can note that it is not enough to use $\lambda_{0n00}=\lambda_{n000}$ to transform the last third order diagram into the two last third order zigzag diagrams left, right, left of $\alpha_{N}(m,i)$. At fourth order, the situation is even worse, the last fourth order Pauli diagram for $\overline{\overline{\alpha}}_{N}(m,i)$ being totally different from a zigzag diagram. They however have to represent the same quantity because $\overline{\overline{\alpha}}_{N}(m,i)=\alpha_{N}(m,i)$ for any $N$. Let us now identify the underlying reason for the equivalence of Pauli diagrams like the ones of figs. 3b and 3c which represent the same $a_{N}(m,i)$, or the ones of figs. 5c, 9b and 12b which represent the part of the same $A_{N}(m,i)$ which exists even if $\mathbf{Q}_{m}(=\mathbf{Q}_{i})\neq\mathbf{Q}_{0}$. This will help us to understand how these Pauli diagrams really work. 4.3 “Exchange skeletons” All the Pauli diagrams we found in the preceding sections, are made of a certain number of exciton lines connected by Pauli scatterings between two excitons, put in various orders. It is clear that the value of these diagrams can depend on the “in” and “out” exciton states, i. e., the indices which appear at the right and the left of these diagrams, but not on the intermediate exciton states over which sums are taken. Between these “in” and “out” excitons, a lot of carrier exchanges take place through the various Pauli scatterings represented by crosses in the Pauli diagrams. It is actually reasonable to think that these Pauli diagrams have to ultimately read in terms of “exchange skeletons” between the “in” and “out” excitons of these diagrams. 
For $N=2,3,4,$…excitons, these “exchange skeletons” should appear as $$L^{(2)}(m_{1},m_{2};i_{1},i_{2})=\int d\mathbf{r}_{e_{1}}\cdots d\mathbf{r}_{h% _{2}}\phi_{m_{1}}^{\ast}(\mathbf{r}_{e_{1}},\mathbf{r}_{h_{1}})\phi_{m_{2}}^{% \ast}(\mathbf{r}_{e_{2}},\mathbf{r}_{h_{2}})\phi_{i_{1}}(\mathbf{r}_{e_{1}},% \mathbf{r}_{h_{2}})\phi_{i_{2}}(\mathbf{r}_{e_{2}},\mathbf{r}_{h_{1}}),$$ (59) $$\displaystyle L^{(3)}(m_{1},m_{2},m_{3};i_{1},i_{2},i_{3})=\int d\mathbf{r}_{e% _{1}}\cdots d\mathbf{r}_{h_{3}}\phi_{m_{1}}^{\ast}(\mathbf{r}_{e_{1}},\mathbf{% r}_{h_{1}})\phi_{m_{2}}^{\ast}(\mathbf{r}_{e_{2}},\mathbf{r}_{h_{2}})\phi_{m_{% 3}}^{\ast}(\mathbf{r}_{e_{3}},\mathbf{r}_{h_{3}})$$ $$\displaystyle\times\phi_{i_{1}}(\mathbf{r}_{e_{1}},\mathbf{r}_{h_{2}})\phi_{i_% {2}}(\mathbf{r}_{e_{3}},\mathbf{r}_{h_{1}})\phi_{i_{3}}(\mathbf{r}_{e_{2}},% \mathbf{r}_{h_{3}}),$$ (60) $$\displaystyle L^{(4)}(m_{1},m_{2},m_{3},m_{4};i_{1},i_{2},i_{3},i_{4})=\int d% \mathbf{r}_{e_{1}}\cdots d\mathbf{r}_{h_{4}}\phi_{m_{1}}^{\ast}(\mathbf{r}_{e_% {1}},\mathbf{r}_{h_{1}})\phi_{m_{2}}^{\ast}(\mathbf{r}_{e_{2}},\mathbf{r}_{h_{% 2}})$$ $$\displaystyle\times\phi_{m_{3}}^{\ast}(\mathbf{r}_{e_{3}},\mathbf{r}_{h_{3}})% \phi_{m_{4}}^{\ast}(\mathbf{r}_{e_{4}},\mathbf{r}_{h_{4}})\phi_{i_{1}}(\mathbf% {r}_{e_{1}},\mathbf{r}_{h_{2}})\phi_{i_{2}}(\mathbf{r}_{e_{3}},\mathbf{r}_{h_{% 1}})\phi_{i_{3}}(\mathbf{r}_{e_{2}},\mathbf{r}_{h_{4}})\phi_{i_{4}}(\mathbf{r}% _{e_{4}},\mathbf{r}_{h_{3}}),$$ (61) and so on…These definitions are actually transparent once we look at the diagrammatic representations of these exchange skeletons shown in fig. 13. There are in fact various equivalent ways to represent these exchange skeletons, as can be seen from fig. 14 in the case of three excitons. These equivalent representations simply say that the $i_{1}$ exciton has the same electron as the $m_{1}$ exciton and the same hole as the $m_{2}$ exciton, so that $i_{1}$ and $m_{1}$ must be connected by an electron line while $i_{1}$ and $m_{2}$ must be connected by a hole line. All possible carrier exchanges between $N$ excitons can be expressed in terms of these exchange skeletons. • In the case of two excitons, the Pauli scattering which appears in the Pauli diagrams, is just $\lambda_{mnij}=\left(L^{(2)}(m,n;i,j)+L^{(2)}(n,m;i,j)\right)/2$ (see fig. 1a). In our many-body theory for interacting composite bosons, this $\lambda_{mnij}$ appears as composed of processes in which the indices $m$ and $n$ are exchanged. This is actually equivalent to say that the excitons exchange their electrons instead of their holes (see fig. 1b). We can also note that, when the two indices on one side are equal, like in $\lambda_{mn00}$, to exchange an electron or to exchange a hole is just the same (see figs. 1d and 1e). • For three excitons, we could think of a carrier exchange between the $(i,j,k)$ and $(m,n,p)$ excitons, different from the one corresponding to $L^{(3)}(m,n,p;i,j,k)$. Let us, for example, consider the one which would read like eq. (60), with $(m_{1},m_{2},m_{3},i_{1})$ respectively replaced by $(m,n,p,i)$, while $\phi_{i_{2}}(\mathbf{r}_{e_{3}},\mathbf{r}_{h_{1}})\phi_{i_{3}}(\mathbf{r}_{e_% {2}},\mathbf{r}_{h_{3}})$ is replaced by $\phi_{j}(\mathbf{r}_{e_{2}},\mathbf{r}_{h_{3}})\phi_{k}(\mathbf{r}_{e_{3}},% \mathbf{r}_{h_{1}})$. This carrier exchange, shown in fig. 15, indeed reads as an exchange skeleton, being simply $L^{(3)}(m,n,p;i,k,j)$. And so on, for any other carrier exchange we could think of. 
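To make the index structure of these exchange skeletons concrete, a small numerical sketch is given below. It evaluates $L^{(2)}$ of eq. (59) on a grid, using toy one-dimensional pair functions (a Hermite-Gaussian in the relative coordinate times a Gaussian centre of mass); these functions, the grid, and the state labels are illustrative choices, not the actual exciton wave functions. The only points illustrated are the contraction pattern of eq. (59) and the property used in figs. 1d and 1e, namely that $L^{(2)}(m,n;0,0)=L^{(2)}(n,m;0,0)$.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# toy 1D "exciton" pair functions (illustrative, not the actual wave functions)
x = np.linspace(-6.0, 6.0, 48)
dx = x[1] - x[0]
re, rh = np.meshgrid(x, x, indexing='ij')   # phi arrays are indexed as phi[r_e, r_h]

def phi(m):
    """phi_m(r_e, r_h): Hermite-Gaussian in the relative coordinate, Gaussian centre of mass."""
    rel, com = re - rh, 0.5 * (re + rh)
    c = np.zeros(m + 1); c[m] = 1.0
    f = hermval(rel, c) * np.exp(-0.5 * rel**2 - 0.5 * com**2)
    return f / np.sqrt(np.sum(np.abs(f)**2) * dx**2)   # normalise on the grid

def L2(m1, m2, i1, i2):
    """Exchange skeleton of eq. (59): each "in" exciton keeps one electron and takes the other hole."""
    return np.einsum('ab,cd,ad,cb->',
                     np.conj(phi(m1)), np.conj(phi(m2)), phi(i1), phi(i2)) * dx**4

# with two equal indices on one side, exchanging an electron or a hole gives the same quantity
print(L2(1, 2, 0, 0), L2(2, 1, 0, 0))   # the two values coincide
```

The same einsum pattern extends to $L^{(3)}$ and $L^{(4)}$ of eqs. (60) and (61) by adding coordinate pairs.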
Let us now consider the various diagrams we have found in calculating $A_{N}(m,i)$ and understand why they are indeed equivalent, in the light of these exchange skeletons. 4.4 Pauli diagrams with one exciton only different from 0 We first take the simplest of these Pauli diagrams, namely the zigzag diagram $z^{(p)}(m,0)$ entering $\zeta_{N}(m)$, shown in fig. 2. On its left, this zigzag diagram has $p$ excitons $0$ and one exciton $m$, while on its right, the $(p+1)$ excitons are all $0$ excitons. After summation over the intermediate exciton indices, the final expression of this Pauli diagram must read as an integral of $\phi_{m}^{\ast}(\mathbf{r}_{e_{1}},\mathbf{r}_{h_{1}})\phi_{0}^{\ast}(\mathbf{r}_{e_{2}},\mathbf{r}_{h_{2}})\cdots\phi_{0}^{\ast}(\mathbf{r}_{e_{p+1}},\mathbf{r}_{h_{p+1}})$ multiplied by $(p+1)$ wave functions $\phi_{0}$ with the $(\mathbf{r}_{e_{i}},\mathbf{r}_{h_{i}})$’s mixed in such a way that the integral cannot be cut into two independent integrals (otherwise the Pauli diagram would be topologically disconnected). This is exactly what the exchange skeleton $L^{(p+1)}(m,0,\cdots,0;0,\cdots,0)$ does. The possible permutations of the various $(\mathbf{r}_{e},\mathbf{r}_{h})$’s in the definition of this $L^{(p+1)}$ actually show that the $m$ index in the diagrammatic representation of this exchange skeleton can be in any possible place on the left. This is also true for the $m$ position in the Pauli diagrams with all the indices equal to $0$ except one. Moreover, the relative positions of the crosses in these diagrams are unimportant (see fig. 16). This is easy to show, just by “sliding” the carrier exchanges, as explicitly shown in the case of three excitons (see fig. 17). In this figure, we have also used the fact that the Pauli scatterings reduce to one diagram when the two indices on one side are equal (see figs. 1d and 1e). This possibility of “sliding” a carrier exchange comes, mathematically, from the fact that $\sum_{q}\phi_{q}^{\ast}(\mathbf{r}_{e},\mathbf{r}_{h})\phi_{q}(\mathbf{r}_{e^{\prime}},\mathbf{r}_{h^{\prime}})$ is nothing but $\delta(\mathbf{r}_{e}-\mathbf{r}_{e^{\prime}})\delta(\mathbf{r}_{h}-\mathbf{r}_{h^{\prime}})$. 4.5 Pauli diagrams with one exciton on each side different from 0 There are essentially two kinds of such diagrams: either the two excitons $(m,i)$ different from $0$ have no common carrier, or they have one. Let us start with this second case. 4.5.1 $m$ and $i$ have one common carrier Two different exchange skeletons exist in this case, depending on whether the common carrier is an electron or a hole. They are shown in figs. 18a and 18b. By “sliding” the carrier exchanges as done in fig. 18c, it is easy to identify the set of Pauli diagrams which correspond to the sum of these two exchange skeletons (see fig. 18d). This in particular shows the identity of the Pauli diagrams of fig. 18e which enter the two diagrammatic representations of $a_{N}(m,i)$ shown in figs. 3b and 3c. 4.5.2 $m$ and $i$ do not have a common carrier In this case, the number of different exchange skeletons depends on the number of $0$ excitons involved. • For two $0$ excitons, there is one exchange skeleton only (see fig. 19). By “sliding” the carrier exchanges, we get the two equivalent Pauli diagrams shown in fig. 19. They can actually be deduced from one another just by up/down symmetry, which results from $\lambda_{mnij}=\lambda_{nmji}$. 
• For three $0$ excitons, there are two possible exchange skeletons which are actually related by electron-hole exchange (see figs. 20a and 20b). By sliding the carrier exchanges, we get the two diagrams of fig. 20c, so that, by combining these two exchange skeletons, we find the two Pauli diagrams of fig. 20d. • In the same way, for four $0$ excitons, there are three possible exchange skeletons, two of them being related by electron-hole exchange (see figs. 21a,21b,21c). By sliding the carrier exchanges, it is again possible to find the Pauli diagrams corresponding to these exchange skeletons, as shown in fig. 21. 4.6 Equivalent representations of the diagrams appearing in $A_{N}(m,i)$ By expressing the various Pauli diagrams entering the expansion of $A_{N}(m,i)$ in terms of these exchange skeletons, as shown in figs. 16-21, it is now possible to prove their equivalence directly. Fig. 22 shows the set of transformations which allows us to go from the last third order diagrams of $\overline{\overline{\alpha}}_{N}(m,i)$ to the zigzag Pauli diagrams of $\alpha_{N}(m,i)$, with $i$ at the two upper positions. The transformation of the last fourth order diagram for $\overline{\overline{\alpha}}_{N}(m,i)$ into the two missing zigzag diagrams of $\alpha_{N}(m,i)$ is somewhat more subtle. For the interested reader, let us describe in detail how this can be done. We have reproduced in fig. 23 the last fourth order diagram of $\overline{\overline{\alpha}}_{N}(m,i)$, as it appears in fig. 12b. We see that all Pauli scatterings have two $0$ indices on one side, except $\lambda_{npq0}$. According to figs. 1a,1b, this $\lambda_{npq0}$ can be represented by the sum of an electron exchange plus a hole exchange between the $(q,0)$ excitons. By continuity, we represent each of the other Pauli scatterings which have two $0$ excitons on one side either by fig. 1d or by fig. 1e, the choice between these two equivalent representations being made so as to avoid crossings of electron and hole lines. This leads to the two diagrams of figs. 23b and 23c: they just correspond to exchanging the roles played by the electrons and the holes. We now consider one of these two diagrams, namely the one of fig. 23b. Let us call $0$, $0^{\prime}$, $0^{\prime\prime}$ and $0^{\prime\prime\prime}$ the four identical $0$ excitons on the right, in order to recognize them more easily when we redraw this diagram. To help visualize this redrawing, we have also given names to the various carriers. It is then straightforward to check that the diagram of fig. 23b corresponds to the same carrier exchanges as the ones of fig. 23d. Since the diagram of fig. 23c is the same as the one of fig. 23b, with the electrons replaced by holes, we conclude that twice the ugly diagram of fig. 23a is indeed equal to the two missing zigzag diagrams of $\alpha_{N}(m,i)$ reproduced at the top of this figure. 5 Conclusion In this paper, we have essentially calculated the scalar product of $(N+1)$-exciton states with $N$ of them in the same state $0$. This scalar product is far from trivial due to many-body effects induced by “Pauli scatterings” which originate from the composite nature of excitons. As a result, these scalar products appear as expansions in $\eta=N\mathcal{V}_{X}/\mathcal{V}$, where $\mathcal{V}_{X}$ and $\mathcal{V}$ are the exciton and sample volumes, with possibly some additional factors $N$. 
In order to understand the physical origin of these additional $N$’s — which will ultimately differentiate superextensive from regular terms — we have introduced the concept of exciton dressed by a sea of excitons, $$|\psi_{i}^{(N)}\rangle=\frac{B_{0}^{N}B_{0}^{{\dagger}N}}{\langle v|B_{0}^{N}B% _{0}^{{\dagger}N}|v\rangle}\ B_{i}^{\dagger}|v\rangle\ .$$ In the absence of Pauli interaction between the exciton $i$ and the sea of $N$ excitons $0$, the operator in front of $B_{i}^{\dagger}$ reduces to an identity. Due to Pauli interaction, contributions on other exciton states $B_{m\neq i}^{\dagger}|v\rangle$ appear in $|\psi_{i}^{(N)}\rangle$, which originate from possible carrier exchanges between the $i$ exciton and the sea. We moreover find that a bosonic enhancement, which gives rise to an extra factor $N$, — reasonable for the exciton $i=0$, since after all, excitons are not so far from bosons — also exists when the $i$ exciton can be transformed by carrier exchanges into a sea exciton. This happens for any exciton $i$ having the same center of mass momentum as the sea exciton. In order to understand the carrier exchanges between $N$ excitons, which make the scalar products of $N$-exciton states so tricky, we have introduced “Pauli diagrams”. They read in terms of Pauli scatterings between two excitons. With them, we have shown how to generate a diagrammatic representation of the scalar products of $(N+1)$-exciton states with $N$ of them in the same state $0$, at any order in Pauli interaction. This diagrammatic representation is actually not unique. Although the one we first give, is nicely simple to memorize, more complicated ones, obtained from other procedures to calculate the same scalar product, are equally good in the sense that they lead to the same correct result. In order to understand the equivalence between these various Pauli diagrams, we have introduced “exchange skeletons” which correspond to carrier exchanges between more than two excitons. Their appearance in the scalar products of $N$-exciton states is actually quite reasonable because, even if we can calculate these scalar products in terms of Pauli scatterings between two excitons only, Pauli exclusion is originally $N$-body “at once”: When a new exciton is added, its carriers must be in states different from the ones of all the previous excitons. The Pauli scatterings between two excitons generated by our many-body theory for interacting composite bosons, are actually quite convenient to calculate many-body effects between excitons at any order in the interactions. It is however reasonable to find that a set of such Pauli scatterings, which in fact correspond to carrier exchanges between more than two excitons, finally read in terms of these “exchange skeletons”. The present work is restricted to scalar products of $N$-exciton states in which all of them, except one, are in the same $0$ state. In physical effects involving $N$ excitons, of course enter more complicated scalar products. Such scalar products will be presented in a forthcoming publication. 
The detailed study presented here, is however all the more useful, because it allows to identify the main characteristics of these scalar products, which are actually present in more complicated situations: As an example, in the case of two dressed excitons $(i,j)$, a bosonic enhancement is found not only for $\mathbf{Q}_{i}$ or $\mathbf{Q}_{j}$ equal to $\mathbf{Q}_{0}$, but also for $\mathbf{Q}_{i}+\mathbf{Q}_{j}=2\mathbf{Q}_{0}$, because these $(i,j)$ excitons can transform themselves into two $0$ excitons by carrier exchanges. The corresponding processes are represented by topologically disconnected Pauli diagrams, and extra factors $N$ appear as a signature of this topology. REFERENCES [1] M. Combescot, C. Tanguy, Europhys. Lett. 55, 390 (2001). [2] M. Combescot, O. Betbeder-Matibet, Europhys. Lett. 58, 87 (2002). [3] O. Betbeder-Matibet, M. Combescot, Eur. Phys. J. B 27, 505 (2002). [4] M. Combescot, O. Betbeder-Matibet, Europhys. Lett. 59, 579 (2002). [5] M. Combescot, X. Leyronas, C. Tanguy, Eur. Phys. J. B 31, 17 (2003). [6] O. Betbeder-Matibet, M. Combescot, Eur. Phys. J. B 31, 517 (2003). [7] M. Combescot, O. Betbeder-Matibet, K. Cho, H. Ajiki, Cond-mat/0311387. [8] A.A. Abrikosov, L.P. Gorkov, I.E. Dzyaloshinski, Methods of quantum field theory in statistical physics, Prentice-hall, inc. Englewood cliffs N.J. (1964). [9] C. Cohen-Tannoudji, B. Diu, F. Laloë, Mécanique Quantique, Hermann, Paris (1973).
Split Instability of a Vortex in an Attractive Bose-Einstein Condensate Hiroki Saito    Masahito Ueda Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551, Japan (November 19, 2020) Abstract An attractive Bose-Einstein condensate with a vortex splits into two pieces via the quadrupole dynamical instability, which arises at a weaker strength of interaction than the monopole and the dipole instabilities. The split pieces subsequently unite to restore the original vortex or collapse. pacs: 03.75.Fi, 05.30.Jp, 32.80.Pj, 67.40.Vs Quantized vortices in gaseous Bose-Einstein condensates (BECs) offer a visible hallmark of superfluidity Matthews ; Madison ; Abo , where repulsive interatomic interactions play a crucial role in the vortex stabilization and lattice formation. Attractive BECs, on the other hand, cannot hold vortices in any thermodynamically stable state. A fundamental issue of the decay of a many-particle quantum system may be addressed if a vortex is created in an attractive BEC. Such a state has become possible owing to the development of the Feshbach technique Inouye by which the strength and the sign of interactions can be controlled Cornish ; Roberts01 ; Donley . Suppose that a singly quantized vortex is created in a BEC with repulsive interaction and that the interaction is adiabatically changed from repulsive to attractive. According to previous work Dalfovo , the vortex state remains metastable until a dimensionless strength of interaction $g$ to be defined later reaches a critical value $g_{M}^{\rm cr}(<0)$, and when $|g|$ exceeds $|g_{M}^{\rm cr}|$, the system develops a monopole (breathing-mode) instability and collapses. In this Letter, we show that this conclusion holds only when the system has exact axisymmetry, and that even an infinitesimal symmetry-breaking perturbation induces the quadrupole dynamical instability that appears for $|g|$ smaller than $|g_{M}^{\rm cr}|$. We note that similar dynamical instabilities Recati ; Sinha ; Tsubota initiate vortex nucleation observed by the ENS group Madison . A dynamical instability is also shown to transfer a vortex from one to the other component of a binary BEC system Garcia00 . Here, we show that yet another dynamical instability causes a vortex to split into two pieces that revolve around the center of the trap. Surprisingly, in some parameter regimes, the pieces subsequently unite to restore the original vortex, and this split-merge cycle repeats. We report below the results of our studies on the collapsing dynamics of a vortex in an attractive BEC. We first investigate the Bogoliubov spectrum of a single-vortex state. The single-vortex state is determined so as to minimize the Gross-Pitaevskii (GP) energy functional within the axisymmetric functional space $\psi_{0}=f(r,z)e^{i\phi}$ with $r=(x^{2}+y^{2})^{1/2}$, where we ignore the effect of vortex bending Rosen ; Garcia01 ; bending . In the following analysis, we normalize the length, time, and wave functions in units of $d_{0}\equiv(\hbar/m\omega_{\perp})^{1/2}$, $\omega_{\perp}^{-1}$, and $(N/d_{0}^{3})^{1/2}$, where $\omega_{\perp}$ is the radial trap frequency, and $N$, the number of BEC atoms. 
We obtain the Bogoliubov spectrum by numerically diagonalizing the Bogoliubov-de Gennes equations Edwards $$\displaystyle\left(K+2g|\psi_{0}|^{2}\right)u_{n}+g\psi_{0}^{2}v_{n}$$ $$\displaystyle=$$ $$\displaystyle E_{n}u_{n},$$ (1a) $$\displaystyle\left(K+2g|\psi_{0}|^{2}\right)v_{n}+g\psi_{0}^{*2}u_{n}$$ $$\displaystyle=$$ $$\displaystyle-E_{n}v_{n},$$ (1b) where $K\equiv-\nabla^{2}/2+(r^{2}+\lambda^{2}z^{2})/2-\mu$ with $\lambda\equiv\omega_{z}/\omega_{\perp}$, and $n$ is the index of the eigenmode. Here, $g\equiv 4\pi Na/d_{0}$ characterizes the strength of interaction, where $a$ is the s-wave scattering length. For a vortex state $\psi_{0}\propto e^{i\phi}$, each angular momentum state $u_{n}\propto e^{im\phi}$ is coupled only to $v_{n}\propto e^{i(m-2)\phi}$, and we shall refer to $m$ as the angular momentum of the excitation. We find that there is at least one negative eigenvalue in the $m=0$ mode for any $g<0$ and $\lambda$ even in the presence of a rotating drive. The vortex state with attractive interactions is therefore thermodynamically unstable, and eventually decays into the non-vortex ground state by dissipating its energy and angular momentum. At sufficiently low temperatures in a high-vacuum chamber, however, the thermodynamic instability is irrelevant, since the energy and angular momentum are conserved. In fact, recent experiments Madison have demonstrated that the vortex state in a stationary trap has a lifetime of $\sim 1$ s, which is much longer than the characteristic time scales of the dynamics that we shall discuss below. When the complex eigenvalues emerge in the Bogoliubov spectrum, the amplitude of the corresponding mode grows exponentially in time. As noise is inevitable in experimental situations, such dynamical instabilities are more important than the thermodynamic one at low temperature. Figure 1 shows the real and imaginary parts of the lowest eigenvalues of the $m=-1$ and $3$ excitations in an isotropic trap. The eigenvalues become complex at the critical strength of interaction $g_{Q}^{\rm cr}=-15.06$, showing the onset of the dynamical instability in the quadrupole mode. The imaginary part of the complex eigenvalue is proportional to $\sqrt{g_{Q}^{\rm cr}-g}$ as shown in the inset in Fig. 1. The complex eigenvalues emerge also in the dipole modes, i.e., $m=0$ and $2$, for $g<g_{D}^{\rm cr}=-18.02$. The eigenvalues with other $m$ are real for $g$ larger than the critical value for the monopole (radial-breathing-mode) instability $g_{M}^{\rm cr}=-23.7$. Figure 2 shows the $\lambda$ dependence of $g_{Q}^{\rm cr}$, $g_{D}^{\rm cr}$, $g_{M}^{\rm cr}$, and $g_{\rm nonvortex}^{\rm cr}$ in axi-symmetric traps, where $g_{\rm nonvortex}^{\rm cr}$ is the critical value for the non-vortex state to collapse through the monopole instability. We note that $|g_{M}^{\rm cr}|$ is always larger than $|g_{Q}^{\rm cr}|$ and $|g_{D}^{\rm cr}|$, and hence the latter instabilities arise before the monopole instability sets in. For a trap with $\lambda\gtrsim 0.3$, the quadrupole instability arises before the dipole one, unlike the quasi-1D toroidal trap Rokhsar ; Ueda ; Berman . We also performed numerical diagonalization for 2D systems. Strong confinement in the $z$ direction produces the quasi-2D trap, when $\hbar\omega_{z}$ is much larger than the characteristic energy of the dynamics. The effective strength of interactions in the quasi-2D oblate trap is given by $g^{\rm 2D}=\sqrt{\lambda/(2\pi)}g^{\rm 3D}$ Castin . 
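Since the critical values quoted here are expressed through the dimensionless $g=4\pi Na/d_{0}$, they translate directly into atom numbers for a given trap. The short sketch below does this for illustrative parameters (a $100\,$Hz radial trap and a ${}^{7}$Li-like scattering length of $-27\,a_{0}$, neither of which is taken from this Letter), using the quadrupole critical value $g_{Q}^{\rm cr}=-15.06$ quoted above.

```python
import numpy as np

hbar, a0 = 1.054571e-34, 5.291772e-11      # SI units; a0 is the Bohr radius

# illustrative parameters (not from this Letter): 7Li-like atoms in a 100 Hz radial trap
m_atom = 7.016 * 1.660539e-27              # kg
omega_perp = 2.0 * np.pi * 100.0           # rad/s
a_scatt = -27.0 * a0                       # attractive s-wave scattering length (illustrative)

d0 = np.sqrt(hbar / (m_atom * omega_perp))     # length unit used in the text

def g3d(N):
    """Dimensionless interaction strength g = 4 pi N a / d0."""
    return 4.0 * np.pi * N * a_scatt / d0

# atom number at which g reaches the quadrupole critical value found above
g_Q_cr = -15.06
N_cr = g_Q_cr * d0 / (4.0 * np.pi * a_scatt)
print(d0, N_cr)                            # d0 of a few microns, N_cr of a few thousand atoms

# corresponding quasi-2D value for an oblate trap with lambda = omega_z / omega_perp = 10
lam = 10.0
print(np.sqrt(lam / (2.0 * np.pi)) * g3d(N_cr))
```

With these illustrative numbers the quadrupole instability would set in at roughly $3\times 10^{3}$ atoms.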
In the 2D system, the dynamical instability arises in the quadrupole, dipole, and monopole modes at $g_{Q}^{\rm 2Dcr}=-7.79$, $g_{D}^{\rm 2Dcr}=-11.48$, and $g_{M}^{\rm 2Dcr}=-23.4$. The dependencies of the complex eigenvalues on $g$ are similar to that in Fig. 1. To understand the dynamical instabilities analytically, let us consider the GP action integral in 2D $$S=\int dt\int d{\bf r}\psi^{*}\left(-i\frac{\partial}{\partial t}-\frac{\nabla% ^{2}}{2}+\frac{r^{2}}{2}+\frac{g}{2}|\psi|^{2}\right)\psi.$$ (2) We assume that the state evolution is described by $\psi=\sum_{m}c_{m}(t)\phi_{m}({\bf r})$, where $\phi_{m}({\bf r})$ is assumed to take the form of $\phi_{m}({\bf r})=[r^{|m|}/(\sqrt{\pi|m|!}d^{|m|+1})]\exp[-r^{2}/(2d^{2})+im\phi]$ and $d=[1+g/(8\pi)]^{1/4}$ minimizes the GP energy functional for the $m=1$ state. Substituting this $\psi$ into Eq. (2) and minimizing $S$ with respect to $c_{m}$ yields $$i\dot{c}_{m}=\varepsilon_{m}c_{m}+g\sum_{mm_{1}m_{2}m_{3}}G_{m_{2},m_{3}}^{m,m% _{1}}c_{m_{1}}^{*}c_{m_{2}}c_{m_{3}},$$ (3) where $\varepsilon_{m}\equiv\int d{\bf r}(|\nabla\psi_{m}|^{2}+r^{2}|\psi_{m}|^{2})/2$ and $G^{m_{1},m_{2}}_{m_{3},m_{4}}\equiv\int d{\bf r}\psi_{m_{1}}^{*}\psi_{m_{2}}^{% *}\psi_{m_{3}}\psi_{m_{4}}$. When the BEC exists in the $m=1$ mode, we obtain $c_{1}(t)=e^{-i\mu t}+O(|c_{m\neq 1}|^{2})$ with $\mu=1/d^{2}+d^{2}+g/(4\pi d^{2})$. The linear analysis of Eq. (3) for $\tilde{c}_{m}\equiv e^{i\mu t}c_{m}$ $(m\neq 1)$ yields $$i\dot{\tilde{c}}_{m}=(\varepsilon_{m}-\mu)\tilde{c}_{m}+2G^{m,1}_{m,1}\tilde{c% }_{m}+G^{m,2-m}_{1,1}\tilde{c}_{2-m}^{*}.$$ (4) It follows from this that, for $m=-1$, the eigenfrequencies are given by $A\pm\sqrt{B}$, where $A\equiv[g/(8\pi)-1]/[1+g/(8\pi)]^{1/2}$ and $B\equiv 3+5g/(8\pi)+[1+g/(2\pi)-g^{2}/(32\pi^{2})]/[1+g/(8\pi)]$. We find that $B$ is a monotonically increasing function for $g>-8\pi$, and $B$ becomes negative for $g<g^{\rm cr}\simeq-9.2$, which is in reasonable agreement with $g_{Q}^{\rm 2Dcr}=-7.79$ stated above. We also find that the imaginary part appearing for $g<g^{\rm cr}$ is proportional to $\sqrt{g^{\rm cr}-g}$, in agreement with the inset of Fig. 1. The Bogoliubov analysis described above is valid only if deviations from a stationary state are small. To follow further evolution of the wave function, we must solve the time-dependent GP equation. Since we are studying the growth of small perturbations, high precision is required in the numerical integration, and hence we consider the GP equation in 2D $$i\frac{\partial\psi}{\partial t}=\left[-\frac{1}{2}\nabla^{2}+\frac{1}{2}r^{2}% +g^{\rm 2D}|\psi|^{2}\right]\psi$$ (5) to ensure sufficiently small discretization in the Crank-Nicholson scheme Ruprecht . This situation corresponds to an oblate trap with large $\lambda$. Figures 3 (a)-(f) depict the time evolution of the density and phase profiles with $g^{\rm 2D}=-9$, which is smaller than the critical value for the quadrupole mode $g_{Q}^{\rm 2Dcr}=-7.79$ but larger than that for the dipole mode $g_{D}^{\rm 2Dcr}=-11.48$. A small symmetry-breaking perturbation is added to the initial state to imitate noise in realistic situations. Due to the quadrupole instability, the vortex is first stretched [Fig. 3 (b)], and then splits into two clusters that revolve around the center of the trap [Fig. 3 (d)] with angular velocity $\simeq 0.73\omega_{\perp}$. In the first deformation process the $m=-1$ and $3$ components grow exponentially, and their Lyapunov exponents agree with the imaginary part of the complex eigenvalues. 
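The variational estimate above can be verified directly: evaluating $B(g)$ as written and locating its sign change recovers $g^{\rm cr}\simeq-9.2$. The sketch below does this; the only arbitrary choice is the bracketing interval passed to the root finder.

```python
import numpy as np
from scipy.optimize import brentq

def B(g):
    """B from the variational m = -1 analysis; the eigenfrequencies are A +/- sqrt(B)."""
    s = g / (8.0 * np.pi)
    return 3.0 + 5.0 * s + (1.0 + g / (2.0 * np.pi) - g**2 / (32.0 * np.pi**2)) / (1.0 + s)

def A(g):
    s = g / (8.0 * np.pi)
    return (s - 1.0) / np.sqrt(1.0 + s)

# the instability sets in where B changes sign; the bracket [-15, -5] is an arbitrary choice
g_cr = brentq(B, -15.0, -5.0)
print(g_cr)                        # ~ -9.2, to be compared with the numerical g_Q^2Dcr = -7.79

# below g_cr the eigenfrequencies acquire an imaginary part growing like sqrt(g_cr - g)
for g in (-8.0, -9.2, -10.0, -11.0):
    print(g, A(g) + np.emath.sqrt(B(g)))
```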
Interestingly, the split process is reversible: the two clusters subsequently unite to restore the ring shape [Fig. 3 (f)], and this split-merge process repeats. We numerically checked that no split-merge phenomenon occurs for $g^{\rm 2D}>g_{Q}^{\rm 2Dcr}$, where the system is metastable. The insets in Fig. 3 illustrate the phase plots. At $t=16$, there are three topological defects: the central one exists from the outset, and the other two enter as the vortex splits, in accordance with the fact that the $m=3$ component grows upon the vortex split. The two side vortices cannot be seen in the density plot, and hence they may be called “ghost” vortices Tsubota that carry very little angular momentum. The two clusters in Fig. 3 (d) may be regarded as revolving “solitons” whose phases differ by $\pi$. In fact, at an energy only slightly below that of Fig. 3 (d), there is a low-lying state in which two solitons revolve without changing their shapes. It is interesting to note that this situation is similar to the soliton-train formation observed by the Rice group Strecker , where the modulation (dynamical) instability causes a quasi-1D condensate to split into solitons when the interaction is changed from repulsive to attractive using the Feshbach resonance. We note that similar instabilities split an optical vortex propagating in a nonlinear medium into spiraling solitons Tikhonenko ; Garcia_opt ; Mihalache . This similarity between attractive BECs and optical solitons Stegeman implies that other nonlinear phenomena, such as pattern formation, which has been predicted in attractive BECs Saito , may also be realized in optical systems. When the system becomes too small to be observed by the in situ imaging method due to the attractive interaction, the condensate must be expanded before imaging. Figure 3 (g) shows the expanded image at $t=17$, where the interaction is switched from $g^{\rm 2D}=-9$ to $g_{\rm expand}=50$ and the trapping potential is switched off at $t=16$ [Fig. 3 (d)]. The image shows the interference fringes due to the overlap of the atomic clouds emanating from the two clusters. The wavelength of the interference pattern is proportional to the expansion time. Figure 3 (h) shows the expanded image at $t=18$ with $g_{\rm expand}=0$. Comparing Figs. 3 (g) and (h), we find that the stronger repulsive interaction produces more fringes and bends them around the center. When $|g^{\rm 2D}|$ exceeds $|g_{D}^{\rm 2Dcr}|=11.48$, a dipole instability arises in addition to the quadrupole one. The dipole instability causes atoms to transfer from one cluster to the other, thereby inducing the collapse. Figures 4 (a)-(c) show the collapse process with $g^{\rm 2D}=-11.5$. After the split-merge process repeats a few times, the balance between the two clusters is broken due to the dipole instability. As a consequence, the cluster labeled A grows [Fig. 4 (a)], then B grows [Fig. 4 (b)] like a seesaw, and eventually B absorbs most atoms and collapses [Fig. 4 (c)], where the original topological defect begins to spiral out as indicated by the white arrow in the inset. With a stronger attractive interaction $g^{\rm 2D}=-12$, both clusters collapse immediately after the vortex split as shown in Figs. 4 (d)-(f). In this collapse process, we found the exchange of a vortex-antivortex pair (see the insets). This phenomenon is also seen in the split-merge process with $g^{\rm 2D}=-11.5$, while it is not seen at weaker attractive interactions, say, at $g^{\rm 2D}=-9$. 
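The split dynamics described above follows from propagating Eq. (5). A minimal sketch of such a propagation is given below; it uses a split-step Fourier integrator rather than the Crank-Nicolson scheme used in the text, a Gaussian vortex ansatz (not the true stationary vortex state) with a small random perturbation, and a coarse grid and time step, so it is meant only to show the structure of the calculation, not to reproduce Fig. 3 quantitatively.

```python
import numpy as np

# minimal 2D split-step propagation of Eq. (5); grid, time step and initial state are illustrative
n, L, g2d, dt, nsteps = 128, 16.0, -9.0, 2.0e-3, 8000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
V = 0.5 * (X**2 + Y**2)

# singly quantized vortex ansatz (not the stationary state) plus a tiny symmetry-breaking perturbation
rng = np.random.default_rng(0)
psi = (X + 1j * Y) * np.exp(-0.5 * (X**2 + Y**2))
psi = psi * (1.0 + 1.0e-3 * rng.standard_normal((n, n)))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx**2)            # wave function normalised to unity

kin = np.exp(-0.5j * K2 * dt)                             # kinetic factor for one step (first-order splitting)
for step in range(nsteps):
    psi = np.fft.ifft2(kin * np.fft.fft2(psi))            # kinetic part
    psi *= np.exp(-1j * (V + g2d * np.abs(psi)**2) * dt)  # trap + mean-field part
    if step % 2000 == 0:
        # a growing m = 2 (quadrupole) moment signals the stretching and splitting of the vortex
        q = np.sum((X + 1j * Y)**2 * np.abs(psi)**2) * dx**2
        print(step * dt, abs(q))
```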
Figures 4 (g)-(i) show the collapse where the interaction is switched from $g^{\rm 2D}=0$ to $g^{\rm 2D}=-24<g_{M}^{\rm 2Dcr}$ at $t=0$. The vortex first shrinks due to the monopole instability [(g) $\rightarrow$ (h)], then splits into two clusters due to the quadrupole instability [(h) $\rightarrow$ (i)], and both clusters collapse. When the interaction is switched to an even greater attractive one, a shell structure is formed Saito , which splits into several parts due to multipole instabilities, and each fragment collapses and explodes Donley , producing very complicated collapsing dynamics. In conclusion, we have studied the dynamical instabilities of a quantized vortex in an attractive BEC. The dynamical quadrupole instability spontaneously breaks the axisymmetry and splits a vortex into clusters that revolve around the center of the trap, which then unite to restore the vortex or eventually collapse. The dynamical instabilities presented here play a much larger role than the thermodynamic one at low temperature, and serve as a dominant mechanism for the collapsing dynamics of a rotating condensate: vortices collapse via the dynamical instabilities around the topological defects. This work was supported by a Grant-in-Aid for Scientific Research (Grant No. 11216204) by the Ministry of Education, Science, Sports, and Culture of Japan, and by the Toray Science Foundation. References (1) M. R. Matthews et al., Phys. Rev. Lett. 83, 2498 (1999). (2) K. W. Madison et al., Phys. Rev. Lett. 84, 806 (2000). (3) J. R. Abo-Shaeer et al., Science 292, 476 (2001). (4) S. Inouye et al., Nature 392, 151 (1998). (5) S. L. Cornish et al., Phys. Rev. Lett. 85, 1795 (2000). (6) J. L. Roberts et al., Phys. Rev. Lett. 86, 4211 (2001). (7) E. A. Donley et al., Nature 412, 295 (2001). (8) F. Dalfovo and S. Stringari, Phys. Rev. A 53, 2477 (1996). (9) A. Recati et al., Phys. Rev. Lett. 86, 377 (2001). (10) S. Sinha and Y. Castin, Phys. Rev. Lett. 87, 190402 (2001). (11) M. Tsubota et al., Phys. Rev. A 65, 023603 (2002). (12) J. J. García-Ripoll and V. M. Pérez-García, Phys. Rev. Lett. 84, 4264 (2000). (13) P. Rosenbusch et al., cond-mat/0206511. (14) J. J. García-Ripoll and V. M. Pérez-García, Phys. Rev. A 63, 041603 (2001). (15) In recent experiments Rosen , the time scale of the vortex bending is found to be $\gtrsim$ 1 s, which is much larger than the time scale of the vortex split ($\sim$ 10 ms). This is because the former is due to a thermally activated process Rosen , and the latter, to a dynamical instability. Furthermore, the bending effect will be much less pronounced in a spherical or prolate trap, which we assume in this Letter, than in an elongated trap. (16) M. Edwards et al., J. Res. Natl. Inst. Stand. Technol. 101, 553 (1996). (17) D. S. Rokhsar, cond-mat/9709212. (18) M. Ueda and A. J. Leggett, Phys. Rev. Lett. 83, 1489 (1999). (19) G. P. Berman et al., Phys. Rev. Lett. 88, 120402 (2002). (20) Y. Castin and R. Dum, Eur. Phys. J. D 7, 399 (1999). (21) P. A. Ruprecht et al., Phys. Rev. A 51, 4704 (1995). (22) K. E. Strecker et al., Nature 417, 150 (2002). (23) V. Tikhonenko et al., J. Opt. Soc. Am. B 12, 2046 (1995). (24) J. J. García-Ripoll et al., Phys. Rev. Lett. 85, 82 (2000). (25) D. Mihalache et al., Phys. Rev. Lett. 88, 073902 (2002). (26) For review, see, G. I. Stegeman and M. Segev, Science 286, 1518 (1999), and references therein. (27) H. Saito and M. Ueda, Phys. Rev. Lett. 86, 1406 (2001); Phys. Rev. A 63, 043601 (2001); 65, 033624 (2002).
Enrichment of Jupiter’s atmosphere by late planetesimal bombardment Sho Shibata Institute for Computational Science (ICS), University of Zurich Ravit Helled Institute for Computational Science (ICS), University of Zurich Abstract Jupiter’s atmosphere is enriched with heavy elements by a factor of about 3 compared to proto-solar. The origin of this enrichment and whether it represents the bulk composition of the planetary envelope remain unknown. Internal structure models of Jupiter suggest that its envelope is separated from the deep interior and that the planet is not fully mixed. This implies that Jupiter’s atmosphere was enriched with heavy elements just before the end of its formation. Such enrichment can be a result of late planetesimal accretion. However, in-situ Jupiter formation models suggest a decreasing planetesimal accretion rate with increasing planetary mass, which cannot explain Jupiter’s atmospheric enrichment. In this study, we model Jupiter’s formation and show that a migration of proto-Jupiter from $\sim$ 20 AU to its current location can lead to late planetesimal accretion and atmospheric enrichment. Late planetesimal accretion does not occur if proto-Jupiter migrates only a few AU. We suggest that if Jupiter’s outermost layer is fully mixed and is relatively thin (up to $\sim$ 20% of its mass), such late accretion can explain its measured atmospheric composition. It is therefore possible that Jupiter underwent significant orbital migration followed by late planetesimal accretion. methods: numerical — planets and satellites: formation — planets and satellites: gaseous planets — planets and satellites: interiors 1 Introduction The Galileo probe measured the elemental abundances in Jupiter’s atmosphere and found that several heavy elements are enriched by a factor of $\sim$ 3 compared to a proto-solar composition (e.g., Owen et al., 1999; Wong et al., 2004; Atreya et al., 2020). Also, the recent measurement of Jupiter’s water abundance by Juno implies that oxygen is enriched by a factor of a few (Li et al., 2020). The origin of the heavy-element enrichment of Jupiter’s atmosphere remains unknown and several ideas have been suggested to explain this enrichment. One idea is that the atmospheric enrichment is caused by the erosion of a primordial heavy-element core (e.g., Stevenson, 1982; Guillot et al., 2004; Öberg & Wordsworth, 2019; Bosman et al., 2019). However, in this case the materials dissolved into the deep interior must be mixed by convection and be delivered to the upper envelope. Since recent structure models of Jupiter imply that the planet is not fully convective (e.g., Leconte & Chabrier, 2013; Wahl et al., 2017; Vazan et al., 2018; Debras & Chabrier, 2019), the validity of this explanation is questionable and should be investigated in detail. Alternatively, Jupiter’s atmospheric enrichment could be a result of the accretion of enriched disk gas (Guillot et al., 2006; Bosman et al., 2019; Schneider & Bitsch, 2021). However, this scenario cannot reproduce the atmospheric enrichment of water and refractory materials (Schneider & Bitsch, 2021). Finally, it is possible that Jupiter’s atmosphere has been enriched by late accretion of heavy elements, in the form of planetesimal accretion, as we explore in this work. 
Previous investigations of Jupiter’s formation considered planetesimal accretion in the context of in-situ formation where the proto-Jupiter grows at 5.2 AU (e.g., Zhou & Lin, 2007; Shiraishi & Ida, 2008; Shibata & Ikoma, 2019; Venturini & Helled, 2020; Podolak et al., 2020). In this case, as the gas accretion rate increases, the planetesimal accretion rate decreases. Therefore, if Jupiter is not fully convective the enrichment of its outer envelope is difficult to explain . However, there is a clear theoretical indication that planets migrate (e.g. Bitsch et al., 2015; Kanagawa et al., 2018; Ida et al., 2018; Bitsch et al., 2019; Tanaka et al., 2020). During the planetary migration, planetesimals can be captured (e.g. Alibert et al., 2005), and it was recently shown that the migration rate regulates the planetesimal accretion rate (Shibata et al., 2020, 2021; Turrini et al., 2021). It was shown in Shibata et al. (2021) that rapid planetesimal accretion occurs in the limited region which we refer to as the ”sweet spot for planetesimal accretion” (SSP). The SSP is located around $\lesssim 10{\rm AU}$ for planets smaller than Jupiter, suggesting that proto-Jupiter enters the SSP after a large fraction of its envelope has already accumulated. If this is the case, a non-negligible amount of planetesimals can be accreted into the outer layer of the proto-Jupiter and lead to an enrichment of its atmosphere. In this letter, we simulate Jupiter’s formation including planetary migration and investigate the accretion rate of planetesimals. In Sec. 2, we describe our numerical model and the formation pathways of proto-Jupiter we consider. Our results are presented in Sec. 3 where we show that rapid planetesimal accretion occurs just before the end of Jupiter formation. A discussion on the connection to Jupiter’s measured atmospheric metallicity is presented in Sec. 4. Finally, our conclusions are discussed in Sec. 5. 2 Methods We perform orbital integration calculations of planetesimals around a proto-planet growing via disk gas accretion (increasing planetary mass $M_{\rm p}$) and migrating inward due to the tidal interaction with the surrounding gaseous disk (decreasing planetary semi-major axis $a_{\rm p}$). Our simulations begin from the rapid gas accretion phase for given planetary mass $M_{\rm p,0}$ and semi-major axis $a_{\rm p,0}$. We assume that there are many single-sized planetesimals with a radius of $R_{\rm pl}$ around the protoplanet’s orbit. The protoplanet then encounters these planetesimals and can capture some of them. Planetesimals are represented by test particles and are therefore only affected by the gravitational forces from the central star (with a mass of $M_{\rm s}=M_{\odot}$) and the protoplanet, as well as the drag force of the gaseous disk. To model the drag force we follow the model of Adachi et al. (1976). The dynamical integration for the bodies is performed using the numerical framework presented in Shibata & Ikoma (2019). We adapt the formation model of Tanaka et al. (2020), where both the gas accretion timescale $\tau_{\rm acc}$ and planetary migration timescale $\tau_{\rm tide}$ depend on the gap structure opened by the protoplanet’s tidal torque. 
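The core of this setup, stripped of the accretion and migration prescriptions specified next, is a restricted three-body integration with a drag term. The sketch below shows that skeleton for a single planetesimal; the planet mass, its fixed circular orbit, and the simple velocity-damping drag are placeholder assumptions (in the actual simulations the planet grows and migrates, and the drag follows Adachi et al. 1976).

```python
import numpy as np
from scipy.integrate import solve_ivp

# schematic test-particle integration: star + planet gravity + a stand-in gas drag
# units: AU, yr, solar masses, so G = 4 pi^2
G = 4.0 * np.pi**2
M_star, M_p = 1.0, 3.0e-4       # illustrative fixed planet mass; in the real runs M_p grows with time
a_p = 10.0                      # planet kept on a fixed circular orbit here; in the real runs it migrates
tau_drag = 2.0e4                # illustrative damping time standing in for the Adachi et al. (1976) drag

def planet_pos(t):
    om = np.sqrt(G * (M_star + M_p) / a_p**3)
    return a_p * np.array([np.cos(om * t), np.sin(om * t)])

def rhs(t, y):
    r, v = y[:2], y[2:]
    rp = planet_pos(t)
    acc = -G * M_star * r / np.linalg.norm(r)**3                 # stellar gravity
    acc += -G * M_p * (r - rp) / np.linalg.norm(r - rp)**3       # planetary gravity
    rnorm = np.linalg.norm(r)
    v_gas = np.sqrt(G * M_star / rnorm) * np.array([-r[1], r[0]]) / rnorm   # circular gas flow
    acc += -(v - v_gas) / tau_drag                               # crude stand-in for the drag force
    return np.concatenate([v, acc])

# planetesimal started on a circular orbit just outside the planet
r0 = 11.5
y0 = [r0, 0.0, 0.0, np.sqrt(G * M_star / r0)]
sol = solve_ivp(rhs, (0.0, 5.0e3), y0, rtol=1e-9, atol=1e-12)
print(np.linalg.norm(sol.y[:2, -1]))        # final heliocentric distance of the planetesimal
```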
In this model, the effect of the density profile of the disk is cancelled, and the relation between the two timescales is given by: $$\displaystyle\frac{\tau_{\rm tide}}{\tau_{\rm acc}}=\left|\frac{d\ln{M_{\rm p}}}{d\ln{a_{\rm p}}}\right|=\left(\frac{M_{\rm p}}{M_{\rm th}}\right)^{-2/3},$$ (1) where $M_{\rm th}$ is the threshold mass determined by the gas accretion and migration models. In Tanaka et al. (2020), $M_{\rm th}$ is estimated as $\sim 10^{-2}M_{\rm s}$. Their gas accretion model assumed that most of the disk gas entering the Hill sphere is accreted by the planet. However, as pointed out in Ida et al. (2018), recent hydrodynamic simulations clearly show that this is an overestimate of the gas accretion rate (e.g., Szulágyi et al., 2016; Kurokawa & Tanigawa, 2018). We therefore consider two formation pathways with $M_{\rm th}=10^{-2}M_{\rm s}$ (Case 1) and $M_{\rm th}=10^{-3}M_{\rm s}$ (Case 2). Figure 1 shows the formation pathways of proto-Jupiter for these two cases where the solid and dashed lines correspond to Case 1 and Case 2, respectively. We set $M_{\rm p,0}=6\times 10^{-5}M_{\odot}\sim 20M_{\oplus}$ and adjust $a_{\rm p,0}$ to $6.9{\rm AU}$ for Case 1 (square point) and $18.3{\rm AU}$ for Case 2 (circle point) in order to ensure that the protoplanet reaches Jupiter’s mass at its current location (cross point in Fig. 1). It is clear that other formation paths are possible; however, in this study we focus only on two typical cases representing short and long migration in order to investigate how they compare in terms of late heavy-element enrichment. The SSP depends on various parameters, such as the disk viscosity, aspect ratio, and planetesimal size (Shibata et al., 2021). In order to investigate the effect of the SSP, we consider different planetesimal sizes $R_{\rm pl}$. In Fig. 1, we plot the formation pathways of the protoplanet and the SSP when considering two different planetesimal sizes. As we show below, the relative position between the formation pathways and the SSP affects the enrichment of the planetary envelope. The detailed model and other parameters used in our simulations are presented in Appendix A. 3 Results Figure 2 shows the results of our simulations. The upper panels present the cumulative captured mass of planetesimals $M_{\rm cap}$ as a function of calculation time $t-t_{0}$. In Case 1 (left panel), the cumulative captured mass gradually increases with time although the accretion rate decreases. This is because the expansion speed of the feeding zone decreases with increasing gas accretion timescale and planetary mass (Shibata & Ikoma, 2019). This results in the depletion of planetesimals inside the feeding zone. For Case 2 (right panel), the accretion rate decreases for $t-t_{0}\lesssim 10^{6}\rm yr$, and planetesimal accretion nearly stops when $t-t_{0}\sim 10^{5}\rm yr$. However, a second planetesimal accretion phase occurs before the end of Jupiter’s formation. As shown in Fig. 1, proto-Jupiter enters the SSP before the end of its formation, which triggers the second phase of planetesimal accretion. Before it enters the SSP, many planetesimals are shepherded by the mean motion resonances. This leads to a large amount of planetesimals entering the feeding zone when proto-Jupiter reaches the SSP. The planetesimal accretion rate increases with decreasing $R_{\rm pl}$. This is because the SSP moves outward with decreasing $R_{\rm pl}$ and the length of the evolutionary pathway that overlaps with the SSP is longer for smaller planetesimals. 
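As a check on the two pathways in Fig. 1, eq. (1) can be integrated in closed form, $\ln(a_{\rm p,0}/a_{\rm p})=\tfrac{3}{2}\left({M_{\rm p}}^{2/3}-{M_{\rm p,0}}^{2/3}\right)/{M_{\rm th}}^{2/3}$, so the starting position implied by a track ending at Jupiter's present mass and location follows from a few lines. The values obtained below are close to the quoted 6.9 AU and 18.3 AU.

```python
import numpy as np

# integrate eq. (1), dlnM_p/dlna_p = -(M_p/M_th)^(-2/3), backwards from Jupiter's present state
M_jup = 9.55e-4         # Jupiter's mass in solar masses
M_p0 = 6.0e-5           # initial mass used here (~20 Earth masses)
a_final = 5.2           # AU

def a_start(M_th):
    """Starting semi-major axis implied by eq. (1) for a track ending at (5.2 AU, M_Jup)."""
    dln_a = 1.5 * (M_jup**(2.0 / 3.0) - M_p0**(2.0 / 3.0)) / M_th**(2.0 / 3.0)
    return a_final * np.exp(dln_a)

for M_th in (1.0e-2, 1.0e-3):       # Case 1 and Case 2
    print(M_th, a_start(M_th))      # ~6.8 AU and ~18 AU, close to the quoted 6.9 AU and 18.3 AU
```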
On the other hand, the initiation of the second phase of planetesimal accretion occurs earlier for smaller planetesimals because proto-Jupiter enters the SSP earlier. The lower panels of Fig. 2 show the heavy-element distribution formed in Jupiter envelope $Z_{\rm p}$ by the end of the simulations. To estimate $Z_{\rm p}$, we neglect any mixing processes; namely $Z_{\rm p}$ is obtained from the planetesimal accretion rate normalised by the mass growth rate $\dot{M}_{\rm cap}/\dot{M}_{\rm p}$. For Case 1 (left panel), the planetesimal accretion rate decreases with time and the outer envelope is barely enriched with planetesimals. On the other hand, for Case 2 (right panel), the planetesimals accreted during the second accretion phase are deposited in the outer envelope ($\gtrsim 0.5M_{\rm J}$). This is most profound for $R_{\rm pl}=10^{6.5}$, where the metallicity of Jupiter’s atmosphere can be enhanced by a factor of a few. It should be noted, however, that the final metallicity of Jupiter’s atmosphere would depend on the mixing and settling of the heavy elements accreted at this late stage. We discuss this topic in more detail in Sec. 4. It should be noted that in this study we focus on the enrichment of Jupiter’s atmosphere. However, formation models should also reproduce the total heavy-element mass in the planet. Interior models of Jupiter that fit Juno gravity data suggest that Jupiter’s interior is enriched with a few tens M${}_{\oplus}$ of heavy elements although the exact heavy-element mass is not well-constrained. In this study, we begin the simulation of proto-Jupiter with a mass of $\sim 20M_{\oplus}$ of heavy elements without specifying the formation process of the heavy-element core. Core formation via planetesimal/pebble accretion is expected to lead to deep interiors that are heavy-element dominated in composition and for the build-up of composition gradients (e.g. Lozovsky et al., 2017; Helled & Stevenson, 2017; Valletta & Helled, 2020). We therefore assume that the deep interior of the forming planet consists of mostly heavy elements. 4 Discussion 4.1 Enrichment of Jupiter’s atmosphere The accreted heavy elements can be redistributed by the mixing processes in Jupiter’s envelope that can occur over a timescale of Gyrs. Here for simplicity, we assume that the uppermost layer of Jupiter’s envelope with a mass $M_{\rm conv,top}$ is separated from the deeper interior and that the accreted heavy-element mass deposited in this region $M_{\rm cap,top}$ is uniformly distributed. In other words, we assume that the outermost region of Jupiter’s envelope is convective and homogeneously mixed and that the envelope below this layer has a different composition. This assumption is in fact consistent with recent models of Jupiter’s interior (e.g., Vazan et al., 2018). In this case, the metallicity of Jupiter’s atmosphere (outer envelope) $Z_{\rm top}$ is given by: $$\displaystyle Z_{\rm top}=\frac{M_{\rm cap,top}}{M_{\rm conv,top}}.$$ (2) Figure 3 shows $Z_{\rm top}$ as a function of $M_{\rm conv,top}$. The solid and dashed black lines correspond to Case 1 and Case 2, respectively. Here, we show the cases of $R_{\rm pl}=10^{6.5}{\rm cm}$ with which the most efficient atmospheric enrichment is achieved. The thin lines show the results for the cases where we use a planetesimal disk that is twice more massive than our baseline model. In Case 1, $Z_{\rm top}$ is significantly lower than proto-solar metallicity independently of $M_{\rm conv,top}$. 
This is because the total mass of accreted planetesimals is smaller than $1M_{\oplus}$ and the planetesimals are accreted at early stages and are deposited in the deep interior. Even when we consider a planetesimal disk that is several times more massive, it is difficult to explain Jupiter’s enriched atmosphere in Case 1. On the other hand, for Case 2, several M${}_{\oplus}$ of heavy elements are accreted during the late phases of Jupiter’s formation, leading to the enrichment of the planetary uppermost envelope as we show in Fig. 3. We find that in this case $Z_{\rm top}$ is enhanced in comparison to proto-solar metallicity for $M_{\rm conv,top}\lesssim 0.6M_{\rm J}$. The blue and red areas correspond to the measured water abundance by Juno (Li et al., 2020) and the elemental abundance measured by the Galileo probe (Atreya et al., 2020), respectively. We find that Jupiter’s atmospheric metallicity can be explained with $M_{\rm conv,top}\lesssim 0.2M_{\rm J}$. The maximum required value of $M_{\rm conv,top}$ can increase when considering larger sizes of the planetesimal disk. It should be noted, however, that the size of the uppermost convective envelope of Jupiter is not well-constrained and changes with different structure models (e.g., Wahl et al., 2017; Vazan et al., 2018; Debras & Chabrier, 2019). Using a planetary evolution model, Vazan et al. (2018) found that a primordial composition gradient in the deep interior can be partially eroded by convective mixing, leading to a large convective envelope ($60\%$ of Jupiter's mass) for Jupiter today. In this case, a more massive planetesimal disk is required to reproduce the measured elemental abundances in Jupiter’s atmosphere. To explain Jupiter’s atmospheric enrichment, our results clearly favour a small outer convective layer for Jupiter, as proposed by Debras & Chabrier (2019) ($M_{\rm conv,top}\sim 0.1M_{\rm J}$). 4.2 Very volatile materials The Galileo probe measurements also find that very volatile materials are enriched in Jupiter’s atmosphere. Such elements are expected to condense in a cold environment where the temperature is $\lesssim 30\rm K$ (Atreya et al., 2020). Figure 4 shows the accreted heavy-element mass as a function of the initial semi-major axis of the planetesimals. The mass of accreted planetesimals when $M_{\rm p}>0.8M_{\rm J}$ is indicated with meshed textures. In Case 2, the planetesimals accreted onto the upper envelope mainly come from the relatively cold outer region of the disk ($10\sim 15{\rm AU}$). However, even for the case of an optically thick disk (Sasselov & Lecar, 2000), the mid-plane temperature there is too high for very volatile materials, such as N${}_{2}$ and Ar, to condense into planetesimals. Additional mechanisms are therefore required to enrich these very volatile elements in the atmosphere, for example, by decreasing the mid-plane temperature with a shadow of the inner disk (e.g. Ohno & Ueda, 2021), or by increasing the metallicity of the accreting disk gas towards the end of disk depletion (Guillot & Hueso, 2006). 4.3 Assumed Disk Model In this study we adopt a relatively large disk in comparison to observed protoplanetary disks (e.g. Andrews et al., 2010). The planetesimal accretion rate is expected to change when considering different disk models. However, as long as Jupiter’s formation pathway overlaps with the SSP, a second planetesimal accretion phase is expected to occur. To confirm that this is indeed the case, we performed additional simulations considering different disk models. The results are presented in Appendix B. 
Indeed, we find that a late phase of planetesimal accretion always occurs when proto-Jupiter enters the SSP. We therefore conclude that the occurrence of second planetesimal accretion by proto-Jupiter is robust. In this study, we assumed that the planetesimals are homogeneously distributed. However, recent planetesimal formation models imply that planetesimals are distributed in a ring-like structure around ice lines because the solid-to-gas ratio is locally enhanced due to the pile-up of solid materials (Armitage et al., 2016; Dra̧żkowska & Alibert, 2017; Hyodo et al., 2021). Ice lines of volatile materials such as $CO_{2}$ could lead to planetesimal formation at $\sim 10-15{\rm AU}$. In addition, the disk temperature evolution is important to consider in planetesimal formation models around ice-lines (e.g., Lichtenberg et al., 2021). We hope to investigate other planetesimal distributions and their time evolution in future studies. Finally, it should be noted that the initial planetesimal distribution also affects the leftover distribution of small objects. By the end of the simulations in Case 2, more than $10M_{\oplus}$ of planetesimals remain in the region interior to Jupiter’s orbit. These objects do not exist at present in the solar system, the non-accreted planetesimals mainly come from distances $\lesssim 10{\rm AU}$. 5 Conclusions We investigated Jupiter’s origin focusing on the possibility of planetesimal accretion towards the end of its formation. We considered two formation pathways: Case 1 where proto-Jupiter migrates from $\sim 7{\rm AU}$ to its current location and Case 2 where proto-Jupiter migrates from $\sim 20{\rm AU}$. For Case 1, we find that the planetesimal accretion rate decreases with increasing planetary mass. Therefore for this case, Jupiter’s outer envelope cannot be enriched with heavy elements. On the other hand, in Case 2, we find that a late planetesimal accretion phase occurs before the end of Jupiter’s formation. This happens because proto-Jupiter enters the sweet spot for planetesimal accretion (see text for details), which leads to an enrichment of Jupiter’s atmosphere. The accreted heavy elements are expected to mix and redistribute in Jupiter’s envelope during its long-term evolution. Assuming the mass of the uppermost layer of Jupiter’s envelope $M_{\rm conv,top}$ and fully mixing of deposited heavy materials, we find that: • Jupiter’s atmosphere is barely enriched in Case 1 regardless of the size of $M_{\rm conv,top}$. • A relatively thin layer of $M_{\rm conv,top}\lesssim 0.2M_{\rm J}$ in Case 2 is consistent with the observed metallicity of Jupiter’s atmosphere. The results of the two formation models we consider and their outcomes are summarised in the sketches of Fig. 5. To conclude, we suggest that Jupiter’s core was formed via pebble accretion and had migrated from $\sim 20$ AU to its current location followed by a late phase of planetesimal accretion that enriches its atmosphere with heavy elements. In this scenario we infer an internal structure that is (at least qualitatively) consistent with interior models of Jupiter in which the outermost part of Jupiter’s envelope is enriched with heavy elements by a factor of a few relative to the proto-solar composition. In our scenario, the atmospheric enrichment is a result of late planetesimal accretion and not due to convective mixing of heavy elements from the deep interior as suggested by other studies (e.g. Öberg & Wordsworth, 2019; Bosman et al., 2019). 
We conclude that Jupiter’s atmosphere can be enriched with heavy elements if proto-Jupiter had migrated from $\sim 20$ AU to its current location. Further studies about the convection of Jupiter’s envelope and the mixing of heavy materials would be used for the constraints of Jupiter’s formation pathway. We acknowledge support from the Swiss National Science Foundation (SNSF) under grant 200020_188460. Appendix A Planetary formation model We describe the formation model used in this study, which is based on the model by Tanaka et al. (2020). We adopt a planetary migration model with a shallow gap empirically obtained by Kanagawa et al. (2018). The migration rate is given by: $$\displaystyle\frac{dr_{\rm p}}{dt}=-2c\frac{M_{\rm p}}{M_{\rm s}}\frac{{r_{\rm p}}^{2}\Sigma_{\rm gap}}{M_{\rm s}}\left(\frac{h_{\rm p}}{r_{\rm p}}\right)^{-2}v_{\rm K,p},$$ (A1) where $c$ is the constant set as $c=3$ in this study, $r_{\rm p}$ is the orbital radius of the planet, $\Sigma_{\rm gap}$ is the surface density of disk gas at the gap bottom, $h_{\rm p}$ is the disk gas scale height and $v_{\rm K,p}$ is the Kepler velocity of the planet. the gas accretion rate is given by (Tanigawa & Watanabe, 2002): $$\displaystyle\frac{dM_{\rm p}}{dt}=D\Sigma_{\rm gap}$$ (A2) with $$\displaystyle D=0.29\left(\frac{M_{\rm p}}{M_{\rm s}}\right)^{4/3}\left(\frac{h_{\rm p}}{r_{\rm p}}\right)^{-2}{r_{\rm p}}^{2}\Omega_{\rm p}$$ (A3) where $\Omega_{\rm p}$ is the Kepler angular velocity of the planet. Using eq. (A1) and eq. (A2), we can obtain eq. (1) and $M_{\rm th}=10^{-2}$. In the Case 2, to account for the lower accretion rate, we artificially reduce the gas accretion rate by a factor of $\sim 5$ and obtain $M_{\rm th}=10^{-3}$. Our baseline disk model is based on the self-similar solution for the surface density profile of disk gas (Lynden-Bell & Pringle, 1974). The mid-plane temperature of disk gas $T_{\rm disk}$ is given by: $$\displaystyle T_{\rm disk}=280{\rm K}\left(\frac{r}{1{\rm AU}}\right)^{-1/2},$$ (A4) where $r$ is the radial distance from the central star. In this case, the disk gas viscosity $\nu=\alpha_{\rm vis}c_{\rm s}h_{\rm s}$, where $\alpha_{\rm vis}$ is the viscosity parameter (Shakura & Sunyaev, 1973) and $c_{\rm s}$ is the sound speed of disk gas, is proportional to $r$ and the self-similar solution $\Sigma_{\rm SS}$ is given as: $$\displaystyle\Sigma_{\rm SS}=\frac{M_{\rm tot,0}}{2\pi{R_{\rm d}}^{2}}\left(\frac{r}{R_{\rm d}}\right)^{-1}T^{-3/2}\exp\left(-\frac{r}{TR_{\rm d}}\right),$$ (A5) with: $$\displaystyle T$$ $$\displaystyle=1+\frac{t}{\tau_{\rm vis}},$$ (A6) $$\displaystyle\tau_{\rm vis}$$ $$\displaystyle=\frac{{R_{\rm d}}^{2}}{\nu_{\rm d}},$$ (A7) where $M_{\rm tot,0}$ is the disk total mass at $t=0$, $R_{\rm d}$ is a radial scaling length of protoplanetary disk, $\tau_{\rm vis}$ is the characteristic viscous timescale and $\nu_{\rm d}$ is a disk gas viscosity at $r=R_{\rm d}$. The surface density profile of disk gas is altered by the gap opening around the planet, the gas accretion onto the planet and the disk depletion. We include these effects and the surface density profile of disk gas $\Sigma_{\rm gas}$ is given by: $$\displaystyle\Sigma_{\rm gas}=f_{\rm gap}f_{\rm acc}f_{\rm dep}\Sigma_{\rm SS},$$ (A8) where $f_{\rm gap}$ is the gap opening factor, $f_{\rm acc}$ is the gas accretion factor, and $f_{\rm dep}$ is the disk depletion factor. For the gap opening factor, we adapt the empirically obtained model by Kanagawa et al. (2017). 
The gap structure changes with the radial distance from the planet $\Delta r=|r-r_{\rm p}|/r_{\rm p}$ and $f_{\rm gap}$ is written as a function of $\Delta r$ as $$\displaystyle f_{\rm gap}=\begin{cases}\displaystyle{\frac{1}{1+0.04K}}&{\rm for}~{}\Delta r<\Delta R_{1},\\ \displaystyle{4.0{K^{\prime}}^{-1/4}\Delta r-0.32}&{\rm for}~{}\Delta R_{1}<\Delta r<\Delta R_{2},\\ \displaystyle{1}&{\rm for}~{}\Delta R_{2}<\Delta r,\end{cases}$$ (A9) with $$\displaystyle K=\left(\frac{M_{\rm p}}{M_{\rm s}}\right)^{2}\left(\frac{h_{\rm p}}{r_{\rm p}}\right)^{-5}{\alpha_{\rm vis}}^{-1},$$ (A10) $$\displaystyle K^{\prime}=\left(\frac{M_{\rm p}}{M_{\rm s}}\right)^{2}\left(\frac{h_{\rm p}}{r_{\rm p}}\right)^{-3}{\alpha_{\rm vis}}^{-1},$$ (A11) $$\displaystyle\Delta R_{1}=\left\{\frac{1}{4(1+0.04K)}+0.08\right\}{K^{\prime}}^{1/4},$$ (A12) $$\displaystyle\Delta R_{2}=0.33{K^{\prime}}^{1/4}.$$ (A13) In the disk region interior to the planet’s orbit, the disk surface density is reduced by the gas accretion onto the planet. When the gas accretion rate is given by eq. (A2) and the gap structure is given by eq. (A9), $f_{\rm acc}$ is written as (Tanaka et al., 2020) $$\displaystyle f_{\rm acc}=\begin{cases}1&{\rm for}~{}r>r_{\rm p}\\ \displaystyle{\left\{1+\frac{D}{3\pi\nu(1+0.04K)}\right\}^{-1}}&{\rm for}~{}r\leq r_{\rm p},\end{cases}$$ (A14) where $\nu$ is the disk gas viscosity given as $\nu=\alpha_{\rm vis}c_{\rm s}h_{\rm s}$. To account for disk depletion processes, such as photo-evaporation or disk winds, we set the disk depletion factor $f_{\rm dep}$ as $$\displaystyle f_{\rm dep}=\exp\left(-\frac{t}{\tau_{\rm dep}}\right).$$ (A15) The surface density of disk gas at the gap bottom, which is used for the gas accretion rate and migration rate, is obtained as $\Sigma_{\rm gap}=\Sigma_{\rm gas}(r=r_{\rm p})$ using eq. (A8). In our model, we set $\alpha_{\rm vis}=10^{-3}$, $M_{\rm tot,0}=0.1M_{\odot}$ and $R_{\rm d}=200{\rm AU}$. The final orbital position of proto-Jupiter in the $a_{\rm p}-M_{\rm p}$ plane is determined by the core formation time $t_{0}$ (e.g. Tanaka et al., 2020). We find that Jupiter stops at its current location due to the disk depletion if the core forms at $t_{0}=1.3\times 10^{6}\rm yr$ and $t_{0}=0.9\times 10^{6}\rm yr$ for Case 1 and Case 2, respectively. We continue the orbital integration for $1\times 10^{7}\rm yr$. By the end of the simulation, we find that Jupiter does not grow further and that planetesimal accretion is negligible at that stage. For the planetesimal disk, we assume that the planetesimal distribution follows the density profile of the gaseous disk at $t=0$ and adopt the solid-to-gas ratio used in Turrini et al. (2021). To speed up the numerical simulation, we adopt the super-particle approach used in Shibata et al. (2021). Figure 6 shows the disk model used in this letter. The solid and dashed lines are the surface densities of solid materials (planetesimals) and gas, respectively. Figure 7 shows the evolution of the gas accretion timescale $\tau_{\rm acc}$ (red) and the planetary migration timescale $\tau_{\rm tide}$ (blue) in our model. The solid and dashed lines correspond to $M_{\rm th}=10^{-2}$ and $10^{-3}$, respectively. Both timescales rapidly increase around $M_{\rm p}\sim M_{\rm J}$ due to the exponential decay of the disk gas. We stop our simulation at $t-t_{0}=1\times 10^{7}\rm yr$. Even if we continued the simulation further, proto-Jupiter would neither grow nor migrate any more. 
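To make the chain of definitions in eqs. (A1)–(A15) easier to follow, a minimal numerical sketch is given below. It assumes cgs units and a mean molecular weight of 2.34 for the sound speed; the depletion timescale $\tau_{\rm dep}$ and the sample planet mass, orbital radius and time are hypothetical placeholders, not the values used in the simulations (those are listed in Table 1).

```python
# Minimal sketch of the disk/gap model of Appendix A (eqs. A1-A15), in cgs.
# Only alpha_vis, M_tot0, R_d and c=3 are taken from the text; mu, tau_dep
# and the sample planet values below are illustrative assumptions.
import numpy as np

G, Msun, AU, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7
kB, mH, Mearth = 1.381e-16, 1.673e-24, 5.97e27

alpha_vis, Ms, c_mig = 1.0e-3, 1.0 * Msun, 3.0
M_tot0, R_d = 0.1 * Msun, 200.0 * AU
mu = 2.34                      # assumed mean molecular weight
tau_dep = 3.0e6 * yr           # assumed depletion timescale (placeholder)

def T_disk(r):                 # eq. (A4)
    return 280.0 * (r / AU) ** -0.5

def cs(r):                     # isothermal sound speed
    return np.sqrt(kB * T_disk(r) / (mu * mH))

def Omega(r):                  # Keplerian angular velocity
    return np.sqrt(G * Ms / r ** 3)

def h_gas(r):                  # gas scale height, h = c_s / Omega
    return cs(r) / Omega(r)

def nu_visc(r):                # alpha prescription, nu = alpha * c_s * h
    return alpha_vis * cs(r) * h_gas(r)

def Sigma_SS(r, t):            # eqs. (A5)-(A7)
    T = 1.0 + t * nu_visc(R_d) / R_d ** 2
    return M_tot0 / (2 * np.pi * R_d ** 2) * (r / R_d) ** -1 \
        * T ** -1.5 * np.exp(-r / (T * R_d))

def f_gap(r, rp, Mp):          # eqs. (A9)-(A13)
    a = h_gas(rp) / rp
    K = (Mp / Ms) ** 2 * a ** -5 / alpha_vis
    Kp = (Mp / Ms) ** 2 * a ** -3 / alpha_vis
    dr = np.abs(r - rp) / rp
    dR1 = (1.0 / (4.0 * (1.0 + 0.04 * K)) + 0.08) * Kp ** 0.25
    dR2 = 0.33 * Kp ** 0.25
    mid = 4.0 * Kp ** -0.25 * dr - 0.32
    return np.where(dr < dR1, 1.0 / (1.0 + 0.04 * K), np.where(dr < dR2, mid, 1.0))

def D_acc(rp, Mp):             # eq. (A3)
    return 0.29 * (Mp / Ms) ** (4.0 / 3.0) * (h_gas(rp) / rp) ** -2 * rp ** 2 * Omega(rp)

def f_acc(r, rp, Mp):          # eq. (A14)
    K = (Mp / Ms) ** 2 * (h_gas(rp) / rp) ** -5 / alpha_vis
    red = 1.0 / (1.0 + D_acc(rp, Mp) / (3 * np.pi * nu_visc(r) * (1 + 0.04 * K)))
    return np.where(r > rp, 1.0, red)

def Sigma_gas(r, t, rp, Mp):   # eq. (A8) with f_dep of eq. (A15)
    return f_gap(r, rp, Mp) * f_acc(r, rp, Mp) * np.exp(-t / tau_dep) * Sigma_SS(r, t)

# Migration and accretion rates (eqs. A1-A2), evaluated at the gap bottom.
rp, Mp, t = 10.0 * AU, 100.0 * Mearth, 1.0e6 * yr     # illustrative values
Sgap = float(Sigma_gas(rp, t, rp, Mp))
drdt = -2 * c_mig * (Mp / Ms) * (rp ** 2 * Sgap / Ms) \
    * (h_gas(rp) / rp) ** -2 * Omega(rp) * rp
dMdt = D_acc(rp, Mp) * Sgap
print(drdt * yr / AU, dMdt * yr / Mearth)             # AU/yr, Earth masses/yr
```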
When $M_{\rm th}=10^{-2}$, the ratio of timescales remains large, $\tau_{\rm tide}/\tau_{\rm acc}\gg 1$. On the other hand, when $M_{\rm th}=10^{-3}$, the ratio of timescales decreases to $\sim 1$. Table 1 shows the parameters used in this study. Appendix B The effect of the assumed disk profile In Sec. 3, we adopt a self-similar solution for a protoplanetary disk with $M_{\rm tot,0}=0.1M_{\odot}$ and $R_{\rm d}=200{\rm AU}$. However, the structure and the size of protoplanetary disks are not well determined. Here, we show the results obtained assuming three different disk models. The first is a small disk model, where we adopt the self-similar solution but with $M_{\rm tot,0}=0.03M_{\odot}$ and $R_{\rm d}=50{\rm AU}$. The second and third disk models are a steep disk model and a flat disk model, where we adopt a simple power-law profile given by: $$\displaystyle\Sigma_{\rm Simple}=\Sigma_{0}\left(\frac{r}{5.2{\rm AU}}\right)^{-\alpha_{\rm disk}}$$ (B1) with $\alpha_{\rm disk}=3/2$ for the steep disk and $\alpha_{\rm disk}=1/2$ for the flat disk. $\Sigma_{0}$ is set to $300{\rm g}/{\rm cm}^{2}$, and the gap opening factor, the gas accretion factor and the disk depletion factor are applied in the same way as in eq. (A8). Due to the different distribution of the disk gas, the evolution differs from that of the original model. However, the evolution pathways in the $a_{\rm p}-M_{\rm p}$ plane are similar to those of the baseline model because the timescale ratios are independent of the disk profile (see Eq. 1). Fig. 8 shows the results using the various disk models. The total captured mass of planetesimals and the timing of planetesimal accretion are similar to those of the baseline disk model. This is because the location of the SSP is nearly independent of the disk surface density profile (Shibata et al., 2021). Even if the gaseous disk had a different density distribution, the SSP would still be located around Jupiter’s orbit. Therefore, we conclude that the occurrence of a second planetesimal accretion phase, where heavy elements are deposited into the upper envelope of proto-Jupiter, is robust and does not depend on the assumed disk model. Appendix C Planetesimal collisions In our simulation, we adopt test particles for planetesimals and neglect collisions between planetesimals. As pointed out by Batygin (2015) and Shibata et al. (2021), planetesimal collisions could be important during the shepherding process. We find that more than 10 M${}_{\oplus}$ of planetesimals are shepherded by the mean motion resonances. These planetesimals could collide with each other as the planet migrates inwards. Once a collisional cascade begins, the planetesimal size distribution can change, thereby affecting (i.e., reducing) the planetesimal accretion rate. Appendix D Effect of other planets Our simulations focus on the interaction between a migrating planet and the surrounding planetesimals and do not include other protoplanets. The gravitational perturbations from other protoplanets, however, could affect the location of the SSP (Shibata et al., 2020). In addition, the migration of other planets can change the distribution of planetesimals, and even contribute to further planetesimal formation (Shibaike & Alibert, 2020). It is therefore clear that future studies should investigate formation pathways accounting for the growth of all the outer planets and their mutual interactions. Since the atmospheres of all the outer planets in the solar system are measured to be enriched with heavy materials (e.g. 
Atreya et al., 2020) it is desirable to investigate planetesimal accretion mechanisms for all four planets. References Adachi et al. (1976) Adachi, I., Hayashi, C., & Nakazawa, K. 1976, Progress of Theoretical Physics, 56, 1756 Alibert et al. (2005) Alibert, Y., Mordasini, C., Benz, W., & Winisdoerffer, C. 2005, A&A, 434, 343 Andrews et al. (2010) Andrews, S. M., Wilner, D. J., Hughes, A. M., Qi, C., & Dullemond, C. P. 2010, Astrophysical Journal, 723, 1241 Armitage et al. (2016) Armitage, P. J., Eisner, J. A., & Simon, J. B. 2016, The Astrophysical Journal Letters, 828, doi:10.3847/2041-8205/828/1/L2 Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, araa, 47, 481 Atreya et al. (2020) Atreya, S. K., Hofstadter, M. H., In, J. H., et al. 2020, Space Sci. Rev., 216, 18 Barnett & Ciesla (2022) Barnett, M. N., & Ciesla, F. J. 2022, arXiv:2201.00862 Batygin (2015) Batygin, K. 2015, Monthly Notices of the Royal Astronomical Society, 451, 2589. https://academic.oup.com/mnras/article-abstract/451/3/2589/1181581 Bitsch et al. (2019) Bitsch, B., Izidoro, A., Johansen, A., et al. 2019, Astronomy & Astrophysics, 623, A88. https://arxiv.org/pdf/1902.08771.pdfhttps://www.aanda.org/10.1051/0004-6361/201834489 Bitsch et al. (2015) Bitsch, B., Lambrechts, M., & Johansen, A. 2015, Astronomy & Astrophysics, 582, A112. https://www.aanda.org/10.1051/0004-6361/201526463ehttp://www.aanda.org/10.1051/0004-6361/201526463 Bosman et al. (2019) Bosman, A. D., Cridland, A. J., & Miguel, Y. 2019, Astron. Astrophys. Suppl. Ser., 632, L11 Debras & Chabrier (2019) Debras, F., & Chabrier, G. 2019 Dones et al. (2004) Dones, L., Weissman, P. R., Levison, H. F., & Duncan, M. J. 2004, Comets II Dra̧żkowska & Alibert (2017) Dra̧żkowska, J., & Alibert, Y. 2017, Astron. Astrophys. Suppl. Ser., 608, A92 Eriksson et al. (2021) Eriksson, L. E. J., Ronnet, T., & Johansen, A. 2021, Astron. Astrophys. Suppl. Ser., 648, A112 Guillot & Hueso (2006) Guillot, T., & Hueso, R. 2006, mnras, 367, L47 Guillot et al. (2006) Guillot, T., Santos, N. C., Pont, F., et al. 2006, Astronomy & Astrophysics, 453, L21 Guillot et al. (2004) Guillot, T., Stevenson, D. J., Hubbard, W. B., & Saumon, D. 2004, Jupiter. The planet, satellites and magnetosphere., 35. http://adsabs.harvard.edu/cgi-bin/nph-data{_}query?bibcode=2004jpsm.book...35G{&}link{_}type=ABSTRACT{%}5Cnpapers2://publication/uuid/B26DB9A6-624B-4964-80CD-5BB7E9B570F3 Helled & Stevenson (2017) Helled, R., & Stevenson, D. 2017, ApJ, 840, L4 Hyodo et al. (2021) Hyodo, R., Guillot, T., Ida, S., Okuzumi, S., & Youdin, A. N. 2021, Astron. Astrophys. Suppl. Ser., 646, A14 Ida et al. (2018) Ida, S., Tanaka, H., Johansen, A., Kanagawa, K. D., & Tanigawa, T. 2018, The Astrophysical Journal, 864, 77. https://doi.org/10.3847/1538-4357/aad69chttps://iopscience.iop.org/article/10.3847/1538-4357/aad69c Inaba & Ikoma (2003) Inaba, S., & Ikoma, M. 2003, Astronomy & Astrophysics, 410, 711. http://www.aanda.org/10.1051/0004-6361:20031248 Kanagawa et al. (2017) Kanagawa, K. D., Tanaka, H., Muto, T., & Tanigawa, T. 2017, Publications of the Astronomical Society of Japan, 69, doi:10.1093/pasj/psx114. http://academic.oup.com/pasj/article/doi/10.1093/pasj/psx114/4633985 Kanagawa et al. (2018) Kanagawa, K. D., Tanaka, H., & Szuszkiewicz, E. 2018, The Astrophysical Journal, 861, 140. http://stacks.iop.org/0004-637X/861/i=2/a=140?key=crossref.6128a03edd02ab268bdcdc18bccdba56https://iopscience.iop.org/article/10.3847/1538-4357/aac8d9 Kurokawa & Tanigawa (2018) Kurokawa, H., & Tanigawa, T. 
2018, Monthly Notices of the Royal Astronomical Society, 479, 635. https://academic.oup.com/mnras/article/479/1/635/5034960 Leconte & Chabrier (2013) Leconte, J., & Chabrier, G. 2013, Nat. Geosci., 6, 347 Li et al. (2020) Li, C., Ingersoll, A., Bolton, S., et al. 2020, Nature Astronomy, 4, 609 Lichtenberg et al. (2021) Lichtenberg, T., Dra̧żkowska, J., Schönbächler, M., Golabek, G. J., & Hands, T. O. 2021, Science, 371, 365 Lozovsky et al. (2017) Lozovsky, M., Helled, R., Rosenberg, E. D., & Bodenheimer, P. 2017, ApJ, 836, 227 Lynden-Bell & Pringle (1974) Lynden-Bell, D., & Pringle, J. E. 1974, MNRAS, 168, 603 Öberg & Wordsworth (2019) Öberg, K. I., & Wordsworth, R. 2019, Astron. J., 158, 194 O’Brien et al. (2007) O’Brien, D. P., Morbidelli, A., & Bottke, W. F. 2007, The primordial excitation and clearing of the asteroid belt—Revisited, , Ohno & Ueda (2021) Ohno, K., & Ueda, T. 2021, Astron. Astrophys. Suppl. Ser., 651, L2 Owen et al. (1999) Owen, T., Mahaffy, P., Niemann, H. B., et al. 1999, nat, 402, 269 Podolak et al. (2020) Podolak, M., Haghighipour, N., Bodenheimer, P., Helled, R., & Podolak, E. 2020, The Astrophysical Journal, 899, 45 Sasselov & Lecar (2000) Sasselov, D. D., & Lecar, M. 2000, ApJ, 528, 995 Schneider & Bitsch (2021) Schneider, A. D., & Bitsch, B. 2021, How drifting and evaporating pebbles shape giant planets I: Heavy element content and atmospheric C/O, Tech. rep., arXiv:2105.13267v1 Shakura & Sunyaev (1973) Shakura, N. I., & Sunyaev, R. A. 1973, Astronomy & Astrophysics, 24, 337 Shibaike & Alibert (2020) Shibaike, Y., & Alibert, Y. 2020, Astron. Astrophys. Suppl. Ser., 644, A81 Shibata et al. (2020) Shibata, S., Helled, R., & Ikoma, M. 2020, A&A, 633, 13. https://doi.org/10.1051/0004-6361/201936700http://arxiv.org/abs/1911.02292 Shibata et al. (2021) —. 2021, arXiv:2112.12623 Shibata & Ikoma (2019) Shibata, S., & Ikoma, M. 2019, MNRAS, 487, 4510 Shiraishi & Ida (2008) Shiraishi, M., & Ida, S. 2008, The Astrophysical Journal, 684, 1416. http://stacks.iop.org/0004-637X/684/i=2/a=1416https://iopscience.iop.org/article/10.1086/590226 Stevenson (1982) Stevenson, D. J. 1982, planss, 30, 755 Szulágyi et al. (2016) Szulágyi, J., Masset, F., Lega, E., et al. 2016, Mon. Not. R. Astron. Soc., 460, 2853 Tanaka et al. (2020) Tanaka, H., Murase, K., & Tanigawa, T. 2020, The Astrophysical Journal, 891, 143. https://iopscience.iop.org/article/10.3847/1538-4357/ab77af Tanigawa & Watanabe (2002) Tanigawa, T., & Watanabe, S.-I. 2002, The Astrophysical Journal, 580, 506. https://iopscience.iop.org/article/10.1086/343069 Turrini et al. (2021) Turrini, D., Schisano, E., Fonte, S., et al. 2021, Astrophys. J., 909, 40 Valletta & Helled (2020) Valletta, C., & Helled, R. 2020, Astrophys. J., 900, 133 Valletta & Helled (2021) —. 2021, Mon. Not. R. Astron. Soc. Lett., 507, L62 Vazan et al. (2018) Vazan, A., Helled, R., & Guillot, T. 2018, Astrophysics A&A, 610, 14. https://doi.org/10.1051/0004-6361/201732522 Venturini & Helled (2020) Venturini, J., & Helled, R. 2020, Astronomy & Astrophysics, 634, A31. https://www.aanda.org/10.1051/0004-6361/201936591 Wahl et al. (2017) Wahl, S. M., Hubbard, W. B., Militzer, B., et al. 2017, Geophys. Res. Lett., 44, 4649 Wong et al. (2004) Wong, M. H., Mahaffy, P. R., Atreya, S. K., Niemann, H. B., & Owen, T. C. 2004, Planet. Space Sci., 171, 105 Zhou & Lin (2007) Zhou, J., & Lin, D. N. C. 2007, The Astrophysical Journal, 666, 447. http://stacks.iop.org/0004-637X/666/i=1/a=447https://iopscience.iop.org/article/10.1086/520043 \listofchanges
Spectrally narrow, long-term stable optical frequency reference based on a Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ crystal at cryogenic temperature Qun-Feng Chen Qun-Feng.Chen@uni-duesseldorf.de Andrei Troshyn Ingo Ernsting Steffen Kayser Sergey Vasilyev Alexander Nevsky Stephan Schiller Institut für Experimentalphysik, Heinrich-Heine-Universität Düsseldorf, 40225 Düsseldorf, Germany (November 19, 2020) Abstract Using an ultrastable continuous-wave laser at 580 nm we performed spectral hole burning of Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ at very high spectral resolution. The essential parameters determining its usefulness as a “macroscopic” frequency reference (linewidth, temperature sensitivity, long-term stability) were characterized using a H-maser-stabilized frequency comb. Spectral holes with linewidths as low as 6 kHz were observed, and the upper limit of the drift of the hole frequency was determined to be of the order of 5$\pm$3 mHz/s. We discuss the requirements for achieving ultra-high stability in laser frequency stabilization to these spectral holes. Frequency-stabilized lasers are of importance for a variety of scientific and industrial applications. Present-day laser stabilization techniques utilize various types of frequency references, e.g. atomic or molecular transitions, or modes of low-loss optical resonators. Another type of frequency reference is an ensemble of optical centers in a solid at cryogenic temperature. Appealing features of the latter are a relatively low sensitivity to environmental disturbances and simplicity of interrogation. Spectral hole burning (SHB) is a well-established technique that makes it possible to overcome the limitations imposed by inhomogeneous broadening of absorption lines and to address narrow optical transitions of dopants in solids Macfarlane and Shelby (1987); Nilsson et al. (2004). The technique has been proposed and implemented for numerous applications, e.g. optical data storage and processing, quantum computing and laser stabilization Böttger et al. (2003); Pryde et al. (2002); Sellin et al. (2001). Even if only one single transition of a particular dopant/host system is identified as suitable for frequency stabilization at a competitive level, this would suffice to provide frequency-stable radiation over essentially the complete optical range, since a femtosecond frequency comb, a virtual beat technique Telle et al. (2002), and nonlinear frequency conversion can be employed to transfer the frequency stability to other frequencies. In this work we study some fundamental properties of narrow persistent spectral holes in a particular system, europium ions doped into an yttrium orthosilicate crystal. Our measurements are performed at very high resolution in the frequency domain, using, for the first time to our knowledge, an ultra-stable and narrow-linewidth laser. We achieve long-lived holes with widths as low as 6 kHz, the lowest linewidth reported so far for long-lived holes foo (a). The ${}^{7}F_{0}$ – ${}^{5}D_{0}$ transition (580 nm) in the Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ crystal exhibits one of the narrowest optical resonances in a solid and hence has been thoroughly studied during the past decades Yano et al. (1991); Equall et al. (1994); Könz et al. (2003); Sellars et al. (2004); Macfarlane et al. (2004). The ${}^{7}F_{0}$ and ${}^{5}D_{0}$ energy levels of the Eu${}^{3+}$ ion have no electronic magnetic moment. 
The atomic nuclei in the Y${}_{2}$SiO${}_{5}$ crystal have small or no magnetic moments; therefore, the contribution of nuclear spin fluctuations to the homogeneous broadening of the transition is small. A homogeneous linewidth of 122 Hz ($2\times 10^{-13}$ relative linewidth) was determined from photon echo decays. This is close to the 85 Hz limit imposed by spontaneous emission. Persistent spectral holes with a lifetime of $\approx 100$ h occur because the excited ${}^{5}D_{0}$ state decays to long-lived hyperfine levels of the ground state. Studies of the Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ spectral holes using continuous-wave laser interrogation have been severely limited by the linewidth and frequency instability of the laser sources used, usually dye lasers. A solid-state laser source based on a cw-OPO has been used to obtain spectral holes with linewidths below 1 MHz Petelski et al. (2001). By coincidence, the 580 nm wavelength is very close to that of the ${}^{1}S_{0}$ – ${}^{3}P_{0}$ clock transition in neutral Yb (578 nm). We used our diode-laser-based interrogation laser for an Yb optical lattice clock to significantly improve on previous measurements. A schematic of the experimental setup is shown in Fig. 1. Details of the narrowband laser are described in Nevsky et al. (2008); Vogt et al. (2010). In brief, we frequency-double the IR output of a home-built external-cavity diode laser in a quasi-phase-matched nonlinear waveguide. The laser is locked, via its second-harmonic output, to a mode of a high-finesse Fabry-Perot resonator made of ultra-low-expansion glass (ULE), enclosed in a vacuum chamber. The setup is equipped with temperature stabilization and vibration isolation stages. The linewidth of the laser at 580 nm is about 1 Hz, and the linear drift is $\approx 0.1$ Hz/s. The laser frequency was measured by a Ti:Sapphire frequency comb referenced to a H-maser and to GPS. We used an acousto-optic frequency shifter (AOM) to tune the laser frequency and to control the laser power. An uncoated $5\times 5\times 10$ mm${}^{3}$ Y${}_{2}$SiO${}_{5}$ crystal doped with $0.1\%$ Eu${}^{3+}$ (Scientific Materials) was cooled in a pulse tube cryostat to 3 – 6 K. The laser and the cryostat, located in different laboratories, were connected by an 80 m long single-mode optical fiber. Fluctuations in the unstabilized optical fiber resulted in broadening of the laser emission by $\approx 1$ kHz. The maximum laser power delivered to the crystal was about 15 $\mu$W. The laser beam was loosely focused in the crystal. The transmission through the crystal was detected by a low-noise silicon photodetector placed outside the cryostat. Spectral holes were burned on site 1 (516 847 GHz) for 2 – 10 s at intensities of order mW/cm${}^{2}$, and the spectra were then obtained by frequency-scanning the strongly (100 – 1000 times) attenuated laser across the hole and detecting the transmitted signal. The minimum delay between burning and reading was about 10 s. Fig. 2 illustrates spectral holes obtained in the absence of a magnetic field ($H_{0}=0$, left plot) and at $H_{0}\approx 0.1$ T (here we placed the crystal inside a strong permanent magnet). At $H_{0}=0$ and 6 K the width of spectral holes was 1.5 MHz a few seconds after burning, and it further broadened in time at a rate of 2 kHz/s. The holes became indistinguishable from the inhomogeneous background after a few tens of minutes. The introduction of the magnetic field resulted in spectra that were two orders of magnitude sharper (right plot). 
The width of the holes burned at 3 K was as low as 6 kHz, while for burning at 6 K we found increased widths of 14 kHz. Furthermore, the spectral holes became much more stable in time, as will be described below. Our experiments reveal a considerable difference between the linewidths observed by other groups by photon echoes on a short time-scale ($\approx$ ms) and the widths observed in this work a few seconds after the hole burning. Instantaneous spectral diffusion (ISD) is not a likely reason for the holes’ broadening in our experiments: the very small fraction of excited ions ($\leq 10^{-6}$ of the total Eu${}^{3+}$ ions) results in a contribution $\Gamma_{ISD}\leq 20$ Hz, according to Könz et al. (2003). The broadening of spectral holes in time (continuous spectral diffusion) was reported for a similar material (Eu${}^{3+}$:Y${}_{2}$O${}_{3}$) Sellars et al. (1994), and was attributed to an interaction with a distribution of low-lying energy states in the crystal that are thermally activated at low temperatures, similar to two-level systems (TLS’s) in glasses. The same broadening mechanism can be expected in the Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ system. Another important characteristic of a SHB frequency reference is its sensitivity to temperature. We burned a spectral hole at 3 K and then measured the spectrum of this hole while the temperature of the crystal was increased incrementally to 4 K. At the end of the experiment the temperature was decreased back to 3 K and the hole’s spectrum was measured again. The whole experiment took approximately 45 minutes. The slow drift of the laser frequency was monitored by the frequency comb during the measurements and was subtracted from the data. Fig. 3 (left) shows the spectrum of the original hole (black) and the spectra of the same hole at 3.5 and 4 K (green and red, respectively). The blue line shows the spectrum obtained after cooling the crystal back to 3 K. As can be seen, alteration of the crystal’s temperature results in a shift of the hole’s central frequency, broadening of the hole and deformation of its shape. However, all these effects are (almost) reversible (compare the blue and black lines). The temperature cycling during 45 minutes resulted in a small deterioration of the hole’s shape but no visible line broadening. Similar reversibility was observed when the hole was burned at 6 K and its spectra were read during the 6 K $\rightarrow$ 3 K $\rightarrow$ 6 K temperature cycling. According to the theory and measurements of Könz et al. Könz et al. (2003), coupling to phonons causes the line center frequency and the linewidth of the spectral hole to change with temperature as $f_{c}(T)\propto T^{4}$ and $\Gamma(T)-\Gamma(T=0)\propto T^{7}$, respectively. Our observations indicate that qualitatively $\Gamma(T)-\Gamma(T_{0})\propto(T-T_{0})^{2}$ (Fig. 3, right, blue circles and dashed line), where $T_{0}$ is the temperature at which the hole is burned, and that the shape of the spectrum is deformed when the temperature difference is more than 1 K (Fig. 3, left). The deformation implies an extra mechanism for broadening, e.g. interaction with TLS’s or a (reversible) change in the magnetic field distribution within the crystal due to thermal expansion of the magnet and crystal housing. Our observation of the frequency shift agrees with a $T^{4}$ behavior (Fig. 3, right, red circles and solid line). 
Because spectroscopy of a single hole is not suitable for covering a broad range of temperatures (the hole becomes too shallow and deformed for precise evaluation of $f_{c}$), we also measured the derivatives $\Delta f_{c}/\Delta T$ by repeatedly burning a hole at a certain temperature $T$ and comparing its spectra at the original temperature and at the slightly shifted temperature $T+0.25$ K. These measurements were carried out in the 3.75 – 5 K range and yielded $\Delta f_{c}/\Delta T\propto T^{3}$, in agreement with the previous result. The measured temperature sensitivity $\Delta f_{c}/\Delta T\approx 20$ Hz/mK at 3 K is very small. With the crystal temperature stability we now achieve (2 $\mu$K) foo (b), the corresponding hole frequency instability should be reduced to below the $1\times 10^{-16}$ level on a time scale of $10^{4}$ s. A further strong reduction of the temperature sensitivity can be expected by cooling to the sub-K regime, due to the (predicted) $T^{3}$ dependence of $df_{c}/dT$. To characterize the lifetime and the long-term stability of the spectral holes in the Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ crystal, we burned a hole at 3 K and then measured its spectrum daily for two weeks. The temperature of the crystal was stable within 1 mK during this experiment. The shape of the hole remained nearly constant in time, as illustrated in Fig. 4 (left plot). An average broadening rate of 0.85 kHz/day and a decay rate of $4\%$/day were estimated from a linear fit of the experimental data. The broadening varied strongly, accelerating up to a few kHz/day and then slowing down and even reversing during the following days. This peculiar behavior is probably due to fluctuations of the stray fields inside the cryostat or microscopic displacements of the crystal (the crystal was loosely mounted inside the magnet to avoid strain; furthermore, our pulse tube cooler produces vibrations with $\approx 1$ $\mu$m amplitude at 1 Hz). The long-term frequency stability of the SHB reference was evaluated by measuring the frequency difference with a cavity mode, $f_{c}-f_{\rm ULE}$, and the frequency difference $f_{\rm ULE}-f_{\rm maser}$. These differences were fitted to a linear dependence, and the difference of the slopes yields the average drift of the hole in absolute terms, 5$\pm$3 mHz/s. This is significantly smaller than the drift of the ULE cavity (76 mHz/s). Laser frequency stabilization to a SHB reference has been demonstrated for a number of material systems Böttger et al. (2003); Pryde et al. (2002); Sellin et al. (2001), where phase-modulation methods were used to lock the laser to the SHB reference. A SHB reference differs from other references in that its spectrum is modified in time by the probe laser radiation (and by spectral diffusion). Degradation of the reference due to the interaction with the laser imposes a limit on the long-term stability of a frequency-locked laser Julsgaard et al. (2007); Pryde et al. (2001). The combination of a SHB reference with a laser prestabilized to a high-finesse reference resonator could provide an efficient way to overcome this difficulty. A long-lived SHB reference can then be used for compensation of the slow drift of the resonator (or of other standard references, e.g. molecular iodine gas). 
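As a consistency check (not part of the original analysis), the quoted temperature sensitivity and drift can be expressed in fractional terms for the 516 847 GHz transition: $$\frac{\Delta f_{c}}{f_{c}}\approx\frac{20\,{\rm Hz/mK}\times 2\,\mu{\rm K}}{5.17\times 10^{14}\,{\rm Hz}}\approx 8\times 10^{-17},\qquad\frac{\dot{f}_{c}}{f_{c}}\approx\frac{5\,{\rm mHz/s}}{5.17\times 10^{14}\,{\rm Hz}}\approx 1\times 10^{-17}\,{\rm s}^{-1},$$ consistent with the expectation that the temperature-induced instability remains below the $1\times 10^{-16}$ level for the achieved $\mu$K-level temperature stability.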
A first test of frequency stabilization of our prestabilized laser (where we achieved an instability below the 10${}^{-13}$ level) has indicated several basic but important requirements in order to reach competitive results: stabilization of the optical path between the laser and the crystal, stabilization of the temperature of the crystal to the $\mu$K level (already achieved), and sufficient laser power to obtain a sufficiently high signal-to-noise ratio. In addition, the interrogation parameters (duty cycle, interrogation intensity, laser beam size, crystal thickness, dopant concentration, etc.) should be optimized with respect to the magnitude of the drift to be corrected, in order to achieve optimum performance. Beyond these parameters, the inhomogeneous broadening ($\approx 2$ GHz for the employed crystal) offers additional potential for improving the signal-to-noise ratio of the SHB reference Böttger et al. (2003). Within this linewidth $\approx 10^{4}$ – $10^{5}$ frequency channels are in principle accessible. The number of spectral holes (channels) that can be burned simultaneously is limited essentially by the available laser power. In our experiments we produced single spectral holes by burning at a few $\mu$W of power for several seconds. Thus, with several tens of mW, a realistically attainable power, a number of holes of the above order could indeed be burnt simultaneously. This could be implemented by producing a laser spectrum that is amplitude-modulated at a large number of discrete frequencies by means of a low-voltage wide-band fiber-optic modulator. The simultaneous read-out could be performed with an appropriate modulation/demodulation scheme. The use of multiple SHB references for laser stabilization makes it possible to reduce the exposure of each individual reference to the laser radiation, and hence the distortion of its spectrum, while delivering a sufficient total laser power to the transmission detector. With our current setup, the signal-to-noise ratio of the spectrum is about $10^{3}$ for an integration time of 0.1 s. Therefore, if 100 holes were used simultaneously, a frequency instability of about 0.1 Hz ($2\times 10^{-16}$) at an integration time of 10 s could be expected. This frequency instability would be competitive with the currently best room-temperature ULE reference cavities Jiang et al. (2011). In summary, we demonstrated narrow, long-lived spectral holes in Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ crystals using, for the first time to our knowledge, an ultra-stable and narrow-linewidth laser. The narrow linewidth (6 kHz), low temperature sensitivity (20 Hz/mK) and very good long-term stability (5$\pm$3 mHz/s drift) of the Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$ SHB reference at 3 K in a moderate magnetic field ($\approx 0.1$ T) represent a very attractive combination of properties for laser frequency stabilization, especially in conjunction with prestabilization to an optical cavity. One can expect a further improvement of these parameters at temperatures below 3 K and in stronger magnetic fields. High-resolution SHB in the frequency domain may also be a useful tool for studying the nature of spectral diffusion mechanisms in solids. The authors are very grateful to Sirah Laser- und Plasmatechnik GmbH for the loan of the single-frequency dye laser. After completion of this manuscript, we learned of the work by Thorpe et al. (arXiv:1106.0520v1) on frequency stabilization of a dye laser to a level of $6\times 10^{-16}$ on time scales between 2 and 8 s via SHB of Eu${}^{3+}$:Y${}_{2}$SiO${}_{5}$. 
References Macfarlane and Shelby (1987) R. M. Macfarlane and R. M. Shelby, Spectroscopy of solids containing rare earth ions, A. A. Kaplyanskii and R. M. Macfarlane (Eds.), (North-Holland, Amsterdam, 1987), chap. Coherent transient and hole-burning spectroscopy of rare earth ions in solids. Nilsson et al. (2004) M. Nilsson, L. Rippe, S. Kröll, R. Klieber, and D. Suter, Phys. Rev. B 70, 214116 (2004). Böttger et al. (2003) T. Böttger, G. J. Pryde, and R. L. Cone, Opt. Lett. 28, 200 (2003). Pryde et al. (2002) G. J. Pryde, T. Böttger, R. L. Cone, and R. C. C. Ward, Journal of Luminescence 98, 309 (2002). Sellin et al. (2001) P. B. Sellin, N. M. Strickland, T. Böttger, J. L. Carlsten, and R. L. Cone, Phys. Rev. B 63, 155111 (2001). Telle et al. (2002) H. Telle, B. Lipphardt, and J. Stenger, Applied Physics B: Lasers and Optics 74, 1 (2002). foo (a) Spectral holes with 3.5 kHz width were reported for a similar material, Eu${}^{3+}$:Y${}_{2}$O${}_{3}$ Sellars et al. (1994). however, these spectra were measured in frequency domain shortly after burning (10 ms) and the burning itself occurred during 2 ms. Yano et al. (1991) R. Yano, M. Mitsunaga, and N. Uesugi, Opt. Lett. 16, 1884 (1991). Equall et al. (1994) R. W. Equall, Y. Sun, R. L. Cone, and R. M. Macfarlane, Phys. Rev. Lett. 72, 2179 (1994). Könz et al. (2003) F. Könz, Y. Sun, C. W. Thiel, R. L. Cone, R. W. Equall, R. L. Hutcheson, and R. M. Macfarlane, Phys. Rev. B 68, 085109 (2003). Sellars et al. (2004) M. J. Sellars, E. Fraval, and J. J. Longdell, Journal of Luminescence 107, 150 (2004). Macfarlane et al. (2004) R. M. Macfarlane, Y. Sun, R. L. Cone, C. W. Thiel, and R. W. Equall, Journal of Luminescence 107, 310 (2004). Petelski et al. (2001) T. Petelski, R. S. Conroy, K. Bencheikh, J. Mlynek, and S. Schiller, Opt. Lett. 26, 1013 (2001). Nevsky et al. (2008) A. Nevsky, U. Bressel, I. Ernsting, C. Eisele, M. Okhapkin, S. Schiller, A. Gubenko, D. Livshits, S. Mikhrin, I. Krestnikov, et al., Applied Physics B: Lasers and Optics 92, 501 (2008). Vogt et al. (2010) S. Vogt, C. Lisdat, T. Legero, U. Sterr, I. Ernsting, A. Nevsky, and S. Schiller, Demonstration of a transportable 1 hz-linewidth laser, arXiv:1010.2685v2 (2010). Sellars et al. (1994) M. J. Sellars, R. S. Meltzer, P. T. H. Fisk, and N. B. Manson, J. Opt. Soc. Am. B 11, 1468 (1994). foo (b) A temperature stability of 2 $\mu$K was achieved in a time scale of $10^{4}$ s by active stabilization, using carbon-glass resistor for temperature measurement and a commercial resistance bridge temperature controller, with feedback to a heater close to the crystal. Julsgaard et al. (2007) B. Julsgaard, A. Walther, S. Kröll, and L. Rippe, Opt. Express 15, 11444 (2007). Pryde et al. (2001) G. Pryde, T. Böttger, and R. Cone, Journal of Luminescence 94-95, 587 (2001). Jiang et al. (2011) Y. Y. Jiang, A. D. Ludlow, N. D. Lemke, R. W. Fox, J. A. Sherman, L. S. Ma, and C. W. Oates, Nature Photonics 5, 158 (2011).
Power-law Tails from Dynamical Comptonization in Converging Flows Roberto Turolla11affiliation: Department of Physics, University of Padova, via Marzolo 8, 35131 Padova, Italy; turolla@pd.infn.it , Silvia Zane22affiliation: Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK; sz@mssl.ucl.ac.uk and Lev Titarchuk33affiliation: George Mason University/Center for Earth Observing and Space Research, Fairfax, VA 22030 and US Naval Research Laboratory, Code 7620, Washington, DC 20375-5352, USA; lev@xip.nrl.navy.mil 44affiliation: NASA/Goddard Space Flight Center, Greenbelt MD 20771, USA; lev@lheapop.gsfc.nasa.gov Abstract The effects of bulk motion Comptonization on spectral formation in a converging flow onto a black hole are investigated. The problem is tackled by means of both a fully relativistic, angle-dependent transfer code and a semi-analytical, diffusion-approximation method. We find that a power-law high-energy tail is a ubiquitous feature in converging flows and that the two approaches produce consistent results at large enough accretion rates, when photon diffusion holds. Our semi-analytical approach is based on an expansion in eigenfunctions of the diffusion equation. Contrary to previous investigations based on the same method, we find that, although the power-law tail at very large energies is always dominated by the flatter spectral mode, the slope of the hard X-ray portion of the spectrum is dictated by the second mode and approaches $\Gamma=3$ at large accretion rates, irrespective of the model parameters. The photon index in the tail is found to be largely independent of the spatial distribution of soft seed photons when the accretion rate is either quite low ($\lesssim 5$ in Eddington units) or sufficiently high ($\gtrsim 10$). On the other hand, the spatial distribution of source photons controls the photon index at intermediate accretion rates, when $\Gamma$ switches from the first to the second mode. Our analysis confirms that a hard tail with photon index $\Gamma<3$ is produced by the up-scattering of primary photons off the infalling electrons if the central object is a black hole. accretion, accretion disks — black hole physics — radiation mechanisms: non-thermal — radiative transfer 1 Introduction The idea that photons may change their energy in repeated scatterings with cold electrons in a moving fluid was suggested more than 20 years ago by Payne & Blandford (1981) and Cowsik & Lee (1982). This process, often referred to as dynamical (or bulk) Comptonization, is completely equivalent to Comptonization by hot electrons once the thermal velocity is replaced by the bulk velocity ${\bf v}$. However, as already noted by Cowsik & Lee and Payne & Blandford, it must be $\nabla\cdot{\bf v}\neq 0$, as in a converging flow, for the mechanism to produce a full-fledged effect. If a photon interacts with electrons moving at uniform velocity, its energy is boosted by a factor $\sim\gamma^{2}=(1-v^{2})^{-1}$, independently of the number of scatterings. On the other hand, in a flow where $\nabla\cdot{\bf v}\neq 0$ a photon typically scatters off electrons with different velocities, and the change of the local rest frame introduces a differential effect. 
As shown by Payne & Blandford (see also Nobili, Turolla & Zampieri 1993), if monochromatic radiation with $\nu=\nu_{0}$ is injected at large Thomson depth in a spherical accretion flow the emergent spectrum is broad, shifted to $\nu>\nu_{0}$ and a typical power-law tail appears at high energies. For a power-law velocity law, $v\sim r^{-\beta}$, the photon spectral index is correlated to the velocity gradient and becomes $3$ in free-fall. Dynamical Comptonization in the non-relativistic limit was also investigated by Schneider & Bogdan (1989) and Mastichiadis & Kylafis (1992), who considered a spherical flow onto a neutron star and found that the choice of the inner boundary condition affects the emerging power-law index substantially. The competing action of dynamical and thermal Comptonization in a semi-infinite medium at non-zero temperature was discussed by Colpi (1988) using the non-relativistic diffusion approximation. Titarchuk, Mastichiadis & Kylafis (1996) extended Mastichiadis & Kylafis’ results including the effects of electron recoil and thermal motion. They demonstrated that the spectral power-law index goes to zero when the Thomson depth in the flow becomes very large. The potential importance of dynamical Comptonization in connection with high-energy emission from compact X-ray sources has been recognized since the very beginning. However, it was not until the mid-90’s that renewed interest in this topic was aroused by two facts: the observational evidence that X-ray spectra from Galactic black hole candidates (BHCs) may exhibit in the soft (high) state a power-law tail which extends up to hundreds of keV, and the introduction of the two-phase paradigm for accretion flows onto black holes (Chakrabarti & Titarchuk 1995). In addition to standard, geometrically-thin accretion disks (SSDs, Shakura & Sunyaev 1973), Chakrabarti & Titarchuk (1995) presented arguments for the existence of a sub-Keplerian flow outside the disk. The popular advection-dominated accretion models (ADAFs), initially introduced for optically thin accretion flows onto black holes (see e.g. Narayan, Mahadevan & Quataert 1999 for a review), are a particular class of sub-Keplerian flows, namely ADAFs are also nearly spherical and transonic close to the hole. In current models for BHCs, a sub-Keplerian component is thought to exist along with the SSD in the inner region of the accretion flow ($\lesssim 100$ gravitational radii) and it may provide the conditions for making dynamical Comptonization effective. Chakrabarti & Titarchuk (1995), Titarchuk, Mastichiadis & Kylafis (1996), and Ebisawa, Titarchuk & Chakrabarti (1996) were the first to point out that in a realistic accretion flow the finite Thomson depth at the horizon, $\tau_{s,H}$ produces a power-law spectral index (which depends on $\tau_{s,H}$) flatter than 3, in general agreement with the observed values (at least for $\dot{m}\lesssim 5$, as we will discuss later). Turolla, Zane, Zampieri & Nobili (1996), solving analytically the general-relativistic moment equations, confirmed this result and provided a simple expression for the power-law index as a function of $\tau_{s,H}$ in a free-falling medium. Titarchuk, Mastichiadis & Kylafis (1997), hereafter TMK97, considered a non-relativistic diffusion equation for the photon occupation number, including dynamical and thermal effects. 
They derived both numerical and (approximate) analytical solutions, and concluded that, while the spectral index depends on the location of the inner boundary and on the boundary conditions, it is not very sensitive to the spatial and energy distribution of the primary photons. Several numerical calculations, based on different approaches to the solution of the transfer problem, have been presented so far (relativistic moment method: Nobili, Turolla & Zampieri 1993, Zampieri, Turolla & Treves 1993; fully relativistic transfer equation: Zane, Turolla, Nobili & Erna 1996, Titarchuk & Zannias 1998; Monte Carlo simulations: Laurent & Titarchuk 1999). Shrader & Titarchuk (1998), following TMK97, were able to reproduce the observed X-ray spectra of several BHCs, assuming that the accretion flow consists of two phases: a SSD and a radial component which replaces the disk close to the hole. Thermal photons emitted by the disk produce the observed soft emission at a few keV and are in part up-scattered by the inflowing electrons, forming the power-law tail. Numerical investigations based on Monte Carlo methods or $\Lambda$-iteration schemes typically fail to reproduce cases with relatively high $\dot{m}$, even well below the range where the diffusion approximation holds. Analytical investigations, on the other hand, are supposed to reproduce the diffusion-approximation limit, but suffer from a series of limitations imposed by the simplifying assumptions which need to be introduced. All of them are based on the solution of the first two moment equations, neglecting, to various extents, terms of higher order in $v$ and higher-order moments. This approach is justified at large depth, but becomes questionable for $\tau_{s,H}\lesssim 1$. Moreover, analytical spectra have often been calculated for a monochromatic injection of photons at a given radius, so they fail to answer the fundamental question of how the emergent spectrum depends on the primary photon distribution, both in energy and in space. An exception is the method presented by TMK97, which will be used in this investigation. Quite surprisingly, while all studies seem to agree that a power-law tail forms at high energies in a converging flow, the derived values of the spectral index cover a considerable range. This can reflect either an inaccuracy of the methods or an intrinsic dependence of the spectral index on some (often implicit, even hidden) assumptions at the basis of the calculation, or both. In an attempt to clarify this point, Papathanassiou & Psaltis (2001) undertook a project aimed at a systematic exploration of the parameter space by means of the numerical integration of the relativistic transfer equation for a free-falling flow in a Schwarzschild spacetime. The results reported in their first paper are in agreement with the single model of Zane et al. (1996) but not with the analytical prediction of Turolla et al. (1996). The derived values of the spectral index are similar to those of Titarchuk & Zannias and Laurent & Titarchuk, but care must be taken in comparing this calculation to previous ones because Papathanassiou & Psaltis (2001) use a different definition of $\tau_{s,H}$. In this paper we clarify the role of the spatial distribution of the source of input photons. This has often been overlooked or misunderstood in the literature. In particular, we consider the difference between diffuse and spatially concentrated sources. We support our results with both a systematic numerical analysis and a semi-analytical calculation. 
The first is carried out by solving the transfer problem for a radially inflowing medium in a Schwarzschild spacetime using the code described in Zane, Turolla, Nobili & Erna (1996). The latter is based on the method introduced by TMK97. Numerical models have been computed up to large enough values of $\tau_{s,H}$ to allow for a direct comparison with analytical, diffusion-approximation results. We show that the spatial distribution of seed photons is unimportant in fixing the power-law index both at small and at large accretion rates ($\dot{m}\lesssim 5$ and $\dot{m}\gtrsim 15$ for a scattering-dominated accretion flow onto a black hole), while it influences the index at intermediate accretion rates. However, unless the source is extremely concentrated close to the horizon, the slope of the high-energy tail does not depend much on the primary photon distribution. We also convincingly show that the photon index approaches $3$ at large accretion rates, irrespective of the input parameters. The plan of the paper is as follows. In §2 we introduce the numerical method for the solution of the relativistic kinetic equation and present the results of numerical calculations. Approximate analytical solutions in the diffusion approximation are derived in §3 and compared with both our numerical models and results from previous investigations. Discussion and conclusions follow in §4. 2 Numerical method and results A fairly general technique for the numerical solution of the relativistic transfer equation in spherical symmetry has been presented and discussed in Zane, Turolla, Nobili & Erna (1996). The method makes use of the characteristics to reduce the comoving-frame transfer equation to an ordinary differential equation for the photon occupation number $f$ along the photon trajectories. To avoid any confusion, we stress that the occupation number $f=f(r,\mu,E)$, the cosine of the angle between the photon and the radial directions, $\mu$, and the photon energy $E$ are all measured by the comoving observer (in the local rest frame, LRF), while $r$ is the coordinate radius. In the following we deal with a spherical flow in a Schwarzschild spacetime and use units in which $c=G=h=1$. The radial coordinate is in units of the Schwarzschild radius, $r_{S}=2M$, where $M$ is the mass of the central source. For a Schwarzschild geometry Zane et al. derived simple analytical expressions for $\mu=\mu(r,b)$ and $E=E_{\infty}\epsilon(r,b)$, where $b$ is the ray impact parameter and $E_{\infty}$ the photon energy measured by an observer at rest at radial infinity. The transfer equation $$\frac{df}{dr}=\frac{r_{S}{\cal G}(r,\mu,E,f)}{yE(\mu+v)}\,,$$ (1) where $y=\gamma\sqrt{1-r_{S}/r}$ and $\cal G$ is the source term, is then solved for different values of the two parameters $b$ and $E_{\infty}$ to obtain the specific intensity $I=2fE^{3}$. Ordinary $\Lambda$-iteration is used to reach convergence when the source term contains scattering integrals. The radiation moments (mean intensity $J_{\nu}$, flux $H_{\nu}$ and pressure $K_{\nu}$, where $\nu$ is the photon frequency) are evaluated by direct numerical quadrature over angles of the specific intensity times the required power of $\mu$, at constant $E$ and $r$. We assume conservative (i.e. Thomson) scattering in the electron rest frame. Since we want to assess the effects of dynamical Comptonization, we ignore thermal motion and take the electron rest frame to coincide with the LRF (TMK97). We assume free-fall, so that $yv=r^{-1/2}$, which implicitly gives $v$ as a function of $r$. 
Denoting by $\dot{M}$ the accretion rate, it follows from rest-mass conservation, $\dot{M}=4\pi r_{S}^{2}cr^{2}\rho yv$, that the gas density scales as $\rho=\rho_{H}r^{-3/2}$. Since $y\simeq 1$ in free-fall (hence $v\simeq r^{-1/2}$), and introducing the accretion rate in units of the Eddington rate $\dot{m}=\dot{M}/\dot{M}_{E}$ ($\dot{M}_{E}=L_{E}/c^{2}$), the density at the horizon is related to $\dot{m}$ by $\dot{m}=2\kappa_{s}\rho_{H}r_{S}$, where $\kappa_{s}$ is the scattering opacity. The expression for the scattering depth in the flow follows immediately and is $\tau_{s}=\int_{r}^{\infty}\kappa_{s}\rho r_{S}\,dr=2\kappa_{s}\rho_{H}r_{S}r^{-1/2}=\dot{m}r^{-1/2}$. The effects of bulk motion Comptonization on the emerging spectrum are governed by the product of the scattering depth and the flow velocity, which gives the fractional energy change suffered by a photon undergoing repeated scatterings before escaping (see e.g. Nobili, Turolla & Zampieri 1993). In the present case $\tau_{s}v=\dot{m}r^{-1}$ and the trapping radius, defined as the locus where $3\tau_{s}v=1$, is located at $r_{trap}=3\dot{m}$. We have computed several sequences of models for $\dot{m}$ in the range $1\leq\dot{m}\leq 12$. All models include both electron scattering and true emission/absorption. In order to investigate the effects of the spatial distribution of the input photons under a minimal set of assumptions, we adopted an artificial opacity coefficient $\kappa_{a,\nu}$ defined so as to produce an absorption depth $$\tau_{a}=\left\{\begin{array}[]{ll}\tau_{a,H}&\mbox{$r\leq r_{a}$}\\ \displaystyle{\tau_{a,H}\left(\frac{r_{a}}{r}\right)^{n}}&\mbox{$r>r_{a}$}\end{array}\right.$$ (2) where $\tau_{a,H}$ is the absorption depth at the horizon and $r_{a}$ and $n$ are adjustable parameters. With the above definition, absorption is color-blind, i.e. $\kappa_{a,\nu}=\kappa_{a}$. However, since we retain Kirchhoff’s law, the emission coefficient depends on frequency and is given by $$j_{\nu}=\tau_{a}r^{-1}B_{\nu}(T)$$ (3) where $B_{\nu}(T)$ is the Planck function at temperature $T$. We then want to explore the effects of bulk motion Comptonization on the emerging spectrum when $\dot{m}$ is varied, along various sequences of models characterized by different spatial distributions of the emissivity, eq. (3). Particular care must be taken to ensure that the sequences are self-similar as far as the role of emission/absorption is concerned. If this is not ensured, the spectral behavior may be influenced by the different interplay between scattering and absorption when the accretion rate changes. The self-similarity can be imposed in different ways: here we require that the scattering and absorption depths become equal at a given radius $r_{c}$ (hereafter the “crossing radius”), which is the same for all models. Using the expression for the absorption depth given by (2) with fixed $n$, the condition $\tau_{s}(r_{c})=\tau_{a}(r_{c})$ allows us to derive $\tau_{a,H}$ for each value of $\dot{m}$: $\tau_{a,H}=\dot{m}(r_{c}/r_{a})^{n-1/2}r_{a}^{-1/2}$. In our calculations we used $r_{a}=2.5$, $r_{c}=1.8r_{a}$ and $n$ in the range $3\leq n\leq 7$. We note that both $r_{a}$ and $r_{c}$ should be of order unity if the flow is to be scattering-dominated down to small radii. Provided this condition is fulfilled, we have checked that varying them does not change the models much. 
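A minimal numerical sketch of these scalings is given below (illustrative only; the sample values of $\dot{m}$ and $n$ are arbitrary and the radial grid simply mirrors the range quoted in §2):

```python
# Depth profiles used to build the model sequences of Sec. 2.
# Radii are in units of the Schwarzschild radius, as in the text.
import numpy as np

def tau_s(r, mdot):
    # scattering depth: tau_s = mdot * r**(-1/2)
    return mdot / np.sqrt(r)

def tau_a(r, mdot, n, r_a=2.5, r_c=1.8 * 2.5):
    # absorption depth of eq. (2); tau_a,H is fixed by the self-similarity
    # condition tau_s(r_c) = tau_a(r_c)
    tau_aH = mdot * (r_c / r_a) ** (n - 0.5) * r_a ** -0.5
    return np.where(r <= r_a, tau_aH, tau_aH * (r_a / r) ** n)

mdot, n = 5.0, 5                       # illustrative values
r = np.logspace(0.2, 6, 40)            # radial grid quoted in the text
r_trap = 3.0 * mdot                    # trapping radius, where 3*tau_s*v = 1
print(r_trap, tau_s(2.5, mdot), float(tau_a(2.5, mdot, n)))
```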
With this choice of the parameters the absorption depth at $r_{c}$ is always larger than unity for $\dot{m}\geq 1$, so all models have an inner core which is optically thick to true absorption. Beyond $r_{c}$ both the source of seed photons and the true opacity decay rapidly with radius, the distributions becoming sharper with increasing $n$; thus, in this region bulk motion Comptonization is the most efficient process. Figure 1 illustrates the dependence of the relevant length-scales on the accretion rate and Figure 2 shows the radial variation of the frequency-integrated emission coefficient for different values of $n$ and $\dot{m}$. Because of our assumption of self-similarity, upon normalization all curves with the same $\dot{m}$ coincide in Fig. 2. All models have been computed on a radial grid which covers the range $0.2\leq\log r\leq 6$ and is equally spaced in $\log r$. Although the actual integration of eq. (1) along the rays has been carried out on a much finer grid, the radiation intensity was stored for 40 values of the radial coordinate. Since our main goal is to investigate the effects of dynamical Comptonization, we consider here a uniform-temperature medium. The exact value of $T$ is unimportant and only fixes the scales of both the photon energy and the intensity (the latter because we have thermal emission). We used 30 energy points, taken to coincide with Gauss-Lobatto quadrature abscissae, in the range $0.3\leq E/kT\leq 20$, plus an additional ten points both below and above these limits (see Zane, Turolla, Nobili & Erna 1996 for details). In our scheme the angular resolution is fixed by the number of rays along which the transfer equation is integrated and is not constant along the radial grid. With our present choice of the parameters the minimum number of $\mu$ points (which occurs at the photon radius, or at the inner boundary if the latter is larger) is 20. Because $\tau_{a}>\tau_{s}>1$ for $r<r_{a}=2.5$ and the absorption opacity is independent of frequency, the spectrum below $r_{a}$ is blackbody (see eq. [3]). For this reason we decided to put the inner boundary just outside the photon radius, at $r_{b}\simeq 1.6$. This speeds up the calculation since the transfer equation need not be solved along the trapped photon trajectories. Standard boundary conditions for a non-illuminated medium have been used: $f(r_{b},\mu>0)=B_{\nu}(T)/E^{3}$ and $f(r_{out},\mu<0)=0$, where $r_{out}$ is the outer boundary of the integration domain. The emerging spectrum for different $\dot{m}$ is shown in Figure 3 for the three cases $n=3,\,4,\,5$. The appearance of a high-energy power-law tail is clearly seen by comparing the emergent photon distribution with the blackbody spectrum (the dashed line in Fig. 3). The spectral index depends both on $\dot{m}$ and on the particular sequence of models. The derived values of the photon index $p$ are plotted as a function of $\dot{m}$ for different $n$ in Figure 4. 3 Approximate Analytical Solutions The numerical results presented in §2 made evident a dependence of the spectral index on the properties of each particular sequence, at least in the range of $\dot{m}$ that has been spanned. In order to address this point further, we investigate systematically the properties of the analytical solutions presented by TMK97. We then present an application of the method to cases relevant to the models presented in §2. 
Although this approach strictly holds only for an isotropic radiation field, we will show that it correctly describes the numerical sequences for $\dot{m}$ not too close to unity and helps in understanding the dependence of the numerical models on the various input parameters. 3.1 The analytical solution Following TMK97, we write the kinetic equation for the angle-averaged photon occupation number $n=n(r,\nu)$ [their eq. (14)] as $$\tau{\partial^{2}n\over\partial\tau^{2}}-\left(\tau+{3\over 2}\right){\partial n\over\partial\tau}-{1\over 2}x{\partial n\over\partial x}=-{\dot{m}\over 2}{j_{\nu}\over\rho\kappa_{s}}\,,$$ (4) where $\tau=(3/2)\dot{m}/r$ and $x=h\nu/kT$. As in TMK97, second-order terms in $v$ have been neglected and we restrict ourselves to the case in which the source function is the product of a purely spatial part, $S(\tau)$, and a purely energy-dependent part, $g(x)$. Moreover, we do not include thermal Comptonization and treat the scattering as elastic in the electron rest frame. The eigenfunctions of the space (radial) operator which are well behaved ($\sim r^{-2}$ for $r\to\infty$) are given by $$R_{k}=C\tau^{5/2}\Phi\left(-\lambda_{k}^{2}+5/2,7/2,\tau\right)$$ (5) where $\Phi(a,b,z)$ is the confluent hypergeometric function (Kummer’s function, see Abramowitz & Stegun 1970), $C$ is a constant and the eigenvalues $\lambda_{k}^{2}$ are the roots of the equation $$p\Phi\left(-\lambda_{k}^{2}+{5\over 2},{9\over 2},\tau_{b}\right)+q\Phi\left(-\lambda_{k}^{2}+{3\over 2},{9\over 2},\tau_{b}\right)=0$$ (6) with $$p=\left[{5\over 2}-\left(2\lambda_{1}^{2}+\epsilon\right){\tau_{b}\over 3}\right]\left(-\lambda_{k}^{2}+{3\over 2}+\tau_{b}\right)+\tau_{b}\left(-2\lambda_{k}^{2}+{1\over 2}+\tau_{b}\right)\,,$$ $$q=\left[{5\over 2}-\left(2\lambda_{1}^{2}+\epsilon-3\right){\tau_{b}\over 3}\right]\left(\lambda_{k}^{2}+2\right)\,;$$ here $\epsilon=-3(1-A)\sqrt{r_{b}}/[2(1+A)]$; $\tau_{b}$ and $r_{b}$ are the depth and the radius of the inner boundary, respectively, and $A$ is the albedo at $r_{b}$. We note that eq. (6) is just a rearrangement of eq. (A6) of TMK97. Expanding the spatial part of the source function over the complete set of eigenfunctions $R_{k}$, $$S(\tau)=\sum_{k=1}^{\infty}c_{k}R_{k}(\tau)\,,$$ (7) and looking only for separable solutions of the form $$n(\tau,x)=R(\tau)N(x)=\sum_{k=1}^{\infty}a_{k}R_{k}(\tau)N_{k}(x)\,,$$ (8) gives $$N_{k}(x)=2{c_{k}\over a_{k}}x^{-2\lambda_{k}^{2}}\int_{0}^{x}t^{2\lambda_{k}^{2}-1}g(t)dt\equiv{c_{k}\over a_{k}}\hat{N}_{k}(x)\,.$$ (9) The emergent luminosity is then expressed as $$L(\tau=0,x)\propto x^{3}\sum_{k=1}^{\infty}c_{k}\hat{N}_{k}(x)\,.$$ (10) We note that the coefficients $a_{k}$ do not enter expression (10). The emergent luminosity only depends on the $c_{k}$’s, i.e. on the spatial distribution of the source function. These coefficients may be readily evaluated as integrals of $S(\tau)$ times a weighting function (see TMK97, Appendix B). The source function corresponding to the numerical models presented in §2 is $S(\tau)\propto\tau^{n-1/2}$ for $r\geq r_{a}$, where scattering is dominant. In order to investigate the behavior of the spectral index, we evaluated $L(\tau=0,x)$ in the range $0.1\leq x\leq 500$, for different values of $\tau_{b}$ and $n=3,\,4,\,5$, assuming that $g(x)\propto[\exp(x)-1]^{-1}$ (i.e. the primary photon spectrum is a blackbody). 
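The energy dependence of a single mode in eqs. (9)–(10) can be reproduced with a few lines of code; this is a minimal sketch in which the eigenvalue $\lambda_{k}^{2}$ is an arbitrary placeholder rather than a root of eq. (6):

```python
# Spectral shape of one mode of eqs. (9)-(10) for a blackbody photon source,
# g(x) = 1/(exp(x)-1).  lam2 below stands for lambda_k^2 and is a placeholder.
import numpy as np
from scipy.integrate import quad

def g(x):
    return 1.0 / np.expm1(x)

def N_hat(x, lam2):
    # \hat N_k(x) = 2 x^{-2 lam2} \int_0^x t^{2 lam2 - 1} g(t) dt  (eq. 9)
    val, _ = quad(lambda t: t ** (2 * lam2 - 1) * g(t), 0.0, x)
    return 2.0 * x ** (-2 * lam2) * val

lam2 = 2.5
x = np.logspace(-1, np.log10(500.0), 60)
L = np.array([xx ** 3 * N_hat(xx, lam2) for xx in x])   # one term of eq. (10)

# For x >> 1 the integral saturates, so L ~ x^(3 - 2*lam2); the photon index
# of this mode is therefore 2*lam2 - 2 (equal to 3 for lam2 = 5/2).
slope = np.polyfit(np.log(x[-10:]), np.log(L[-10:]), 1)[0]
print(slope)   # close to 3 - 2*lam2 = -2
```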
We use here the same notation as in TMK97, where the eigenvalues were labeled according to their asymptotic value at $\dot{m}\gg 1$. In particular, the first eigenvalue $\lambda_{1}^{2}$ is the smallest root of eq. (6). Since the different roots never cross each other when $\dot{m}$ decreases, the hierarchy of the eigenvalues can be derived from their asymptotic values at large $\tau_{b}$. As shown by TMK97, in this limit the roots are $$\displaystyle s_{1}$$ $$\displaystyle\sim$$ $$\displaystyle{3\over 2}+{3\over 4}\left({1-A\over 1+A}\right)\sqrt{r_{b}}\,,$$ $$\displaystyle s_{k}$$ $$\displaystyle\sim$$ $$\displaystyle{2k+1\over 2},\qquad k\geq 2$$ out of which only the smallest one is meaningful and should then be inserted back into eq. (6) to compute the $\lambda_{k}^{2}$’s with $k\geq 2$. As the previous expressions show, it is $\lambda_{1}^{2}=s_{1}$ if $3/2+(3/4)(1-A)(1+A)^{-1}\sqrt{r_{b}}<5/2$ while $\lambda_{1}^{2}=s_{2}$ otherwise. Once eq. (6) is solved with the appropriate value for $\lambda_{1}^{2}$, the two sequences of the eigenvalues at large $\tau_{b}$ turn out to be $$\displaystyle\lambda_{1}^{2}$$ $$\displaystyle\sim$$ $$\displaystyle{3\over 2}+{3\over 4}\left({1-A\over 1+A}\right)\sqrt{r_{b}}\,,$$ $$\displaystyle\lambda^{2}_{k}$$ $$\displaystyle\sim$$ $$\displaystyle{2k+1\over 2},\qquad k\geq 2$$ for $3/2+3/4(1-A)/(1+A)\sqrt{r_{b}}<5/2$ and $$\lambda_{k}^{2}\sim{2k+3\over 2}\,\qquad k\geq 1$$ for $3/2+(3/4)(1-A)(1+A)^{-1}\sqrt{r_{b}}>5/2$. Since $\hat{N}_{k}(x)\sim x^{-2\lambda_{k}^{2}}$ for $x\gg 1$, the flatter spectral mode corresponds to the smaller $\lambda_{k}^{2}$. It is natural to expect that terms of higher order do not contribute significantly to the series (10) at large frequencies. Naively, one might be tempted to conclude that the value of the spectral index in the power-law tail is dictated only by the smallest eigenvalue (the same conclusion is in fact at the basis of most previous investigations, see e.g. TMK97). However, while terms with $k>2$ never significantly contribute to the high-energy tail, the dominant mode can be either the first or the second, depending on the parameters of the model. For $\lambda_{1}^{2}\to 5/2$ the first mode is always the dominant one, but when $\lambda_{1}^{2}\to 3/2+(3/4)(1-A)(1+A)^{-1}\sqrt{r_{b}}<5/2$, it dominates only in a limited range of relatively small $\dot{m}$. This is illustrated in Figure 5a, b, which shows the first nine terms of the series (10) for $A=0$, $r_{b}=1$, $n=5$ and two different values of $\dot{m}=2\tau_{b}r_{b}/3$. At relatively low $\dot{m}$ (Fig. 5a) the mode $k=1$ is indeed the dominant one, but when $\dot{m}$ increases (Fig. 5b) it is the $k=2$ term which gives the largest contribution over many decades in frequency. The main reason is that the expansion coefficient $c_{1}$ goes to zero exponentially fast when $\tau_{b}$ increases. This has already been noted by TMK97 (see also Mastichiadis & Kylafis 1992) in discussing the equivalence of their solution to that of Payne & Blandford (1981) for $\tau_{b}\rightarrow\infty$. However, they did not point out that, as a consequence, the eigenvalue $\lambda_{1}^{2}$ does not represent the spectral index at large $\dot{m}$. The main consequence is the appearance of a range in $\dot{m}$ over which the spectral index makes a “transition” from the first to the second mode. This is illustrated in Figure 6, which makes evident that the spectral index in the transition region does depend on the spatial distribution of the source.
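The eigenvalue hierarchy just described can be checked numerically. The sketch below (again our own illustration) takes eq. (6) as written, fixes $\lambda_{1}^{2}$ in the coefficients $p$ and $q$ at its large-$\tau_{b}$ asymptotic value rather than solving self-consistently, and brackets the smallest roots using Kummer's function; the resulting $2\lambda_{k}^{2}-2$ are the candidate photon indices discussed above.

```python
# Minimal sketch (illustrative, not the authors' code): locate the smallest roots
# lambda_k^2 of eq. (6) for given tau_b, r_b, A, with lambda_1^2 in the
# coefficients p, q taken at its large-tau_b asymptotic value.
import numpy as np
from scipy.special import hyp1f1        # Kummer's function Phi(a, b, z)
from scipy.optimize import brentq

def eigen_equation(lam2, lam1_sq, tau_b, eps):
    p = (2.5 - (2.0 * lam1_sq + eps) * tau_b / 3.0) * (-lam2 + 1.5 + tau_b) \
        + tau_b * (-2.0 * lam2 + 0.5 + tau_b)
    q = (2.5 - (2.0 * lam1_sq + eps - 3.0) * tau_b / 3.0) * (lam2 + 2.0)
    return p * hyp1f1(-lam2 + 2.5, 4.5, tau_b) + q * hyp1f1(-lam2 + 1.5, 4.5, tau_b)

def roots(tau_b, r_b=1.0, A=0.0, lam2_max=8.0, n_grid=400):
    eps = -3.0 * (1.0 - A) * np.sqrt(r_b) / (2.0 * (1.0 + A))
    s1 = 1.5 + 0.75 * (1.0 - A) / (1.0 + A) * np.sqrt(r_b)
    lam1_sq = min(s1, 2.5)              # asymptotic first eigenvalue (see text)
    grid = np.linspace(0.1, lam2_max, n_grid)
    vals = [eigen_equation(l, lam1_sq, tau_b, eps) for l in grid]
    out = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:                 # sign change -> refine with Brent's method
            out.append(brentq(eigen_equation, a, b, args=(lam1_sq, tau_b, eps)))
    return out

for m_dot in (2.0, 10.0):
    tau_b = 1.5 * m_dot                 # tau_b = (3/2) m_dot / r_b with r_b = 1
    print(m_dot, [f"Gamma = {2*l - 2:.2f}" for l in roots(tau_b)[:3]])
```

For $A=0$ and $r_{b}=1$ the asymptotic first eigenvalue is $\lambda_{1}^{2}=9/4<5/2$, so the first-mode index is $\Gamma=2\lambda_{1}^{2}-2=2.5$, while the second mode tends to $\Gamma=3$, consistent with the range of indices $\sim 2.5$–$3$ discussed later in the paper.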
We stress that all these considerations do not reflect the behavior of the power-law tail in the limit $x\to\infty$. At really large energies, the spectral index is always fixed by $\lambda_{1}^{2}$, as it can be guessed from Figure 5b. However, the observationally accessible part of the high-energy tail is indeed dictated by $\lambda_{2}^{2}$. Figure 7a illustrates the dependence of the spectral index on $r_{b}$. As it can be seen, the variation with $r_{b}$ is not monotonic. At low $\dot{m}$ spectra tend to be harder as the inner boundary moves closer to the horizon, while at larger $\dot{m}$ the behavior of the spectral index is more complicated. Models with different $r_{b}$ approach $2\lambda_{2}^{2}-2$ in correspondence of different values of $\dot{m}$, and beyond $\dot{m}\approx 10$ spectra might be softer at smaller $r_{b}$. It is also worth to stress that, since $\dot{m}=2\tau_{b}r_{b}/3$ and diffusion approximation holds for $\tau_{b}>1$, the larger is $r_{b}$ the larger is the value of $\dot{m}$ at which the analytical solution is not trustworthy any more. The dependence on the albedo $A$ is shown in Figure 7b. Again, all spectral indices tend toward a common asymptotic value which is constant and is given by the second mode. However, before this limit is reached, the dependence on $A$ is monotonic. The larger is $A$ at the inner boundary, the harder is the emergent spectrum, in agreement with previous findings (see Mastichiadis & Kylafis 1992 and TMK97). 3.2 Comparison with numerical models We are now in the position to reconsider the behavior of the spectral index in the numerical sequences in the light of the results discussed in the previous section. The overall situation described by the numerical models is, of course, different from that assumed in the analytical solution, in particular because in the former a) the flow has an inner core, optically thick to true emission/absorption, b) true absorption is consistently accounted for through Kirchhoff’s law, and c) relativistic effects are correctly included. The first consequence is that the numerical sequences we have computed are far from being evaluated at a given, fixed value of both $r_{b}$ and $A$. However, a comparison between numerical and analytical models can be attempted, assuming that $r_{b}$ coincides with the boundary of the region in which scattering is the main source of opacity, $r_{abs}$, where the absorption depth equals unity [see §2 and eq. (2)]. In the region between $r_{a}$ and $r_{abs}$ the role of true absorption is certainly non-negligible and affects the emerging spectrum together with Comptonization. We stress that this is an oversimplification, since it is not possible to define an inner boundary in our numerical models in the same way as it was done in §3.1. This implies also that the value of the albedo at the inner boundary is ill-defined. On the other hand, in all our models the region $r\lesssim r_{abs}$ is optically thick to absorption and is at constant temperature. Under these conditions, we expect the ingoing flux at the boundary to be substantially larger than the outgoing one. This amounts to say the inner part of the flow is illuminated from above and the value of the albedo should consequently be small, $A\lesssim 0.1$. In Figure  8 we show the comparison between numerical and analytical sequences, the latter computed taking $r_{b}=r_{abs}(\dot{m})$, while $A$ is constant along each sequence. 
For $\dot{m}\gtrsim 4$ the results derived in diffusion approximation are in very good agreement with those computed with the full angle-dependent code for reasonable values of the albedo. As expected, the agreement becomes worse at low $\dot{m}$ where the diffusion approximation starts to break down. 4 Discussion and Conclusions Previous investigations on bulk motion Comptonization studied the properties of the emerging spectrum, in particular as far as the formation of a hard, power-law tail is concerned. In this respect, two issues are of the highest relevance in connection with observations of Galactic X-ray binaries (XRBs) in the high/soft state. The first is whether the appearance of a high energy power-law tail is a ubiquitous feature of converging accretion flows, the second is to which extent the spectral index is independent on the details of model, mainly on the spatial and energy distribution of the seed photons. TMK97, Titarchuk & Zannias (1998) and Laurent & Titarchuk (1999) have shown that, for the cases they have examined, a power-law tail is always present and the spectral index is almost insensitive to source distribution. Our present results strengthen these conclusions. At the same time, we further clarify the properties of the spectral index and its dependence on the detail of the source. The main conclusion we derived in §3.1 (see in particular Fig. 6) is that the photon index $\Gamma$ switches from $\Gamma=2\lambda_{1}^{2}-2$ to $\Gamma=2\lambda_{2}^{2}-2$ as the accretion rate increases. The location and width of the transition region depends in general on the boundary conditions, i.e. on $r_{b}$ and $A$ (see e.g. Fig. 7b). For example, for the case $r_{b}=1$, $A=0$, the transition occurs for $5\lesssim\dot{m}\lesssim 15$ while in solutions with larger $r_{b}$ the transition region is wider and shifted toward larger $\dot{m}$. The properties of the spectral index in the different ranges of $\dot{m}$ (below, within and above the transition region) are dramatically different, as it is shown by both numerical and analytical results. We now discuss this point in more detail by taking as representative the case $r_{b}=1$, $A=0$, illustrated in Figure 6. As we can see, for low accretion rates, $\dot{m}\lesssim 5$, the photon index $\Gamma$ is indeed independent on the source spatial distribution and is related only to the first eigenvalue, $\Gamma=2\lambda_{1}^{2}-2$. Although $\lambda_{1}^{2}$ does depend on both the albedo and the location of the inner boundary, its value is always the same for a flow which is absorption thin all the way down to the horizon. This is likely to reproduce the situation in a black hole XRB, in which most of the material is carried inwards by a standard disk and only a small fraction accretes roughly spherically. The different behavior of our numerical models is not in contradiction with this statement. In fact, an important point to realize is that, owing to the presence of an absorption thick core, the range of $\dot{m}$ explored numerically falls below (or at most at the beginning of) the transition region. The value of the photon index (which, in this range, is dominated by $\lambda_{1}^{2}$) changes because of a change in both the location of the inner boundary ($r_{abs}$ is a function of $\dot{m}$) and the albedo. Therefore, the variation of $\Gamma$ with $n$ (i.e. 
with the photon source) seen in our numerical sequences is not directly connected with the change of the source itself, but is largely due to the change in the boundary conditions. The situation is different in the transition region, i.e. for intermediate values of the accretion rate ($5\lesssim\dot{m}\lesssim 15$ in the case $r_{b}=1$ and $A=0$). Now the spectral index genuinely depends on the spatial distribution of primary photons. This is true irrespective of the assumed boundary conditions, i.e. all models are expected to exhibit a range in $\dot{m}$ (which can vary depending on $r_{b}$ and $A$) over which the shape of $S(\tau)$ is important in determining $\Gamma$ (see Fig. 7b). It should be noted, nevertheless, that $\Gamma$ depends rather weakly on the spatial distribution, as can be seen from Fig. 6. As a consequence, spatially concentrated photon sources (like the ones we used here) or more diffuse ones (such as that employed by TMK97) give rise to spectral indices which are not significantly different. This implies that $\Gamma$ is about the same for primary photons produced in the disk (diffuse source) or in the inner, denser region of an ADAF (concentrated source). For large accretion rates (i.e. $\dot{m}\gtrsim 15$ for $r_{b}=1$ and $A=0$), the spectral index quickly approaches $\Gamma=2\lambda_{2}^{2}-2$ and again becomes independent of the source distribution. Since the value of $\dot{m}$ at which the transition from $2\lambda_{1}^{2}-2$ to $2\lambda_{2}^{2}-2$ is completed is larger than the value at which $\lambda_{2}^{2}$ attains its asymptotic value ($\dot{m}\sim 10$), the index is also insensitive to both $r_{b}$ and $A$, and it is $\Gamma\sim 3$. The same considerations apply to solutions with larger $r_{b}$ (see Fig. 7b). We turn now to the comparison of the present results with those derived in previous investigations for the spectral index and its dependence on the accretion rate. Here we are concerned only with the case of $A=0$ and $r_{b}=1$, that is to say we assume that the accreting object is a black hole and that true emission/absorption can be neglected. The results are summarized in Fig. 9, where the spectral index is shown as a function of the accretion rate. As can be seen, the major differences are at low values of $\dot{m}$, where the various approaches give extremely different results. We note first that methods based on the solution of the full (angle-dependent) transfer problem provide quite similar (albeit not exactly equal) results, with $\Gamma$ a monotonically decreasing function of $\dot{m}$. The relativistic diffusion approximation predicts much softer spectra and $\Gamma$ is now monotonically increasing with the accretion rate. Non-relativistic diffusion produces an intermediate result in which the spectral index exhibits a minimum. An obvious point to recall is that diffusion-limit solutions are valid only for $\tau\gg 1$, so they are bound to fail when $\dot{m}\sim\tau_{b}\geq\tau$ is a few or less. Moreover, the relativistic analysis by Turolla, Zane, Zampieri & Nobili (1996) strictly speaking applies only below the trapping radius. Clearly, for small accretion rates the right answer is that provided by the full transfer calculation. However, the comparison of the various curves in Fig. 9 shows that the different behavior of the index at small $\dot{m}$ is not a direct effect of general relativity and hence of the presence of the horizon.
Would this be true one should expect the relativistic diffusion approximation to describe much better the solution. Also, in free-fall gravity and dynamics cancel each other at a given radius, so no relativistic effects are expected in the flow local rest frame. This argument is not against the claim that the detection of a power-law tail with index $\sim 2.5$–3 is indicative of a black hole rather than a neutron star. The reason for this, nevertheless, are not general relativistic effects but the fact that only a black hole provides the conditions under which a converging flow may reach $\tau_{b}\gtrsim 1$. This is not possible for a neutron star, because of the much higher accretion efficiency. The presence of a solid crust at $\sim 3r_{g}$ implies that all the kinetic energy of the inflowing material is released upon impact with an efficiency $\approx 1/6$. This means that $\dot{m}\sim 1$ is enough to produce a near-Eddington luminosity which slows down the flow until a settling solution is established (see e.g. Miller 1990; Zampieri, Turolla & Treves 1993). As a consequence, $\tau_{b}^{NS}\ll\tau_{b}^{BH}$ at the same $\dot{m}$ because the velocity is much smaller. Increasing $\dot{m}$ is of no avail since, as Zampieri, Turolla & Treves (1993) have shown, $\tau_{b}$ reaches $\sim 1$ for $\dot{m}\sim 3.5$ then starts to decrease. Consequently, for a spherically accreting neutron star a power-law tail may indeed form but it is always steeper (softer spectrum) than in black holes. It should be realized also that the inner part of the accreting flow, close enough to the star crust, is bound to be effectively thick, so assuming that the surface acts as a reflecting boundary does not appear entirely realistic. Finally, a comment of the asymptotic value of the index for $\tau_{b}\to\infty$ is in order. Both relativistic and non-relativistic diffusion calculations give an asymptotic index $\Gamma=3$. The fact that the relativistic curve of Turolla, Zane, Zampieri & Nobili (1996) approaches 3 at much larger values of $\dot{m}$ is related to the particular source function they used. In fact, monochromatic photons were assumed to be injected at a single radius very close to the horizon. The behavior of the index, as discussed previously, depends on the spatial distribution of the source and the transition region is wider and shifted at larger $\dot{m}$ as the source is more and more concentrated. This shows once more that there are no substantial differences between relativistic and non-relativistic diffusion, once proper allowance for the spatial distribution of seed photons is made. In concluding, we would like to stress that present results can not be directly compared with observations. As several investigations have shown (e.g. Shrader & Titarchuk 1998; Shrader & Titarchuk 1999; Borozdin et al. 1999), modeling the X-ray spectrum of black hole binaries requires the inclusion of both thermal and dynamical comptonization. Moreover, we neglected electron recoil (e.g. we assumed elastic scattering in the electron rest frame), so no high-energy cut-off is present in our models. If, as it seems reasonable, the estimates derived by TMK97 and Laurent & Titarchuk (1999) apply also in the present case, a cut-off at energies $\approx 300-400$ keV is expected. The lack of a break in high-energy spectra of black hole sources may be suggestive of an alternative origin of the power-law tails, e.g. comptonization by thermal/non-thermal electrons (e.g. Gierliński et al. 1999). 
Up to now, however, observations are not compelling in this respect (e.g. Zdziarski et al. 2001 and Titarchuk & Shrader 2002 for OSSE observations of GRS 1915+105). Also, Laurent & Titarchuk (2001) found a high photon compactness near the horizon due to relativistic ray bending. This should lead to pair production and ultimately to an extension of the power-law tail to energies $\approx 1$ MeV. They have also shown that the power-law part of the spectrum at energies less than $\approx$ 100-200 keV is not affected by nonlinear photon-electron interactions. Acknowledgments Work partially supported by the Italian Ministry for Education, University, and Research (MIUR) through grant COFIN-2000-MM02C71842. LT is grateful to Kinwah Wu for fruitful discussions during his visit to MSSL. References Abramowitz & Stegun (1970) Abramowitz M., & Stegun I.A. 1970, Handbook of Mathematical Functions (Dover: New York) Borozdin et al. (1999) Borozdin K., et al. 1999, ApJ, 517, 367 Chakrabarti & Titarchuk (1995) Chakrabarti S.K., & Titarchuk L. 1995, ApJ, 455, 623 Colpi (1988) Colpi M. 1988, ApJ, 326, 233 Cowsik & Lee (1982) Cowsik R., & Lee M.A. 1982, Proc. R. Soc. London, Ser. A, 383, 409 Ebisawa, Titarchuk & Chakrabarti (1996) Ebisawa K., Titarchuk L., & Chakrabarti S.K. 1996, PASJ, 48, 59 Gierliński et al. (1999) Gierliński M. et al. 1999, MNRAS, 309, 496 Laurent & Titarchuk (1999) Laurent P., & Titarchuk L. 1999, ApJ, 511, 289 Laurent & Titarchuk (2001) Laurent P., & Titarchuk L. 2001, ApJ, 562, L67 Mastichiadis & Kylafis (1992) Mastichiadis A. & Kylafis N.D., 1992, ApJ, 384, 136 Miller (1990) Miller G.S. 1990, ApJ, 356, 572 Narayan, Mahadevan & Quataert (1999) Narayan R., Mahadevan R., & Quataert E. 1999, in The Theory of Black Hole Accretion Disks, Abramowicz M.A., Bjornsson G. and Pringle J.E. eds. (Cambridge University Press: Cambridge) Nobili, Turolla & Zampieri (1993) Nobili L., Turolla R., & Zampieri L. 1993, ApJ, 404, 686 Papathanassiou & Psaltis (2001) Papathanassiou H., & Psaltis D. 2001, preprint (astro-ph/0011447) Payne & Blandford (1981) Payne D.G., & Blandford R.D. 1981, MNRAS, 196, 781 Schneider & Bogdan (1989) Schneider P., & Bogdan T.J. 1989, ApJ, 347, 496 Shakura & Sunyaev (1973) Shakura N.I., & Sunyaev R. 1973, A&A, 24, 337 Shrader & Titarchuk (1998) Shrader C., & Titarchuk, L. 1998, ApJ, 499, L31 Shrader & Titarchuk (1999) Shrader C., & Titarchuk, L. 1999, ApJ, 521, L121 Sunyaev & Titarchuk (1985) Sunyaev R., & Titarchuk, L. 1985, A&A, 143, 374 Sunyaev & Titarchuk (1980) Sunyaev R., & Titarchuk, L. 1980, A&A, 86, 121 Titarchuk, Mastichiadis & Kylafis (1996) Titarchuk L., Mastichiadis A., & Kylafis N.D. 1996, A&AS, 120, 171 Titarchuk, Mastichiadis & Kylafis (1997) Titarchuk L., Mastichiadis A., & Kylafis N.D. 1997, ApJ, 493, 863 (TMK97) Titarchuk & Shrader (2002) Titarchuk L., & Shrader C. 2002, ApJ, 567, 1057 Titarchuk & Zannias (1998) Titarchuk L., & Zannias, T. 1998, ApJ, 493, 863 Turolla, Zane, Zampieri & Nobili (1996) Turolla R., Zane S., Zampieri L., & Nobili L. 1996, MNRAS, 283, 881 Zampieri, Turolla & Treves (1993) Zampieri L., Turolla R., & Treves A. 1993, ApJ, 419, 311 Zane, Turolla, Nobili & Erna (1996) Zane S., Turolla, R., Nobili L., & Erna M. 1996, ApJ, 466, 871 Zdziarski et al. (2001) Zdziarski A., et al. 2001, ApJ, 554, L45
Current-Driven Motion of Magnetic Domain Wall with Many Bloch Lines Junichi Iwasaki${}^{1}$ and Naoto Nagaosa${}^{1,2}$ iwasaki@appi.t.u-tokyo.ac.jp nagaosa@ap.t.u-tokyo.ac.jp ${}^{1}$Department of Applied Physics, University of Tokyo, Tokyo 113-8656, Japan ${}^{2}$RIKEN Center for Emergent Matter Science (CEMS), Wako, Saitama 351-0198, Japan Abstract The current-driven motion of a domain wall (DW) in a ferromagnet with many Bloch lines (BLs) via the spin transfer torque is studied theoretically. It is found that the motion of BLs changes the current-velocity ($j$-$v$) characteristic dramatically. In particular, the critical current density to overcome the pinning force is reduced by the factor of the Gilbert damping coefficient $\alpha$ even compared with that of a skyrmion. This is in sharp contrast to the case of magnetic-field-driven motion, where the existence of BLs reduces the mobility of the DW. Domain walls (DWs) and bubbles [1, 2] are spin textures in ferromagnets which have been studied intensively over decades from the viewpoints of both fundamental physics and applications. The memory functions of these objects were one of the main focuses during the 1970s, but their manipulation by means of the magnetic field faced the difficulty associated with the pinning which hinders their motion. The new aspect introduced recently is the current-driven motion of the spin textures [3, 4]. The flow of the conduction electron spins, which follow the direction of the background localized spin moments, moves the spin texture due to the conservation of the angular momentum. This effect, the so-called spin transfer torque, has been shown to be more effective than the magnetic field for manipulating DWs and bubbles. The magnetic skyrmion [5, 6] is an especially interesting object: a swirling spin texture acting as an emergent particle protected by the topological invariant, i.e., the skyrmion number $N_{\mathrm{sk}}$, defined by $$N_{\mathrm{sk}}=\frac{1}{4\pi}\int\mathrm{d}^{2}r\ \bm{n}(\bm{r})\cdot\left(\frac{\partial\bm{n}(\bm{r})}{\partial x}\times\frac{\partial\bm{n}(\bm{r})}{\partial y}\right)$$ (1) with $\bm{n}(\bm{r})$ being the unit vector representing the direction of the spin as a function of the two-dimensional spatial coordinates $\bm{r}$. This is the integral of the solid angle subtended by $\bm{n}$, and counts how many times the unit sphere is wrapped. The solid angle and skyrmion number $N_{\mathrm{sk}}$ also play an essential role when one derives the equation of motion for the center of mass of the spin texture, i.e., the gyro-motion is induced by $N_{\mathrm{sk}}$ in the Thiele equation, where rigid-body motion is assumed [7, 8]. Beyond the Thiele equation [7], one can derive the equation of motion of a DW in terms of two variables, i.e., the wall-normal displacement $q(t,\zeta,\eta)$ and the wall-magnetization orientation angle $\psi(t,\zeta,\eta)$ (see Fig.
1) where $\zeta$ and $\eta$ are general coordinates specifying the point on the DW [9]: $$\displaystyle\frac{\delta\sigma}{\delta\psi}=2M\gamma^{-1}\left[\dot{q}-\alpha% \Delta\dot{\psi}-v^{\mathrm{s}}_{\perp}-\beta\Delta v^{\mathrm{s}}_{\parallel}% (\partial_{\parallel}\psi)\right],$$ (2) $$\displaystyle\frac{\delta\sigma}{\delta q}=-2M\gamma^{-1}\left[\dot{\psi}+% \alpha\Delta^{-1}\dot{q}+v^{\mathrm{s}}_{\parallel}(\partial_{\parallel}\psi)-% \beta\Delta^{-1}v^{\mathrm{s}}_{\perp}\right],$$ (3) Here, $\dot{}$ means the time-derivative. $\parallel$ and $\perp$ indicate the components parallel and perpendicular to the DW respectively. $M$ is the magnetization, $\gamma$ is the gyro-magnetic ratio, and $\sigma$, $\Delta$ are the energy per area and thickness of the DW. $v^{\mathrm{s}}$ is the velocity of the conduction electrons, which produces the spin transfer torque. $\alpha$ is the Gilbert damping constant, and $\beta$ represents the non-adiabatic effect. These equations indicate that $q$ and $\psi$ are canonical conjugate to each other. This is understood by the fact that the generator of the spin rotation normal to the DW, which is proportional to $\sin\psi$ in Fig. 1, drives the shift of $q$. (Note that $\psi$ is measured from the fixed direction in the laboratory coordinates.) In order to reduce the magnetostatic energy, the spins in the DW tend to align parallel to the DW, i.e., Bloch wall. When the DW is straight, this structure is coplanar and has no solid angle. From the viewpoint of eqs. (2) and (3), the angle $\psi$ is fixed around the minimum, and slightly canted when the motion of $q$ occurs, i.e., $\dot{\psi}=0$. However, it often happens that the Bloch lines (BLs) are introduced into the DW as shown schematically in Fig. 1. The angle $\psi$ rotates along the DW and the Néel wall is locally introduced. It is noted here that the solid angle becomes finite in the presence of the BLs. Also with many BLs in the DW, the translation of BLs activates the motion of the angle $\psi$, i.e., $\dot{\psi}\neq 0$, which leads to the dramatic change in the dynamics. In the following, we focus on the straight DW which extends along $x$-direction and is uniform in $z$-direction. Thus, the general coordinates here are $(\zeta,\eta)=(x,z)$. $q(t,x,z)$ is independent of the coordinates $q(t,x,z)=q(t)$, and the functional derivative $\delta\sigma/\delta q$ in eq. (3) becomes the partial derivative $\partial\sigma/\partial q$. In the absence of BLs, we set $\psi(t,x,z)=\psi(t)$, and $\delta\sigma/\delta\psi$ in eq. (2) also becomes $\partial\sigma/\partial\psi$. Then the equation of motion in the absence of BL is $$\displaystyle\frac{\partial\sigma}{\partial\psi}=2M\gamma^{-1}\left[\dot{q}-% \alpha\Delta\dot{\psi}-v^{\mathrm{s}}_{\perp}\right],$$ (4) $$\displaystyle\frac{\partial\sigma}{\partial q}=-2M\gamma^{-1}\left[\dot{\psi}+% \alpha\Delta^{-1}\dot{q}-\beta\Delta^{-1}v^{\mathrm{s}}_{\perp}\right],$$ (5) With many BLs, the sliding motion of Bloch lines along DW, which activates $\dot{\psi}$, does not change the wall energy, i.e., $\delta\sigma/\delta\psi$ in eq. (2) vanishes [2]. 
Here, for simplicity, we consider the periodic BL array with the uniform twist $\psi(t,x,z)=(x-p(t))/\tilde{\Delta}$ where $\tilde{\Delta}$ is the distance between BLs, which leads to $$\displaystyle 0=2M\gamma^{-1}\left[\dot{q}+\alpha\Delta\tilde{\Delta}^{-1}\dot% {p}-v^{\mathrm{s}}_{\perp}-\beta\Delta\tilde{\Delta}^{-1}v^{\mathrm{s}}_{% \parallel}\right],$$ (6) $$\displaystyle\frac{\partial\sigma}{\partial q}=-2M\gamma^{-1}\left[-\tilde{% \Delta}^{-1}\dot{p}+\alpha\Delta^{-1}\dot{q}+\tilde{\Delta}^{-1}v^{\mathrm{s}}% _{\parallel}-\beta\Delta^{-1}v^{\mathrm{s}}_{\perp}\right],$$ (7) First, let us discuss the magnetic field driven motion without current. The effect of the external magnetic field $H^{\mathrm{ext}}$ is described by the force $\partial\sigma/\partial q=-2MH^{\mathrm{ext}}$ in eqs. (5) and (7). $v^{\mathrm{s}}_{\parallel}$ and $v^{\mathrm{s}}_{\perp}$ are set to be zero. In the absence of BL, as mentioned above, the phase $\psi$ is static $\dot{\psi}=0$ with the slight tilt of the spin from the easy-plane, and one obtains from eq. (5) $$\displaystyle\dot{q}=\frac{\Delta\gamma H^{\mathrm{ext}}}{\alpha}.$$ (8) This is a natural result, i.e., the mobility is inversely proportional to the Gilbert damping $\alpha$. $\psi$ is determined by eq. (4) with this value of the velocity $\dot{q}$. In the presence of many BLs, eqs. (6) and (7) give the velocities of DW and BL sliding driven by the magnetic field as $$\displaystyle\dot{q}=\frac{\alpha}{1+\alpha^{2}}\Delta\gamma H^{\mathrm{ext}},$$ (9) $$\displaystyle\dot{p}=-\frac{1}{1+\alpha^{2}}\tilde{\Delta}\gamma H^{\mathrm{% ext}}.$$ (10) Comparing eqs. (8) and (9), the mobility of the DW is reduced by the factor of $\alpha^{2}$ since $\alpha$ is usually much smaller than unity. We also note that the velocity of the BL sliding $\dot{p}$ is larger than that of the wall $\dot{q}$ by the factor of $\alpha$. Physically, this means that the effect of the external magnetic field $H^{\mathrm{ext}}$ mostly contributes to the rapid motion of the BLs along the DW rather than the motion of the DW itself. These results have been already reported in refs. [2, 9, 10]. Now let us turn to the motion induced by the current $v^{\mathrm{s}}$. In the absence of BL, again we put $\dot{\psi}=0$ in eqs. (4) and (5). Assuming that there is no pinning force or external magnetic field, i.e., $\partial\sigma/\partial q=0$, one obtains from eq. (5) $$\displaystyle\dot{q}=\frac{\beta}{\alpha}v^{\mathrm{s}}_{\perp},$$ (11) and eq. (4) determines the equilibrium value of $\psi$. When the pinning force $\partial\sigma/\partial q=F^{\mathrm{pin}}$ is finite, there appears a threshold current density $\left(v^{\mathrm{s}}_{\perp}\right)_{\mathrm{c}}$ which is determined by putting $\dot{q}=0$ in eq. (5) as $$\displaystyle\left(v^{\mathrm{s}}_{\perp}\right)_{\mathrm{c}}=\frac{\gamma% \Delta}{2M\beta}F^{\mathrm{pin}},$$ (12) which is inversely proportional to $\beta$ [11]. Since eq. (11) is independent of $v^{\mathrm{s}}_{\parallel}$, the threshold current density $\left(v^{\mathrm{s}}_{\parallel}\right)_{\mathrm{c}}$ is $\left(v^{\mathrm{s}}_{\parallel}\right)_{\mathrm{c}}=\infty$. In the presence of the many BLs, on the other hand, eqs. 
(6) and (7) give $$\displaystyle\frac{\partial\sigma}{\partial q}=-2M\gamma^{-1}$$ $$\displaystyle\left[\frac{1+\alpha^{2}}{\alpha}\Delta^{-1}\dot{q}\right.$$ $$\displaystyle\left.-\frac{1+\alpha\beta}{\alpha}\Delta^{-1}v^{\mathrm{s}}_{% \perp}-\frac{\beta-\alpha}{\alpha}\tilde{\Delta}^{-1}v^{\mathrm{s}}_{\parallel% }\right],$$ (13) which is the main result of this paper. From eq. (13), the current-velocity characteristic in the absence of both the pinning and the external field ($\partial\sigma/\partial q$=0) is $$\displaystyle\dot{q}$$ $$\displaystyle=\frac{1+\alpha\beta}{1+\alpha^{2}}v^{\mathrm{s}}_{\perp}-\frac{% \beta-\alpha}{1+\alpha^{2}}\Delta\tilde{\Delta}^{-1}v^{\mathrm{s}}_{\parallel}$$ $$\displaystyle\simeq v^{\mathrm{s}}_{\perp}+(\beta-\alpha)\Delta\tilde{\Delta}^% {-1}v^{\mathrm{s}}_{\parallel},$$ (14) where the fact $\alpha,\beta\ll 1$ is used in the last step. If we neglect the term coming from $v^{\mathrm{s}}_{\parallel}$, the current-velocity relation becomes almost independent of $\alpha$ and $\beta$ in sharp contrast to eq. (11). This is similar to the universal current-velocity relation in the case of skyrmion [12], where the solid angle is finite and also the transverse motion to the current occurs. Note that $v^{\mathrm{s}}_{\parallel}$ slightly contributes to the motion when $\alpha\neq\beta$, while it does not in the absence of BL. Even more dramatic is the critical current density in the presence of the pinning ($\partial\sigma/\partial q=F^{\mathrm{pin}}$). When we apply only the current perpendicular to the DW, i.e., $v^{\mathrm{s}}_{\parallel}=0$, putting $\dot{q}=0$ in eq. (13) determines the threshold current density as $$\displaystyle\left(v^{\mathrm{s}}_{\perp}\right)_{\mathrm{c}}=\frac{\gamma% \Delta}{2M}\frac{\alpha}{1+\alpha\beta}F^{\mathrm{pin}},$$ (15) which is much reduced compared with eq. (12) by the factor of $\frac{\alpha\beta}{1+\alpha\beta}\ll 1$. Note that $\left(v^{\mathrm{s}}_{\perp}\right)_{\mathrm{c}}$ in eq. (15) is even smaller than the case of skyrmion [12] by the factor of $\alpha$. Similarly, the critical current density of the motion driven by $v^{\mathrm{s}}_{\parallel}$ is given by $$\displaystyle\left(v^{\mathrm{s}}_{\parallel}\right)_{\mathrm{c}}=\frac{\gamma% \tilde{\Delta}}{2M}\frac{\alpha}{|\beta-\alpha|}F^{\mathrm{pin}},$$ (16) which can also be smaller than eq. (12). Next we look at the numerical solutions of $q(t)$ driven by the current $v^{\mathrm{s}}_{\perp}$ perpendicular to the wall under the pinning force. We assume the following pinning force: $(\gamma\Delta/2M)F^{\mathrm{pin}}(q)=v^{\ast}(q/\Delta)\exp\left[-(q/\Delta)^{% 2}\right]$ (see the inset of Fig. 2(a)). We employ the unit of $\Delta=v^{\ast}=1$ and the parameters $(\alpha,\beta)$ are fixed at $(\alpha,\beta)=(0.01,0.02)$. Here, we compare two DWs without BL and with BLs. The maximum value of the pinning force $(\gamma\Delta/2M)F^{\mathrm{pin}}_{\mathrm{max}}=0.429$ determines the threshold current density $\left(v^{\mathrm{s}}_{\perp}\right)_{\mathrm{c}}$ as $\left(v^{\mathrm{s}}_{\perp}\right)_{\mathrm{c}}=21.4$ and $\left(v^{\mathrm{s}}_{\perp}\right)_{\mathrm{c}}=0.00429$ in the absence of BL and in the presence of many BLs, respectively. In Fig. 2(a), both DWs overcome the pinning at the current density $v^{\mathrm{s}}_{\perp}=22.0$, although the velocity of the DW without BL is suppressed in the pinning potential. 
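A minimal sketch of the kind of integration behind Fig. 2 is given below (our own illustration, measuring lengths in units of $\Delta$, forces as $(\gamma\Delta/2M)F^{\mathrm{pin}}$, and taking $v^{*}=1$ and $\tilde{\Delta}=\Delta$): it integrates $\dot{q}$ obtained from eq. (5) with $\dot{\psi}=0$ for the wall without BLs and from eq. (13) with $v^{\mathrm{s}}_{\parallel}=0$ for the wall with many BLs, for drives around the two thresholds quoted above.

```python
# Minimal sketch (illustrative, reduced units Delta = v* = gamma*Delta/2M = 1,
# tilde_Delta = Delta): DW coordinate q(t) pushed by v_perp against the pinning
# force f(q) = q*exp(-q^2), whose maximum is ~0.429 at q = 1/sqrt(2).
import numpy as np

alpha, beta = 0.01, 0.02

def f_pin(q):
    return q * np.exp(-q ** 2)

def qdot_no_bl(q, v):                      # from eq. (5) with psi_dot = 0
    return (beta * v - f_pin(q)) / alpha

def qdot_many_bl(q, v):                    # from eq. (13) with v_parallel = 0
    return ((1 + alpha * beta) * v - alpha * f_pin(q)) / (1 + alpha ** 2)

def final_q(qdot, v, q0=-2.0, t_max=50.0, dt=1e-3):
    q = q0
    for _ in range(int(t_max / dt)):       # simple explicit Euler integration
        q += dt * qdot(q, v)
    return q

# Thresholds: (v_perp)_c = 0.429/beta ~ 21.4 without BLs,
#             (v_perp)_c = 0.429*alpha/(1+alpha*beta) ~ 0.0043 with many BLs.
print("no BL,   v = 21.0   ->", final_q(qdot_no_bl, 21.0))                          # stays pinned
print("no BL,   v = 22.0   ->", final_q(qdot_no_bl, 22.0))                          # depinned
print("many BL, v = 0.0042 ->", final_q(qdot_many_bl, 0.0042, t_max=3e3, dt=0.05))  # stays pinned
print("many BL, v = 0.0050 ->", final_q(qdot_many_bl, 0.0050, t_max=3e3, dt=0.05))  # depinned
```

With these parameters the wall without BLs remains trapped just below $v^{\mathrm{s}}_{\perp}\simeq 21.4$, whereas the wall with many BLs already depins at drives three to four orders of magnitude smaller, which is the content of eqs. (12) and (15).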
At the current density $v^{\mathrm{s}}_{\perp}=21.0$ below the threshold value in the absence of BL, the DW without BL is pinned, while that with BLs still moves easily (Fig. 2(b)). The velocity suppression in the presence of BLs is observed at much smaller current density $v^{\mathrm{s}}_{\perp}=0.0043$ (Fig. 2(c)), and finally it stops at $v^{\mathrm{s}}_{\perp}=0.0042$ (Fig. 2(d)). All the discussion above relies on the assumption that the wall is straight and $\psi$ rotates uniformly. When the bending of the DW and non-uniform distribution of BLs are taken into account, the average velocity and the threshold current density take the values between two cases without BL and with many BLs. The situation changes when the DW forms closed loop, i.e., the domain forms a bubble. The bubble with many BLs and large $|N_{\mathrm{sk}}|$ is called hard bubble because the repulsive interaction between the BLs makes it hard to collapse the bubble [2]. At the beginning of the motion, the BLs move along the DW, which results in the tiny critical current. In the steady state, however, the BLs accumulate in one side of the bubble [13, 14]. Then, the configuration of the BLs is static and the Thiele equation is justified as long as the force is slowly varying within the size of the bubble. The critical current density $\left(v^{\mathrm{s}}\right)_{\mathrm{c}}$ is given by $\left(v^{\mathrm{s}}\right)_{\mathrm{c}}\propto F^{\mathrm{pin}}/N_{\mathrm{sk}}$ ($N_{\mathrm{sk}}$ ($\gg 1$): the skyrmion number of the hard bubble), and is reduced by the factor of $N_{\mathrm{sk}}$ compared with the skyrmion with $N_{\mathrm{sk}}=\pm 1$. In conclusion, we have studied the current-induced dynamics of the DW with many BLs. The finite $\dot{\psi}$ in the steady motion activated by BLs sliding drastically changes the dynamics, which has already been reported in the field-driven case. In contrast to the field-driven case, where the mobility is suppressed by introducing BLs, that in the current-driven motion is not necessarily suppressed. Instead, the current-velocity relation shows universal behavior independent of the damping strength $\alpha$ and non-adiabaticity $\beta$. Furthermore, the threshold current density in the presence of impurities is tiny even compared with that of skyrmion motion by the factor of $\alpha$. These findings will stimulate the development of the racetrack memory based on the DW with many BLs. Acknowledgements. We thank W. Koshibae for useful discussion. This work is supported by Grant-in-Aids for Scientific Research (S) (No. 24224009) from the Ministry of Education, Culture, Sports, Science and Technology of Japan. J. I. was supported by Grant-in-Aids for JSPS Fellows (No. 2610547). References [1] A. Hubert and R. Schäfer, Magnetic Domains: The Analysis of Magnetic Microstructures (Springer-Verlag, Berlin, 1998). [2] A. P. Malozemoff and J.C. Slonczewski, Magnetic Domain Walls in Bubble Materials (Academic Press, New York, 1979). [3] J. C. Slonczewski, J. Magn. Magn. Mater. 159, L1–L7 (1996). [4] L. Berger, Phys. Rev. B 54, 9353–9358 (1996). [5] S. Mühlbauer et al., Science 323, 915 (2009). [6] X. Z. Yu et al., Nature 465, 901 (2010). [7] A. A. Thiele, Phys. Rev. Lett. 30, 230 (1973). [8] K. Everschor et al., Phys. Rev. B 86, 054432 (2012). [9] J. C. Slonczewski, J. Appl. Phys. 45, 2705 (1974). [10] A. P. Malozemoff and J. C. Slonczewski, Phys. Rev. Lett. 29, 952 (1972). [11] G. Tatara et al., J. Phys. Soc. Japan 75, 64708 (2006). [12] J. Iwasaki, M. Mochizuki and N. Nagaosa, Nat. Commun. 
4, 1463 (2013). [13] G. P. Vella-Coleiro, A. Rosencwaig and W. J. Tabor, Phys. Rev. Lett. 29, 949 (1972). [14] A. A. Thiele, F. B. Hagedorn and G. P. Vella-Coleiro, Phys. Rev. B 8, 241 (1973).
Learning Bounded Treewidth Bayesian Networks with Thousands of Variables Mauro Scanagatta (IDSIA, SUPSI, USI; Lugano, Switzerland) mauro@idsia.ch, Giorgio Corani (IDSIA, SUPSI, USI; Lugano, Switzerland) giorgio@idsia.ch, Cassio P. de Campos (Queen’s University Belfast, Northern Ireland, UK) c.decampos@qub.ac.uk, Marco Zaffalon (IDSIA; Lugano, Switzerland) zaffalon@idsia.ch. IDSIA: Istituto Dalle Molle di studi sull’Intelligenza Artificiale; SUPSI: Scuola universitaria professionale della Svizzera italiana; USI: Università della Svizzera italiana. Abstract We present a method for learning treewidth-bounded Bayesian networks from data sets containing thousands of variables. Bounding the treewidth of a Bayesian network greatly reduces the complexity of inference. Yet, being a global property of the graph, it considerably increases the difficulty of the learning process. We propose a novel algorithm for this task, able to scale to large domains and large treewidths. Our novel approach consistently outperforms the state of the art on data sets with up to ten thousand variables. 1 Introduction We consider the problem of structural learning of Bayesian networks with bounded treewidth, adopting a score-based approach. Learning the structure of a bounded treewidth Bayesian network is an NP-hard problem (Korhonen and Parviainen, 2013). It is therefore unlikely that an exact algorithm with complexity polynomial in the number of variables $n$ exists. Yet learning Bayesian networks with bounded treewidth is deemed necessary to allow exact tractable inference, since the worst-case inference complexity of known algorithms is exponential in the treewidth $k$. The topic has been thoroughly studied in recent years. A pioneering approach, polynomial in both the number of variables and the treewidth bound, has been proposed in (Elidan and Gould, 2009). It provides an upper bound on the treewidth of the learned structure at each arc addition. The limitation of this approach is that, as the number of variables increases, the bound becomes too large, leading to overly sparse networks. An exact method has been proposed in (Korhonen and Parviainen, 2013), which finds the highest-scoring network with the desired treewidth. However, its complexity increases exponentially with the number of variables $n$. Thus it has been applied in experiments with up to only 15 variables. Parviainen et al. (2014) adopted an anytime integer linear programming (ILP) approach. If the algorithm is given enough time, it finds the highest-scoring network with bounded treewidth. Otherwise it returns a sub-optimal DAG with bounded treewidth. The ILP problem has an exponential number of constraints in the number of variables, which limits its scalability. Nie et al. (2014) proposed a more efficient anytime ILP approach with a polynomial number of constraints in the number of variables. Yet they report that the quality of the solutions quickly degrades as the number of variables exceeds a few dozen and that no satisfactory solutions are found with data sets containing more than 50 variables. Approximate approaches are therefore needed to scale to larger domains. Nie et al. (2015) proposed the approximate method S2. It exploits the notion of k-tree, which is an undirected maximal graph with treewidth $k$. A Bayesian network whose moral graph is a subgraph of a k-tree thus has treewidth bounded by $k$. S2 is an iterative algorithm.
Each iteration consists of two steps: a) sampling uniformly a k-tree from the space of k-trees and b) recovering via sampling a high-scoring DAG whose moral graph is a subgraph of the sampled k-tree. The goodness of the k-tree is approximated using a heuristic evaluation called the Informative Score. Nie et al. (2016) further refine this idea, proposing an exploration guided by A* for finding the optimal k-tree with respect to the Informative Score. This algorithm is called S2+. Recent structural learning algorithms with unbounded treewidth (Scanagatta et al., 2015) can cope with thousands of variables. Yet the unbounded treewidth provides no guarantee about the complexity of inference in the inferred models. We aim at filling this gap, learning treewidth-bounded Bayesian network models in domains with thousands of variables. Structural learning is usually accomplished in two steps: parent set identification and structure optimization. Parent set identification produces a list of suitable candidate parent sets for each variable. Structure optimization assigns a parent set to each node, maximizing the score of the resulting structure without introducing cycles. Our first contribution regards parent set identification. We provide a bound for pruning the sub-optimal parent sets when dealing with the BIC score; the bound is often tighter than the currently published ones (de Campos and Ji, 2011). As a second contribution, we propose two approaches for learning Bayesian networks with bounded treewidth. They are based on an iterative procedure which is able to add new variables to the current structure, maximizing the resulting score and respecting the treewidth bound. We compare experimentally our novel algorithms against S2 and S2+, which represent the state of the art on datasets with dozens of variables. Moreover, we present results for domains involving up to ten thousand variables, providing an increase of two orders of magnitude with respect to the results published to date. Our novel algorithms consistently outperform the competitors. 2 Structural learning Consider the problem of learning the structure of a Bayesian network from a complete data set of $N$ instances $\mathcal{D}=\{D_{1},...,D_{N}\}$. The set of $n$ categorical random variables is $\mathcal{X}=\{X_{1},...,X_{n}\}$. The goal is to find the best DAG $\mathcal{G}=(V,E)$, where $V$ is the collection of nodes and $E$ is the collection of arcs. $E$ can be represented by the set of parents ${\Pi_{1},...,\Pi_{n}}$ of each variable. Different scores can be used to assess the fit of a DAG. We adopt the Bayesian Information Criterion (or simply $\mathrm{BIC}$), which asymptotically approximates the posterior probability of the DAG under common assumptions.
The $\mathrm{BIC}$ score is decomposable, being constituted by the sum of the scores of the individual variables: $$\displaystyle\mathrm{BIC}(\mathcal{G})=$$ $$\displaystyle=\sum_{i=1}^{n}\mathrm{BIC}(X_{i},\Pi_{i})=\sum_{i=1}^{n}\left(\mathrm{LL}(X_{i}|\Pi_{i})+\mathrm{Pen}(X_{i},\Pi_{i})\right)\,,$$ $$\displaystyle\mathrm{LL}(X_{i}|\Pi_{i})=\displaystyle\sum\nolimits_{\pi\in|\Pi_{i}|,~{}x\in|X_{i}|}N_{x,\pi}\log\hat{\theta}_{x|\pi}\,,$$ $$\displaystyle\mathrm{Pen}(X_{i},\Pi_{i})=-\frac{\log N}{2}(|X_{i}|-1)(|\Pi_{i}|)\,,$$ where $\hat{\theta}_{x|\pi}$ is the maximum likelihood estimate of the conditional probability $P(X_{i}=x|\Pi_{i}=\pi)$, and $N_{x,\pi}$ represents the number of times $(X=x\land\Pi_{i}=\pi)$ appears in the data set, and $|\cdot|$ indicates the size of the Cartesian product space of the variables given as argument. Thus $|X_{i}|$ is the number of states of $X_{i}$ and $|\Pi_{i}|$ is the product of the number of states of the parents of $X_{i}$. Exploiting decomposability, we first identify independently for each variable a list of candidate parent sets (the parent set identification task). Later, we select for each node the parent set that yields the highest-scoring treewidth-bounded DAG, which we call structure optimization. 2.1 Parent sets identification When learning with limited treewidth it should be noted that the number of parents is a lower bound for the treewidth, since a node and its parents form a clique in the moralized graph. Thus, before running the structure optimization task, the list of candidate parent sets of each node has to include parent sets with size up to $k$, if the treewidth has to be bounded by $k$ (the precise definition of treewidth will be given later on). In spite of that, for values of $k$ greater than 3 or 4, we cannot compute all candidate parent sets, since it already has time complexity $\Theta(N\cdot n^{k+1})$. In this section we present the first contribution of this work: a bound for BIC scores that can be used to prune their evaluations while processing all parent set candidates. We first need a couple of auxiliary results. Lemma 1. Let $X$ be a node of $\mathcal{X}$, and $\Pi=\Pi_{1}\cup\Pi_{2}$ be a parent set of $X$ such that $\Pi_{1}\cap\Pi_{2}=\emptyset$ and $\Pi_{1},\Pi_{2}\neq\emptyset$. Then $\mathrm{LL}(X|\Pi)=$ $$=\mathrm{LL}(X|\Pi_{1})+\mathrm{LL}(X|\Pi_{2})-\mathrm{LL}(X)+N\cdot\mathrm{ii}(X;\Pi_{1};\Pi_{2}),$$ where $\mathrm{ii}$ is the Interaction Information estimated from data. Proof. It follows trivially from Theorem 1 in (Scanagatta et al., 2015). ∎ It is known that $\mathrm{LL}(\Pi_{1})\leq N\cdot\mathrm{ii}(\Pi_{1};\Pi_{2};X)\leq-\mathrm{LL}(\Pi_{1})$, and that the order of arguments is irrelevant (that is, $\mathrm{ii}(\Pi_{1};\Pi_{2};X)=\mathrm{ii}(\Pi_{2};\Pi_{1};X)=\mathrm{ii}(X;\Pi_{1};\Pi_{2})$). These inequalities provide bounds for the log-likelihood in line with the result presented in Corollary 1 of  (Scanagatta et al., 2015). We can manipulate that result to obtain new tigher bounds. Lemma 2. Let $X,Y_{1},\ldots,Y_{t}$ be nodes of $\mathcal{X}$, and $\Pi\neq\emptyset$ be a parent set for $X$ with $\Pi\cap\mathcal{Y}=\emptyset$, where $\mathcal{Y}=\{Y_{1},\ldots,Y_{t}\}$. Then $\mathrm{LL}(X|\Pi\cup\mathcal{Y})\leq\mathrm{LL}(X|\Pi)+\sum_{i=1}^{t}w(X,Y_{i})$, where $w(X,Y_{i})=\mathrm{MI}(X,Y_{i})-\max\{\mathrm{LL}(X);\mathrm{LL}(Y_{i})\}$, where $\mathrm{MI}(X,Y_{i})=\mathrm{LL}(X|Y_{i})-\mathrm{LL}(X)$ is the empirical mutual information. Proof. 
It follows from the bounds of $\mathrm{ii}(\cdot)$ and the successive application of Lemma 1 to $\mathrm{LL}(X|\Pi\cup\mathcal{Y})$, taking out one node of $\mathcal{Y}$ a time. ∎ The advantage of Lemma 2 is that $\mathrm{MI}(X,Y_{i})$ and $\mathrm{LL}(X)$ and $\mathrm{LL}(Y_{i})$ (and hence $w(X,Y_{i})$) can be all precomputed efficiently in total time $O(N\cdot n)$ for a given $X$, and since BIC is composed of log-likelihood plus penalization (the latter is efficient to compute), we obtain a new means of bounding BIC scores as follows. Theorem 1. Let $X\in\mathcal{X}$, and $\Pi\neq\emptyset$ be a parent set for $X$, $\Pi_{0}=\Pi\cup\{Y_{0}\}$ for some $Y_{0}\in\mathcal{X}\setminus\Pi$, and $Y^{\prime}=\max_{Y\in\mathcal{X}\setminus\Pi_{0}}\left(w(X,Y)+\mathrm{Pen}(X,\Pi\cup\{Y\})\right)$. If $w(X,Y_{0})+\mathrm{Pen}(X,\Pi_{0})\leq\mathrm{Pen}(X,\Pi)$ and $w(X,Y^{\prime})+\mathrm{Pen}(X,\Pi\cup\{Y^{\prime}\})\leq 0$, with $w(\cdot)$ as defined in Lemma 2, then $\Pi_{0}$ and any of its supersets are not optimal. Proof. Suppose $\Pi^{\prime}=\Pi_{0}\cup\mathcal{Y}$, with $\mathcal{Y}=\{Y_{1},\ldots,Y_{t}\}$ and $\mathcal{Y}\cap\Pi_{0}=\emptyset$ ($\mathcal{Y}$ may be empty). We have that $$\displaystyle\mathrm{BIC}$$ $$\displaystyle(X,\Pi^{\prime})=LL(X|\Pi^{\prime})+\mathrm{Pen}(X,\Pi^{\prime})$$ $$\displaystyle\leq$$ $$\displaystyle LL(X|\Pi^{\prime})+\mathrm{Pen}(X,\Pi_{0})+\sum_{i=1}^{t}\mathrm{Pen}(X,\Pi\cup\{Y_{i}\})$$ $$\displaystyle\leq$$ $$\displaystyle\mathrm{LL}(X|\Pi)+\mathrm{Pen}(X,\Pi_{0})+w(X,Y_{0})+$$ $$\displaystyle\sum_{i=1}^{t}\left(w(X,Y_{i})+\mathrm{Pen}(X,\Pi\cup\{Y_{i}\})\right)$$ $$\displaystyle\leq$$ $$\displaystyle\mathrm{BIC}(X,\Pi)+t\left(w(X,Y^{\prime})+\mathrm{Pen}(X,\Pi\cup\{Y^{\prime}\})\right)$$ $$\displaystyle\leq$$ $$\displaystyle\mathrm{BIC}(X,\Pi).$$ First step is the definition of BIC, second step uses the fact that the penalty function is exponentially fast with the increase in number of parents, third step uses Lemma 2, fourth step uses the assumptions of the theorem and the fact that $Y^{\prime}$ is maximal. Therefore we would choose $\Pi$ in place of $\Pi_{0}$ or any of its supersets. ∎ Theorem 1 can be used to discard parent sets during already their evaluation and without the need to wait for precomputing all possible candidates. We point out that these bounds are new and not trivially achievable by current existing bounds for BIC. As a byproduct, we obtain bounds for the number of parents of any given node. Corollary 1. Using BIC score, each node has at most $O(\log N-\log\log N)$ parents in the optimal structure. Proof. Let $X$ be a node of $\mathcal{X}$ and $\Pi$ a possible parent set. Let $Y\in\mathcal{X}\setminus\Pi$. From the fact that $\mathrm{MI}(X,Y)\leq\log|X|$, and $\max\{LL(X);LL(Y)\}\geq-N\cdot\log|X|$, we have that $w(X,Y)\leq(N+1)\log|X|$, with $w(\cdot)$ as defined in Lemma 2. Now $$\displaystyle\log|\Pi|\geq\log\left(\frac{2\log|X|}{|X|-1}\right)+\log\left(\frac{N+1}{\log N}\right)\iff$$ $$\displaystyle(N+1)\log|X|\leq\frac{\log N}{2}\cdot|\Pi|(|X|-1)\Longrightarrow$$ $$\displaystyle w(X,Y)\leq-\mathrm{Pen}(X,\Pi\cup\{Y\})+\mathrm{Pen}(X,\Pi)$$ for any $Y$, and so by Theorem 1 no super set of $\Pi$ is optimal. Note that $\log|\Pi|$ is greater than or equal to the number of parents in $\Pi$, so we have proven that any node in the optimal structure has at most $O(\log N-\log\log N)$, which is similar to previous known results (see e.g. (de Campos and Ji, 2011)). 
∎ 2.2 Treewidth and $k$-trees We use this section to provide the necessary definitions and notation. Treewidth We illustrate the concept of treewidth following the notation of (Elidan and Gould, 2009). We denote an undirected graph as $\mathcal{H}=(V,E)$ where $V$ is the vertex set and $E$ is the edge set. A tree decomposition of $H$ is a pair ($\mathcal{C},\mathcal{T}$) where $\mathcal{C}=\{C_{1},C_{2},...,C_{m}\}$ is a collection of subsets of $V$ and $T$ is a tree on $\mathcal{C}$, so that: • $\cup_{i=1}^{m}\,\,C_{i}=V$; • for every edge which connects the vertices $v_{1}$ and $v_{2}$, there is a subset $C_{i}$ which contains both $v_{1}$ and $v_{2}$; • for all $i,j,k$ in $\{1,2,..m\}$ if $C_{j}$ is on the path between $C_{i}$ and $C_{k}$ in $\mathcal{T}$ then $C_{i}\cap C_{k}\subseteq C_{j}$. The width of a tree decomposition is $\max(|C_{i}|)-1$ where $|C_{i}|$ is the number of vertices in $C_{i}$. The treewidth of $H$ is the minimum width among all possible tree decompositions of $G$. The treewidth can be equivalently defined in terms of triangulation of $\mathcal{H}$. A triangulated graph is an undirected graph in which every cycle of length greater than three contains a chord. The treewidth of a triangulated graph is the size of the maximal clique of the graph minus one. The treewidth of $\mathcal{H}$ is the minimum treewidth over all the possible triangulations of $\mathcal{H}$. The treewidth of a Bayesian network is characterized with respect to all possible triangulations of its moral graph. The moral graph $M$ of a DAG is an undirected graph that includes an edge ($i\rightarrow j$) for every edge ($i\rightarrow j$) in the DAG and an edge ($p\rightarrow q$) for every pair of edges ($p\rightarrow i$), ($q\rightarrow i$) in the DAG. The treewidth of a DAG is the minimum treewidth over all the possible triangulations of its moral graph $\mathcal{M}$. Thus the maximal clique of any moralized triangulation of $\mathcal{G}$ is an upper bound on the treewidth of the model. $k$-trees An undirected graph $T_{k}=(V,E)$ is a $k$-tree if it is a maximal graph of tree-width $k$: any edge added to $T_{k}=(V,E)$ increases its treewidth. A $k$-tree is inductively defined as follows (Patil, 1986). Consider a ($k+1$)-clique, namely a complete graph with $k+1$ nodes. A ($k+1$)-clique is a $k$-tree. A ($k+1$)-clique can be decomposed into multiple $k$-cliques. Let us denote by $z$ a node not yet included in the list of vertices $V$. Then the graph obtained by connecting $z$ to every node of a $k$-clique of $T_{k}$ is also a $k$-tree. The treewidth of any subgraph of a $k$-tree (partial $k$-tree) is bounded by $k$. Thus a DAG whose triangulated moral graph is subgraph of a $k$-tree has treewidth bounded by $k$. 3 Incremental treewidth-bounded structure learning We now turn our attention to the structure optimization task. Our approach proceeds by repeatedly sampling an order $\prec$ over the variables and then identifying the highest-scoring DAG with bounded-treewidth consistent with the order. The size search space of the possible orders is $n!$, thus smaller than the search space of the possible k-trees. Once the order is sampled, we incrementally learn the DAG; it is guaranteed that at each step the moralization of the DAG is a subgraph of a $k$-tree. The treewidth of the DAG eventually obtained is thus bounded by $k$. The algorithm proceeds as follows. Initialization The initial k-tree $\mathcal{K}_{k+1}$ is constituted by the complete clique over the first $k+1$ variables in the order. 
The initial DAG $\mathcal{G}_{k+1}$ is learned over the same $k+1$ variables. Since ($k+1$) is a small number of variables, we can learn $\mathcal{G}_{k+1}$ exactly. In particular we adopt the method of Cussens (2011). The moral graph of $\mathcal{G}_{k+1}$ is a subgraph of $\mathcal{K}_{k+1}$ and thus $\mathcal{G}_{k+1}$ has bounded treewidth. Node addition We then iteratively add each remaining variable. Consider the next variable in the order, $X_{\prec i}$, where $i\in\{k+2,...,n\}$. Let us denote by $\mathcal{G}_{i-1}$ and $\mathcal{K}_{i-1}$ the DAG and the k-tree which have to be updated by adding $X_{\prec i}$. We add $X_{\prec i}$ to $\mathcal{G}_{i-1}$, under the constraint that its parent set $\Pi_{\prec i}$ is a subset of a complete $k$-clique in $\mathcal{K}_{i-1}$. This yields the updated DAG $\mathcal{G}_{i}$. We then update the k-tree, connecting $X_{\prec i}$ to this $k$-clique. This yields the updated k-tree $\mathcal{K}_{i}$; it contains an additional $(k+1)$-clique compared to $\mathcal{K}_{i-1}$. By construction, $\mathcal{K}_{i}$ is also a $k$-tree. The moral graph of $\mathcal{G}_{i}$ cannot contain arcs outside this $(k+1)$-clique; thus it is a subgraph of $\mathcal{K}_{i}$. Pruning orders Notice that $\mathcal{K}_{k+1}$ and $\mathcal{G}_{k+1}$ depend only on which variables are the first $k+1$ in the order, not on their relative positions. Thus all the orders which differ only in the relative position of the first $k+1$ elements are equivalent for our algorithm. Hence, once we have sampled an order and identified the corresponding DAG, we can prune the remaining $(k+1)!-1$ equivalent orders. In order to choose the parent set to be assigned to each variable added to the graph we propose two algorithms: k-A* and k-G. 3.1 k-A* We formulate the problem as a shortest-path problem. We define each state as a step towards the completion of the structure, where a new variable is added to the DAG $\mathcal{G}$. Given the variable $X_{\prec i}$ assigned in state $S$, we define a successor state of $S$ for each $k$-clique we can choose for adding the variable $X_{\prec i+1}$. The approach to solve the problem is based on a path-finding A* search, with cost function for state $S$ defined as $f(S)=g(S)+h(S)$. The goal is the state minimizing $f(S)$ in which all the variables have been assigned. $g(S)$ is the cost from the initial state to $S$, and we define it as the sum of the scores of the already assigned parent sets: $$\displaystyle g(S)=\sum\limits_{j=0}^{i}score(X_{\prec j},\Pi_{\prec j})\,.$$ $h(S)$ is the estimated cost from $S$ to the goal. It is the sum of the scores of the best assignable parent sets for the remaining variables. Note that $X_{a}$ can have $X_{b}$ as parent only if $X_{b}\prec X_{a}$: $$\displaystyle h(S)=\sum\limits_{j=i+1}^{n}best(X_{\prec j})\,.$$ The algorithm uses an open list to store the search frontier. At each step it retrieves the state with the smallest $f$ cost, generates its successor states and inserts them into the open list, until the optimal state is found. The A* approach requires the $h$ function to be admissible. The function $h$ is admissible if the estimated cost is never greater than the true cost to the goal state. Our approach guarantees this property since the true cost of each step (the score of the parent set chosen for $X_{\prec i+1}$) is always equal to or greater than the estimate (the score of the best selectable parent set for $X_{\prec i+1}$).
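As a concrete illustration of this search formulation, the sketch below (our own code, with hypothetical data structures) expands a state into its successors and evaluates $f=g+h$; it assumes that `scores[X]` maps each candidate parent set of $X$ (a frozenset) to its score and that `best[X]` caches the best score attainable by $X$ given the variables preceding it in the sampled order.

```python
# Minimal sketch (illustrative only, hypothetical data structures): expansion of a
# k-A* search state and evaluation of f = g + h.
from itertools import combinations

def successors(state, order, scores, k):
    """A state is (i, cliques, assignment, g): the first i variables are placed."""
    i, cliques, assignment, g = state
    x = order[i]
    for clique in cliques:                                   # one successor per k-clique of K
        candidates = [(s, pa) for pa, s in scores[x].items() if pa <= clique]
        if not candidates:
            continue
        s, pa = max(candidates, key=lambda t: t[0])          # best parent set inside this clique
        new_cliques = set(cliques)
        for drop in combinations(clique, k - 1):             # k-subcliques of the new (k+1)-clique
            new_cliques.add(frozenset(set(drop) | {x}))
        yield (i + 1, frozenset(new_cliques), assignment + ((x, pa),), g + s)

def f_value(state, order, best):
    i, _, _, g = state
    h = sum(best[v] for v in order[i:])                      # optimistic estimate for unplaced variables
    return g + h                                             # f = g + h; the open list is ordered by this value
```

In an actual implementation the states would sit in the open list (e.g. a priority queue keyed by $f$), while the greedy k-G variant of §3.2 simply commits at each step to the single best successor instead of exploring the frontier.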
We also have that $h$ is consistent, meaning that for any state $S$ and its successor $T$, $h(S)\leq h(T)+c(S,T)$, where $c(S,T)$ is the cost of the edges added in $T$. This follows from the previous argument. Hence $f$ is monotonically non-decreasing along any path, and the algorithm is guaranteed to find the optimal path as long as the goal state is reachable. 3.2 k-G In some cases a high number of variables or a high treewidth prevents the use of k-A*. We thus propose a greedy alternative approach, k-G. Following the path-finding problem defined previously, it proceeds greedily: at each step it chooses for the variable $X_{\prec i}$ the highest-scoring parent set that is a subset of an existing $k$-clique in $\mathcal{K}$. 3.3 Space of learnable DAGs A reverse topological order is an order $\{v_{1},...v_{n}\}$ over the vertices $V$ of a DAG in which each $v_{i}$ appears before its parents $\Pi_{i}$. The search space of our algorithms is restricted to the DAGs whose reverse topological order, when used as variable elimination order, has treewidth at most $k$. This prevents recovering DAGs which have bounded treewidth but lack this property. We start by proving by induction that the reverse topological order has treewidth at most $k$ in the DAGs recovered by our algorithms. Consider the incremental construction of the DAG previously discussed. The initial DAG $\mathcal{G}_{k+1}$ is induced over $k+1$ variables; thus every elimination ordering has treewidth bounded by $k$. For the inductive case, assume that $\mathcal{G}_{i-1}$ satisfies the property. Consider the next variable in the order, $X_{\prec_{i}}$, where $i\in\{k+2,...,n\}$. Its parent set $\Pi_{\prec_{i}}$ is a subset of a $k$-clique in $\mathcal{K}_{i-1}$. The only neighbors of $X_{\prec_{i}}$ in the updated DAG $\mathcal{G}_{i}$ are its parents $\Pi_{\prec_{i}}$. Consider performing variable elimination on the moral graph of $\mathcal{G}_{i}$, using a reverse topological order. Then $X_{\prec_{i}}$ will be eliminated before $\Pi_{\prec_{i}}$, without introducing fill-in edges. Thus the treewidth associated to any reverse topological order is bounded by $k$. This property applies inductively to the addition of the following nodes, up to $X_{\prec_{n}}$. Inverted trees An example of a DAG not recoverable by our algorithms is given by the specific class of polytrees that we call inverted trees, namely DAGs in which each node has out-degree at most one. An inverted tree with $m$ levels and treewidth $k$ can be built as follows. Take the root node (level one) and connect it to $k$ child nodes (level two). Connect each node of level two to $k$ child nodes (level three). Proceed in this way up to the $m$-th level and then invert the direction of all the arcs. Figure 1 shows an inverted tree with $k$=2 and $m$=3. It has treewidth two, since its moral graph is constituted by the cliques {A,B,E}, {C,D,F}, {E,F,G}. The treewidth associated to the reverse topological order is instead three, using the order G, F, D, C, E, A, B. If we run our algorithms with bounded treewidth $k$=2, they will be unable to recover the actual inverted tree. They will instead identify a high-scoring DAG whose reverse topological order has treewidth 2. 3.4 Our implementation of S2 and S2+ Here we provide the details of our implementation of S2 and S2+. They both use the notion of Informative Score (Nie et al., 2015), an approximate measure of the fitness of a k-tree.
The I-score of a k-tree $T_{k}$ is defined as $$\displaystyle IS(T_{k})=\frac{S_{mi}(T_{k})}{|S_{l}(T_{k})|}\,,$$ where $S_{mi}(T_{k})$ measures the expected loss of representing the data with the k-tree. Let $I_{ij}$ denote the mutual information of nodes $i$ and $j$: $$\displaystyle S_{mi}(T_{k})=\sum_{i,j}I_{ij}-\sum_{i,j\notin T_{k}}I_{ij}\,.$$ $S_{l}(T_{k})$ is instead defined as the score of the best pseudo-subgraph of the k-tree, obtained by dropping the acyclicity constraint: $$\displaystyle S_{l}(T_{k})=\max_{m(G)\in T_{k}}\sum_{i\in N}score(X_{i},\Pi_{i})\,,$$ where $m(G)$ is the moral graph of the DAG $G$, and $score(X_{i},\Pi_{i})$ is the local score function of variable $X_{i}$ for the parent set $\Pi_{i}$. The first phase of both S2 and S2+ consists in sampling k-trees. In particular, S2 obtains k-trees by using the Dandelion sampling discussed in (Nie et al., 2014). The proposed k-trees are then accepted with probability: $$\displaystyle\alpha=\min\left(1,\frac{IS(T_{k})}{IS(T^{*}_{k})}\right)\,,$$ where $T^{*}_{k}$ is the current k-tree with the largest I-score (Nie et al., 2015). S2+ instead selects the $k+1$ variables with the largest I-score and finds the k-tree maximizing the I-score starting from this clique, as discussed in (Nie et al., 2016). Additional k-trees are obtained by choosing a random initial clique. The second phase of both algorithms looks for a DAG whose moralization is a subgraph of the chosen k-tree. For this task, the authors proposed an approximate approach based on partial order sampling (Algorithm 2 of (Nie et al., 2014)). In our experiments, we found that using Gobnilp for this task yields slightly higher scores, so we adopt this approach in our implementation; we believe this is because constraining the structure optimization to subgraphs of a k-tree leaves only a small number of allowed arcs for the DAG. Thus our implementation finds the highest-scoring DAG whose moral graph is a subgraph of the provided k-tree. 3.4.1 Discussion The problem with k-tree sampling is that each k-tree enforces a random constraint over the arcs that may appear in the final structure. The chance of randomly sampling a k-tree that allows good-scoring arcs becomes significantly smaller as the number of variables increases, since the space of possible k-trees increases as well. The criterion for probabilistic acceptance presented above has been proposed for tackling this issue, but it does not resolve the problem completely. Our approach instead focuses immediately on selecting the best arcs, in a way that guarantees the treewidth bound. Experimentally we observed that k-tree sampling is quicker, producing a higher number of candidate DAGs, whose scores are however low. Our approach instead generates fewer but higher-scoring DAGs. (Nie et al., 2016) improves on plain k-tree sampling by searching for the k-tree that is optimal with respect to the Informative Score (IS). The IS considers only the mutual information between pairs of variables, and it may exaggerate the importance of assigning some arcs: it may suggest parents for a node which separately have high mutual information with it but jointly constitute a poor parent set. 4 Experiments We compare k-A*, k-G, S2 and S2+ in various experiments, through an indicator which we call the W-score: the percentage of worsening of the BIC score of the selected treewidth-bounded method compared to the score of the Gobnilp solver (Cussens, 2011).
Gobnilp achieves higher score than the treewidth-bounded methods since it has no limits on the treewidth. Let us denote by $G$ the BIC score achieved by Gobnilp and by $T$ the BIC score obtained by the given treewidth-bounded method. Notice that both $G$ and $T$ are negative. The W-score is $W=\frac{G-T}{G}$. W stands for worsening and thus lower values of $W$ are better. The lowest value of W is zero, while there is no upper bound on the value of W. 4.1 Learning inverted trees As already discussed our approach cannot learn an inverted tree with $k$ parents per node if given bounded treewidth $k$. In this section we study their performance in this worst-case scenario. We start with treewidth $k=2$. We consider the number of variables $n\in\{21,41,61,81,101\}$. For each value of $n$ we generate 5 different inverted trees. An inverted tree is generated by randomly selecting a root variable $X$ from the existing graph and adding $k$ new variables as $\Pi_{X}$, until the graph contains $n$ variables. All variables are binary and we sample their conditional probability tables from a Beta(1,1). We sample 10,000 instances from each generated inverted tree. We then perform structural learning with k-A*, k-G, S2 and S2+, setting $k=2$ as limit on the treewidth. We allow each method to run for ten minutes. Both S2 and S2+ could in principle recover the true structure, which is prevented to our algorithms. The results are shown in Fig.2. Qualitatively similar results are obtained repeating the experiments with $k=4$. Despite the unfavorable setting, both k-G and k-A* yield DAGs with higher score than S2 and S2+, consistently for each value of $n$. Thus the limitation of the space of learnable DAGs does not hurt much the performance of k-G and k-A*. In fact S2 could theoretically recover the actual DAG, but this would require too many samples from the space of the k-trees, which is prohibitive. We further investigate the differences between methods by providing in Table 2 some statistics about the candidate solutions they generate. Iterations is the number of proposed solutions; for S2 and S2+ it is the number of explored k-trees, while for k-G and k-A* it is number of explored orders. During the execution, S2 samples almost one million k-trees. Yet it yields the lowest-scoring DAGs among the different methods. This can be explained considering that a randomly sampled k-tree has a low chance to cover a high-scoring DAG. S2+ recovers only a few k-trees, but their scores are higher than those of S2. This confirms the effectiveness of driving the search for good k-trees through the Informative Score. As we will see later, however, this idea does not scale on very large data sets. As for our methods, k-G samples a larger number of orders than k-A* does and this allows it to achieve higher scores, even if it sub-optimally deals with each single order. 4.2 Small data sets We now present experiments on the data sets already considered by (Nie et al., 2016). They involve up to 100 variables. We set the bounded treewidth to $k=4$. We provide each structural learning method with the same pre-computed scores of parent sets. We allow each method to run for ten minutes. We perform 10 experiments on each data set and we report the median scores in Table 1. Our results are not comparable with those reported by (Nie et al., 2016) since we use the BIC while they use BDeu. Remarkably both k-A* and k-G achieve higher scores than both S2 and S2+ do on almost all data sets. 
Only on the smallest data sets all methods achieve the same score. Between our two novel algorithms, k-A* has a slight advantage over k-G. We provide statistics about the candidate solutions generated by each method in Table 3. The results of the table refer in particular to the community data set ($n$=100). The conclusions are similar to those of previous analyses. S2 performs almost one million iterations, but they are characterized by low scores. S2+ performs a drastically smaller number of iterations, but is able anyway to outperform S2. Similarly k-A* is more effective than k-G, despite generating a lower number of candidate solution. The reduced number of candidate solutions generated by both S2+ and k-A* suggest that they cannot scale on data sets much larger than those of this experiment. 4.3 Large data sets We now consider 10 large data sets ($100\leq n\leq 400$) listed in Table 4. We consider the following treewidths: $k\in\{2,5,8\}$. We split each data set randomly into three subsets. Thus for each treewidth we run 10$\cdot$3=30 structural learning experiments. We provide all structural learning methods with the same pre-computed scores of parent sets and we let each method run for one hour. For S2+, we adopt a more favorable approach, allowing it to run for one hour; if after one hour the first k-tree was not yet solved, we allow it to run until it has solved the first k-tree. In Table 5 we report how many times each method wins against another for each treewidth. The entries are boldfaced when the number of victories of an algorithm over another is statistically significant according to the sign-test (p-value <0.05). Consistently for any chosen treewidth, k-G is significantly better than any competitor, including k-A*; moreover, k-A* is significantly better than both S2 and S2+. This can be explained by considering that k-G explores more orders than k-A*, as for a given order it only finds an approximate solution. The results suggest that it is more important to explore many orders instead of obtaining the optimal DAG given an order. 4.4 Very large data sets As final experiment, we consider 14 very large data sets, containing more than 400 variables. We include in these experiments three randomly-generated synthetic data sets containing 2000, 4000 and 10000 variables respectively. These networks have been generated using the software BNGenerator 111http://sites.poli.usp.br/pmr/ltd/Software/BNGenerator/. Each variable has a number of states randomly drawn from 2 to 4 and a number of parents randomly drawn from 0 to 6. In this case, we perform 14$\cdot$3=42 structural learning experiments with each algorithm. The only two algorithms able to cope with these data sets are k-G and S2. Among them, k-G wins 42 times out of 42; this dominance is clearly significant. This result is consistently found under each choice of treewidth ($k=$2, 5, 8). On average, the improvement of k-G over S2 fills about 60% of the gap which separates S2 from the unbounded solver. The W-scores of such 42 structural learning experiments are summarized in Figure 3. For both S2 and k-G, a larger treewidth allows to recover a higher-scoring graph. In turn this decreases the W-score. However k-G scales better than S2 with respect to the treewidth; its W-score decreases more sharply with the treewidth. It is interesting to analyze the statistics of the solutions generated by the two methods. They are given in Table 7 for the data set Munin. 
K-G generates a number of solutions which is a few orders of magnitude smaller than that of S2. Yet, the scores of the obtained solutions are much higher. 5 Conclusion Our novel approaches for treewidth-bounded structural learning of Bayesian Networks perform significantly better than state-of-the-art methods. The greedy approach scales up to thousands of nodes and suggests that it is more important to find good k-trees than to solve the internal structure optimization task for each one of them. The methods consistently outperform the competitors on a variety of experiments. All these methods and others for unbounded learning of Bayesian networks can make use of our new bounds for BIC scores in order to reduce the number of parent set evaluations during the precomputation of scores. Further analyses of the bounds are left for future work. References Cussens (2011) Cussens J. Bayesian network learning with cutting planes. In UAI-11: Proceedings of the 27th Conference Annual Conference on Uncertainty in Artificial Intelligence, pages 153–160. AUAI Press, 2011. de Campos and Ji (2011) de Campos C. P. and Ji Q. Efficient structure learning of Bayesian networks using constraints. Journal of Machine Learning Research, 12:663–689, 2011. Elidan and Gould (2009) Elidan G. and Gould S. Learning bounded treewidth Bayesian networks. In Advances in Neural Information Processing Systems 21, pages 417–424. Curran Associates, Inc., 2009. Korhonen and Parviainen (2013) Korhonen J. H. and Parviainen P. Exact learning of bounded tree-width Bayesian networks. In Proc. 16th Int. Conf. on AI and Stat., page 370–378. JMLR W&CP 31, 2013. Nie et al. (2014) Nie S., Mauá D. D., de Campos C. P., and Ji Q. Advances in learning Bayesian networks of bounded treewidth. In Advances in Neural Information Processing Systems, pages 2285–2293, 2014. Nie et al. (2015) Nie S., de Campos C. P., and Ji Q. Learning Bounded Tree-Width Bayesian Networks via Sampling. In ECSQARU-15: Proceedings of the 13th European Conference on Symbol and Quantitative Approaches to Reasoning with Uncertainty, pages 387–396, 2015. Nie et al. (2016) Nie S., de Campos C. P., and Ji Q. Learning Bayesian networks with bounded treewidth via guided search. In AAAI-16: Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2016. Parviainen et al. (2014) Parviainen P., Farahani H. S., and Lagergren J. Learning bounded tree-width Bayesian networks using integer linear programming. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014. Patil (1986) Patil H. P. On the structure of k-trees. Journal of Combinatorics, Information and System Sciences, pages 57–64, 1986. Scanagatta et al. (2015) Scanagatta M., de Campos C. P., Corani G., and Zaffalon M. Learning Bayesian Networks with Thousands of Variables. In NIPS-15: Advances in Neural Information Processing Systems 28, pages 1855–1863, 2015.
Multi-centered higher spin solutions from $W_{N}$ conformal blocks Ondřej Hulík, a    Joris Raeymaekers a    and Orestis Vasilakis ondra.hulik@gmail.com joris@fzu.cz vasilakis@fzu.cz Institute of Physics of the Czech Academy of Sciences, CEICO, Na Slovance 2, 182 21 Prague 8, Czech RepublicInstitute of Particle Physics and Nuclear Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 180 00 Prague 8, Czech Republic Abstract Motivated by the question of bulk localization in holography, we study the problem of constructing multi-centered solutions in higher spin gravity which describe point particles in the interior of AdS${}_{3}$. In the Chern-Simons formulation these take into account the backreaction after adding Wilson line sources. We focus on chiral solutions where only the left-moving sector is excited. In that case it is possible to choose a gauge where the dynamical variables are a set of Toda fields living in the bulk. The problem then reduces to solving the ${\cal A}_{N-1}$ Toda equations with delta function sources, which in turn requires solving an associated monodromy problem. We show that this monodromy problem is equivalent to the monodromy problem for a particular ${\cal W}_{N}$ vacuum conformal block at large central charge. Therefore, knowledge of the ${\cal W}_{N}$ vacuum block determines the multi-centered solution. Our calculations go beyond the heavy-light approximation by including the backreaction of all higher spin particles. Keywords: \arxivnumber\preprint 1 Introduction The question of how local bulk physics in anti-de-Sitter space emerges holographically from the boundary CFT is an important one and has received considerable attention in recent years following the work of HKKL Hamilton:2006az . Ultimately it has ramifications for our understanding of the black hole interior and the information puzzle Almheiri:2012rt Papadodimas:2012aq . One of the basic questions one can ask in this context is how a bulk state containing a collection of classical particles, tracing out worldlines in the interior of Anti-de Sitter (AdS) space, is described in the dual CFT. In the context of AdS${}_{3}$/CFT${}_{2}$, it was established Hartman:2013mia Faulkner:2013yia Fitzpatrick:2014vua Hijano:2015rla Hijano:2015zsa that these are intimately linked to ‘classical’ Virasoro conformal blocks at large central charge, in that the classical block computes the action of the classical particles. This connection has been further explored and generalized in various ways. In this work we will focus on the generalization to particles in higher spin AdS${}_{3}$ gravity and their relation to classical blocks of the ${\cal W}_{N}$ algebra Ammon:2013hba Castro:2014mza Besken:2016ooo . In most of the investigations into the relation between bulk particles and conformal blocks, a ‘heavy-light’ approximation is made in which at most one of the particles is assumed heavy enough to backreact on the geometry. To go beyond this approximation, one must consider fully backreacted multi-centered solutions in the bulk, which is a daunting task in general. In Hulik:2016ifr , it was shown that the backreaction problem in Lorentzian AdS${}_{3}$ simplifies if one considers spinning particle trajectories which, on the boundary, only excite the left-moving sector. In this case, as we shall review in section 2 below, the spacetime contains a submanifold of constant negative curvature except at the particle locations where there are conical singularities. 
The problem then reduces to solving a Euclidean Liouville equation on the unit disk with ZZ boundary conditions Zamolodchikov:2001ah and delta-function sources. This requires solving a monodromy problem which is identical to the monodromy problem which determines the classical vacuum block in a specific channel. Therefore the multicentered solution can in principle be constructed once the classical vacuum block is known, and vice versa. The goal of the current work is to generalize the above results to the construction of multi-centered solutions in higher spin gravity theories, coming from backreacting particles with higher spin charges. This problem was analyzed in Castro:2014mza Besken:2016ooo in the heavy-light limit. To go beyond this approximation, we first recast the pure gravity results in Chern-Simons variables. We show that there exists a gauge which is convenient for the problem at hand, in which the left-moving gauge field is a Lax connection for the Liouville theory. Similarly, for the higher spin case, we show in section 3 that we can go to a gauge where one of the gauge fields is a Lax connection for the ${\cal A}_{N-1}$ Toda field theory. One somewhat suprising feature to come out of this analysis is that the standard higher spin theory with $\textrm{sl}(N,\mathbb{R})$ gauge symmetry leads to a non-standard reality condition on the Euclidean Toda fields, while the standard reality condition instead describes higher spin theory with $\textrm{su}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$ gauge symmetry. The Toda fields that we construct in this manner live in the bulk, and we argue that by including suitable singular sources in the Toda equations we can take into account the backreaction of localized bulk particles. Point particle sources can be coupled consistently to higher spin gravity in the form of Wilson lines Witten:1989sx Ammon:2013hba , and we consider here a class of ‘chiral’ particles which only couple to the left-moving gauge field. The problem then reduces to solving the Toda system on the unit disk with delta function sources under suitable boundary conditions. Upon taking into account the boundary conditions through a doubling trick, the problem can be reduced to a certain monodromy problem on the complex plane. We show in section 4 that the same monodromy problem arises in the determination of a classical ${\cal W}_{N}$ vacuum block in a specific channel. Therefore, from the knowledge of the classical ${\cal W}_{N}$ vacuum block we can in principle construct the multi-centered solution and vice versa. While we treat the case of spin-3 gravity in the most detail, we discuss the generalization to the spin-$N$ case in section 5. To summarize our results in the context of bulk locality, we provide in this work a prescription to construct a state in the Lorentzian bulk theory containing localized particles from a classical block in a Euclidean Toda CFT. This seems close in spirit to H. Verlinde’s ‘CFT-AdS’ idea, where states in AdS with localized particles or black holes are constructed directly from the Euclidean CFT VerlindeCFTAdS (see also Jackson:2014nla ,Verlinde:2015qfa ). The present work can presumably be viewed as a concrete realization of this idea, albeit in a highly symmetric setting. 2 Chiral Solutions in Pure Gravity In this section we review and expand on an earlier observation Hulik:2016ifr that a class of ‘chiral’ solutions in 2+1 dimensional AdS gravity can be conveniently described in terms of a Liouville field living in the bulk. 
Here, the term chiral means that, from the point of view of the boundary CFT, only the left-moving sector is excited. We will first derive this Liouville description in the metric formulation, where it has a clear geometric origin, and subsequently in Chern-Simons variables, which will facilitate the extension to higher spin gravity in the following sections. 2.1 Chiral sector and bulk Liouville field The general solution to pure AdS${}_{3}$ gravity which has a cylindrical boundary and satisfies Brown-Henneaux boundary conditions Brown:1986nw , is parametrized by two arbitrary periodic functions $T(x_{+}),\bar{T}(x_{-})$. The metric111Note that we work in units where the AdS radius is set to one. reads, in Fefferman-Graham coordinates, $$ds^{2}=\frac{dy^{2}}{y^{2}}-\frac{1}{4y^{2}}dx_{+}dx_{-}+\frac{T}{4}dx_{+}^{2}% +\frac{\bar{T}}{4}dx_{-}^{2}-\frac{y^{2}}{4}T\bar{T}dx_{+}dx_{-}\,.$$ (1) Here, the boundary is at $y=0$, and $x_{\pm}=t\pm\phi$ are light-cone coordinates on the boundary cylinder, with $(x_{+},x_{-})\sim(x_{+}+2\pi,x_{-}-2\pi)$. For example, the global AdS${}_{3}$ solution is given by $$T=\bar{T}=-1.$$ (2) In this work, we will be interested in the class of ‘chiral’ solutions, where $\bar{T}=-1$ takes the same222One can easily generalize this discussion to the case where $\bar{T}$ is an arbitrary negative constant, as was done in Hulik:2016ifr , though we will focus on $\bar{T}=-1$ in what follows. value as for global AdS , while $T(x_{+})$ can be arbitrary. The metric (1) can then be rewritten in the fibered form $$ds^{2}=-{1\over 4}\left(dx_{-}+{1\over 2}(y^{-2}-Ty^{2})dx_{+}\right)^{2}+{dy^% {2}\over y^{2}}+{1\over 16}\left(y^{-2}+Ty^{2}\right)^{2}dx_{+}^{2}\,.$$ (3) The main observation is that the metric on the 2D base manifold, given by the last two terms in the above expression, has constant negative curvature as one can easily verify. Therefore, we can make a coordinate transformation which brings this base metric in conformal gauge, such that $${dy^{2}\over y^{2}}+{1\over 16}\left(y^{-2}+Ty^{2}\right)^{2}dx_{+}^{2}=e^{-2% \phi(z,\bar{z})}dzd\bar{z}.$$ (4) The field $\phi(z,\bar{z})$ then satisfies the Liouville equation $$\partial\bar{\partial}\phi+e^{-2\phi}=0,$$ (5) and we will see below that the function $T(x_{+})$ is essentially the boundary value of the Liouville stress tensor (see (27)). Note that the Liouville field $\phi$ is a bulk field depending on the original coordinates $y$ and $x_{+}$. For later convenience it will useful to work out this coordinate transformation in more detail. 
It follows from (4) that there exists a real function $\alpha(y,x_{+})$ such that $$\displaystyle e^{-\phi}dz$$ $$\displaystyle=$$ $$\displaystyle e^{i\alpha}\left(-{dy\over y}+{i\over 4}(y^{-2}+Ty^{2})dx_{+}% \right)\,.$$ (6) Furthermore, one can show that the 3D Einstein equations imply the relation $$i(\partial_{z}\phi dz-\partial_{\bar{z}}\phi d\bar{z})={1\over 2}(y^{-2}-Ty^{2% })dx_{+}-d\alpha\,.$$ (7) In summary, for chiral solutions to pure 3D gravity the metric can be brought in the form $$ds^{2}=-{1\over 4}\left(d\tilde{t}+i(\partial_{z}\phi dz-\partial_{\bar{z}}% \phi d\bar{z})\right)^{2}+e^{-2\phi(z,\bar{z})}dzd\bar{z}\,,$$ (8) where $\phi$ is a Liouville field satisfying (5) and we have defined $$\tilde{t}\equiv x_{-}+\alpha(y,x_{+}).$$ (9) 2.2 Chern-Simons variables With a view towards generalizing this observation to higher spin gravity, it will be useful to describe the above change of variables in the Chern-Simons formulation Achucarro:1987vz ,Witten:1988hc , where the gravitational field is described by a flat connection taking values in $\textrm{sl}(2,\mathbb{R})\oplus\overline{\textrm{sl}(2,\mathbb{R})}$. In this formulation the gauge connections $A,\bar{A}$ are related to the dreibein $e$ and the spin connection $\omega$ as $$e=A-\bar{A},\qquad\omega=A+\bar{A}.$$ (10) For solutions obeying the Chern-Simons equivalent of the Brown-Henneaux boundary conditions, the gauge connections can be taken to have the Fefferman-Graham form Banados:1998gg $$A_{HW}=-L_{0}{dy\over y}+{1\over 2}\left(y^{-1}L_{1}-T(x_{+})yL_{-1}\right)dx_% {+},\qquad\bar{A}_{HW}=\bar{L}_{0}{dy\over y}-{1\over 2}\left(y^{-1}\bar{L}_{-% 1}+y\bar{L}_{1}\right)dx_{-},$$ (11) where in the second formula we have already restricted to chiral solutions with $\bar{T}=-1$. Our claim is that we can perform a gauge transformation to a gauge where the connections take the form: $$\displaystyle\tilde{A}$$ $$\displaystyle=$$ $$\displaystyle 2e^{-\phi}\mathfrak{Re}(dz)L_{0}-\mathfrak{Im}\left((\partial_{z% }\phi-e^{-\phi})dz\right)L_{1}-\mathfrak{Im}\left((\partial_{z}\phi+e^{-\phi})% dz\right)L_{-1}\,,$$ (12) $$\displaystyle\tilde{\bar{A}}$$ $$\displaystyle=$$ $$\displaystyle-{1\over 2}d\tilde{t}(\bar{L}_{1}+\bar{L}_{-1}).$$ (13) The connection $\tilde{A}$ is a Lax connection for the Liouville equation: one verifies that the flatness of $\tilde{A}$ is equivalent to the Liouville equation (5). Indeed, the connection $\tilde{A}$ can be brought into the standard form in the literature for the Lax connection of Liouville theory. For this purpose we perform a by a complex change of basis in the Lie algebra generated by $$V\equiv e^{{i\pi\over 4}(L_{1}-L_{-1})},$$ (14) resulting in the the standard Lax connection for the Liouville equation (see e.g. BabylonTalon ) $$A_{T}=V^{-1}\tilde{A}V=(\partial_{z}\phi dz-\partial_{\bar{z}}\phi d\bar{z})L_% {0}-ie^{-\phi}\left(dzL_{1}+d\bar{z}L_{-1}\right).$$ (15) The right-moving connection becomes in this basis $$\bar{A}_{T}=V^{-1}\tilde{\bar{A}}V=\mathrm{i}d\tilde{t}\bar{L}_{0}\,.$$ (16) Here we have used that $V$ satisfies $V^{-1}\left(L_{1}+L_{-1}\right)V=-2iL_{0}$. The associated field strength $\tilde{F}=d\tilde{A}+\tilde{A}\wedge\tilde{A}$ satisfies $$F_{T}=V^{-1}\tilde{F}V=-2\left(\partial_{z}\partial_{\bar{z}}\phi+e^{-2\phi}% \right)L_{0}dz\wedge d\bar{z}.$$ (17) and its vanishing is indeed equivalent to the Liouville equation. 
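As a quick consistency check of this statement, the following sympy sketch evaluates $F_{z\bar{z}}$ for the connection (15) in the two-dimensional fundamental representation of $\textrm{sl}(2,\mathbb{R})$ (a choice made only for this check), treating $z$ and $\bar{z}$ as independent variables.

```python
import sympy as sp

z, zb = sp.symbols('z zbar')                 # treat z and bar z as independent
phi = sp.Function('phi')(z, zb)

# 2x2 representatives satisfying [L_m, L_n] = (m - n) L_{m+n}
L0  = sp.Matrix([[sp.Rational(1, 2), 0], [0, -sp.Rational(1, 2)]])
L1  = sp.Matrix([[0, 0], [-1, 0]])
Lm1 = sp.Matrix([[0, 1], [0, 0]])

# components of the Lax connection A_T of eq. (15)
Az  = sp.diff(phi, z) * L0 - sp.I * sp.exp(-phi) * L1
Azb = -sp.diff(phi, zb) * L0 - sp.I * sp.exp(-phi) * Lm1

# F_{z zbar} = d_z A_zbar - d_zbar A_z + [A_z, A_zbar]
F = sp.diff(Azb, z) - sp.diff(Az, zb) + Az * Azb - Azb * Az

# expected result: -2 (d dbar phi + e^{-2 phi}) L0, cf. eq. (17)
target = -2 * (sp.diff(phi, z, zb) + sp.exp(-2 * phi)) * L0
print((F - target).applyfunc(sp.simplify))   # the zero matrix
```

The printed matrix vanishes identically, confirming that the only nontrivial component of the field strength is the $L_{0}$ term in (17), so flatness is indeed the Liouville equation.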
Although the connections $A_{T}$ and $\bar{A}_{T}$ don’t live in the original Lie algebra, they are useful to define since they give a field strength in the Cartan subalgebra and because they can be easily generalized for the case of arbitrary spin that we examine in section 5. It remains to describe the gauge transformation relating (11) and (13) in more detail. A generic gauge transformation can be decomposed into a translation part involving the AdS translation generators $L_{m}-\bar{L}_{m}$, and a local Lorentz part involving the Lorentz generators $L_{m}+\bar{L}_{m}$. As shown in Witten:1988hc , the translation part can be traded for a coordinate transformation $(y,x_{+},x_{-})\rightarrow(z,\bar{z},\tilde{t})$, which we take to be precisely the transformation derived before: $$z=z(y,x_{+}),\qquad\tilde{t}=x_{-}+\alpha(y,x_{+})\,,$$ (18) satisfying (6,7). We then make an additional local Lorentz transformation with gauge parameter $\Lambda\bar{\Lambda}$, where $$\Lambda=e^{-\ln yL_{0}}e^{-{\alpha\over 2}(L_{1}+L_{-1})},\qquad\bar{\Lambda}=% e^{-\ln y\bar{L}_{0}}e^{-{\alpha\over 2}(\bar{L}_{1}+\bar{L}_{-1})}.$$ (19) One can verify that this transformation takes (11) into (12,13), i.e. $$\tilde{A}=\Lambda^{-1}(A_{HW}+d)\Lambda,\qquad\tilde{\bar{A}}=\bar{\Lambda}^{-% 1}(\bar{A}_{HW}+d)\bar{\Lambda}.$$ (20) To find the precise relation between the bulk Liouville field $\phi(z,\bar{z})$ and the boundary stress tensor $T(x_{+})$, we will work out this gauge transformation near the boundary. First of all, by making a conformal transformation we may assume $z$ to take values in the unit disk, with the AdS boundary located at $|z|=1$. For global AdS, $T=-1$, this leads to $$\phi=\ln(1-|z|^{2}),\qquad\alpha=x_{+}$$ (21) and the following relation between the coordinates: $$z={1-y^{2}\over 1+y^{2}}e^{ix_{+}},\qquad\tilde{t}=x_{-}+x_{+}.$$ (22) More generally, we allow Liouville solutions on the unit disk with the same blow-up behaviour near the boundary, i.e. $$\phi\sim\ln(1-|z|^{2})+{\cal O}(1).$$ (23) This boundary condition (21) on the Liouville field was considered in Zamolodchikov:2001ah and is often referred to as the ‘ZZ boundary condition’. In general, we can construct a holomorphic quantity from the Liouville field, the ‘Liouville stress tensor’ $${\cal T}(z)=-4(\partial_{z}\phi)^{2}-4\partial_{z}^{2}\phi,$$ (24) where the overall normalization is chosen for later convenience. Like $\phi$, this is a bulk quantity, though its value at the boundary turns out to be closely related to the boundary stress tensor $T(x_{+})$. Using the Liouville equation one shows that solutions which behave like (23) near $|z|=1$ can be expanded as $$\phi=\ln(1-|z|^{2})-\left.\left({z^{2}\over 24}{\cal T}(z)\right)\right\rvert_% {|z|=1}(1-|z|^{2})^{2}+{\cal O}(1-|z|^{2})^{3}.$$ (25) Plugging this into eq. (6), one finds that the desired gauge transformation near the boundary is of the form (18), (19) with $$z=\left(1-2y^{2}+2y^{4}+{2\over 3}\left(T(x_{+})-2\right)y^{6}+{\cal O}(y^{8})% )\right)e^{ix_{+}},\qquad\alpha=x_{+}$$ (26) and that the boundary stress tensor is related to the Liouville stress tensor as $$T(x_{+})=e^{2ix_{+}}{\cal T}(e^{ix_{+}})-1.$$ (27) This relation was derived previously in Hulik:2016ifr using the metric formulation. It reflects the fact that, on the boundary, the coordinates $z$ and $x_{+}$ are related through $z=e^{ix_{+}}$: the relation (27) is the standard conformal transformation of the stress tensor, with the constant term arising from the Schwarzian derivative. 
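A similarly short sympy check (again treating $z$ and $\bar{z}$ as independent variables) confirms that the global AdS profile (21) solves the Liouville equation (5) and carries vanishing Liouville stress tensor (24), consistent with $T(x_{+})=-1$ via (27).

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
phi = sp.log(1 - z * zb)                                  # the solution (21), for |z| < 1

liouville = sp.diff(phi, z, zb) + sp.exp(-2 * phi)        # left-hand side of eq. (5)
T = -4 * sp.diff(phi, z)**2 - 4 * sp.diff(phi, z, 2)      # Liouville stress tensor, eq. (24)

print(sp.simplify(liouville), sp.simplify(T))             # both vanish: 0 0
```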
2.3 Point-particle sources As motivated in the Introduction, in what follows we will not restrict ourselves to pure gravity, but will introduce point-particle sources in the bulk which backreact on the metric. We want to choose the quantum numbers of the particles and their trajectories in such a way that the backreacted metric falls in the chiral class (8). The quantum numbers specifying a particle can be taken to be the $\textrm{sl}(2,\mathbb{R})\oplus\overline{\textrm{sl}(2,\mathbb{R})}$ weights $(h,\bar{h})$, or equivalently the particle mass $m=h+\bar{h}$ and spin $s=|h-\bar{h}|$. In order not to turn on the right-moving stress tensor $\bar{T}$, we take $\bar{h}=0$; in other words we consider ‘extremal’ spinning particles with $m=s$. We should keep in mind that the particle mass $m$ refers to the coefficient in front of the proper-length part of the action, $m\int ds$. The extremal particles we are considering are in fact excitations of massless fields with spin [footnote: This seems puzzling in light of the fact that massless higher spin fields in AdS${}_{3}$ do not have any local excitations, though one should keep in mind that they do have global boundary excitations. It would be interesting to elucidate the particle limit of field theory in AdS${}_{3}$.], since we recall that the field theory mass in AdS${}_{3}$ is given by $$M^{2}_{AdS}=(m-s)(m+s-2).$$ (28) Note that both concepts of mass coincide in the limit $m\gg s$. Spinning particles are described in metric variables by the Mathisson-Papapetrou-Dixon equations Mathisson:1937zz , Papapetrou:1951pa , Dixon:1970zza . As was shown in Castro:2014tta , in locally AdS${}_{3}$ spaces spinning particles still move along geodesics. One can easily check that, in backgrounds of the chiral form (8), the curves of constant $z$ are geodesics, and these curves will be our spinning particle trajectories. In global AdS${}_{3}$ these are helical trajectories spinning around the center of AdS${}_{3}$ at constant radius. These considerations have a nice and elegant counterpart in Chern-Simons variables, which will easily generalize to the higher spin case. In the Chern-Simons description, point particles can be introduced by inserting Wilson lines $W_{R}(C)\bar{W}_{\bar{R}}(C)$ into the path integral Witten:1989sx Ammon:2013hba , with $$W_{R}(C)={\rm tr}_{R}{\cal P}\exp\int_{C}A,\qquad\bar{W}_{\bar{R}}(C)={\rm tr}_{\bar{R}}{\cal P}\exp\int_{C}\bar{A}\,,$$ (29) where $C$ is a curve and $R$ and $\bar{R}$ are (unitary, infinite-dimensional) $\textrm{sl}(2,\mathbb{R})$ representations with lowest weight $h$ and highest weight $-\bar{h}$ respectively [footnote: We work in conventions where the AdS${}_{3}$ translation generators are given by $P_{m}=L_{m}-\bar{L}_{m}$. In particular, the energy is $L_{0}-\bar{L}_{0}$, and the representations with energy bounded below are of the (lowest weight, highest weight) type.]. For the case of interest where $\bar{h}=0$, the $\bar{R}$ representation is trivial and our spinning particles only couple to $A$. As advocated in Witten:1989sx Ammon:2013hba , it is useful to rewrite the trace over Hilbert space states in (29) as a quantum mechanical path integral, as we review in Appendix D. For large $h$, a saddle point approximation is justified, which amounts to adding a source term (236) to the classical Chern-Simons action. As was shown in Castro:2014mza , the resulting description is indeed equivalent to the Mathisson-Papapetrou-Dixon equations.
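As a minimal numerical illustration of the path-ordered exponential entering (29), the sketch below builds the ordered product of infinitesimal transports for a constant connection component along the curve, using the $2\times 2$ fundamental representation purely as a stand-in (the representations $R$ relevant here are infinite-dimensional); numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy.linalg import expm   # assumed available

# constant sl(2,R) connection component along the (parametrized) curve
L0 = np.array([[0.5, 0.0], [0.0, -0.5]])
L1 = np.array([[0.0, 0.0], [-1.0, 0.0]])
A = 0.3 * L0 + 0.7 * L1

n_steps, length = 2000, 1.0
holonomy = np.eye(2)
for _ in range(n_steps):                     # ordered product of infinitesimal transports
    holonomy = holonomy @ (np.eye(2) + A * (length / n_steps))

print(np.allclose(holonomy, expm(A * length), atol=1e-3))   # True
```

For a constant connection component the ordered product reduces to an ordinary matrix exponential, which the final check confirms numerically; path ordering only becomes nontrivial when the connection varies along the curve.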
As we show in Appendix D, the effect of adding the spinning particles is to introduce delta-function sources on the right hand side of the Liouville equation (5): $$\partial\bar{\partial}\phi+e^{-2\phi}=4\pi G\sum_{i}m_{i}\delta^{2}(z-z_{i},% \bar{z}-\bar{z}_{i})\,,$$ (30) where the $m_{i}$ are the point-particle masses and $z_{i}$ their locations in the $z$ coordinate of (8). We see that the particles must be ‘heavy’, in the sense that $m_{i}\sim G^{-1}\sim c$ in order to produce backreaction. It was derived in Hulik:2016ifr that solving the sourced Liouville equation with the boundary condition (23) amounts to solving a certain monodromy problem. It was also shown there that this monodromy problem is equivalent to the one which determines a certain CFT vacuum conformal block; knowledge of the conformal block therefore allows for the construction of the multi-centered solution and vice versa. In the following sections we will generalize these results to the construction of multi-centered solutions in higher-spin gravity. We end this section with some comments. • We would like to emphasize that the multiparticle configurations we are considering here are specific to Lorentzian AdS${}_{3}$. Indeed, our particle geodesics, which have constant AdS radial coordinate and constant value of $x_{+}$, do not analytically continue to geodesics in Euclidean AdS${}_{3}$, except for the geodesic at the center of AdS${}_{3}$. Unlike their Euclidean counterparts, our Lorentzian geodesics do not reach the boundary and therefore do not correspond to localized sources in the boundary CFT; this is why their description as localized sources of the bulk Liouville field is particularly convenient. • The setup described above might be viewed as a special case of Verlinde’s ‘CFT-AdS’ idea VerlindeCFTAdS (see also Jackson:2014nla ,Verlinde:2015qfa ), where a state in the Lorentzian bulk theory is directly corresponds to a state in the Euclidean CFT. In our case, the bulk state contains a number of spinning particles and corresponds to a state created by the insertion of heavy operators in Euclidean Liouville theory on the unit disk with ZZ boundary conditions. Our bulk Liouville field $\phi$ should also not be confused with the boundary Liouville field constructed in Coussaert:1995zp , though the two are related: the left-moving components of their stress tensors obey the relation (27). 3 Chiral Solutions in Spin-3 Gravity Our main objective is to generalize the above results for pure gravity to the Chern-Simons formulation of higher spin gravity. In this and the next section we will perform the calculations explicitly for the spin-3 theory, and we will extrapolate to arbitrary spin in section 5. It is natural to guess that the role of Liouville theory in the pure gravity problem will be replaced by Toda field theory in the higher spin case, and this will turn out to be so. One subtlety we will pay particular attention to is the following. One can consider two different higher spin theories based on the two noncompact real forms of the algebra ${\cal A}_{2}$, namely $\textrm{sl}(3,\mathbb{R})$ and $\textrm{su}(1,2)$. We will see that chiral solutions in these theories are described by different real sections of complex Toda field theory. 
Somewhat surprisingly, we will see that the standard Euclidean Toda field theory describes the non-standard higher spin theory with gauge algebra $\textrm{su}(1,2)$ while the standard $\textrm{sl}(3,\mathbb{R})$ higher spin theory requires a non-standard reality condition on the Toda fields. 3.1 Spin-3 Gravity We start by recalling some standard facts about gravity coupled to a massless spin-3 field in Lorentzian AdS${}_{3}$, referring to Campoleoni:2010zq for more details. The action is given by $$S=S_{CS}[A]-S_{CS}[\bar{A}]~{},$$ (31) where $$S_{CS}[A]={k\over 4\pi}\,{\rm tr}\int_{\mathcal{M}}\Big{(}A\wedge dA+{2\over 3% }A\wedge A\wedge A\Big{)}~{}.$$ (32) where the gauge fields $A$ and $\bar{A}$ take values in a real form of the algebra ${\cal A}_{2}$. There are two real forms of this algebra which contain $\textrm{sl}(2,R)$ as a subalgebra and give rise to a higher spin extension to gravity: these are the noncompact real forms $\textrm{sl}(3,\mathbb{R})$ and $\textrm{su}(1,2)$. The theory based on $\textrm{sl}(3,\mathbb{R})$ is the most prevalent in the literature as it leads Campoleoni:2010zq to an asymptotic symmetry which is the standard Zamolodchikov ${\cal W}_{3}$ algebra Zamolodchikov:1985wn , while the $\textrm{su}(1,2)$ leads to a different real form of the complex ${\cal W}_{3}$ algebra which leads to problems with unitarity at the quantum level Campoleoni:2011hg . The two real forms can be conveniently treated simultaneously by introducing a parameter $\sigma=\pm 1$, such that $$\sigma=1:\ \textrm{sl}(3,\mathbb{R})\oplus\overline{\textrm{sl}(3,\mathbb{R})}% ,\qquad\sigma=-1:\ \textrm{su}(1,2)\oplus\overline{\textrm{su}(1,2)}.$$ (33) The parameter $\sigma$ enters the commutation relations which take the form: $$\displaystyle\,[L_{m},L_{n}]=$$ $$\displaystyle(m-n)L_{m+n},$$ $$\displaystyle m,n=$$ $$\displaystyle-1,0,1$$ (34) $$\displaystyle\,[L_{m},W_{a}]=$$ $$\displaystyle(2m-a)W_{a+m},$$ $$\displaystyle a,b=$$ $$\displaystyle-2,\ldots,2$$ (35) $$\displaystyle\,[W_{a},W_{b}]=$$ $$\displaystyle-{\sigma\over 12}(a-b)(2a^{2}+2b^{2}-ab-8)L_{a+b}$$ (36) and similarly for the barred generators. Note that the two real forms are related by the ‘Weyl unitary trick’ of multiplying the spin-3 generators $W_{a}$ with a factor $\mathrm{i}$. An explicit matrix realization is given in Appendix A. The connections $A$ and $\bar{A}$ are linear combinations of the generators $L_{m},W_{a}$ (and their barred counterparts) with real coefficients. The generators $L_{m}$ form an $\textrm{sl}(2,\mathbb{R})$ subalgebra which is said to be ‘principally’ embedded in the full algebra. The restriction of the gauge field to this subalgebra defines the gravitational subsector of the theory, while the coefficients of $W_{a}$ and $\bar{W}_{a}$ describe a massless spin-3 field. As shown in Campoleoni:2010zq , field configurations obeying the higher spin equivalent of the Brown-Henneaux boundary conditions Brown:1986nw can be brought in a standard form which generalizes (11) and is called the ‘highest weight gauge’. 
Restricting once more to chiral solutions where $\bar{A}$ takes the same form as in global AdS${}_{3}$, solutions in this gauge take the form $$\displaystyle A_{HW}$$ $$\displaystyle=$$ $$\displaystyle-{dy\over y}L_{0}+\left({1\over 2}\left(y^{-1}L_{1}-T(x_{+})yL_{-% 1}\right)+W(x_{+})y^{2}W_{-2}\right)dx_{+}\,,$$ (37) $$\displaystyle\bar{A}_{HW}$$ $$\displaystyle=$$ $$\displaystyle\bar{L}_{0}{dy\over y}-{1\over 2}\left(y^{-1}\bar{L}_{-1}+y\bar{L% }_{1}\right)dx_{-}.$$ (38) Here, $T(x_{+})$ and $W(x_{+})$ are arbitrary periodic functions. 3.2 The Toda field gauge We will now show that we can make a gauge transformation which brings the connection $A$ into the form of a Lax connection for ${\cal A}_{2}$ Toda field theory, which will facilitate the construction of backreacted multi-particle solutions in the next subsection. More precisely, we will go to a gauge where $$\begin{split}\displaystyle A_{T}=&\displaystyle V^{-1}\tilde{A}V={1\over 4}% \left(\partial_{z}\ (\phi_{1}+\phi_{2})dz-\partial_{\bar{z}}(\phi_{1}+\phi_{2}% )d\bar{z}\right)L_{0}\\ &\displaystyle-{\mathrm{i}\over 2\sqrt{2}}\left(e^{{\phi_{1}\over 2}-\phi_{2}}% +e^{{\phi_{2}\over 2}-\phi_{1}}\right)\left(dzL_{1}+d\bar{z}L_{-1}\right)\\ &\displaystyle+{3\over 4\sqrt{\sigma}}\left(\partial_{z}\ (\phi_{1}-\phi_{2})% dz-\partial_{\bar{z}}(\phi_{1}-\phi_{2})d\bar{z}\right)W_{0}\\ &\displaystyle+{\mathrm{i}\over\sqrt{2\sigma}}\left(e^{{\phi_{1}\over 2}-\phi_% {2}}-e^{{\phi_{2}\over 2}-\phi_{1}}\right)\left(dzW_{1}+d\bar{z}W_{-1}\right)% \,,\end{split}$$ (39) $$\bar{A}_{T}=V^{-1}\bar{\tilde{A}}V=-\mathrm{i}d\tilde{t}\bar{L}_{0}.$$ (40) As before, the only role of the complex change of basis generated by $V$ defined in (14) is to simplify the right hand side. The flatness condition $\tilde{F}=d\tilde{A}+\tilde{A}\wedge\tilde{A}=0$ is equivalent to the ${\cal A}_{2}$ Toda field equations $$\begin{split}&\displaystyle\partial_{z}\partial_{\bar{z}}\phi_{1}+e^{-2\phi_{1% }+\phi_{2}}=0\,,\\ &\displaystyle\partial_{z}\partial_{\bar{z}}\phi_{2}+e^{-2\phi_{2}+\phi_{1}}=0% \,.\end{split}$$ (41) If we initially view $\phi_{1}$ and $\phi_{2}$ as complex-valued fields, the requirement that $\tilde{A}$ has real coefficients imposes a certain reality condition on the fields, which depends on the chosen real form, i.e. on the value of $\sigma$. On the components of $A_{T}$ in the expansion $A_{T}=\sum_{m}l_{m}L_{m}+\sum_{a}w_{a}W_{a}$, the condition that $\tilde{A}$ has real coefficients imposes that $$l_{m}^{*}=-l_{-m},\qquad w_{a}^{*}=w_{-a}.$$ (42) Applying this to (40) one therefore finds that the appropriate reality condition is $$\displaystyle\textrm{sl}(3,\mathbb{R}),\ \sigma=$$ $$\displaystyle 1:$$ $$\displaystyle\phi_{2}$$ $$\displaystyle=\bar{\phi}_{1}$$ $$\displaystyle{}\textrm{su}(1,2),\ \sigma=$$ $$\displaystyle-1:$$ $$\displaystyle\phi_{1},\phi_{2}$$ $$\displaystyle\in\mathbb{R}.$$ (43) We observe that, somewhat surprisingly, the standard form of the Toda field equations, where $\phi_{1}$ and $\phi_{2}$ are real fields, is relevant for the nonstandard higher spin theory with algebra $\textrm{su}(1,2)\oplus\overline{\textrm{su}(1,2)}$ and vice versa. As shown in Appendix C 555To simplify notation for this section we have relabeled the currents such that compared to Appendix C we have ${\cal T}\equiv\mathcal{W}^{(2)}$, ${\cal W}\equiv\mathcal{W}^{(3)}$. 
the Toda equations imply that the following combinations are purely holomorphic: $$\displaystyle{\cal T}(z)$$ $$\displaystyle=$$ $$\displaystyle-(\partial\phi_{1})^{2}-(\partial\phi_{2})^{2}+\partial\phi_{1}% \partial\phi_{2}-\partial^{2}\phi_{1}-\partial^{2}\phi_{2}\,,$$ (44) $$\displaystyle{\cal W}(z)$$ $$\displaystyle=$$ $$\displaystyle-(\partial\phi_{1})^{2}\partial\phi_{2}+(\partial\phi_{2})^{2}% \partial\phi_{1}-\partial^{2}\phi_{1}\partial\phi_{1}+\partial^{2}\phi_{2}% \partial\phi_{2}$$ (45) $$\displaystyle+{1\over 2}\left(-\partial^{2}\phi_{1}\partial\phi_{2}+\partial^{% 2}\phi_{2}\partial\phi_{1}-\partial^{3}\phi_{1}+\partial^{3}\phi_{2}\right)\,.$$ One can show that ${\cal T}$ transforms as a stress tensor under conformal transformations, while ${\cal W}$ transforms as a spin-3 primary. We note from comparing (40) to the pure gravity expression (13) that the gravity subsector is obtained by setting $$\phi_{1}=\phi_{2}=2\phi-\ln 2.$$ (46) Note that we have ${\cal W}=0$ in the gravity subsector as was to be expected. From the pure gravity expression (21) and the Toda equations it follows that the global AdS${}_{3}$ solution is given by $$\phi_{1}=\bar{\phi}_{2}=\ln{(1-|z|^{2})^{2}\over 2}+n\frac{2\pi\mathrm{i}}{3}% \,,\quad n\in\mathbb{Z}$$ (47) and has ${\cal T}={\cal W}=0$. For the $\sigma=-1$ case the fact that we have real Toda fields imposes that $n=0$. More generally, we allow Toda solutions on the unit disc with the same leading blow-up behaviour near the boundary circle. These boundary conditions are the Toda generalization of the ZZ boundary conditions Zamolodchikov:2001ah . Using the Toda equations one shows that such solutions behave near the boundary as $$\displaystyle\phi_{1}$$ $$\displaystyle=$$ $$\displaystyle\ln{(1-|z|^{2})^{2}\over 2}+n\frac{2\pi\mathrm{i}}{3}-\left.\left% ({z^{2}\over 12}{\cal T}\right)\right\rvert_{|z|=1}(1-|z|^{2})^{2}-\left.\left% ({z^{2}\over 12}{\cal T}-{z^{3}\over 60}{\cal W}\right)\right\rvert_{|z|=1}(1-% |z|^{2})^{3}+\ldots$$ $$\displaystyle{}\phi_{2}$$ $$\displaystyle=$$ $$\displaystyle\ln{(1-|z|^{2})^{2}\over 2}-n\frac{2\pi\mathrm{i}}{3}-\left.\left% ({z^{2}\over 12}{\cal T}\right)\right\rvert_{|z|=1}(1-|z|^{2})^{2}-\left.\left% ({z^{2}\over 12}{\cal T}+{z^{3}\over 60}{\cal W}\right)\right\rvert_{|z|=1}(1-% |z|^{2})^{3}+\ldots$$ (48) Combining (3.2) with the reality conditions (43) on the fields, we see that the currents ${\cal T}$ and ${\cal W}$ should satisfy the following reality conditions on the boundary circle: $$\displaystyle\textrm{sl}(3,\mathbb{R}),\ \sigma=$$ $$\displaystyle 1:$$ $$\displaystyle\left.\left(z^{2}{\cal T}\right)\right\rvert_{|z|=1}\in$$ $$\displaystyle\mathbb{R},$$ $$\displaystyle\left.\left(z^{3}{\cal W}\right)\right\rvert_{|z|=1}\in$$ $$\displaystyle i\,\mathbb{R}$$ $$\displaystyle{}\textrm{su}(1,2),\ \sigma=$$ $$\displaystyle-1:$$ $$\displaystyle\left.\left(z^{2}{\cal T}\right)\right\rvert_{|z|=1}\in$$ $$\displaystyle\mathbb{R},$$ $$\displaystyle\left.\left(z^{3}{\cal W}\right)\right\rvert_{|z|=1}\in$$ $$\displaystyle\mathbb{R}.$$ (49) It remains to spell out the gauge transformation between the highest weight gauge (38) and the Toda field gauge (40). We will only work out the near-boundary behaviour of this transformation, from which we will derive the precise relation between the bulk Toda currents ${\cal T}(z),{\cal W}(z)$ and the boundary currents $T(x_{+}),W(x_{+})$. A generic gauge parameter contains a spin-2 and a spin-3 part, each of which consists of a ‘translation’ and ‘local Lorentz’ part. 
As in the gravity case, we will trade the spin-2 translation for a coordinate transformation $(y,x_{+},x_{-})\rightarrow(z,\bar{z},\tilde{t})$. It turns out that, to the order required, the latter coordinate transformation is unchanged from the pure gravity case, i.e. it still takes the form (26). The required remaining gauge parameter turns out to be $$\begin{split}\displaystyle\Lambda_{3}=\Lambda\bar{\Lambda}V&\displaystyle\exp% \left({W(x_{+})y^{6}}\left({i\over 10}(e^{2ix_{+}}W_{2}-e^{-2ix_{+}}W_{-2})-% \right.\right.\\ &\displaystyle\left.\left.-{3\over 5}(e^{ix_{+}}W_{1}-e^{-ix_{+}}W_{-1})\right% )+{\cal O}(y^{8})\right)V^{-1}\,,\end{split}$$ (50) where $\Lambda,\bar{\Lambda}$ are given in (19). Performing the gauge transformation and making use of the near-boundary expansion (48) we find that (38) transforms into (40) with the currents related as $$T(x_{+})=e^{2ix_{+}}{\cal T}(e^{ix_{+}})-1,\qquad W(x_{+})={i\over\sqrt{\sigma% }}e^{3ix_{+}}{\cal W}(e^{ix_{+}}).$$ (51) As a check we note that $T(x_{+})$ and $W(x_{+})$ are indeed real upon imposing the reality conditions (43). These relations are consistent with the conformal transformation properties of the currents and the fact that the boundary coordinates are related as $z=e^{ix_{+}}$. Point-particle sources Having found convenient variables to describe chiral solutions in pure higher spin-3 gravity, we will now study solutions in the presence of point-particle sources which are compatible with this chiral ansatz. Such higher-spin point particles are once again described by Wilson lines Ammon:2013hba , and in order to preserve the chiral structure we study sources which couple only to $A$ and not to $\bar{A}$: $$W_{R}(C)={\rm tr}_{R}{\cal P}\exp\int_{C}A\,.$$ (52) Such sources correspond to the spin-3 generalization of extremal spinning particles. The representation $R$ is an infinite-dimensional representation of $\textrm{sl}(2,\mathbb{R})$ resp. $\textrm{su}(1,2)$ built on a primary state $|h,w\rangle$ satisfying $$L_{0}|h,w\rangle=h|h,w\rangle,\qquad W_{0}|h,w\rangle=w|h,w\rangle,\qquad L_{1% }|h,w\rangle=W_{1,2}|h,w\rangle=0\,.$$ (53) As shown in Appendix D, in the presence of such backreacting point particles the Toda equations acquire nontrivial delta function sources on the right hand side $$\begin{split}&\displaystyle\partial\bar{\partial}\phi_{1}+\operatorname{e}^{-2% \phi_{1}+\phi_{2}}=16\pi G\sum_{i}\alpha_{1}^{(i)}\delta^{2}\left(z-z_{i},\bar% {z}-\bar{z}_{i}\right)\,,\\ &\displaystyle\partial\bar{\partial}\phi_{2}+\operatorname{e}^{-2\phi_{2}+\phi% _{1}}=16\pi G\sum_{i}\alpha_{2}^{(i)}\delta^{2}\left(z-z_{i},\bar{z}-\bar{z}_{% i}\right)\,,\end{split}$$ (54) where the parameters $\alpha^{(i)}_{1},\alpha^{(i)}_{2}$ are related to the conformal dimension and spin-3 charge of the $i$-th particle as $$\begin{split}&\displaystyle(\alpha_{1}^{(i)})^{2}-\alpha_{1}^{(i)}\alpha^{(i)}% _{2}+(\alpha_{2}^{(i)})^{2}=\frac{1}{4}\left((h^{(i)})^{2}+3\sigma(w^{(i)})^{2% }\right)\,,\\ &\displaystyle\left(\alpha^{(i)}_{2}-\alpha^{(i)}_{1}\right)\alpha^{(i)}_{1}% \alpha^{(i)}_{2}={\mathrm{i}\over\sqrt{4\sigma}}w^{(i)}\left((h^{(i)})^{2}-% \sigma(w^{(i)})^{2}\right)\,.\end{split}$$ (55) It’s a useful consistency check that these sources are compatible with the reality conditions (43) which for $\sigma=1$ require that $\alpha^{(i)}_{2}=\bar{\alpha}^{(i)}_{1}$, and for $\sigma=-1$ that both $\alpha^{(i)}_{1}$ and $\alpha^{(i)}_{2}$ are real. The equations (55) for $\alpha^{(i)}_{1,2}$ are indeed compatible with these reality conditions. 
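To illustrate how the source strengths are obtained from the particle charges, the following sympy sketch solves (55) numerically for a single particle in the $\sigma=1$ theory, where $\alpha_{2}=\bar{\alpha}_{1}$; the values chosen for $h$ and $w$ are purely illustrative.

```python
import sympy as sp

h, w, sigma = sp.Integer(2), sp.Rational(1, 2), 1    # illustrative charges, sl(3,R) case
a, b = sp.symbols('a b', real=True)
alpha1, alpha2 = a + sp.I * b, a - sp.I * b          # reality condition: alpha_2 = conj(alpha_1)

# the two relations of eq. (55) for a single particle
f1 = sp.expand(alpha1**2 - alpha1 * alpha2 + alpha2**2
               - sp.Rational(1, 4) * (h**2 + 3 * sigma * w**2))
f2 = sp.expand((alpha2 - alpha1) * alpha1 * alpha2
               - sp.I / sp.sqrt(4 * sigma) * w * (h**2 - sigma * w**2))

# with a, b real, f1 is real and f2 purely imaginary; solve the real 2x2 system
sol = sp.nsolve((f1, sp.im(f2)), (a, b), (1.2, -0.3))
print(sol)    # (a, b) such that alpha_1 = a + i b for this choice of (h, w)
```

For $\sigma=-1$ one would instead look for real solutions $(\alpha_{1},\alpha_{2})$ of the same system, in line with the reality conditions (43).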
3.3 Properties of the solutions The associate ODE problem: Toda field theory is an integrable system and its solutions can be constructed from solutions to a set of auxiliary ordinary differential equations (ODEs). For more details on these auxiliary ODEs we refer the reader to Appendix C. Here we give just the essentials needed to make the discussion coherent. The $\mathcal{A}_{2}$-Toda field theory has two systems of auxiliary ODEs associated with it. One shows that the fields $\operatorname{e}^{\phi_{i}}$ satisfy the following sets of equations $$\displaystyle\left[\partial^{3}+{\cal T}\partial+{\cal W}+{1\over 2}\partial{% \cal T}\right]\operatorname{e}^{\phi_{1}}$$ $$\displaystyle=0,$$ $$\displaystyle\left[\bar{\partial}^{3}+\bar{\cal T}\bar{\partial}+\sigma\bar{% \cal W}+{1\over 2}\bar{\partial}\bar{\cal T}\right]\operatorname{e}^{\phi_{1}}$$ $$\displaystyle=0,$$ (56) $$\displaystyle\left[\partial^{3}+{\cal T}\partial-{\cal W}+{1\over 2}\partial{% \cal T}\right]\operatorname{e}^{\phi_{2}}$$ $$\displaystyle=0,$$ $$\displaystyle\left[\bar{\partial}^{3}+\bar{\cal T}\bar{\partial}-\sigma\bar{% \cal W}+{1\over 2}\bar{\partial}\bar{\cal T}\right]\operatorname{e}^{\phi_{2}}$$ $$\displaystyle=0,$$ (57) with ${\cal T}(z)$ the stress tensor and ${\cal W}(z)$ the primary spin-3 current given in (44). We will denote by $\psi_{i}(z),\ i=1,2,3$ a set of linearly independent solutions of the equation $$\left[\partial^{3}+{\cal T}\partial+{\cal W}+{1\over 2}\partial{\cal T}\right]% \psi(z)=0$$ (58) and by $\chi^{i}(z),\ i=1,2,3$ a set of independent solutions of $$\left[\partial^{3}+{\cal T}\partial-{\cal W}+{1\over 2}\partial{\cal T}\right]% \chi(z)=0\,.$$ (59) We will now show that we can build Toda solutions $e^{\phi_{i}}$ from the $\psi_{i}$, $\chi^{i}$ in a holomorphically factorized form. Solutions for real Toda fields – ${\rm su}(1,2)$ case: From (56) with $\sigma=1$ we have $$\operatorname{e}^{\phi_{1}}=\Psi^{\dagger}\Lambda\Psi\,,$$ (60) where $\Psi=\left(\psi_{1},\psi_{2},\psi_{3}\right)^{T}$ and $$\Lambda=\textrm{diag}\left(1,-1,1\right)$$ (61) as specified in appendix C. Similarly from (57) with $\sigma=1$ we deduce $$\operatorname{e}^{\phi_{2}}=X^{\dagger}\Lambda_{2}X,$$ (62) where $X=\left(\chi_{1},\chi_{2},\chi_{3}\right)^{T}$. By substituting (60) into the first Toda equation (41) and solving for $\operatorname{e}^{\phi_{2}}$ we find that $\Lambda_{2}=\Lambda$ and $$\chi_{a}=\epsilon_{abc}\Lambda^{bb^{\prime}}\Lambda^{cc^{\prime}}\psi_{b^{% \prime}}\,,\partial\psi_{c^{\prime}}$$ (63) while by substituting (60), (62), (63) in the second Toda equation (41) we independently derive that $$\textrm{det}\Lambda=-1\,,$$ (64) which is consistent with our choice (61). 
Solutions for complex conjugate Toda fields – ${\rm sl}(3,\mathbb{R})$ case: From (56) and (57) with $\sigma=-1$ we have $$\operatorname{e}^{\phi_{1}}=\psi_{i}\bar{\chi}^{j}\,,\qquad\operatorname{e}^{% \phi_{2}}=\bar{\psi}_{i}\chi^{i}\,.$$ (65) By substituting (65) into the first Toda equation (41) and solving for $\operatorname{e}^{\phi_{2}}$ we find that $$\chi^{a}=-\epsilon^{abc}\psi_{b}\partial\psi_{c}\,.$$ (66) Thus for the Toda fields we have $$\operatorname{e}^{\phi_{1}}=\operatorname{e}^{\bar{\phi}_{2}}=-\epsilon^{abc}% \psi_{a}\bar{\psi}_{b}\bar{\partial}\bar{\psi}_{c}\,.$$ (67) 3.4 Properties of the currents Pole structure: If near the sources $z_{i}$ we can neglect the potential term compared to the kinetic term, then the Toda fields $\phi_{i}$ behave near $z=z_{i}$ as $$\phi_{j}\sim 16G\alpha_{j}^{(i)}\ln|z-z_{i}|\,.$$ (68) Thus the solution of the Toda equations on the disk specifies meromorphic (quasi-)primary currents $\mathcal{T}$ and $\mathcal{W}$. The sources $\alpha^{(i)}_{j}$ fix the most singular terms in ${\cal T},{\cal W}$ which take the form $$\mathcal{T}(z)=\sum_{i=1}^{K}\frac{\epsilon_{i}^{(\mathcal{T})}}{(z-z_{i})^{2}% }+\ldots\,,\quad\mathcal{W}(z)=\sum_{i=1}^{K}\frac{\epsilon_{i}^{(\mathcal{W})% }}{(z-z_{i})^{3}}+\ldots$$ (69) where the ellipses denote lower order poles and a regular part. The constants $\epsilon_{i}^{\mathcal{T}}$ and $\epsilon_{i}^{\mathcal{W}}$ are specified in terms of $\alpha_{1}^{(i)}$ and $\alpha_{2}^{(i)}$ by examining the form of the Toda fields $\phi_{1}$ and $\phi_{2}$ near a specific source. $$\begin{split}&\displaystyle\epsilon_{i}^{(\mathcal{T})}=8G\left(a_{1}^{(i)}+a_% {2}^{(i)}\right)+64G^{2}\left(a_{1}^{(i)}a_{2}^{(i)}-(a_{1}^{(i)})^{2}-(a_{2}^% {(i)})^{2}\right)\,,\\ &\displaystyle\epsilon_{i}^{(\mathcal{W})}=8G\left(a_{2}^{(i)}-a_{1}^{(i)}% \right)\left(1-8Ga_{1}^{(i)}\right)\left(1-8Ga_{2}^{(i)}\right)\,.\end{split}$$ (70) Doubling trick: Having established the pole structure of the currents ${\cal T}(z)$ and ${\cal W}(z)$ as functions on the unit disk, we should also make sure that they obey the reality conditions (49) on the unit circle. As usual, this is imposed by using a ‘doubling trick’ and extending ${\cal T}(z)$ and ${\cal W}(z)$ to meromorphic functions on the complex plane in a suitable manner. Using the Schwarz reflection principle we find that the appropriate reflection properties are $$\mathcal{T}(z)=\frac{1}{z^{4}}\overline{\mathcal{T}}\left(\frac{1}{z}\right)\,% ,\quad\mathcal{W}(z)=-\frac{\sigma}{z^{6}}\overline{\mathcal{W}}\left(\frac{1}% {z}\right)\,.$$ (71) In particular, this means that the currents have poles both at the $z_{i}$ and at their image points $\bar{z}_{i}^{-1}$. 
Thus they are of the form $$\mathcal{T}=\sum_{i=1}^{K}\left(\frac{\epsilon^{(\mathcal{T})}_{i}}{\left(z-z_{i}\right)^{2}}+\frac{\tilde{\epsilon}^{(\mathcal{T})}_{i}}{\left(z-1/\bar{z}_{i}\right)^{2}}+\frac{c_{i}^{(\mathcal{T},1)}}{z-z_{i}}+\frac{\tilde{c}_{i}^{(\mathcal{T},1)}}{z-1/\bar{z}_{i}}\right)\,,$$ (72) $$\mathcal{W}=\sum_{i=1}^{K}\left(\frac{\epsilon^{(\mathcal{W})}_{i}}{\left(z-z_{i}\right)^{3}}+\frac{\tilde{\epsilon}^{(\mathcal{W})}_{i}}{\left(z-1/\bar{z}_{i}\right)^{3}}+\frac{c_{i}^{(\mathcal{W},2)}}{\left(z-z_{i}\right)^{2}}+\frac{\tilde{c}_{i}^{(\mathcal{W},2)}}{\left(z-1/\bar{z}_{i}\right)^{2}}+\frac{c_{i}^{(\mathcal{W},1)}}{z-z_{i}}+\frac{\tilde{c}_{i}^{(\mathcal{W},1)}}{z-1/\bar{z}_{i}}\right)\,,$$ (73) where in the above equations we assumed for simplicity that there is no source located at the origin and as a result there is also no image source at infinity. The parameters $c_{i}^{(s,l)},\tilde{c}_{i}^{(s,l)}$ are called accessory parameters and are not directly determined by the $\alpha^{(i)}_{j}$. Instead we will see below that they are determined by solving a monodromy problem which arises from demanding that the Toda fields are single-valued. Not all the parameters in (72) and (73) are independent however, since we still have to impose the reflection properties (71). We will now determine the number of independent accessory parameters after imposing the reflection properties. Substituting (72) and (73) into (71) and requiring equality of the poles at each order near $z_{i}$ we get from $\mathcal{T}$ $$\epsilon^{(\mathcal{T})}_{i}-\bar{\tilde{\epsilon}}^{(\mathcal{T})}_{i}=0\,,\quad 2\epsilon^{(\mathcal{T})}_{i}+c_{i}^{(\mathcal{T},1)}z_{i}+\frac{\bar{\tilde{c}}^{(\mathcal{T},1)}_{i}}{z_{i}}=0$$ (74) and from $\mathcal{W}$ $$\begin{split}&\displaystyle\epsilon^{(\mathcal{W})}_{i}-\sigma\bar{\tilde{\epsilon}}^{(\mathcal{W})}_{i}=0\,,\\ &\displaystyle 3\epsilon^{(\mathcal{W})}_{i}+c_{i}^{(\mathcal{W},2)}z_{i}+\sigma\frac{\bar{\tilde{c}}^{(\mathcal{W},2)}_{i}}{z_{i}}=0\,,\\ &\displaystyle 6\epsilon^{(\mathcal{W})}_{i}+4c^{(\mathcal{W},2)}_{i}z_{i}+c^{(\mathcal{W},1)}_{i}z_{i}^{2}-\sigma\frac{\bar{\tilde{c}}^{(\mathcal{W},1)}_{i}}{z_{i}^{2}}=0\,.\end{split}$$ (75) By substituting (74) and (75) into (72) and (73) respectively and requiring (71) to hold at the origin we get for $\mathcal{T}$ $$\begin{split}&\displaystyle\sum_{i=1}^{K}\left(c^{(\mathcal{T},1)}_{i}+\tilde{c}^{(\mathcal{T},1)}_{i}\right)=0\,,\\ &\displaystyle\sum_{i=1}^{K}\mathfrak{Im}\left(\epsilon^{(\mathcal{T})}_{i}+c^{(\mathcal{T},1)}_{i}z_{i}\right)=0\end{split}$$ (76) and from $\mathcal{W}$ $$\begin{split}&\displaystyle\sum_{i=1}^{K}\left(c^{(\mathcal{W},1)}_{i}+\tilde{c}^{(\mathcal{W},1)}_{i}\right)=0\,,\\ &\displaystyle\sum_{i=1}^{K}\left(c_{i}^{(\mathcal{W},2)}+\tilde{c}^{(\mathcal{W},2)}_{i}+c^{(\mathcal{W},1)}_{i}z_{i}+\frac{\tilde{c}^{(\mathcal{W},1)}_{i}}{\bar{z}_{i}}\right)=0\,,\\ &\displaystyle\sum_{i=1}^{K}\mathfrak{Re}\left(\sqrt{\sigma}(\epsilon^{(\mathcal{W})}_{i}+2c_{i}^{(\mathcal{W},2)}z_{i}+c_{i}^{(\mathcal{W},1)}z_{i}^{2})\right)=0\,.\end{split}$$ (77) One can check that these conditions guarantee that (71) holds everywhere. Regularity at infinity requires that ${\cal T}$ falls off as $z^{-4}$ and ${\cal W}$ as $z^{-6}$, which is implied by the reflection conditions (71) and the fact that ${\cal T},{\cal W}$ are regular at the origin. So this does not lead to any further conditions.
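As an aside, the conditions (74) can be checked mechanically. The short sympy sketch below (ours, not from the paper) keeps only the part of (72) attached to a single insertion $z_{i}$ and its image, imposes (74), and verifies that the double and simple poles of the two sides of (71) at $z=z_{i}$ then agree; barred quantities are treated as independent symbols.

# Illustrative sympy check (ours) of the conditions (74): with them imposed,
# the poles of T(z) and of (1/z^4) bar-T(1/z) at z = z_i coincide.
import sympy as sp

z = sp.symbols('z')
zi = sp.symbols('z_i')
eps, c = sp.symbols('epsilon c')                            # coefficients at z_i in (72)
epst_b, ct_b = sp.symbols('epsilontilde_bar ctilde_bar')    # conjugated image coefficients

T  = eps/(z - zi)**2 + c/(z - zi)                           # part of (72) attached to z_i
Tb = lambda w: epst_b/(w - 1/zi)**2 + ct_b/(w - 1/zi)       # relevant part of bar-T(w)

D = T - Tb(1/z)/z**4                                        # difference of the two sides of (71)

sol = {epst_b: eps, ct_b: -zi*(2*eps + c*zi)}               # the two relations in (74)

double_pole = sp.limit(((z - zi)**2 * D).subs(sol), z, zi)
simple_pole = sp.residue(D.subs(sol), z, zi)
print(sp.simplify(double_pole), sp.simplify(simple_pole))   # -> 0 0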
After taking into account these constraints we are left with $2K-3$ independent real accessory parameters coming from $\mathcal{T}$ and $4K-5$ parameters coming from $\mathcal{W}$, giving in total $6K-8$ real accessory parameters to be specified by solving the monodromy problem. Note that the above conditions were derived in the assumption that none of the poles are in the origin. It is straightforward to generalize to the case of a pole in the origin and to check that the number of independent accessory parameters remains unchanged. 3.5 Doubling trick for $\psi_{i}$ The reflection property of the currents (71), together with the associate ODEs, implies there should be a reflection property for the $\psi_{i}$. For the $\textrm{su}(1,2)$ case we find $$\psi_{a}(z)=-z^{2}\epsilon_{abc}\Lambda^{b\tilde{b}}\Lambda^{c\tilde{c}}\bar{% \psi}_{\tilde{b}}(1/z)\partial_{\frac{1}{z}}\bar{\psi}_{\tilde{c}}(1/z)$$ (78) while for the $\textrm{sl}(3,\mathbb{R})$ case we have $$\psi_{a}(z)=\mathrm{i}\operatorname{e}^{-\mathrm{i}\frac{\pi}{6}}z^{2}\bar{% \psi}_{a}\left(1/z\right)\,.$$ (79) To prove these relations we start from the associated ODE (58) satisfied by $\psi_{i}(z)$: $$\left(\partial^{3}_{z}+\mathcal{T}(z)\partial_{z}+\mathcal{W}(z)+\frac{1}{2}% \partial_{z}\mathcal{T}(z)\right)\psi_{i}(z)=0\,.$$ (80) Under a conformal transformation $z\rightarrow\tilde{z}=f(z)$ the fields transform as $$\tilde{\cal T}(\tilde{z})=(f^{\prime})^{-2}\left({\cal T}(z)-2S(f,z)\right),% \qquad\tilde{\cal W}(\tilde{z})=(f^{\prime})^{-3}{\cal W}(z),\qquad\tilde{\psi% }_{i}(\tilde{z})=f^{\prime}\psi_{i}(z).$$ (81) The last transformation can be derived from the fact that $\psi_{i}$ should transform like $e^{\phi_{1}}$, i.e. as a primary of weight $-1$, under holomorphic reparametrizations in order for (80) to be conformally invariant. Using these transformations, we perform a conformal transformation with $f(z)=1/z$ on (80), and use the reflection conditions (71) to obtain, after a relabeling of the coordinate: $$\left(\partial^{3}_{\bar{z}}+\bar{\cal T}(\bar{z})\partial_{\bar{z}}+\sigma% \bar{\cal W}(\bar{z})+\frac{1}{2}\partial_{\bar{z}}\bar{\cal T}(\bar{z})\right% )\left(\bar{z}^{2}\psi_{i}\left({1\over\bar{z}}\right)\right)=0\,.$$ (82) For the $\textrm{su}(1,2)$ case, upon choosing $\sigma=-1$ and comparing with the ODE (59) satisfied by $\chi^{i}$, we see that there should exist a constant matrix $S$ such that $$\Psi(z)=Sz^{2}\bar{X}\left(\frac{1}{z}\right).$$ (83) Here, $S$ is a proportionality matrix whose exact form is independent of the particular solution, and can be fixed by examining the simple solution displayed in appendix C.4. For the case under consideration we find $$S=-\mathbb{I}$$ (84) which upon substituting into (83) leads to (78). Similarly, for the $\textrm{sl}(3,\mathbb{R})$ case, upon setting $\sigma=1$ in (82) we see that there should exist a constant matrix $\tilde{S}$ such that $$\Psi(z)=\tilde{S}z^{2}\bar{\Psi}\left(\frac{1}{z}\right)\,.$$ (85) Once again, the form of $\tilde{S}$ is fixed by examining the simple solution displayed in appendix C.4. For the case under consideration we find $$\tilde{S}=\mathrm{i}\operatorname{e}^{-\mathrm{i}\frac{\pi}{6}}\mathbb{I}$$ (86) which upon substituting into (85) leads to (79). 3.6 The monodromy problem 3.6.1 Single-valuedness of the Toda fields Since the currents are meromorphic functions, the associated ODE contains singularities. 
Thus after encircling a singular point $z_{i}$ the solution transforms as $$\Psi\rightarrow M_{i}\Psi\,,$$ (87) where $M_{i}\in\textrm{SL}(3,\mathbb{C})$ is a monodromy matrix. However further restrictions are imposed on this monodromy matrix by requiring that the Toda fields are single valued. We will see that in each case the monodromy matrix must be an element of one of the real sections of $\textrm{SL}(3,\mathbb{C})$. Real Toda fields – $\textrm{su}(1,2)$ case: As we derived in (60) we have $$\operatorname{e}^{\phi_{1}}=\Psi^{\dagger}\Lambda\Psi\,.$$ (88) Thus for $\operatorname{e}^{\phi_{1}}$ to be single-valued we should demand that $$M_{i}^{\dagger}\Lambda M_{i}=\Lambda$$ (89) with $\Lambda=\textrm{diag}\left(1,-1,1\right)$. This means that we must have $M_{i}\in\textrm{SU}(1,2)$. In other words in order for the solution of the Toda system to be single-valued we must adjust the accessory parameters such that all the monodromy matrices $M_{i}$ are elements of $\textrm{SU}(1,2)$. Complex conjugate Toda fields – ${\rm sl}(3,\mathbb{R})$ case: For the first field in the Toda system we have (67) $$\operatorname{e}^{\phi_{1}}=-\epsilon^{abc}\psi_{a}\bar{\psi}_{b}\bar{\partial}\bar{\psi}_{c}\,.$$ (90) In order for it to be single-valued we should have that $$\epsilon^{abc}{M_{a}}^{d}{\overline{M}_{b}}^{e}{\overline{M}_{c}}^{f}=\epsilon^{def}\,,$$ (91) which can be rewritten as $$\frac{1}{2!}\epsilon_{gef}\epsilon^{abc}{M_{a}}^{d}{\overline{M}_{b}}^{e}{\overline{M}_{c}}^{f}=\delta^{d}_{g}\,.$$ (92) At this point we make use of the definition of the adjugate matrix $$(\textrm{adj}{F)^{b}}_{a}=\frac{1}{2!}\epsilon^{bc_{1}c_{2}}\epsilon_{ad_{1}d_{2}}{F^{d_{1}}}_{c_{1}}{F^{d_{2}}}_{c_{2}}\,.$$ (93) In the present case we have $F^{T}=\overline{M}$ and because $M\in\textrm{SL}(3,\mathbb{C})$ we have $$\textrm{adj}M=M^{-1}\,.$$ (94) Thus (92) becomes $$M=\overline{M}\,,$$ (95) which means that $M_{i}\in\textrm{SL}(3,\mathbb{R})$. 3.6.2 Monodromies of image points Here we will prove that the monodromy matrix of a contour that encircles a singular point $z_{i}$ and its image $1/\bar{z}_{i}$ (and none of the other singularities) is the identity matrix. This property will be important in making the connection to classical ${\cal W}_{3}$ blocks in section 4. To do so let us first consider the monodromy matrix $M_{\gamma_{i}}$ of a point $z_{i}$ which is encircled by a counterclockwise contour $\gamma_{i}$ which has a base point $p$ at the boundary. The mirror contour $\bar{\gamma}_{i}$ encircles the image point $1/\bar{z}_{i}$ in an opposite, clockwise, orientation and results in a monodromy matrix $M_{\bar{\gamma}_{i}}$. We denote the contour encircling the image point counterclockwise as $\bar{\gamma}_{i}^{-1}$. Then $$M_{\bar{\gamma}_{i}^{-1}}=M^{-1}_{\bar{\gamma}_{i}}\,.$$ (96) Thus we need to show that $$M_{\gamma_{i}}M_{\bar{\gamma}_{i}^{-1}}=1$$ (97) which due to (96) is equivalent to showing that $$M_{\gamma_{i}}=M_{\bar{\gamma}_{i}}\,.$$ (98) We will use the reflection property of $\psi_{i}$ to show that this is indeed the case.
For simplicity we are going to drop from $M$ the subscript $\gamma_{i}$ and adopt a new notation such that $$M_{\gamma_{i}}\equiv M\,,\quad M_{\bar{\gamma}_{i}}\equiv N\,.$$ (99) Real Toda fields – $\textrm{su}(1,2)$ case: Let us first rewrite (78) as $$\psi_{a}(z)=-z^{2}\epsilon_{abc}\Lambda^{b\tilde{b}}\Lambda^{c\tilde{c}}% \overline{\psi_{\tilde{b}}(1/\bar{z})}\,\overline{\partial_{\frac{1}{\bar{z}}}% }\,\overline{\psi_{\tilde{c}}(1/\bar{z})}\,.$$ (100) Then after encircling the point $z_{i}$ we have $${M_{a}}^{f}\psi_{f}=-z^{2}\epsilon_{ab_{1}b_{2}}\Lambda^{b_{1}c_{1}}\Lambda^{b% _{2}c_{2}}{\overline{N}_{c_{1}}}^{d_{1}}{\overline{N}_{c_{2}}}^{d_{2}}\Lambda_% {d_{1}e_{1}}\Lambda_{d_{2}e_{2}}\overline{\psi}^{e_{1}}\overline{\partial\psi}% ^{e_{2}}\,.$$ (101) At this point we make use of the definition of the adjugate matrix (93) which we can rewrite as $$\epsilon_{bf_{1}f_{2}}(\textrm{adj}{F)^{b}}_{a}=\epsilon_{ad_{1}d_{2}}{F^{d_{1% }}}_{f_{1}}{F^{d_{2}}}_{f_{2}}\,.$$ (102) In the present case $F=\Lambda\overline{N}\Lambda$. Thus (101) becomes $${M_{a}}^{f}\psi_{f}=-z^{2}\epsilon_{be_{1}e_{2}}{(\textrm{adj}(\Lambda% \overline{N}\Lambda))^{b}}_{a}\overline{\psi}^{e_{1}}\overline{\partial\psi}^{% e_{2}}\,,$$ (103) which because of (100) gives $${M_{a}}^{f}\psi_{f}={(\textrm{adj}(\Lambda\overline{N}\Lambda))^{b}}_{a}\psi_{% b}\,.$$ (104) Now because $N\in\textrm{SU}(1,2)$ we have $$\Lambda N^{\dagger}\Lambda=N^{-1}$$ (105) and $$\textrm{adj}N=N^{-1}$$ (106) since $\textrm{det}N=1$. Using in addition that the transpose of an adjugate matrix is the adjugate of the transpose we have for the right hand side of (104) $${(\textrm{adj}(\Lambda\overline{N}\Lambda))^{b}}_{a}={(\textrm{adj}(\Lambda N^% {\dagger}\Lambda))_{a}}^{b}={(\textrm{adj}N^{-1})_{a}}^{b}={N_{a}}^{b}\,.$$ (107) Substituting back in (104) we indeed get (98). Complex conjugate Toda fields – ${\rm sl}(3,\mathbb{R})$ case: Again we start by rewriting (79) as $$\psi_{a}(z)=\mathrm{i}\operatorname{e}^{-\mathrm{i}\frac{\pi}{6}}z^{2}% \overline{\psi_{a}\left(1/\bar{z}\right)}\,.$$ (108) Then after encircling the point $z_{i}$ we have $${M_{a}}^{b}\psi_{b}=\mathrm{i}\operatorname{e}^{-\mathrm{i}\frac{\pi}{6}}z^{2}% {\overline{N}_{a}}^{c}\overline{\psi_{c}\left(1/\bar{z}\right)}\,.$$ (109) Since $N$ is real, by combining the last equation with (108), we immediately see that (98) holds. 3.6.3 Monodromy around all points In this section we will show that the monodromy of a contour encircling all points within the unit disk takes values in one of the real forms $\textrm{SL}(3,\mathbb{R})$ resp. $\textrm{SU}(1,2)$, without making the assumption that the monodromy around each of the points belongs to a real form of $\textrm{SL}(3,\mathbb{C})$. This fact will be of use in the counting of accessory parameters that follows this section. Let us denote by $\Gamma$ a counterclockwise contour encircling all the points within the unit disk, which has a base point $p$ at the boundary. The mirror contour, $\bar{\Gamma}$ encircles all the image points in an opposite, clockwise, orientation. We denote the contour encircling all the image points counterclockwise as $\bar{\Gamma}^{-1}$. Then $$M_{\bar{\Gamma}^{-1}}=M^{-1}_{\bar{\Gamma}}\,.$$ (110) For a counterclockwise contour encircling all of the points and their images we have $$M_{\text{All}}=M_{\Gamma}M_{\bar{\Gamma}^{-1}}=1$$ (111) which because of (110) implies $$M_{\Gamma}=M_{\bar{\Gamma}}\,.$$ (112) The latter is similar to (98). 
Then the rest of the proof follows the same steps in the case of monodromies of image points by inverting the logic of that section. Specifically, instead of starting with the group the monodromy matrix belongs to and trying to prove (98), we start with (112) and we find the conditions that specify the group the monodromy matrix belongs to. 3.6.4 Parameter counting and monodromy reduction We will now show that the condition that the monodromy matrices take values in $\textrm{SL}\left(3,\mathbb{R}\right)$ resp. $\textrm{SU}(1,2)$, in the generic situation, imposes precisely as many constraints as the number of available undetermined accessory parameters in the ODE, namely $6K-8$. Therefore, for generic values of the particle positions and quantum numbers, we expect our equations to have a unique solution. To count the number of required constraints, we follow the following logic (see Hulik:2016ifr ): we first compute the dimension of the space in which the monodromy matrices around the $K$ singular points in the unit disk take values if the accessory parameters are arbitrary, and subtract from this the dimension of the space in which they take values when the $\textrm{SL}\left(3,\mathbb{R}\right)$ (resp. $\textrm{SU}(1,2)$) condition is imposed. To compute the dimension of the former space, we note that the monodromy matrix $M_{i}$ takes values in $\textrm{SL}\left(3,\mathbb{C}\right)$ which has real dimension 16, leading to $16K$ parameters. Not all of these are independent however, since the coefficients $\epsilon^{(s)}_{i}$ of the most singular pole terms in ${\cal T}$ and ${\cal W}$ are fixed. The latter fix the trace invariants ${\rm tr}M_{i}$ and ${\rm tr}M_{i}^{2}$ and therefore subtract $4K$ parameters. Furthermore, as we proved, the reflection property for $\psi_{i}$ implies that the monodromy, when encircling all singular points in the unit disk, must take values in one of the real forms $\textrm{SL}\left(3,\mathbb{R}\right)$ resp. $\textrm{SU}(1,2)$ , thus subtracting $8$ parameters. Furthermore, by making a change of basis in the space of solutions of the ODE, all monodromies are conjugated by a constant matrix in $\textrm{SL}\left(3,\mathbb{R}\right)$ resp. $\textrm{SU}(1,2)$. Since this conjugation generically acts effectively (i.e. it sweeps out an 8-dimensional subspace), we should subtract another 8 parameters, leading to the desired dimension $16K-4K-8-8=12K-16$. Now we compute the dimension of the space of monodromy matrices after imposing the $\textrm{SL}\left(3,\mathbb{R}\right)$ resp. $\textrm{SU}(1,2)$ conditions. The dimension of these real groups is 8, leading to $8K$ parameters a priori. Again the invariants ${\rm tr}M_{i}$ and ${\rm tr}M_{i}^{2}$ are fixed in terms of the $\epsilon^{(s)}_{i}$, but now these invariants are automatically real, therefore subtracting $2K$ parameters. The reality constraint on the monodromy when encircling all points in the unit disk is now automatically satisfied, and the overall conjugation by a constant matrix in $\textrm{SL}\left(3,\mathbb{R}\right)$ resp. $\textrm{SU}(1,2)$ subtract $8$ parameters. This leads to a dimension of $8K-2K-8=6K-8$. Computing the difference of these two dimensions leads to the number of $6K-8$ constraints we need to impose to reduce the monodromy to $\textrm{SL}\left(3,\mathbb{R}\right)$ resp. $\textrm{SU}(1,2)$, and matches precisely the number of undetermined accessory parameters at our disposal. 
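The arithmetic of this counting can be summarized in a few lines. The sketch below (ours, purely bookkeeping) re-derives the two dimensions symbolically and confirms that their difference equals the $6K-8$ accessory parameters found in section 3.4.

# Illustrative bookkeeping check (ours, not from the paper) of the counting above,
# using sympy with a symbolic number of insertions K.
import sympy as sp

K = sp.symbols('K', positive=True, integer=True)

# Monodromies in SL(3,C): real dimension 16 per point, minus the fixed trace
# invariants (4 real per point), minus reality of the total monodromy (8),
# minus the overall conjugation freedom (8).
dim_unconstrained = 16*K - 4*K - 8 - 8            # = 12K - 16

# Monodromies in SL(3,R) or SU(1,2): real dimension 8 per point, minus the
# (now automatically real) trace invariants (2 per point), minus conjugation (8).
dim_constrained = 8*K - 2*K - 8                   # = 6K - 8

# Number of conditions needed = difference of the two dimensions; it should
# equal the 2K-3 + 4K-5 = 6K-8 accessory parameters of section 3.4.
print(sp.simplify(dim_unconstrained - dim_constrained - (6*K - 8)))   # -> 0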
We conclude that, generically, imposing the single-valuedness of the Toda fields will precisely fix all the accessory parameters. 4 Monodromy problem for classical ${\cal W}_{3}$ blocks In the previous section we have shown that constructing backreacted bulk solutions containing certain spinning particles in the bulk higher spin theory reduces to solving a certain monodromy problem, where the accessory parameters in an ODE are fixed by requiring monodromies to be in SL$(3,\mathbb{R})$ resp. SU$(1,2)$. Another context where a similar monodromy problem appears is in the construction of ${\cal W}_{3}$ blocks at large central charge. In this case, the monodromy properties of the ODE are determined by the quantum numbers of the exchanged ${\cal W}_{3}$ families. In this section we will show that the bulk monodromy problem is in fact identical to the one which determines a ${\cal W}_{3}$ vacuum block in a specific channel. We start with a brief discussion of the properties of $\mathcal{W}_{N}$ blocks. $\mathcal{W}_{3}$ blocks As for the Virasoro algebra, the $\mathcal{W}_{N}$ blocks are defined to be the building blocks of correlators which capture the purely kinematical information that is fixed by the $\mathcal{W}_{N}$ Ward identities. The $\mathcal{W}_{N}$-blocks are much less studied objects than their Virasoro counterparts, see however Bowcock:1993wq , Wyllard:2009hg , Fateev:2011qa , Coman:2017qgv . One peculiarity of ${\cal W}_{N}$ blocks as opposed to Virasoro blocks goes back to the well-known fact that the ${\cal W}_{N}$ Ward identities don’t allow one to express arbitrary correlation functions in terms of correlators involving only ${\cal W}_{N}$ primaries Bowcock:1993wq . This leads to the property that generic ${\cal W}_{N}$ blocks depend on an infinite number of extra parameters besides the familiar dependence on the chosen channel, the cross-ratios and the quantum numbers of the external and exchanged primaries. This arbitrariness is however absent for the vacuum blocks which will turn out to be the ones relevant for our purposes. Let us illustrate these features for the case of the four-point block (which will be relevant for constructing a two-centered bulk solution), and comment on the generalization to higher-point blocks at the end. We start by considering a correlation function of four $\mathcal{W}_{3}$ primary operators (in order to simplify equations, we display only the holomorphic coordinate dependence in our formulas) $$\mathcal{A}_{4}=\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2}){\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle\,.$$ Here, $\mathcal{O}_{\eta}$ is a $\mathcal{W}_{3}$-primary operator characterized by its weights $\eta=(h,w)$ with respect to $L_{0}$ and $W_{0}$. The four-point function can be decomposed into a sum over exchanged ${\cal W}_{3}$-families $$\mathcal{A}_{4}=\sum_{\eta\in\{\text{all primaries}\}}\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})\Pi_{\eta}{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle\,,$$ (113) where $\Pi_{\eta}$ is a projector onto a particular $\mathcal{W}_{3}$ descendant family of a primary state $|\eta\rangle=\mathcal{O}_{\eta}|0\rangle$, $$\Pi_{\eta}=\sum_{I_{1,2},J_{1,2}}L_{-I_{1}}W_{-J_{1}}|\eta\rangle G^{J_{1}I_{1},I_{2}J_{2}}\langle\eta|W_{J_{2}}L_{I_{2}}\,.$$ (114) Here capital $I$ is a multi-index so that $L_{I}$ is an abbreviation for $L_{i_{1}}L_{i_{2}}\ldots L_{i_{k}}$.
The $G^{J_{1}I_{1},I_{2}J_{2}}$ are elements of the inverse of the inner product matrix $G_{J_{1}I_{1},I_{2}J_{2}}=\langle\eta|W_{J_{1}}L_{I_{1}}L_{-I_{2}}W_{-J_{2}}|\eta\rangle$. Mimicking the definition of Virasoro conformal blocks, we consider the ratio $${\cal B}_{\eta}={\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})\Pi_{\eta}{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle\over\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})|\eta\rangle\langle\eta|{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle}\,.$$ (115) However, unlike in the Virasoro case, the quantity ${\cal B}_{\eta}$ is not completely fixed by ${\cal W}_{3}$ kinematics and still depends on dynamical information. As shown in Bowcock:1993wq , Wyllard:2009hg , ${\cal B}_{\eta}$ generically still depends on the ratios of three-point functions $$C_{n}={\langle\eta_{1}|{\cal O}_{\eta_{2}}(1)(W_{-1})^{n}|\eta\rangle\over\langle\eta_{1}|{\cal O}_{\eta_{2}}(1)|\eta\rangle},\qquad\bar{C}_{n}={\langle\eta|(W_{1})^{n}{\cal O}_{\eta_{3}}(1)|\eta_{4}\rangle\over\langle\eta|{\cal O}_{\eta_{3}}(1)|\eta_{4}\rangle}.$$ (116) One therefore defines the ${\cal W}_{3}$ block to be an object depending on an extra set of free parameters $C_{n},\bar{C}_{n}$, which in a specific ${\cal W}_{3}$ CFT should be specialized to take the appropriate values (116). It will be important to note that this infinite arbitrariness is absent for the vacuum block, since the $C_{n},\bar{C}_{n}$ vanish when $|\eta\rangle=|0\rangle$ is the vacuum (another case where the arbitrariness is absent is when the external operators are semi-degenerate Wyllard:2009hg ). Summarizing, we have represented the correlation function $\mathcal{A}_{4}$ as $${\cal A}_{4}=\sum_{\eta}\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})|\eta\rangle\langle\eta|{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle{\cal B}_{\eta}\,.$$ (117) Note that in writing this expansion, we have also chosen a ‘channel’, i.e. we have chosen to fuse ${\cal O}_{\eta_{1}}$ with ${\cal O}_{\eta_{2}}$ and ${\cal O}_{\eta_{3}}$ with ${\cal O}_{\eta_{4}}$. Similarly, one can perform a decomposition of an $n$-point correlation function by inserting $n-3$ projectors for the exchanged primaries and dividing by $n-2$ three-point functions, analogous to (115). Inserting a degenerate primary As is the case for Virasoro blocks (see e.g. Hartman:2013mia ) the ${\cal W}_{N}$ blocks in a certain large-$c$ limit can be determined by solving a monodromy problem Coman:2017qgv deBoer:2014sna . To derive this, one considers the amplitude with an insertion of an additional degenerate primary and makes use of the constraints imposed by the decoupling of its null descendants.
To this end we consider the auxiliary object $\mathcal{B}_{\eta}[\Psi]$, which differs from the block ${\cal B}_{\eta}$ defined in (115) by an insertion of an extra degenerate primary $\Psi$ in the numerator: $$\mathcal{B}_{\eta}[\Psi]={\langle\Psi(z){\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})\Pi_{\eta}{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle\over\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})|\eta\rangle\langle\eta|{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle}.$$ (118) Here, $\Psi$ is a specific degenerate operator associated to a state which satisfies shortening conditions of the form: $$\begin{split}\displaystyle\left[L_{-1}+\kappa_{1,1}W_{-1}\right]|\Psi\rangle&\displaystyle=0\,,\\ \displaystyle\left[L^{2}_{-1}+\kappa_{2,1}L_{-2}+\kappa_{2,2}W_{-2}\right]|\Psi\rangle&\displaystyle=0\,,\\ \displaystyle\left[L^{3}_{-1}+\kappa_{3,1}L_{-2}L_{-1}+\kappa_{3,2}L_{-3}+\kappa_{3,3}W_{-3}\right]|\Psi\rangle&\displaystyle=0\,.\end{split}$$ (119) The explicit coefficients can be found in Coman:2017qgv ; in what follows we will only need their large-$c$ behaviour $\kappa_{3,1}\approx 2\kappa_{3,2}\approx\kappa_{3,3}\approx{24\over c}$. Upon inserting the last equation into the correlation function (118), we obtain a shortening relation of the form $$\left[\partial^{3}+\kappa_{3,1}\hat{\cal T}(z)\partial+\kappa_{3,2}\hat{\cal T}^{\prime}(z)+\kappa_{3,3}\hat{\cal W}(z)\right]\mathcal{B}_{\eta}[\Psi]=0\,.$$ (120) The operators $\hat{\cal T},\hat{\cal W}$ which act on the $\mathcal{B}_{\eta}[\Psi]$ are defined as $$\displaystyle\hat{\cal T}$$ $$\displaystyle=$$ $$\displaystyle\sum_{i}\frac{h_{i}}{(z-z_{i})^{2}}+\frac{1}{(z-z_{i})}\frac{\partial\ }{\partial z_{i}}\,,$$ (121) $$\displaystyle\hat{\cal W}$$ $$\displaystyle=$$ $$\displaystyle\sum_{i}\frac{w_{i}}{(z-z_{i})^{3}}+\frac{W^{(i)}_{-1}}{(z-z_{i})^{2}}+\frac{W^{(i)}_{-2}}{(z-z_{i})}\,,$$ (122) where $h_{i}$ and $w_{i}$ are the conformal dimension and spin-3 charge of the $i^{\rm th}$ operator. By $W^{(i)}_{-k}$ we denote the negative $k^{\rm th}$ mode acting on the $i^{\rm th}$ operator inside the $\mathcal{B}_{\eta}[\Psi]$, for example $W^{(1)}_{-k}\mathcal{B}_{\eta}[\Psi]$ is shorthand for the ratio $$W^{(1)}_{-k}\mathcal{B}_{\eta}[\Psi]=\frac{\langle\Psi(z)\left(W_{-k}{\cal O}_{\eta_{1}}\right)(z_{1}){\cal O}_{\eta_{2}}(z_{2})\Pi_{\eta}{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle}{\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})|\eta\rangle\langle\eta|{\cal O}_{\eta_{3}}(z_{3}){\cal O}_{\eta_{4}}(z_{4})\rangle}\,.$$ (123) As it stands, equation (120) is a differential equation which couples different amplitudes. We will now argue that, in a suitable large-$c$ limit, it reduces to an ODE for a single function. Classical, large $c$, limit We now consider the limit of large central charge $c$, in which the primary operators ${\cal O}_{\eta_{i}}$ are assumed to be “heavy” in the sense that $\eta_{i}/c$ remains finite in the limit. It has been argued, see e.g.
Harlow:2011ny , although not proven, that in this limit the general conformal block exponentiates $$\lim_{c\rightarrow\infty}\mathcal{B}_{\eta}=\operatorname{e}^{-\frac{c}{6}b_{\nu}}\,.$$ (124) Here, the parameters $\nu_{i}=(\epsilon_{i},\delta_{i})$ on the right hand side contain the rescaled “classical” weights $$\epsilon_{i}=\frac{24}{c}h_{i},\qquad\delta_{i}=\frac{24}{c}w_{i}\,.$$ (125) The behaviour of (124) is reminiscent of a saddle-point approximation, where $b_{\nu}$ is the action of the saddle-point. Now let’s consider the quantity $\mathcal{B}_{\eta}[\Psi]$ in (118) with the insertion of the degenerate primary $\Psi$. Since $\Psi$ is “light”, in the sense that its charges are of order 1 at large $c$, it is natural to assume that its presence does not change the saddle point and that $\mathcal{B}_{\eta}[\Psi]$ factorizes as $$\lim_{c\rightarrow\infty}\mathcal{B}_{\eta}[\Psi]=\psi_{\nu}\operatorname{e}^{-\frac{c}{6}b_{\nu}}\,.$$ (126) We expect a similar factorization to occur in the case of $W^{(i)}_{-k}\mathcal{B}_{\eta}[\Psi]$ since the action of $W^{(i)}_{-k}$ does not change the leading behaviour of $\mathcal{W}_{3}$ weights in the large $c$ limit, so that $$\lim_{c\rightarrow\infty}{W^{(i)}_{-k}\mathcal{B}_{\eta}[\Psi]\over W^{(i)}_{-k}\mathcal{B}_{\eta}}=\psi_{\nu}\,.$$ (127) An important assumption here is that this factorization involves the same function $\psi_{\nu}$ as in (126), see Harlow:2011ny for a justification. Under these assumptions, the decoupling equation (120) reduces in the large-$c$ limit to a holomorphic ODE for the “wave function” $\psi_{\nu}$ $$\left(\partial^{3}+{\cal T}\partial+\frac{1}{2}\partial{\cal T}+{\cal W}\right)\psi_{\nu}=0\,.$$ (128) Here ${\cal T},{\cal W}$ are the functions $$\displaystyle{\cal T}(z)$$ $$\displaystyle=$$ $$\displaystyle\sum_{i}\left({\epsilon_{i}\over(z-z_{i})^{2}}+{c_{i}\over(z-z_{i})}\right)\,,$$ (129) $$\displaystyle{\cal W}(z)$$ $$\displaystyle=$$ $$\displaystyle\sum_{i}\left({\delta_{i}\over(z-z_{i})^{3}}+{d_{i}\over(z-z_{i})^{2}}+{a_{i}\over(z-z_{i})}\right)\,,$$ (130) with coefficients $\epsilon_{i},\delta_{i}$ defined in (125), and the accessory parameters $a_{i},d_{i},c_{i}$ given by the following limits $$\displaystyle c_{i}=\lim_{c\rightarrow\infty}\frac{24}{c}\frac{L^{(i)}_{-1}\mathcal{B}_{\eta}}{\mathcal{B}_{\eta}},\qquad d_{i}=\lim_{c\rightarrow\infty}\frac{24}{c}\frac{W^{(i)}_{-1}\mathcal{B}_{\eta}}{\mathcal{B}_{\eta}},\quad\text{and}\quad a_{i}=\lim_{c\rightarrow\infty}\frac{24}{c}\frac{W^{(i)}_{-2}\mathcal{B}_{\eta}}{\mathcal{B}_{\eta}}\,.$$ (131) The exponentiation of the classical block (124) implies that the $c_{i}$ are finite and given by $$c_{i}=-4\partial_{z_{i}}b_{\nu}.$$ (132) It is natural to expect that the $a_{i}$ and $d_{i}$ similarly remain finite in the limit. To summarize, we have found associated to a classical ${\cal W}_{3}$ block an ODE (128) which is of the same form as the auxiliary ODE (58) determining the solutions of the ${\cal A}_{2}$ Toda system. Similarly to what happens for classical Virasoro blocks Hartman:2013mia , the choice of the exchanged ${\cal W}_{3}$ primary $\eta$ determines the monodromy properties of the ODE (128).
To see this, note that the quantity (118) contains the four-point function $$\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})\Psi(z)|\eta\rangle=\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{2}}(z_{2})\Psi(z){\cal O}_{\eta}(0)\rangle\,.$$ (133) Using the OPE between the light operator $\Psi(z)$ and the heavy operator ${\cal O}_{\eta}(0)$ one can show that the trace invariants ${\rm tr}M,\ {\rm tr}M^{2}$ of the monodromy matrix as $\Psi(z)$ encircles the origin are fixed in terms of $\eta$, and this is of course also the same monodromy when $z$ encircles both $z_{1}$ and $z_{2}$. This discussion generalizes straightforwardly to classical $n$-point blocks: they are similarly determined by an ODE whose monodromy properties are determined by the choice of exchanged primaries in the chosen channel. Relating the monodromy problems Now we are ready to relate the monodromy problem for the classical ${\cal W}_{3}$ four-point block to the monodromy problem which determines a 2-centered solution in spin-3 gravity. In studying the latter problem we encountered an ODE on the complex plane of the form (128) with identical singularities in two pairs of image points. Therefore we consider a CFT four-point function with a primary ${\cal O}_{\eta_{1}}$ inserted in the points $z_{1}$ and $1/\bar{z}_{1}$, and a second primary ${\cal O}_{\eta_{2}}$ inserted in $z_{2}$ and $1/\bar{z}_{2}$. Next we should choose a channel in which to perform the conformal block expansion. It turns out that the relation between the two monodromy problems is simplest if we choose the “mirror channel” in which we fuse the operators located in image points (i.e. $z_{1}$ and $1/\bar{z}_{1}$ resp. $z_{2}$ and $1/\bar{z}_{2}$) together in pairs. The reason is that we derived in section 3.6.2 above that, in the bulk problem, the monodromy when encircling a pair of image points is the identity. From our discussion in the previous paragraph, this means that the exchanged primary in the corresponding ${\cal W}_{3}$ block in the mirror channel is the identity operator. In other words, our bulk monodromy problem determines a ${\cal W}_{3}$ vacuum block. In summary, we have argued that the monodromy problem determining a 2-centered solution in spin-3 gravity is equivalent to that which determines the classical vacuum 4-point block $$b_{0}=-\lim_{c\rightarrow\infty}{6\over c}\ln\frac{\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{1}}(\frac{1}{\bar{z}_{1}})\Pi_{0}{\cal O}_{\eta_{2}}(z_{2}){\cal O}_{\eta_{2}}(\frac{1}{\bar{z}_{2}})\rangle}{\langle{\cal O}_{\eta_{1}}(z_{1}){\cal O}_{\eta_{1}}(\frac{1}{\bar{z}_{1}})|0\rangle\langle 0|{\cal O}_{\eta_{2}}(z_{2}){\cal O}_{\eta_{2}}(\frac{1}{\bar{z}_{2}})\rangle}\,.$$ (134) The argument can be generalized in a straightforward manner to show that the monodromy problem for a $K$-centered solution determines a classical vacuum $2K$-point block in a “mirror” channel where the operators in image points are fused together in pairs (see fig. 1). We should stress that, due to the fact that ${\cal W}_{3}$ vacuum blocks are unique and don’t depend on extra parameters like (116), the derived correspondence is really one-to-one: solving the monodromy problem for the $K$-centered solution determines the $2K$-point vacuum block through (132) and, conversely, from the knowledge of the vacuum block we can in principle derive the accessory parameters in the ODE and construct the bulk multi-centered solution.
(Unlike for $c_{i}$, no closed form expression like (132) exists for the parameters $a_{i}$ and $d_{i}$. So it seems that in order to extract them from (131) one needs not only $\mathcal{B}_{0}$ but also $W^{(i)}_{k}\mathcal{B}_{0}$. These, however, for the case of the vacuum conformal block, can in principle be derived from $\mathcal{B}_{0}$ using the $\mathcal{W}_{3}$ Ward identities and the properties of the $\mathcal{W}_{3}$ vacuum.) 5 Generalization to Arbitrary Spin Here we present a generalization of the above results to higher spin theories with spins $2,\ldots,N$. Our results in this section are either based on generic arguments or are extrapolated from explicit calculations we carried out for $N=3$ and $N=4$. Many of the details of Toda theory related to this section are discussed in appendix C. We focus on gauge connections that belong either to the maximal, $\textrm{sl}(N,\mathbb{R})$, or to the next-to-maximal, $\textrm{su}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$, non-compact real form of $\textrm{sl}(N,\mathbb{C})$. 5.1 The Toda gauge connection First we observe that for $N=3$, the left connection (39) in Toda gauge can be rewritten for both values of $\sigma$ as $$\begin{split}\displaystyle A_{T}=&\displaystyle\frac{1}{2}\left(\partial\phi_{1}\mathrm{d}z-\bar{\partial}\phi_{1}\mathrm{d}\bar{z}\right)H_{1}+\operatorname{e}^{-\phi_{1}+\frac{1}{2}\phi_{2}}\left(\mathrm{i}E_{1}^{-}\mathrm{d}z-\mathrm{i}E_{1}^{+}\mathrm{d}\bar{z}\right)\\ &\displaystyle+\frac{1}{2}\left(\partial\phi_{2}\mathrm{d}z-\bar{\partial}\phi_{2}\mathrm{d}\bar{z}\right)H_{2}+\operatorname{e}^{-\phi_{2}+\frac{1}{2}\phi_{1}}\left(\mathrm{i}E_{2}^{-}\mathrm{d}z-\mathrm{i}E_{2}^{+}\mathrm{d}\bar{z}\right)\,,\end{split}$$ (135) where the matrices $H_{i}$ and $E^{\pm}_{i}$ are defined in appendix B. From (135) we can generalize to arbitrary spin by writing the connection $$A_{T}=\frac{1}{2}\left(\partial\Phi\mathrm{d}z-\bar{\partial}\Phi\mathrm{d}\bar{z}\right)+\mathrm{i}E_{i}^{-}e^{-\frac{1}{2}\alpha_{i}(\Phi)}\mathrm{d}z-\mathrm{i}E_{i}^{+}e^{-\frac{1}{2}\alpha_{i}(\Phi)}\mathrm{d}\bar{z}\,,$$ (136) where $\Phi=H_{i}\phi^{i}$. Then (136) becomes $$A_{T}=\frac{1}{2}\left(\partial\phi^{i}\mathrm{d}z-\bar{\partial}\phi^{i}\mathrm{d}\bar{z}\right)H_{i}+\mathrm{i}E_{i}^{-}e^{-\frac{1}{2}C_{ij}\phi^{j}}\mathrm{d}z-\mathrm{i}E_{i}^{+}e^{-\frac{1}{2}C_{ij}\phi^{j}}\mathrm{d}\bar{z}\,,$$ (137) where $C_{ij}=\alpha_{i}(H_{j})$ is the Cartan matrix and is obtained from $$\left[H_{i},E_{j}^{\pm}\right]=\pm\alpha_{j}\left(H_{i}\right)E_{j}^{\pm}\,.$$ (138) In matrix form we have $$C=\begin{pmatrix}2&-1&0&\ldots\\ -1&2&-1&\ldots\\ 0&-1&2&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}\,.$$ (139) The flatness of $A_{T}$ is equivalent to the Toda system being satisfied.
$$\begin{matrix}\partial\bar{\partial}\phi_{1}+e^{-2\phi_{1}+\phi_{2}}=0\,,\\ \partial\bar{\partial}\phi_{2}+e^{-2\phi_{2}+\phi_{1}+\phi_{3}}=0\,,\\ \vdots\\ \partial\bar{\partial}\phi_{j}+e^{-2\phi_{j}+\phi_{j-1}+\phi_{j+1}}=0\,,\\ \vdots\\ \partial\bar{\partial}\phi_{N-1}+e^{-2\phi_{N-1}+\phi_{N-2}}=0\,.\end{matrix}$$ (140) The argument to be made is that (137) can be brought into a form similar to (39) with the presence of a parameter $\sigma$ such that $$\displaystyle\tilde{A}\in\textrm{sl}(N,\mathbb{R}),$$ $$\displaystyle\sigma=$$ $$\displaystyle 1:$$ $$\displaystyle\phi_{i}$$ $$\displaystyle=\bar{\phi}_{N-i}\,,$$ $$\displaystyle\tilde{A}\in\textrm{su}\left(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil\right),$$ $$\displaystyle\sigma=$$ $$\displaystyle-1:$$ $$\displaystyle\phi_{i}$$ $$\displaystyle\in\mathbb{R}.$$ (141) Similarly we choose the right connection $\bar{A}_{T}$ to be proportional to the principally embedded Cartan element of $\textrm{sl}(2,\mathbb{R})$ inside $\textrm{sl}\left(N,\mathbb{R}\right)$ respectively $\textrm{su}\left(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil\right)$ $$\bar{A}_{T}=\mathrm{i}\frac{1}{2}\sum_{i}i(N-i)H_{i}\mathrm{d}\tilde{t}=\mathrm{i}\bar{L}_{0}\mathrm{d}\tilde{t}\,,$$ (142) which is trivially flat. As discussed in appendix D, in the presence of backreacting point particles the equations of motion become $$\begin{matrix}\partial\bar{\partial}\phi_{1}+e^{-2\phi_{1}+\phi_{2}}=\frac{\pi}{k}\sum_{i}\alpha^{(i)}_{1}\delta^{2}\left(z-z_{i},\bar{z}-\bar{z}_{i}\right)\,,\\ \partial\bar{\partial}\phi_{2}+e^{-2\phi_{2}+\phi_{1}+\phi_{3}}=\frac{\pi}{k}\sum_{i}\alpha^{(i)}_{2}\delta^{2}\left(z-z_{i},\bar{z}-\bar{z}_{i}\right)\,,\\ \vdots\\ \partial\bar{\partial}\phi_{j}+e^{-2\phi_{j}+\phi_{j-1}+\phi_{j+1}}=\frac{\pi}{k}\sum_{i}\alpha^{(i)}_{j}\delta^{2}\left(z-z_{i},\bar{z}-\bar{z}_{i}\right)\,,\\ \vdots\\ \partial\bar{\partial}\phi_{N-1}+e^{-2\phi_{N-1}+\phi_{N-2}}=\frac{\pi}{k}\sum_{i}\alpha^{(i)}_{N-1}\delta^{2}\left(z-z_{i},\bar{z}-\bar{z}_{i}\right)\,,\end{matrix}$$ (143) where for the $i$-th particle we assumed the decomposition of the momentum $$P_{L,i}=\sum_{j}\alpha_{j}^{(i)}H_{j}\,.$$ (144) The constants $\alpha_{j}^{(i)}$ can be expressed in terms of the quantum numbers of the particles from the constraints coming from the Lagrange multipliers, as explained in appendix D.1. As explained in appendix C there are two systems of associated ODEs (180) and (183) expressed in terms of holomorphic and antiholomorphic currents $U^{(s)}_{1,2}$, $V^{(s)}_{1,2}$ from which we can construct (quasi-)primary currents $\mathcal{W}^{(s)}$, where $s=2,\ldots,N$. 5.2 Properties of the currents Pole structure: If near the sources $z_{i}$ we neglect the potential term compared to the kinetic term, then the Toda fields $\phi_{i}$ behave near $z=z_{i}$ as $$\phi_{j}\sim\frac{\alpha_{j}^{(i)}}{k}\ln|z-z_{i}|\,.$$ (145) Thus the solution of the Toda equations on the disk specifies meromorphic (quasi-)primary currents $\mathcal{W}^{(s)}$ with $s=2,\ldots,N$. The sources $\alpha_{j}^{(i)}$ fix the most singular terms in $\mathcal{W}^{(s)}$ which take the form $$\mathcal{W}^{(s)}(z)=\sum_{i=1}^{K}\frac{\epsilon^{(s)}_{i}}{\left(z-z_{i}\right)^{s}}+\ldots\,,$$ (146) where the ellipses denote lower order poles and a regular part. The constants $\epsilon^{(s)}_{i}$ are specified in terms of $\alpha^{(i)}_{j}$ (with $j=1,\ldots,N-1$) by examining the form of the Toda fields $\phi_{j}$ near a specific source.
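Before turning to the doubling trick, we note that the Lie-algebraic input used above, the Chevalley generators of appendix B and the Cartan matrix (139), can be cross-checked numerically. The sketch below (ours, for illustration only) builds $H_{i}$, $E_{i}^{\pm}$ for a given $N$ from (166) and reads off $C_{ij}=\alpha_{i}(H_{j})$ from the commutator (138), reproducing the tridiagonal matrix (139).

# Illustrative numerical cross-check (ours) of (138)-(139) and appendix B.
# Indices run from 0 here, i.e. H[0] corresponds to H_1 of (166).
import numpy as np

def chevalley(N):
    H, Ep, Em = [], [], []
    for i in range(N - 1):
        h = np.zeros((N, N)); h[i, i], h[i + 1, i + 1] = 1, -1      # (H_i)_{jk}
        ep = np.zeros((N, N)); ep[i, i + 1] = 1                     # (E_i^+)_{jk}
        em = np.zeros((N, N)); em[i + 1, i] = 1                     # (E_i^-)_{jk}
        H.append(h); Ep.append(ep); Em.append(em)
    return H, Ep, Em

N = 5
H, Ep, Em = chevalley(N)
C = np.zeros((N - 1, N - 1))
for i in range(N - 1):
    for j in range(N - 1):
        # read off alpha_j(H_i) from [H_i, E_j^+] = alpha_j(H_i) E_j^+, cf. (138)
        comm = H[i] @ Ep[j] - Ep[j] @ H[i]
        C[j, i] = comm[j, j + 1]            # coefficient multiplying E_j^+
print(C)   # tridiagonal: 2 on the diagonal, -1 next to it, cf. (139)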
Doubling trick: As shown for the $N=3$ case (49), the currents $\mathcal{W}^{(s)}(z)$ must satisfy the following reality condition on the boundary circle $$\displaystyle\textrm{sl}(N,\mathbb{R}),\ \sigma=1:$$ $$\displaystyle\left.\left(\mathrm{i}^{s}z^{s}{\cal W}^{(s)}\right)\right\rvert_{|z|=1}\in\mathbb{R},$$ $$\displaystyle\textrm{su}\left(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil\right),\ \sigma=-1:$$ $$\displaystyle\left.\left(z^{s}{\cal W}^{(s)}\right)\right\rvert_{|z|=1}\in\mathbb{R}.$$ (147) The reality condition is imposed by using a ‘doubling trick’ and extending $\mathcal{W}^{(s)}(z)$ to meromorphic functions on the complex plane in a suitable manner. Using the Schwarz reflection principle we find that the appropriate reflection properties are $$\mathcal{W}^{(s)}(z)=\frac{\left(-\sigma\right)^{s}}{z^{2s}}\bar{\mathcal{W}}^{(s)}\left(\frac{1}{z}\right)\,.$$ (148) Constraints on the accessory parameters: As a result of (148) $\mathcal{W}^{(s)}$ now has poles both at the $z_{i}$ and their image points $\bar{z}_{i}^{-1}$. Thus $\mathcal{W}^{(s)}$ is of the form $$\mathcal{W}^{(s)}(z)=\sum_{i=1}^{K}\left(\frac{\epsilon^{(s)}_{i}}{\left(z-z_{i}\right)^{s}}+\frac{\tilde{\epsilon}^{(s)}_{i}}{\left(z-1/\bar{z}_{i}\right)^{s}}+\sum_{l=1}^{s-1}\left(\frac{c_{i}^{(s,l)}}{\left(z-z_{i}\right)^{l}}+\frac{\tilde{c}_{i}^{(s,l)}}{\left(z-1/\bar{z}_{i}\right)^{l}}\right)\right)\,,$$ (149) where in the above equation we assumed for simplicity there is no source located at the origin and as a result there is also no image source at infinity. As in the $\textrm{sl}(3)$ case the parameters $c_{i}^{(s,l)},\tilde{c}_{i}^{(s,l)}$ are the accessory parameters and are not determined by the $\alpha^{(i)}_{j}$. Instead they are determined by solving a monodromy problem, which arises from demanding that the Toda fields are single-valued. Once again not all of the parameters of (149) are independent since we have to impose the reflection condition (148). For each of the currents $\mathcal{W}^{(s)}$ we have in total $4sK$ real parameters of which $4(s-1)K$ are accessory parameters, where $K$ denotes the number of insertions within the unit disk. Substituting (149) in (148) and requiring the equality of poles at each order near the $z_{i}$ we get $2sK$ real conditions. $2K$ out of them are $$\epsilon^{(s)}_{i}=\sigma^{s}\bar{\tilde{\epsilon}}^{(s)}_{i}\,.$$ (150) Thus the parameters $\tilde{\epsilon}^{(s)}_{i}$ are also determined in terms of $\alpha^{(i)}_{j}$. So we are left with $4(s-1)K$ accessory parameters and $2(s-1)K$ real conditions on them, which leaves $2(s-1)K$ parameters. Substituting these conditions in (149) and requiring (148) to hold at the origin we get an additional $2s-1$ constraints. Thus from each $\mathcal{W}^{(s)}(z)$ we are left with $$2(s-1)K-(2s-1)\,,$$ (151) independent real parameters. Summing over $s$ from $2$ to $N$ we get that from all the currents $\mathcal{W}^{(s)}$ we have in total $$KN(N-1)-(N^{2}-1)\,,$$ (152) parameters left that should be fixed by solving the monodromy problem. 5.3 Doubling trick for $\psi_{i}$ To prove the doubling trick for $\psi_{i}$ one needs the associated ODE expressed in terms of the (quasi-)primary currents $\mathcal{W}^{(s)}$. That however is not known for general $N$ and has to be worked out case by case. Thus here we will only sketch the general steps of the proof for arbitrary $N$, which justify our choice when it comes to the doubling trick of $\psi_{i}$.
Under a conformal transformation $z\rightarrow\tilde{z}=f(z)$ the fields in the associated ODE for $\psi_{i}(z)$ transform as $$\tilde{\mathcal{W}}^{(s)}\left(\tilde{z}\right)=\left(f^{\prime}\right)^{-s}% \left(\mathcal{W}^{(s)}\left(z\right)-\delta_{s,2}\beta_{N}S\left(f,z\right)% \right)\,,\quad\tilde{\psi}_{i}\left(\tilde{z}\right)=\left(f^{\prime}\right)^% {\frac{N-1}{2}}\psi_{i}\left(z\right)\,.$$ (153) The transformation of $\psi_{i}$ above was derived from the transformation of $\phi_{1}$ (184) and the fact that $\psi_{i}$ is the holomorphic part of $\operatorname{e}^{\phi_{1}}$. These transformations should leave the associated ODE conformally invariant. Upon performing a conformal transformation with $f(z)=1/z$ and using the reflection property (148) we expect to get an ODE with the presence of the parameter $\sigma$ from which one can deduce the following. For the $\textrm{su}\left(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil\right)$ case we have $\sigma=-1$ and thus $$\Psi(z)=Sz^{N-1}\bar{X}\left(\frac{1}{z}\right)\,,$$ (154) with $X(z)$ given by (208). Then $$\psi_{a}(z)=S\epsilon_{ab_{1}\ldots b_{N-1}}\Lambda^{b_{1}\tilde{b}_{1}}\ldots% \Lambda^{b_{N-1}\tilde{b}_{N-1}}\bar{\psi}_{\tilde{b}_{1}}\left(1/z\right)% \partial_{\frac{1}{z}}\bar{\psi}_{\tilde{b}_{2}}\left(1/z\right)\ldots\partial% _{\frac{1}{z}}^{N-2}\bar{\psi}_{\tilde{b}_{N-1}}\left(1/z\right)\,.$$ (155) For the $\textrm{sl}\left(N,\mathbb{R}\right)$ case we have $\sigma=1$ and thus $$\Psi(z)=\tilde{S}z^{N-1}\bar{\Psi}\left(\frac{1}{z}\right)\,.$$ (156) The matrices $S$ and $\tilde{S}$ are proportional to the identity and are respectively given by (223) and (231). 5.4 Monodromies The monodromies work out similarly to the $\textrm{SL}\left(3,\mathbb{C}\right)$ case. Thus for this section we mostly state the results as the proof is a direct generalization of what was described in section 3.6. In general after encircling a singular point $z_{i}$ the solution transforms as $$\Psi\rightarrow M_{i}\Psi\,,$$ (157) where $M_{i}\in\textrm{SL}(N,\mathbb{C})$ is a monodromy matrix. However requiring the Toda fields to be single valued constrains the monodromy matrix. Specifically for real Toda fields, since we have $$\operatorname{e}^{\phi_{1}}=\Psi^{\dagger}\Lambda\Psi\,,$$ (158) single valuedness requires that $$M_{i}^{\dagger}\Lambda M_{i}=\Lambda\,,$$ (159) with $\Lambda=\textrm{diag}\left(1,-1,1,-1,\ldots\right)$. This means that we must have $M_{i}\in\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$. For complex conjugate Toda fields we have $$\operatorname{e}^{\phi_{1}}\sim\epsilon^{ab_{1}\ldots b_{N-1}}\psi_{a}\bar{% \psi}_{b_{1}}\bar{\partial}\bar{\psi}_{b_{2}}\ldots\bar{\partial}^{N-2}\bar{% \psi}_{b_{N-1}}\,.$$ (160) Requiring the field to be single valued and by using the definition of the adjugate matrix we get $$M_{i}=\bar{M}_{i}\,,$$ (161) which implies that $M\in\textrm{SL}\left(N,\mathbb{R}\right)$. Using the reflection property of $\Psi$, (156) resp. (155) one can show that, provided the monodromy matrices belong to $\textrm{SL}\left(N,\mathbb{R}\right)$ resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$, the monodromy matrix of a contour encircling both a point and its image is the identity matrix. In a similar manner one can show that the monodromy matrix of a contour encircling all points within the unit disk is an element of $\textrm{SL}\left(N,\mathbb{R}\right)$ resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$. 
Parameter counting and monodromy reduction Similarly to section 3.6.4 we will show that the condition that the monodromy matrices take values in ${\rm SL}(N,\mathbb{R})$, respectively $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$, imposes in general precisely as many constraints as the number of available undetermined accessory parameters in the ODE. Once again we first compute the dimension of the space in which the monodromy matrices around the $K$ singular points in the unit disk take values and subtract from this the dimension of the space in which they take values when the ${\rm SL}(N,\mathbb{R})$, resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$ condition is imposed. In general the monodromy matrix $M_{i}$ of a single point is an element of $\textrm{SL}\left(N,\mathbb{C}\right)$. Thus for the monodromy matrices of all $K$ points we have in total $2K(N^{2}-1)$ real parameters. However not all of these parameters are independent. The conjugacy class of each matrix $M_{i}$ is fixed due to the relation of the class functions $\textrm{tr}\left(M_{i}^{k}\right)$ (for $k=1,\ldots,N-1$) to the higher spin charges of the particle. The latter subtracts $2K(N-1)$ parameters. Furthermore the monodromy matrix, when encircling all singular points in the unit disk, must take values in one of the real forms ${\rm SL}(N,\mathbb{R})$, resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$, thus subtracting $N^{2}-1$ parameters. The last constraint comes from the conjugation of all monodromies by a constant matrix, in ${\rm SL}(N,\mathbb{R})$, resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$, after making a change of basis in the space of solutions of the ODE. Thus we should subtract another $N^{2}-1$ parameters. This leads to $2KN(N-1)-2(N^{2}-1)$ independent parameters. Next we compute the dimension of the space of monodromy matrices after imposing the ${\rm SL}(N,\mathbb{R})$, resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$ conditions. These real forms have dimension $N^{2}-1$, leading to $K(N^{2}-1)$ parameters. Again the conjugacy class is fixed, but now the invariants $\textrm{tr}\left(M_{i}^{k}\right)$ are automatically real, therefore subtracting $K(N-1)$ parameters. The reality constraint on the monodromy when encircling all points in the unit disk is now automatically satisfied, and the overall conjugation by a constant matrix in ${\rm SL}(N,\mathbb{R})$, resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$ subtracts $N^{2}-1$ parameters. This leads to a dimension of $KN(N-1)-(N^{2}-1)$. Computing the difference of these two dimensions leads to the number of $KN(N-1)-(N^{2}-1)$ constraints we need to impose to reduce the monodromy to ${\rm SL}(N,\mathbb{R})$, resp. $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$, and matches precisely the number of undetermined accessory parameters at our disposal. Thus, generically, imposing the single-valuedness of the Toda fields will precisely fix all the accessory parameters and guarantees that for generic values of the particle positions and quantum numbers, our equations have a unique solution.
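Again the bookkeeping can be condensed into a short symbolic check (ours, purely illustrative): the number of constraints found here equals the number of accessory parameters left over in section 5.2.

# Illustrative sympy check (ours) of the general-N counting.
import sympy as sp

K, N, s = sp.symbols('K N s', positive=True, integer=True)

# Accessory parameters left after imposing the reflection conditions, summed
# over the currents W^(s), s = 2,...,N  (section 5.2, eqs. (151)-(152)):
accessory = sp.summation(2*(s - 1)*K - (2*s - 1), (s, 2, N))

# Dimension of the space of monodromies for generic accessory parameters,
# minus the dimension once the SL(N,R) resp. SU(...) condition is imposed:
dim_generic     = 2*K*(N**2 - 1) - 2*K*(N - 1) - 2*(N**2 - 1)
dim_constrained =   K*(N**2 - 1) -   K*(N - 1) -   (N**2 - 1)
constraints = dim_generic - dim_constrained

print(sp.simplify(accessory - (K*N*(N - 1) - (N**2 - 1))))   # -> 0, reproduces (152)
print(sp.simplify(constraints - accessory))                  # -> 0, constraints match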
Relation to classical ${\cal W}_{N}$ blocks Our arguments in section 4 for the ${\cal W}_{3}$ case can be generalized in a straightforward manner to show that the monodromy problem determining a $K$-centered solution in the spin-$N$ theory is equivalent to the one determining a classical vacuum $2K$-point ${\cal W}_{N}$ block in a “mirror” channel where the operators in image points are fused together in pairs. 6 Outlook To conclude, we list some open problems and possible generalizations: • In this work we argued for a connection between backreacted multi-centered solutions and ${\cal W}_{N}$ blocks from studying the bulk equations of motion. It would also be of interest to evaluate the (regularized) bulk action on these solutions, which one would expect, based on results in the heavy-light approximation Hijano:2015rla , to compute the ${\cal W}_{N}$ vacuum block ${\cal B}_{0}$ itself. Such an analysis would extend the holographic computation of Virasoro and ${\cal W}_{N}$ blocks beyond the heavy-light approximation. • We restricted our attention to the description of a rather special class of particles localized in the bulk, which source only the left-moving Chern-Simons connection. It would be of obvious interest to generalize this to the case where both left- and right-moving connections are excited and make contact with other approaches to bulk localization Verlinde:2015qfa , Nakayama:2015mva . • As we remarked before in section 2, the configurations we studied in this work are particular to the Lorentzian bulk theory, because the geodesics on which our particles move do not analytically continue to geodesics in Euclidean AdS. In contrast, geodesics in Euclidean AdS begin and end on the boundary, and are described by localized excitations on the boundary. It seems likely that this Euclidean setup has a simple description in terms of the free field variables on the boundary introduced in Campoleoni:2017xyl . • In our multi-centered solutions, we restricted the individual centers to be particles rather than black holes. It would be desirable to generalize our approach to include black hole centers. It would also be worthwhile to construct solutions where the individual centers are particles but the monodromy when encircling all centers is that of a black hole. These would be toy models for black hole microstates in which questions about information loss could be addressed Fitzpatrick:2016ive . • The real forms appearing in our discussion are $\mathrm{SL}(N,\mathbb{R})$ and $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$. However, one might wonder whether more general $\textrm{SU}(p,q)$’s are physically relevant. There is seemingly only one restriction, given by the existence of an $\textrm{SL}(2,\mathbb{R})$ embedding in the $\textrm{SU}(p,q)$ which corresponds to the gravitational subsector. A full answer would require a thorough study of reality conditions of Toda fields and appropriate $\textrm{SL}(2,\mathbb{R})$ embeddings. Secondly, an investigation of embeddings other than the principal one might also be of interest. Acknowledgements. We would like to thank Kara Farnsworth, Renann L. Jusinskas, Tomas Prochazka, Monica Guica, Elli Pomoni, Massimo Porrati, Jan de Boer and Erik Perlmutter for valuable discussions. The research of O.H., J.R. and O.V. was supported by the Grant Agency of the Czech Republic under the grant 17-22899S. The research of J.R. and O.V. was supported by ESIF and MEYS (Project CoGraDS - CZ.02.1.01/0.0/0.0/15 003/0000437). O.V.
would also like to thank the Yukawa Institute for Theoretical Physics at Kyoto University for hospitality during the workshop YITP-T-18-04 “New Frontiers in String Theory 2018” while this work was in progress. Appendix A Explicit representation matrices For general $N$, we can take the explicit representation of Castro:2011iw with all odd spin generators multiplied by $\sqrt{\sigma}$. Concretely, for $N=2$: $$L_{-1}=\left(\begin{array}[]{cc}0&1\\ 0&0\\ \end{array}\right),\qquad L_{0}=\left(\begin{array}[]{cc}\frac{1}{2}&0\\ 0&-\frac{1}{2}\\ \end{array}\right),\qquad L_{1}=\left(\begin{array}[]{cc}0&0\\ -1&0\\ \end{array}\right)\,.$$ (162) For $N=3$: $$L_{-1}=\left(\begin{array}[]{ccc}0&\sqrt{2}&0\\ 0&0&\sqrt{2}\\ 0&0&0\\ \end{array}\right),\ L_{0}=\left(\begin{array}[]{ccc}1&0&0\\ 0&0&0\\ 0&0&-1\\ \end{array}\right),\ L_{1}=\left(\begin{array}[]{ccc}0&0&0\\ -\sqrt{2}&0&0\\ 0&-\sqrt{2}&0\\ \end{array}\right),$$ (163) $$W_{-2}=\sqrt{\sigma}\left(\begin{array}[]{ccc}0&0&2\\ 0&0&0\\ 0&0&0\\ \end{array}\right),\ W_{-1}=\sqrt{\sigma}\left(\begin{array}[]{ccc}0&\frac{1}{\sqrt{2}}&0\\ 0&0&-\frac{1}{\sqrt{2}}\\ 0&0&0\\ \end{array}\right),\ \ W_{0}=\sqrt{\sigma}\left(\begin{array}[]{ccc}\frac{1}{3}&0&0\\ 0&-\frac{2}{3}&0\\ 0&0&\frac{1}{3}\\ \end{array}\right),$$ (164) $$W_{1}=\sqrt{\sigma}\left(\begin{array}[]{ccc}0&0&0\\ -\frac{1}{\sqrt{2}}&0&0\\ 0&\frac{1}{\sqrt{2}}&0\\ \end{array}\right),\ W_{2}=\sqrt{\sigma}\left(\begin{array}[]{ccc}0&0&0\\ 0&0&0\\ 2&0&0\\ \end{array}\right)\,.$$ (165) Appendix B Chevalley basis for $\mathcal{A}_{N-1}$ Here we write the matrix basis we used to write down the form of the Toda connection for general $N$, (137). The $N\times N$ matrices can be written down in terms of the Kronecker delta $$\left(H_{i}\right)_{jk}=\delta_{ij}\delta_{ik}-\delta_{i+1,j}\delta_{i+1,k}\,,\quad\left(E_{i}^{+}\right)_{jk}=\delta_{ij}\delta_{i+1,k}\,,\quad\left(E_{i}^{-}\right)_{jk}=\delta_{i+1,j}\delta_{ik}\,,$$ (166) where $i=1,\ldots,N-1$. Specifically for $N=3$ we have $$H_{1}=\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&0\end{pmatrix}\,,\quad H_{2}=\begin{pmatrix}0&0&0\\ 0&1&0\\ 0&0&-1\end{pmatrix}\,,$$ (167) $$E_{1}^{+}=\begin{pmatrix}0&1&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\,,\quad E_{1}^{-}=\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&0&0\end{pmatrix}\,,$$ (168) $$E_{2}^{+}=\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&0&0\end{pmatrix}\,,\quad E_{2}^{-}=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&1&0\end{pmatrix}\,.$$ (169) Appendix C Aspects of Toda theory In this appendix we review general aspects of the $\mathcal{A}_{N-1}$ Toda theory that we use in the main text. For the case of real Toda fields, this is material known in the literature and can be found in various sources including BabylonTalon ; Bilal:1988jf ; Fateev:2007ab . Here we will start by assuming the Toda fields to be complex and we will later specify two different reality conditions on them. The system of Toda equations consists of $N-1$ fields $\phi_{i}$ satisfying $$\begin{matrix}\partial\bar{\partial}\phi_{1}+e^{-2\phi_{1}+\phi_{2}}=0\,,\\ \partial\bar{\partial}\phi_{2}+e^{-2\phi_{2}+\phi_{1}+\phi_{3}}=0\,,\\ \vdots\\ \partial\bar{\partial}\phi_{i}+e^{-2\phi_{i}+\phi_{i-1}+\phi_{i+1}}=0\,,\\ \vdots\\ \partial\bar{\partial}\phi_{N-1}+e^{-2\phi_{N-1}+\phi_{N-2}}=0\,.\end{matrix}$$ (170) We observe that the above system has a $\mathbb{Z}_{2}$ symmetry under exchanging $\phi_{i}\longleftrightarrow\phi_{N-i}$.
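Before continuing, we note as an aside that the explicit matrices of appendix A can be cross-checked numerically. The minimal sketch below (ours) assumes the standard principal-embedding conventions $[L_{m},L_{n}]=(m-n)L_{m+n}$ and $[L_{m},W_{n}]=(2m-n)W_{m+n}$, which the matrices (163)–(165) satisfy for either sign of $\sigma$; the matrices below simply copy those equations.

# Illustrative numerical check (ours) of the N=3 matrices (163)-(165).
import numpy as np

def check(sigma):
    s = np.lib.scimath.sqrt(sigma)            # sqrt(sigma), also valid for sigma = -1
    r2 = np.sqrt(2.0)
    L = {-1: np.array([[0, r2, 0], [0, 0, r2], [0, 0, 0]], dtype=complex),
          0: np.diag([1.0, 0.0, -1.0]).astype(complex),
          1: np.array([[0, 0, 0], [-r2, 0, 0], [0, -r2, 0]], dtype=complex)}
    W = {-2: s*np.array([[0, 0, 2], [0, 0, 0], [0, 0, 0]]),
         -1: s*np.array([[0, 1/r2, 0], [0, 0, -1/r2], [0, 0, 0]]),
          0: s*np.diag([1/3, -2/3, 1/3]),
          1: s*np.array([[0, 0, 0], [-1/r2, 0, 0], [0, 1/r2, 0]]),
          2: s*np.array([[0, 0, 0], [0, 0, 0], [2, 0, 0]])}
    com = lambda a, b: a @ b - b @ a
    err = 0.0
    for m in L:
        for n in L:       # [L_m, L_n] = (m-n) L_{m+n}
            target = (m - n)*L[m + n] if abs(m + n) <= 1 else 0*L[0]
            err = max(err, np.abs(com(L[m], L[n]) - target).max())
        for n in W:       # [L_m, W_n] = (2m-n) W_{m+n}
            target = (2*m - n)*W[m + n] if abs(m + n) <= 2 else 0*L[0]
            err = max(err, np.abs(com(L[m], W[n]) - target).max())
    return err

print(check(+1), check(-1))   # both ~ 0 up to rounding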
Also if we know one of the Toda fields, let’s say $\operatorname{e}^{\phi_{1}}$, we can find the rest by substituting in the Toda equations and solving them iteratively. The field $\operatorname{e}^{\phi_{1}}$ can be found by solving an $N$-th order associated ODE and the related monodromy problem. C.1 The associated ODE Two expressions for the $N$-th order ODE are $$\prod_{i=1}^{N}\left(\partial+J_{i}\right)\xi=\left(\partial^{N}+\sum_{i=2}^{N}U^{(i)}\partial^{N-i}\right)\xi=0\,,$$ (171) where we have assumed $U^{(1)}=\sum_{i=1}^{N}J_{i}=0$. The relation between $J_{i}$ and $U^{(i)}$ can be seen by expanding the above equation. Specifically for the $\mathcal{A}_{1}$ Toda (Liouville) we have $$U^{(2)}\equiv T=-\partial J-J^{2}\,,$$ (172) where $J_{1}=-J_{2}\equiv J$ and $T$ is the Liouville stress tensor. For the case of the $\mathcal{A}_{2}$ Toda we have $$\displaystyle U^{(3)}=J_{1}\partial J_{3}+J_{1}J_{2}J_{3}+\partial^{2}J_{3}+\partial J_{2}J_{3}+J_{2}\partial J_{3}\,,$$ (173) $$\displaystyle U^{(2)}=J_{1}J_{2}+J_{2}J_{3}+J_{1}J_{3}+\partial J_{3}-\partial J_{1}\,.$$ (174) Upon identifying $$J_{i}\equiv\left(\partial\phi_{N-j}H^{j}\right)_{ii}\,,$$ (175) and provided the Toda equations are satisfied we get that the functions $U^{(s)}$ correspond to $N-1$ holomorphic currents $$\bar{\partial}U^{(s)}_{1}=0\,,\quad s=2,\ldots,N\,.$$ (176) Similarly we define $N-1$ antiholomorphic currents $$\partial V^{(s)}_{1}=0\,,\quad s=2,\ldots,N\,,$$ (177) which are constructed out of $\bar{\partial}^{k}\phi_{i}$. Specifically for the case of $sl(3)$ we have $$\begin{split}&\displaystyle U^{(2)}_{1}=-\partial^{2}\phi_{1}-\partial^{2}\phi_{2}-(\partial\phi_{1})^{2}-(\partial\phi_{2})^{2}+\partial\phi_{1}\partial\phi_{2}\,,\\ &\displaystyle U^{(3)}_{1}=-\partial^{3}\phi_{1}+\partial\phi_{1}\left(-2\partial^{2}\phi_{1}+\partial^{2}\phi_{2}-\partial\phi_{1}\partial\phi_{2}+(\partial\phi_{2})^{2}\right)\,,\end{split}$$ (178) and $$\begin{split}&\displaystyle V^{(2)}_{1}=-\bar{\partial}^{2}\phi_{1}-\bar{\partial}^{2}\phi_{2}-(\bar{\partial}\phi_{1})^{2}-(\bar{\partial}\phi_{2})^{2}+\bar{\partial}\phi_{1}\bar{\partial}\phi_{2}\,,\\ &\displaystyle V^{(3)}_{1}=-\bar{\partial}^{3}\phi_{1}+\bar{\partial}\phi_{1}\left(-2\bar{\partial}^{2}\phi_{1}+\bar{\partial}^{2}\phi_{2}-\bar{\partial}\phi_{1}\bar{\partial}\phi_{2}+(\bar{\partial}\phi_{2})^{2}\right)\,.\end{split}$$ (179) The currents $V^{(s)}$ can be obtained from the currents $U^{(s)}$ simply by replacing $\partial$ with $\bar{\partial}$.
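The relations (173)–(174) between the $J_{i}$ and the $U^{(i)}$ follow from expanding the product in (171); as an illustration (ours, not part of the original text), the expansion can be done symbolically:

# Illustrative sympy check (ours): expanding (d+J1)(d+J2)(d+J3) xi, with
# J1+J2+J3 = 0, reproduces the expressions (173)-(174) for U^(2) and U^(3).
import sympy as sp

z = sp.symbols('z')
xi = sp.Function('xi')(z)
J2, J3 = [sp.Function(n)(z) for n in ('J2', 'J3')]
J1 = -J2 - J3                                    # U^(1) = J1 + J2 + J3 = 0

expr = xi
for J in (J3, J2, J1):                           # apply (d + J3), then (d + J2), then (d + J1)
    expr = sp.diff(expr, z) + J*expr
expr = sp.expand(expr)

coeff2 = expr.coeff(sp.diff(xi, z, 2))           # should vanish, cf. U^(1) = 0
coeff1 = expr.coeff(sp.diff(xi, z, 1))           # should equal U^(2) of (174)
coeff0 = expr.coeff(xi)                          # should equal U^(3) of (173)

U2 = J1*J2 + J2*J3 + J1*J3 + sp.diff(J3, z) - sp.diff(J1, z)
U3 = (J1*sp.diff(J3, z) + J1*J2*J3 + sp.diff(J3, z, 2)
      + sp.diff(J2, z)*J3 + J2*sp.diff(J3, z))

print(sp.simplify(coeff2), sp.simplify(coeff1 - U2), sp.simplify(coeff0 - U3))   # -> 0 0 0

With the identification (175), which for $N=3$ gives $J_{1}=\partial\phi_{2}$, $J_{2}=\partial\phi_{1}-\partial\phi_{2}$, $J_{3}=-\partial\phi_{1}$, the same expansion reproduces the expressions for $U^{(2)}_{1}$, $U^{(3)}_{1}$ in (178).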
Then we have two ODEs, one for the holomorphic and one for antiholomorphic currents, which are both satisfied by $\operatorname{e}^{\phi_{1}}$ $$\begin{split}&\displaystyle\left(\partial^{N}+{U}^{(2)}_{1}\partial^{N-2}+% \ldots+U^{(N)}_{1}\right)e^{\phi_{1}}=0\,,\\ &\displaystyle\left(\bar{\partial}^{N}+V^{(2)}_{1}\bar{\partial}^{N-2}+\ldots+% V^{(N)}_{1}\right)e^{\phi_{1}}=0\,.\end{split}$$ (180) Because of the $\mathbb{Z}_{2}$ symmetry of the Toda system we can also make the identification $$J_{i}\equiv\left(\partial\phi_{j}H^{j}\right)_{ii}\,.$$ (181) Then we get a second set of holomorphic and anti-holomorphic currents $$\bar{\partial}U^{(s)}_{2}=0\,,\quad\partial V^{(s)}_{2}=0\,,\quad s=2,\ldots,N\,,$$ (182) and a second set of ODEs that are being satisfied by $\operatorname{e}^{\phi_{N-1}}$ $$\begin{split}&\displaystyle\left(\partial^{N}+{U}^{(2)}_{2}\partial^{N-2}+% \ldots+U^{(N)}_{2}\right)e^{\phi_{N-1}}=0\,,\\ &\displaystyle\left(\bar{\partial}^{N}+V^{(2)}_{2}\bar{\partial}^{N-2}+\ldots+% V^{(N)}_{2}\right)e^{\phi_{N-1}}=0\,.\end{split}$$ (183) In other words we can get (183) from (180) be exchanging $\phi_{i}\leftrightarrow\phi_{N-i}$. C.2 Primary currents One can check that under a conformal mapping $z\rightarrow f(z)$ the Toda system (170) is invariant provided the fields $\phi_{i}$ transform as $$\phi_{i}\rightarrow\phi_{i}-\frac{q_{i}}{2}\log\left(\partial f(z)\right)-% \frac{q_{i}}{2}\log\left(\bar{\partial}\bar{f}(\bar{z})\right)\,.$$ (184) The constants $q_{i}$ are the proportionality constants $\phi_{i}=q_{i}\phi_{grav}$ that appear after restricting to the gravity subsector of the theory as $$\phi_{j}H^{j}=2\phi_{grav}L_{0}\,.$$ (185) From this we immediately deduce $q_{1}=q_{N-1}=N-1$. For example for $N=2$ we have $q=1$ and for $N=3$ we have $q_{1}=q_{2}=2$. Out of the currents $U^{(s)}_{1,2}$ and their derivatives one can construct currents $\mathcal{W}^{(s)}_{1,2}$ that transform as primaries under the above transformation $$\mathcal{W}^{(s)}_{1,2}\rightarrow\left(\partial f(z)\right)^{s}\mathcal{W}^{(% s)}_{1,2}\,,\quad s=3,\ldots N\,.$$ (186) For $s=2$ the current transforms as a quasi-primary $$U^{(2)}_{1,2}\equiv\mathcal{W}^{(2)}_{1,2}\rightarrow\left(\partial f(z)\right% )^{2}\mathcal{W}^{(2)}_{1,2}+\beta_{N}S\left(f(z),z\right)\,,$$ (187) where $\beta_{N}$ is a constant and $$S\left(f(z),z\right)\equiv\frac{\partial^{3}f(z)}{\partial f(z)}-\frac{3}{2}% \left(\frac{\partial^{2}f(z)}{\partial f(z)}\right)^{2}\,,$$ (188) is the Schwarzian derivative. 
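Recall that the Schwarzian derivative vanishes precisely for Möbius maps,
$$S\left(\tfrac{az+b}{cz+d},z\right)=0\,,\qquad ad-bc\neq 0\,,$$
so the inhomogeneous term in (187) is absent for the global (Möbius) subgroup of conformal transformations, under which $\mathcal{W}^{(2)}_{1,2}$ therefore transforms as a genuine weight-two tensor.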
Specifically for the $\textrm{sl}(2)$ case we have $\beta_{2}=\frac{1}{2}$ while for $\textrm{sl}(3)$ we have $\beta_{3}=2$ and $$\mathcal{W}^{(3)}_{1,2}=U^{(3)}_{1,2}-\frac{1}{2}\partial\mathcal{W}^{(2)}_{1,2}\,.$$ (189) From (189) and (178) we get $${\cal W}^{(2)}_{1}(z)=-(\partial\phi_{1})^{2}-(\partial\phi_{2})^{2}+\partial\phi_{1}\partial\phi_{2}-\partial^{2}\phi_{1}-\partial^{2}\phi_{2}\,,$$ (190) $${\cal W}_{1}^{(3)}(z)=-(\partial\phi_{1})^{2}\partial\phi_{2}+(\partial\phi_{2})^{2}\partial\phi_{1}-\partial^{2}\phi_{1}\partial\phi_{1}+\partial^{2}\phi_{2}\partial\phi_{2}+{1\over 2}\left(-\partial^{2}\phi_{1}\partial\phi_{2}+\partial^{2}\phi_{2}\partial\phi_{1}-\partial^{3}\phi_{1}+\partial^{3}\phi_{2}\right)\,.$$ (191) Thus for the case of $\textrm{sl}(3)$ we observe that $${\cal W}^{(2)}_{1}={\cal W}^{(2)}_{2}\,,\quad{\cal W}^{(3)}_{1}=-{\cal W}^{(3)}_{2}\,.$$ (192) In general for the primary currents we have the relation $$\mathcal{W}^{(s)}_{1}=\left(-1\right)^{s}\mathcal{W}^{(s)}_{2}\,,\quad s=2,\ldots,N\,.$$ (193) Thus, unless otherwise specified, we will drop the bottom index for the primary currents and write $$\mathcal{W}^{(s)}\equiv\mathcal{W}^{(s)}_{1}\,,\quad s=2,\ldots,N\,.$$ (194) Similarly, out of the currents $V^{(s)}_{1,2}$ and their derivatives one can construct currents $\mathcal{W}^{(s)}_{b}$ that transform as primaries under the transformation (184). Here we have already assumed (194), and the sub-index $b$ simply denotes the primary currents constructed out of $V^{(s)}$ instead of $U^{(s)}$. C.3 Properties of the solutions As we alluded to at the beginning of this section, to fully solve the Toda system we only need to know one of the Toda fields, which can be found by solving the associated ODE. For the field $\phi_{1}$ we showed that the holomorphic and antiholomorphic ODEs are given by (180). Thus we can write $$\operatorname{e}^{\phi_{1}}=\sum_{i=1}^{N}\psi_{i}\left(z\right)\tilde{\psi}^{i}\left(\bar{z}\right)\,,$$ (195) where $\psi_{i}$ and $\tilde{\psi}^{i}$ are independent holomorphic and anti-holomorphic solutions of (180) $$\left(\partial^{N}+U^{(2)}_{1}\partial^{N-2}+\ldots+U^{(N)}_{1}\right)\psi_{i}(z)=0\,,$$ (196) $$\left(\bar{\partial}^{N}+V^{(2)}_{1}\bar{\partial}^{N-2}+\ldots+V^{(N)}_{1}\right)\tilde{\psi}^{i}(\bar{z})=0\,.$$ (197) Similarly for the field $\phi_{N-1}$ we have $$\operatorname{e}^{\phi_{N-1}}=\sum_{i=1}^{N}\chi^{i}\left(z\right)\tilde{\chi}_{i}\left(\bar{z}\right)\,,$$ (198) where $\chi^{i}$ and $\tilde{\chi}_{i}$ are independent holomorphic and anti-holomorphic solutions of (183) $$\left(\partial^{N}+U^{(2)}_{2}\partial^{N-2}+\ldots+U^{(N)}_{2}\right)\chi^{i}(z)=0\,,$$ (199) $$\left(\bar{\partial}^{N}+V^{(2)}_{2}\bar{\partial}^{N-2}+\ldots+V^{(N)}_{2}\right)\tilde{\chi}_{i}(\bar{z})=0\,.$$ (200) To further analyze the solutions we will need to impose a reality condition on the Toda fields. Here we will consider two different reality conditions. In the first case we will take the Toda fields to be real. In the second case we will take the Toda fields to be complex conjugates of each other, such that $\phi_{i}=\bar{\phi}_{N-i}$. C.3.1 Real Toda fields For convenience we arrange the solutions into a column vector $\Psi=\left(\psi_{1},\ldots,\psi_{N}\right)^{T}$. 
When the Toda fields are real we observe that $$V_{i}^{(s)}=\bar{U}_{i}^{(s)}\,.$$ (201) Thus $\bar{\Psi}(\bar{z})$ and $\tilde{\Psi}$ solve the same ODE. Then in general, $\tilde{\psi}^{i}$ will be linear combinations of $\bar{\psi}_{i}$ $$\widetilde{\Psi}=\Lambda^{T}\bar{\Psi}\,,$$ (202) where $\Lambda\in\textrm{GL}\left(N,\mathbb{C}\right)$ is a constant matrix such that $$\operatorname{e}^{\phi_{1}}=\Psi^{\dagger}\left(\bar{z}\right)\Lambda\Psi\left% (z\right)\,.$$ (203) Taking the conjugate transpose of the above and demanding $\operatorname{e}^{\phi_{1}}$ to be real we get the condition $$\Lambda^{\dagger}=\Lambda\,.$$ (204) As we mentioned before by knowing the solution for $\operatorname{e}^{\phi_{1}}$ and by substituting in the Toda equations we can iteratively obtain all the $\operatorname{e}^{\phi_{i}}$ with $i=1,\ldots,N-1$. To perform this procedure we actually need only $N-2$ from the total of $N-1$ Toda equations. Substituting all the $\operatorname{e}^{\phi_{i}}$ in the last Toda equation we obtain the condition $$W_{\psi}\,\textrm{det}\Lambda\,\overline{W}_{\psi}=\left(-1\right)^{\lfloor% \frac{N}{2}\rfloor}\,,$$ (205) where $W_{\psi}$ is the Wronskian. Also because $U^{(1)}=\bar{U}^{(1)}=0$, since $\sum_{i}J_{i}=0$, we have that the Wronskian is constant and we can set it equal to one. Therefore (205) becomes $$\textrm{det}\Lambda=\left(-1\right)^{\lfloor\frac{N}{2}\rfloor}\,.$$ (206) To fully specify $\Lambda$ we will consider a specific solution of (196) which we will describe in the next subsection. Similarly from the ODEs (183) we have $$\operatorname{e}^{\phi_{N-1}}=X^{\dagger}\Lambda_{2}X\,.$$ (207) Starting from $\operatorname{e}^{\phi_{1}}$ and by iteratively solving the Toda equations we deduce $$\chi_{i}=M^{\{N,i\}}_{W_{\Psi}}\,,\quad i=1,\ldots,N\,,$$ (208) and $\Lambda_{2}=\Lambda$, where $M^{\{N,i\}}_{W_{\Psi}}$ is the $\{N,i\}$ minor of the Wronskian with respect to $\Psi$. C.3.2 Complex conjugate Toda fields $\phi_{i}=\bar{\phi}_{N-i}$ In this case the number of independent Toda equations reduces by half as the second half of Toda equations are the complex conjugate of the first half. This can be seen as a consequence of the $\mathbb{Z}_{2}$ symmetry of the system of Toda equations under exchanging $\phi_{i}\longleftrightarrow\phi_{N-i}$. In the case of an odd number of Toda fields the middle field is real. When we use this reality condition we observe that $$\bar{U}_{1}^{(s)}=V_{2}^{(s)}\,.$$ (209) Thus $\tilde{\Psi}(\bar{z})$ and $\bar{X}(\bar{z})$ solve the same ODE. The same holds for the pair $\tilde{X}(\bar{z})$ and $\bar{\Psi}(\bar{z})$. Consequently we have $$\tilde{\Psi}=N_{1}^{T}\bar{X}\,,\quad\tilde{X}=N_{2}^{T}\bar{\Psi}\,,$$ (210) where $N_{1,2}\in\textrm{GL}\left(N,\mathbb{C}\right)$ are constant matrices such that $$\operatorname{e}^{\phi_{1}}=X^{\dagger}N_{1}\Psi\,,\quad\operatorname{e}^{\phi% _{N-1}}=\Psi^{\dagger}N_{2}X\,.$$ (211) Since we have $\phi_{1}=\bar{\phi}_{N-1}$ we deduce that $N_{1}^{\dagger}=N_{2}$. 
By substituting in the Toda equations we find that $$N_{1}=N_{2}=\mathbb{I}\,,$$ (212) $$\psi_{a}=(-1)^{\lfloor\frac{N-1}{2}\rfloor}\mathrm{i}^{N-1}\epsilon_{ab_{1}% \ldots b_{N-1}}\chi^{b_{1}}\partial\chi^{b_{2}}\ldots\partial^{N-2}\chi^{b_{N-% 1}}\,,$$ (213) $$\chi^{a}=\left((-1)^{\lceil\frac{N}{2}\rceil}\mathrm{i}\right)^{N-1}\epsilon^{% ab_{1}\ldots b_{N-1}}\psi_{b_{1}}\partial\psi_{b_{2}}\ldots\partial^{N-2}\psi_% {b_{N-1}}\,.$$ (214) Thus we can write $$\operatorname{e}^{\phi_{1}}=\left((-1)^{\lceil\frac{N}{2}\rceil}(-\mathrm{i})% \right)^{N-1}\epsilon^{ab_{1}\ldots b_{N-1}}\psi_{a}\bar{\psi}_{b_{1}}\bar{% \partial}\bar{\psi}_{b_{2}}\ldots\bar{\partial}^{N-2}\bar{\psi}_{b_{N-1}}\,.$$ (215) C.4 A simple solution A simple solution to the associated ODE can be found if we set all the currents to zero, $U^{(s)}_{i}=0$ and $V^{(s)}_{i}=0$. Then the $\psi_{i}$ should be given by linear combinations of $z^{k}$ with $k=0,1,\ldots,N-1$. Here we present these solutions for the two reality conditions on the Toda fields C.4.1 For real Toda fields We have $$\Psi=\frac{1}{A^{\frac{1}{N}}}\left(1,\sqrt{(N-1)}z,\ldots,\sqrt{\binom{N-1}{k% }}z^{k},\ldots,z^{N-1}\right)^{T}\,,$$ (216) where $$A=\prod_{k=0}^{N-1}k!\sqrt{\binom{N-1}{k}}\,,$$ (217) is a normalization constant chosen such that the Wronskian is equal to one. To have proper boundary conditions on the disk we want as $|z|\rightarrow 1$ $$\operatorname{e}^{\phi_{1}}\sim\left(1-|z|^{2}\right)^{q_{1}}=\left(1-|z|^{2}% \right)^{N-1}\,.$$ (218) Then (203) and (206), together with (216) imply $$\Lambda=\textrm{diag}\left(1,-1,1,-1,\ldots\right)\,.$$ (219) Then we have $$\operatorname{e}^{\phi_{1}}=\frac{1}{A^{\frac{2}{N}}}\left(1-|z|^{2}\right)^{N% -1}=\frac{1}{\left(N-1\right)!}\left(1-|z|^{2}\right)^{N-1}\,,$$ (220) from which we observe that the solution (216) also specifies the zeroth order term in the expansion of $\phi_{1}$ as $$f_{0}=-\log\left(N-1\right)!\,,$$ (221) For the reflection property of $\psi_{i}$ $$\Psi=Sz^{N-1}\bar{X}\left(\frac{1}{z}\right)\,,$$ (222) we find the proportionality matrix $S$ to be $$S=(-1)^{\lfloor\frac{N}{2}\rfloor}\frac{B}{A^{\frac{N-2}{N}}}\mathbb{I}=(-1)^{% \lfloor\frac{N}{2}\rfloor}\mathbb{I}\,,$$ (223) where $$B=\prod_{k=0}^{N-2}k!\sqrt{\binom{N-1}{k}}\,.$$ (224) C.4.2 For complex conjugate Toda fields We have $$\Psi=\frac{1}{A^{\frac{1}{N}}}\left(1+\mathrm{i}z^{N-1},z+\mathrm{i}z^{N-2},% \ldots,z^{k}+\mathrm{i}z^{N-k-1}\,\ldots,z^{N-1}+\mathrm{i}\right)^{T}\,,$$ (225) where $$A=\left(1+\mathrm{i}\right)^{\lceil\frac{N}{2}\rceil}\left(1-\mathrm{i}\right)% ^{\lfloor\frac{N}{2}\rfloor}\prod_{k=1}^{N-1}k!\,,$$ (226) is chosen again such that the Wronskian is equal to one. By using (225) together with (215) we find for $N$ being odd $$\operatorname{e}^{\phi_{1}}=\frac{(-1)^{\frac{2}{N}}}{(N-1)!}\left(1-|z|^{2}% \right)^{N-1}\,,$$ (227) while for $N$ being even $$\operatorname{e}^{\phi_{1}}=\frac{1}{(N-1)!}\left(1-|z|^{2}\right)^{N-1}\,.$$ (228) The solution specifies again the zeroth order term in the expansion of $\phi_{1}$ which is now complex for $N$ being odd $$f_{0}=-\log(N-1)!-n\frac{2\pi\mathrm{i}}{N}\,,$$ (229) where $n$ is an integer. 
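As a simple check of these expressions in the lowest case $N=2$: (226) gives $A=(1+\mathrm{i})(1-\mathrm{i})=2$, so (225) becomes $\Psi=\frac{1}{\sqrt{2}}\left(1+\mathrm{i}z,\,z+\mathrm{i}\right)^{T}$, whose Wronskian indeed equals $\frac{1}{2}\left[(1+\mathrm{i}z)-\mathrm{i}(z+\mathrm{i})\right]=1$. Substituting into (215) (with $\epsilon^{12}=1$) then gives
$$\operatorname{e}^{\phi_{1}}=\mathrm{i}\left(\psi_{1}\bar{\psi}_{2}-\psi_{2}\bar{\psi}_{1}\right)=\frac{\mathrm{i}}{2}\left[(1+\mathrm{i}z)(\bar{z}-\mathrm{i})-(z+\mathrm{i})(1-\mathrm{i}\bar{z})\right]=1-|z|^{2}\,,$$
in agreement with (228).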
For the reflection property of $\psi_{i}$ $$\Psi=\tilde{S}z^{N-1}\bar{\Psi}\left(\frac{1}{z}\right)\,,$$ (230) we find the proportionality matrix to be $$\tilde{S}=\mathrm{i}\left(\frac{\bar{A}}{A}\right)^{\frac{1}{N}}\mathbb{I}\,.$$ (231) Appendix D Coupling higher spin particles via Wilson lines The dynamics of massive point particles coupled to gravity in three dimensions is captured, in the Chern-Simons formulation, by Wilson lines. This connection was initially established for the case of asymptotically flat gravity Witten:1989sx ; Carlip:1989nz . In the context of entanglement entropy it has also been extended to AdS${}_{3}$ both for higher spin theories Ammon:2013hba ; Castro:2014mza and for spinning particles Castro:2014tta . Here we review this construction for the situation relevant to us, namely that we consider the Lorentzian higher spin theories with independent gauge fields $A,\bar{A}$ taking values in $\textrm{sl}(N,\mathbb{R})$ or $\textrm{su}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$, and that we restrict attention to ‘chiral’ point-particle sources which couple only to $A$. D.1 Wilson Lines We start from the higher spin theory described by two Chern-Simons fields $A,\bar{A}$ taking values in $\textrm{sl}(N,\mathbb{R})$ or $\textrm{su}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$. The most general Wilson line we can add to this theory is of the form $W_{R}(C)\bar{W}_{\bar{R}}(C)$, where $$W_{R}(C)={\rm tr}_{R}{\cal P}\exp\int_{C}A,\qquad\bar{W}_{\bar{R}}(C)={\rm tr}_{\bar{R}}{\cal P}\exp\int_{C}\bar{A}\,.$$ (232) It depends both on the choice of the curve $C$ and on two representations $R,\bar{R}$ of the gauge group. It is natural to expect that, if we take $R$ and $\bar{R}$ to be unitary irreducible representations, the Wilson line describes a coupling of a point particle to the higher spin theory, with $R$ and $\bar{R}$ carrying the information about the physical properties of the point particle, such as mass, spin and higher spin charges. This interpretation was made precise in Ammon:2013hba , see also Castro:2014tta , Castro:2018srf . Restricting attention to representations where the energy $P_{0}=L_{0}-\bar{L}_{0}$ is bounded below, we are led to the infinite-dimensional, lowest-weight representations. These are built on a lowest weight or primary state $|h,\vec{w}\rangle=|h,w_{3},...,w_{N}\rangle$ satisfying $$\begin{split}&\displaystyle L_{0}|h,\vec{w}\rangle=h|h,\vec{w}\rangle\,,\qquad L_{1}|h,\vec{w}\rangle=0\,,\\ &\displaystyle W^{(s)}_{0}|h,\vec{w}\rangle=w_{s}|h,\vec{w}\rangle\,,\qquad W^{(s)}_{j}|h,\vec{w}\rangle=0\,,\quad j=1,\ldots,s-1\,,\end{split}$$ (233) where $s=3,\ldots,N$. The primary state is annihilated by the lowering operators and descendant states are created by acting with the raising operators $L_{-1}$ and $W^{(s)}_{-j}$. The physical properties of the particle are encoded in the $2\times(N-1)$ independent Casimir invariants of the representations $R$ and $\bar{R}$. These can be expressed in terms of the primary weights $(h,\vec{w})$ (and their right-moving cousins) as we will work out below for $N=2,3$. In this work we will be interested in the subclass of particles where $$\bar{R}=1\,,$$ (234) is the singlet representation; so from now on we will deal only with the left-moving Wilson line $W_{R}(C)$. A useful representation of the Wilson line is obtained by interpreting $R$ as the Hilbert space of an auxiliary quantum mechanical system that lives on the Wilson line. 
The auxiliary quantum system is described by a field $U$ taking values in the gauge group ($\textrm{SL}(N,\mathbb{R})$ or $\textrm{SU}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$) and it’s conjugate momentum $P$ taking values in the Lie algebra. The dynamics of $U,P$ is picked so that upon quantisation the Hilbert space of the system will be the representation $\mathcal{R}$. Then the trace over $R$ is replaced by a path integral $$W_{R}(C)=\int\mathcal{D}Ue^{iS(U;A)_{R,C}}\,,$$ (235) where $S(U;A)_{R,C}$ is a first-order action of the form Castro:2014tta $$S(U;A)_{R,C}=\int_{C}ds\left[{\rm tr}\left(PD_{s}UU^{-1}\right)+\lambda^{(2)}% \left({\rm tr}\left(P^{2}\right)+c_{R}^{(2)}\right)+\ldots+\lambda^{(N)}\left(% {\rm tr}\left(P^{N}\right)+c_{R}^{(N)}\right)\right]\,,$$ (236) and $$D_{s}U=\partial_{s}U+A_{s}U\,,\quad A_{s}=A_{\mu}\frac{dx^{\mu}}{ds}\,.$$ (237) Here $A_{s}$ denotes the pullback of $A$ to the world-line $C$ and $P$ is a canonically conjugate momentum to $U$ and takes values in the Lie algebra $\textrm{sl}(N,\mathbb{R})$ resp. $\textrm{su}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$. We note that the $\lambda^{(i)}$ are Lagrange multipliers which fix the trace invariants of $P$ in terms of the Casimir invariants $c_{R}^{(i)}$ of the representation $R$. We refer to Castro:2018srf for more details on the equivalence between (232) and (236). The action (236) is invariant under the gauge symmetry $A\rightarrow\Lambda(A+d)\Lambda^{-1}$ under which the worldline fields transform as $$U\rightarrow\Lambda U,\qquad P\rightarrow\Lambda P\Lambda^{-1}\,,$$ (238) where in this formula $\Lambda=\Lambda(x^{\mu}(s))$ is pulled back to the worldline. In the above action we take the trace “${\rm tr}$” to be normalized as in the $N$-dimensional representation. This defines the Killing forms $$h_{a_{1}...a_{m}}={\rm tr}\left(T_{\small(a_{1}}...T_{a_{m}\small)}\right)\,,% \quad m=2,\ldots,N\,,$$ (239) where $T_{a}$ are the generators of $\textrm{sl}(N,\mathbb{R})$ resp. $\textrm{su}(\lfloor\frac{N}{2}\rfloor,\lceil\frac{N}{2}\rceil)$. The Casimir operators are given by $$C^{(m)}=h^{a_{1}...a_{m}}T_{a_{1}}...T_{a_{m}}\,,$$ (240) and the $c^{(m)}_{R}$ are their values in the representation $R$. Specifically for the momentum we have $${\rm tr}\left(P^{m}\right)=h_{a_{1}\ldots a_{m}}P^{a_{1}}\ldots P^{a_{m}}\,,$$ (241) where $P=P^{a}T_{a}$. 
We will be interested in the regime of the parameters $(h,\vec{w})$ where the path integral (235) is well approximated by its saddle point value; in this regime we have to find solutions of the equations following from the total action $$S=S_{CS}(A)-S_{CS}(\bar{A})+S(U;A)_{R,C}\,.$$ (242) D.2 Equations of motion From the above action we derive the following equations of motion for the connections $$F_{\mu\nu}=-\frac{2\pi}{k}\epsilon_{\mu\nu\rho}\int_{C}ds\frac{dx^{\rho}}{ds}P\delta^{(3)}\left(x-x(s)\right),\qquad\bar{F}_{\mu\nu}=0\,,$$ (243) where $k$ is related to the central charge and Newton’s constant through $$c=12k\epsilon_{N}=\frac{3}{2G}\,,$$ (244) where $$\epsilon_{N}=\textrm{Tr}[L_{0}L_{0}]=\frac{N\left(N^{2}-1\right)}{12}\,.$$ (245) The equation from varying $U$ is $$\partial_{s}P+\left[P,\partial_{s}UU^{-1}\right]=0\,,$$ (246) and from varying the momentum we obtain $$\frac{1}{2}D_{s}UU^{-1}+2\lambda^{(2)}P+3\lambda^{(3)}P\times P+...+N\lambda^{(N)}\underbrace{P\times...\times P}_{N-1}=0\,,$$ (247) where $$\underbrace{P\times...\times P}_{m}=h_{a_{1}...a_{m+1}}P^{a_{1}}...P^{a_{m}}T^{a_{m+1}}\,.$$ (248) We also have the constraints coming from the Lagrange multipliers $${\rm tr}\left(P^{m}\right)=-c_{R}^{(m)},\quad m=2,...,N\,.$$ (249) Let us now simplify the system of equations (243), (246), (247) and (249) for the situation at hand. As we argued and showed explicitly for $N=2$ and $N=3$, we can choose a coordinate system $(t,z,\bar{z})$ and a suitable gauge such that $A$ is a Lax connection for the ${\cal A}_{N-1}$ Toda system. In particular, $A$ is of the form $A_{z}dz+A_{\bar{z}}d\bar{z}$ and the worldline $C$ the particle moves on has constant $z=z_{0}$. In this case we can choose the worldline coordinate $s$ such that $t(s)=s$ and we observe that $A_{s}=0$. The equations (246,247) are then solved by $$\partial_{s}P=0\,,\qquad U=1\,,\qquad\lambda^{(m)}=0\,.$$ (250) The remaining equation (243) is then (our complex delta-function is normalized as $\frac{\mathrm{i}}{2}\int dzd\bar{z}\,\delta^{(2)}(z)=1$, so that e.g. $\bar{\partial}\left({1\over z}\right)=\pi\delta^{(2)}(z)$) $$F_{z\bar{z}}=-{\pi\mathrm{i}\over k}P\delta^{(2)}(z-z_{0})\,,$$ (251) where the constant Lie algebra element $P$ is constrained to satisfy (249). Since $A$ is a Lax connection for the ${\cal A}_{N-1}$ Toda system, the above equations reduce to the Toda equations with delta-function sources. In the following two paragraphs we will work out the precise coefficients in front of the delta-functions for $N=2$ and $N=3$. D.3 Spin 2 case In this case we have (see (17)) $$V^{-1}\tilde{F}_{z\bar{z}}V=-2\left(\partial\bar{\partial}\phi+e^{-2\phi}\right)L_{0}\,,$$ (252) where $V$ is the constant element defined in (14). Plugging the latter into (251) shows that $V^{-1}\tilde{P}V$ should be proportional to $L_{0}$: $$V^{-1}\tilde{P}V=-\mathrm{i}2\alpha L_{0}\,.$$ (253) The constant $\alpha$ is determined by the Casimir constraint (249). From (240) the quadratic Casimir takes the value $$c^{(2)}_{R}=2h(h-1)\approx 2h^{2}\,,$$ (254) where in the last approximation we used the fact that the saddle point approximation to (235) is valid for $h\gg 1$. 
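The value (254) follows from a short computation with the explicit $N=2$ generators (162) and the normalization (239): ${\rm tr}(L_{0}L_{0})=\frac{1}{2}$ and ${\rm tr}(L_{1}L_{-1})={\rm tr}(L_{-1}L_{1})=-1$, so the quadratic Casimir (240) reads $C^{(2)}=2L_{0}^{2}-L_{1}L_{-1}-L_{-1}L_{1}$. Acting on the primary state, which obeys $L_{1}|h\rangle=0$, and using $[L_{1},L_{-1}]=2L_{0}$,
$$C^{(2)}|h\rangle=\left(2L_{0}^{2}-2L_{-1}L_{1}-[L_{1},L_{-1}]\right)|h\rangle=2h(h-1)\,|h\rangle\,.$$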
Equation (249) then becomes $${\rm tr}\tilde{P}^{2}=-2\alpha^{2}=-2h^{2}\,,$$ (255) so that (251) reduces to $$\partial\bar{\partial}\phi+e^{-2\phi}={\pi h\over k}\delta^{(2)}(z-z_{0})\,.$$ (256) Using (244) and generalizing to several point-particle sources leads to $$\partial\bar{\partial}\phi+e^{-2\phi}=4\pi G\sum_{i}h_{i}\delta^{(2)}\left(z-z_{i},\bar{z}-\bar{z}_{i}\right)\,,$$ (257) which is the equation found in Hulik:2016ifr using the metric formulation upon identifying $m_{i}=h_{i}$. D.4 Spin 3 case In this case we have from (39) $$V^{-1}\tilde{F}_{z\bar{z}}V=-\left(\partial\bar{\partial}\phi_{1}+\operatorname{e}^{-2\phi_{1}+\phi_{2}}\right)H_{1}-\left(\partial\bar{\partial}\phi_{2}+\operatorname{e}^{-2\phi_{2}+\phi_{1}}\right)H_{2}\,,$$ (258) where $H_{1},H_{2}$ are the diagonal matrices (167) (these are related to $L_{0}$ and $W_{0}$ as $H_{1}={1\over 2}L_{0}+{3\over 2\sqrt{\sigma}}W_{0}$, $H_{2}={1\over 2}L_{0}-{3\over 2\sqrt{\sigma}}W_{0}$). Setting $$V^{-1}\tilde{P}V=-\mathrm{i}(\alpha_{1}H_{1}+\alpha_{2}H_{2}),$$ (259) (251) reduces to $$\begin{split}&\displaystyle\partial\bar{\partial}\phi_{1}+\operatorname{e}^{-2\phi_{1}+\phi_{2}}=16\pi G\alpha_{1}\delta^{(2)}\left(z-z_{0},\bar{z}-\bar{z}_{0}\right)\,,\\ &\displaystyle\partial\bar{\partial}\phi_{2}+\operatorname{e}^{-2\phi_{2}+\phi_{1}}=16\pi G\alpha_{2}\delta^{(2)}\left(z-z_{0},\bar{z}-\bar{z}_{0}\right)\,,\end{split}$$ (260) where we have used (244). The constants $\alpha_{1,2}$ are determined by the Casimir constraints (249). For the values of the Casimirs we find from (240) $$c^{(2)}_{R}=\frac{1}{2}h^{2}+\frac{3\sigma}{2}w^{2}+\ldots,\qquad c^{(3)}_{R}={3\over 4\sqrt{\sigma}}w(h^{2}-\sigma w^{2})+\ldots\,,$$ (261) where the omitted terms are subleading in the regime where the saddle point approximation to (235) is valid. The Casimir constraints (249) then reduce to $$\begin{split}&\displaystyle\alpha_{1}^{2}-\alpha_{1}\alpha_{2}+\alpha_{2}^{2}=\frac{1}{4}\left(h^{2}+3\sigma w^{2}\right)\,,\\ &\displaystyle\left(\alpha_{2}-\alpha_{1}\right)\alpha_{1}\alpha_{2}={\mathrm{i}\over\sqrt{4\sigma}}w\left(h^{2}-\sigma w^{2}\right)\,.\end{split}$$ (262) Equations (260) easily generalize to many particles by replacing $\alpha_{j}\rightarrow\sum_{i}\alpha^{(i)}_{j}$. Then the resulting equations are equations (54) appearing in the main text.
References
(1) A. Hamilton, D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Holographic representation of local bulk operators,” Phys. Rev. D74 (2006) 066009, arXiv:hep-th/0606141 [hep-th]. (2) A. Almheiri, D. Marolf, J. Polchinski, and J. Sully, “Black Holes: Complementarity or Firewalls?,” JHEP 02 (2013) 062, arXiv:1207.3123 [hep-th]. (3) K. Papadodimas and S. Raju, “An Infalling Observer in AdS/CFT,” JHEP 10 (2013) 212, arXiv:1211.6767 [hep-th]. (4) T. Hartman, “Entanglement Entropy at Large Central Charge,” arXiv:1303.6955 [hep-th]. (5) T. Faulkner, “The Entanglement Renyi Entropies of Disjoint Intervals in AdS/CFT,” arXiv:1303.7221 [hep-th]. (6) A. L. Fitzpatrick, J. Kaplan, and M. T. Walters, “Universality of Long-Distance AdS Physics from the CFT Bootstrap,” JHEP 08 (2014) 145, arXiv:1403.6829 [hep-th]. (7) E. Hijano, P. Kraus, and R. Snively, “Worldline approach to semi-classical conformal blocks,” JHEP 07 (2015) 131, arXiv:1501.02260 [hep-th]. (8) E. Hijano, P. Kraus, E. Perlmutter, and R. Snively, “Witten Diagrams Revisited: The AdS Geometry of Conformal Blocks,” JHEP 01 (2016) 146, arXiv:1508.00501 [hep-th]. (9) M. Ammon, A. Castro, and N. 
Iqbal, “Wilson Lines and Entanglement Entropy in Higher Spin Gravity,” JHEP 10 (2013) 110, arXiv:1306.4338 [hep-th]. (10) A. Castro and E. Llabrés, “Unravelling Holographic Entanglement Entropy in Higher Spin Theories,” JHEP 03 (2015) 124, arXiv:1410.2870 [hep-th]. (11) M. Besken, A. Hegde, E. Hijano, and P. Kraus, “Holographic conformal blocks from interacting Wilson lines,” JHEP 08 (2016) 099, arXiv:1603.07317 [hep-th]. (12) O. Hulík, T. Procházka, and J. Raeymaekers, “Multi-centered AdS${}_{3}$ solutions from Virasoro conformal blocks,” JHEP 03 (2017) 129, arXiv:1612.03879 [hep-th]. (13) A. B. Zamolodchikov and A. B. Zamolodchikov, “Liouville field theory on a pseudosphere,” arXiv:hep-th/0101152 [hep-th]. (14) E. Witten, “Topology Changing Amplitudes in (2+1)-Dimensional Gravity,” Nucl. Phys. B323 (1989) 113–140. (15) H. Verlinde, “CFT/AdS and the Black Hole Interior, Talk at IAS, Princeton, June 19, 2014,”. (16) S. Jackson, L. McGough, and H. Verlinde, “Conformal Bootstrap, Universality and Gravitational Scattering,” Nucl. Phys. B901 (2015) 382–429, arXiv:1412.5205 [hep-th]. (17) H. Verlinde, “Poking Holes in AdS/CFT: Bulk Fields from Boundary States,” arXiv:1505.05069 [hep-th]. (18) J. D. Brown and M. Henneaux, “Central Charges in the Canonical Realization of Asymptotic Symmetries: An Example from Three-Dimensional Gravity,” Commun. Math. Phys. 104 (1986) 207–226. (19) A. Achucarro and P. K. Townsend, “A Chern-Simons Action for Three-Dimensional anti-De Sitter Supergravity Theories,” Phys. Lett. B180 (1986) 89. [,732(1987)]. (20) E. Witten, “(2+1)-Dimensional Gravity as an Exactly Soluble System,” Nucl. Phys. B311 (1988) 46. (21) M. Banados, “Three-dimensional quantum geometry and black holes,” AIP Conf. Proc. 484 no. 1, (1999) 147–169, arXiv:hep-th/9901148 [hep-th]. (22) O. Babelon, D. Bernard, and M. Talon, Introduction to Classical Integrable Systems. Cambridge University Press, 2007. (23) M. Mathisson, “Neue mechanik materieller systemes,” Acta Phys. Polon. 6 (1937) 163–2900. (24) A. Papapetrou, “Spinning test particles in general relativity. 1.,” Proc. Roy. Soc. Lond. A209 (1951) 248–258. (25) W. G. Dixon, “Dynamics of extended bodies in general relativity. I. Momentum and angular momentum,” Proc. Roy. Soc. Lond. A314 (1970) 499–527. (26) A. Castro, S. Detournay, N. Iqbal, and E. Perlmutter, “Holographic entanglement entropy and gravitational anomalies,” JHEP 07 (2014) 114, arXiv:1405.2792 [hep-th]. (27) O. Coussaert, M. Henneaux, and P. van Driel, “The Asymptotic dynamics of three-dimensional Einstein gravity with a negative cosmological constant,” Class. Quant. Grav. 12 (1995) 2961–2966, arXiv:gr-qc/9506019 [gr-qc]. (28) A. Campoleoni, S. Fredenhagen, S. Pfenninger, and S. Theisen, “Asymptotic symmetries of three-dimensional gravity coupled to higher-spin fields,” JHEP 11 (2010) 007, arXiv:1008.4744 [hep-th]. (29) A. B. Zamolodchikov, “Infinite Additional Symmetries in Two-Dimensional Conformal Quantum Field Theory,” Theor. Math. Phys. 65 (1985) 1205–1213. [Teor. Mat. Fiz.65,347(1985)]. (30) A. Campoleoni, S. Fredenhagen, and S. Pfenninger, “Asymptotic W-symmetries in three-dimensional higher-spin gauge theories,” JHEP 09 (2011) 113, arXiv:1107.0290 [hep-th]. (31) P. Bowcock and G. M. T. Watts, “Null vectors, three point and four point functions in conformal field theory,” Theor. Math. Phys. 98 (1994) 350–356, arXiv:hep-th/9309146 [hep-th]. [Teor. Mat. Fiz.98,500(1994)]. (32) N. 
Wyllard, “A(N-1) conformal Toda field theory correlation functions from conformal N = 2 SU(N) quiver gauge theories,” JHEP 11 (2009) 002, arXiv:0907.2189 [hep-th]. (33) V. Fateev and S. Ribault, “The Large central charge limit of conformal blocks,” JHEP 02 (2012) 001, arXiv:1109.6764 [hep-th]. (34) I. Coman, E. Pomoni, and J. Teschner, “Toda conformal blocks, quantum groups, and flat connections,” arXiv:1712.10225 [hep-th]. (35) J. de Boer, A. Castro, E. Hijano, J. I. Jottar, and P. Kraus, “Higher spin entanglement and ${\mathcal{W}}_{\mathrm{N}}$ conformal blocks,” JHEP 07 (2015) 168, arXiv:1412.7520 [hep-th]. (36) D. Harlow, J. Maltz, and E. Witten, “Analytic Continuation of Liouville Theory,” JHEP 12 (2011) 071, arXiv:1108.4417 [hep-th]. (37) Y. Nakayama and H. Ooguri, “Bulk Locality and Boundary Creating Operators,” JHEP 10 (2015) 114, arXiv:1507.04130 [hep-th]. (38) A. Campoleoni, S. Fredenhagen, and J. Raeymaekers, “Quantizing higher-spin gravity in free-field variables,” JHEP 02 (2018) 126, arXiv:1712.08078 [hep-th]. (39) A. L. Fitzpatrick, J. Kaplan, D. Li, and J. Wang, “On information loss in AdS${}_{3}$/CFT${}_{2}$,” JHEP 05 (2016) 109, arXiv:1603.08925 [hep-th]. (40) A. Castro, R. Gopakumar, M. Gutperle, and J. Raeymaekers, “Conical Defects in Higher Spin Theories,” JHEP 02 (2012) 096, arXiv:1111.3381 [hep-th]. (41) A. Bilal and J.-L. Gervais, “Extended C=Infinity Conformal Systems from Classical Toda Field Theories,” Nucl. Phys. B314 (1989) 646–686. (42) V. A. Fateev and A. V. Litvinov, “Correlation functions in conformal Toda field theory. I.,” JHEP 11 (2007) 002, arXiv:0709.3806 [hep-th]. (43) S. Carlip, “Exact Quantum Scattering in (2+1)-Dimensional Gravity,” Nucl. Phys. B324 (1989) 106–122. (44) A. Castro, N. Iqbal, and E. Llabrés, “Wilson Lines and Ishibashi States in AdS${}_{3}$/CFT${}_{2}$,” arXiv:1805.05398 [hep-th].
Efficient Least Squares for Estimating Total Effects under Linearity and Causal Sufficiency
F. Richard Guo (ricguo@uw.edu) and Emilija Perković (perkovic@uw.edu), Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195.
(January 17, 2021)
Abstract
Recursive linear structural equation models are widely used to postulate causal mechanisms underlying observational data. In these models, each variable equals a linear combination of a subset of the remaining variables plus an error term. When there is no unobserved confounding or selection bias, the error terms are assumed to be independent. We consider estimating a total causal effect in this setting. The causal structure is assumed to be known only up to a maximally oriented partially directed acyclic graph (MPDAG), a general class of graphs that can represent a Markov equivalence class of directed acyclic graphs (DAGs) with added background knowledge. We propose a simple estimator based on recursive least squares, which can consistently estimate any identified total causal effect, under point or joint intervention. We show that this estimator is the most efficient among all regular estimators that are based on the sample covariance, which includes covariate adjustment and the estimators employed by the joint-IDA algorithm. Notably, our result holds without assuming Gaussian errors.
Keywords: total effect, structural equation model, least squares, semiparametric efficiency, partially directed acyclic graph, observational studies.
1 Introduction
A linear structural equation model (SEM) specifies a causal mechanism underlying a set of variables (Bollen, 1989). Each variable equals a linear combination of a subset of the remaining variables plus an error term. A SEM is associated with a mixed graph, also known as a path diagram (Wright, 1921, 1934), which consists of both directed edges and bi-directed edges. A directed edge $i\rightarrow j$ represents that variable $i$ appears as a covariate in the structural equation defining variable $j$. The equation for variable $j$ takes the form $$X_{j}=\sum_{i:i\rightarrow j}\gamma_{ij}X_{i}+\epsilon_{j},$$ (1) where $\epsilon_{j}$ is an error term. Often, the errors are assumed to follow a multivariate normal distribution, but this need not be the case. A bi-directed edge $i\leftrightarrow j$ indicates that errors $\epsilon_{i}$ and $\epsilon_{j}$ are dependent, which is assumed when there exists an unobserved (i.e., latent) confounder between $i$ and $j$. The mixed graph is usually assumed to be acyclic, i.e., the graph does not contain cycles made of directed edges. We focus on the setting where there is no unobserved confounder or selection bias, a condition also known as causal sufficiency; see Spirtes et al. (2000, Chap. 3) and Pearl (2009, Chap. 6). In this setting, all the error terms are assumed to be mutually independent and the mixed graph associated with the linear SEM is a directed acyclic graph (DAG), often called a causal DAG. Aside from being a statistical model for observational data, the linear SEM is also a causal model in the sense that it specifies the behavior of the system under interventions (see Section 3.2). 
Therefore, the total causal effect of one treatment variable (point intervention) or several treatment variables (joint intervention) on some outcome variables can be defined. The underlying causal DAG is usually unknown. In fact, linear SEMs associated with different DAGs may define the same observed distribution (Drton et al., 2011). Without further assumptions on the error distributions, the underlying DAG can only be learned from observational data up to its Markov equivalence class, which can be uniquely represented by a completed partially directed acyclic graph (CPDAG) (Meek, 1995; Andersson et al., 1997). Additional background knowledge, such as knowledge of certain causal relationships (Meek, 1995; Fang and He, 2020) or partial orderings (Scheines et al., 1998), restrictions on the error distributions (Hoyer et al., 2008; Peters and Bühlmann, 2014), and other assumptions (Hauser and Bühlmann, 2012; Wang et al., 2017; Rothenhäusler et al., 2018; Eigenmann et al., 2017) can be used to further refine the Markov equivalence class of DAGs, so that the causal structure is represented by a maximally oriented PDAG (MPDAG), a rather general class of graphs containing directed and undirected edges that subsumes DAGs and CPDAGs (Meek, 1995). A given total causal effect is identified given a graph if it can be expressed as a functional of the observed distribution, which is the same for every DAG in the equivalence class. Recently, a necessary and sufficient graphical criterion for identification given an MPDAG has been shown by Perković (2020). In general, there may be more than one identifying functional. Naturally, the next step is to develop estimators for an identified total effect with desirable properties. When the effect is unidentified, the reader is referred to the IDA-type approaches (Maathuis et al., 2009; Nandy et al., 2017), which are beyond the scope of this paper. Among others, we consider the following desiderata. Completeness. Can the estimator consistently estimate every identified effect, under either point or joint interventions? Efficiency. Does the estimator achieve the smallest asymptotic (co)variance compared to a reasonably large class of estimators? To the best of our knowledge, no estimator proposed in the literature fulfills both desiderata. Indeed, the commonly used covariate adjustment estimators (Pearl, 1993; Shpitser et al., 2010; Maathuis and Colombo, 2015; Perković et al., 2015) do not exist for certain total effects under joint interventions (Nandy et al., 2017; Perković et al., 2018; Perković, 2020). Furthermore, when they exist, even with an optimal adjustment set chosen to maximize efficiency (Henckel et al., 2019; Rotnitzky and Smucler, 2019; Witte et al., 2020), we will show that covariate adjustment can compare less favorably with a larger class of estimators. We propose an estimator based on recursive least squares that fulfills both desiderata. In particular, our proposed estimator achieves the efficiency bound among all regular estimators that only depend on the sample covariance; see Section 6 for the precise definition of this class of estimators. Remarkably, our result holds regardless of the type of error distribution in the underlying linear SEM. Our method is implemented in the R (R Core Team, 2020) package eff2 (https://github.com/richardkwo/eff2), which stands for “efficient effect” (estimate). The paper is organized as follows. 
In Section 2, we review related work on efficient estimation of total causal effects in over-identified settings. In Section 3, we introduce the preliminaries on linear structural equation models, causal graphs and the identification of total causal effects. The concept of bucket decomposition is introduced. In Section 4, we introduce a block-recursive representation for the observational data and identify the total causal effect under such a representation. We first derive the proposed least squares estimator by finding the maximum likelihood estimator (MLE) under the assumption of Gaussian errors in Section 5. We then prove the optimal efficiency of our proposed estimator under arbitrary error distributions in Section 6. Additional preliminaries, proofs and numerical results can be found in the Appendix. 2 Related work The statistical performance of an estimator of a total causal effect, in over-identified settings, has recently received more attention; see, e.g, Kuroki and Miyakawa (2003); Henckel et al. (2019); Witte et al. (2020); Gupta et al. (2020); Rotnitzky and Smucler (2019); Smucler et al. (2020); Kuroki and Nanmo (2020). Here, “over-identified” (Koopmans and Reiersøl, 1950) refers to the fact that the total causal effect can be expressed as more than one functional of the (population) observed distribution, all of which coincide due to the additional conditional independence constraints obeyed by the observed distribution. For example, in the case where a total causal effect can be identified through covariate adjustment, usually there exists more than one valid adjustment set (Henckel et al., 2019). This is in contrast to the more traditional setting of causal inference, where the observed data distribution is nonparametric and is not expected to satisfy extra conditional independences. Intuitively, the conditional independences in over-identified models can be exploited to maximize asymptotic efficiency; see, e.g., Sargan (1958); Hansen (1982) for early works in this direction. Under a linear SEM with independent errors, a total causal effect can be estimated via covariate adjustment as the least squares coefficient from the regression of the outcome on the treatment and adjustment variables. Henckel et al. (2019) recently showed that, under a linear SEM with independent errors, a valid adjustment set that minimizes asymptotic variance, also referred to as the optimal adjustment set, can be graphically characterized; see also Witte et al. (2020) for a further characterization of such an optimal set. This result was generalized by Rotnitzky and Smucler (2019) beyond linear SEMs: an optimal adjustment set is shown to always exist for point interventions, and a semiparametric efficient estimator is developed for this case. Note that, while valid adjustment sets (called “time-independent” adjustment sets by Rotnitzky and Smucler (2019)) exist for point interventions (Perković, 2020, Proposition 4.2), they may not exist for joint interventions (Nandy et al., 2017; Perković et al., 2018; Perković, 2020). Less is known about how to efficiently estimate the total causal effect of a joint intervention, at least in a generic fashion. For linear SEMs with independent errors, with the knowledge of the parents of the treatment variables in the underlying causal DAG, Nandy et al. (2017) considered two estimators for the joint-IDA algorithm, one based on recursive least squares and one based on a modified Cholesky decomposition. 
However, the efficiency properties of these estimators were not explored. In Section 7, numerical comparisons will show that our proposed estimator significantly outperform the estimators of Nandy et al. (2017). Other results on the linear SEM include explicit calculations and comparisons for typical examples with either a particular structure or only a few variables; see, e.g., Kuroki and Cai (2004); Gupta et al. (2020). Gaussian errors are also assumed in these calculations. 3 Linear SEMs, causal graphs and effect identification 3.1 Linear SEMs under causal sufficiency A linear SEM postulates a causal mechanism that generates data. Let $X$ denote a vector of variables generated by a linear SEM, where $X$ is indexed by $V$ ($X=X_{V}$). Let $\mathcal{D}$ be the associated DAG on vertices $V$. For this $|V|$-dimensional random vector $X$, the model in Eq. 1 can be compactly rewritten as $${X}=\Gamma^{\top}{X}+\epsilon,\quad\Gamma=(\gamma_{ij}),\quad i\rightarrow j% \text{ not in }\mathcal{D}\,\Rightarrow\,\gamma_{ij}=0.$$ (2) where $\Gamma\in\mathbb{R}^{|V|\times|V|}$ is a coefficient matrix, and $\epsilon=(\epsilon_{i})$ is a $|V|$-dimensional random vector. DAG $\mathcal{D}$ is associated with the linear SEM in Eq. 1 in the sense that the non-zero entries of $\Gamma$ correspond to the edges in $\mathcal{D}$. Under causal sufficiency (no latent variables), we assume $$\{\epsilon_{i}:i\in V\}\text{ are independent},\ \operatorname{\mathbb{E}}% \epsilon=0,\ \operatorname{\mathbb{E}}\epsilon\epsilon^{\top}\succ\bm{0},$$ (3) where for a real, symmetric matrix $A$, $A\succ\bm{0}$ means $A$ is positive definite. The errors $\{\epsilon_{i}:i\in V\}$ are not necessarily Gaussian, nor identically distributed. The law $P(X)$ is called the observed distribution. For a given $\mathcal{D}$, we will use $\mathcal{P}_{\mathcal{D}}$ to denote the set of possible laws of $X$, namely the collection of $P(X)$ as $\Gamma$ and the error distribution vary subject to Eqs. 2 and 3. The linear SEM poses certain restrictions on the set of laws $\mathcal{P}_{\mathcal{D}}$. Let $\operatorname{Pa}(i,\mathcal{D})$ denote the set of parents of vertex $i$, i.e., $\{j:j\rightarrow i\text{ is in }\mathcal{D}\}$. For any $P\in\mathcal{P}_{\mathcal{D}}$, among other constraints, (i) $P$ factorizes according to $\mathcal{D}$, (ii) $\operatorname{\mathbb{E}}[X_{i}\mid X_{\operatorname{Pa}(i,\mathcal{D})}]$ is linear in $X_{\operatorname{Pa}(i,\mathcal{D})}$ and (iii) $\operatorname{\mathrm{var}}[X_{i}\mid X_{\operatorname{Pa}(i,\mathcal{D})}]$ is constant in $X_{\operatorname{Pa}(i,\mathcal{D})}$. We observe $n$ independent and identically distributed (iid) samples generated by the model above, namely $X^{(i)}=(I-\Gamma)^{-\top}\epsilon^{(i)}$ for $i=1,\dots,n$. Note that $(I-\Gamma)$ is invertible because $\Gamma$ can be permuted into a lower-triangular matrix according to a topological ordering (i.e., causal ordering) of vertices in $\mathcal{D}$. 3.2 Interventions and total causal effects The assumed linear SEM also dictates the behavior of the system under interventions. Let $A\subseteq V$ be a set of vertices indexing treatment variables $X_{A}$. We use $\mathrm{do}(X_{A}=x_{A})$ to denote intervening on variables $X_{A}$ and forcing them to take values $x_{A}$ (Pearl, 1995). We call this a point intervention if $A$ is a singleton, and a joint intervention if $A$ consists of several vertices, which correspond to the case of multiple treatments. 
While $X_{A}$ is fixed to $x_{A}$, the remaining variables are generated by their corresponding structural equations Eq. 1, with each $X_{i}$ for $i\in A$ appearing in the equations replaced by the corresponding enforced value $x_{i}$ (Strotz and Wold, 1960). This generating mechanism defines the interventional distribution, denoted by $P(X|\mathrm{do}(X_{A}=x_{A}))$, where the conditional probability notation is only conventional. More formally, the interventional distribution is expressed as $$P(X|\mathrm{do}(X_{A}=x_{A}))=\prod_{j\in A}\delta_{x_{j}}(X_{j})\prod_{i% \notin A}P\left(X_{i}|X_{\operatorname{Pa}(i,\mathcal{D})}\right),$$ (4) where $\delta$ denotes a Dirac measure. Factor $P\left(X_{i}|X_{\operatorname{Pa}(i)}\right)$ is defined by the structural equation for $X_{i}$. Eq. 4 is known as the truncated factorization formula (Pearl, 2009), manipulated density formula (Spirtes et al., 2000) or the g-formula (Robins, 1986). Definition 1 (Total causal effect, Pearl, 2009; Nandy et al., 2017). Let $X_{A}$ be a vector of treatment variables and $X_{Y}$ with $Y\in V\setminus A$ be an outcome variable. The total causal effect of $X_{A}$ on $X_{Y}$ is defined as the vector $\tau_{AY}\in\mathbb{R}^{|A|}$, where $$(\tau_{AY})_{i}=\frac{\partial}{\partial x_{A_{i}}}\operatorname{\mathbb{E}}[X% _{Y}\mid\mathrm{do}(X_{A}=x_{A})],\quad i=1,\dots,|A|.$$ That is, $\tau_{AY}$ is the gradient of the linear map $x_{A}\mapsto\operatorname{\mathbb{E}}[Y|\mathrm{do}(X_{A}=x_{A})]$. When multiple outcomes $Y=\{Y_{1},\dots,Y_{k}\}$, $k>1$, are considered, the total causal effect of $X_{A}$ on $X_{Y_{1}},\dots,X_{Y_{k}}$ can be defined by concatenating $\tau_{AY_{1}},\dots,\tau_{AY_{k}}$. Therefore, throughout, we assume the outcome variable is a singleton without loss of generality. Each coordinate of the total causal effect $\tau_{AY}$ can be expressed as a sum-product of the underlying linear SEM coefficients along certain causal paths from $A$ to $Y$ in $\mathcal{D}$, that is, certain paths of the form $A_{1}\to\dots\to Y_{i}$ for $A_{1}\in A$; see also Wright (1934); Sullivant et al. (2010). 3.3 Causal graphs Two different linear SEMs on the same set of variables can define the same observed distribution. For example, under Gaussian errors, linear SEMs associated with DAGs $A\rightarrow Y$ and $A\leftarrow Y$, define the same set of observed distributions, namely the set of centered bivariate Gaussian distributions. Without making additional assumptions on the error distribution, such as non-Gaussianity (Shimizu et al., 2006), partial non-Gaussianity (Hoyer et al., 2008), or equal variance of errors (Peters and Bühlmann, 2014; Chen et al., 2019), the underlying causal DAG can only be learned from the observed distribution up to its Markov equivalence class (Pearl and Verma, 1995; Chickering, 2002). CPDAGs Two DAGs on the same set of vertices are Markov equivalent if they encode the same set of d-separation relations between the vertices. The d-separations between the vertices, prescribe conditional independences between the corresponding variables (known as the Markov condition (Lauritzen, 1996, §3.2.2)); see Appendix C for the definition of d-separation and more background. This equivalence relation defines a Markov equivalence class, which consists of DAGs as elements. A Markov equivalence class can be uniquely represented by a completed partially directed acyclic graph (CPDAG), also known as an essential graph (Meek, 1995; Andersson et al., 1997). 
A CPDAG $\mathcal{C}$ is a graph on the same set of vertices that can contain both directed and undirected edges. We use $[\mathcal{C}]$ to denote the Markov equivalence class represented by CPDAG $\mathcal{C}$. A directed edge $i\rightarrow j$ in $\mathcal{C}$ implies $i\rightarrow j$ is in every $\mathcal{D}\in[\mathcal{C}]$, whereas an undirected edge $i-j$ in $\mathcal{C}$ implies there exist $\mathcal{D}_{1},\mathcal{D}_{2}\in[\mathcal{C}]$ such that $i\rightarrow j$ in $\mathcal{D}_{1}$ but $i\leftarrow j$ in $\mathcal{D}_{2}$. Given a DAG $\mathcal{D}$, the CPDAG $\mathcal{C}$ representing the Markov equivalence class of $\mathcal{D}$ can be drawn by keeping the skeleton of $\mathcal{D}$, adding all the unshielded colliders from $\mathcal{D}$ and completing the orientation rules R1–R3 of Meek (1995); see Fig. C.1 in Appendix C. For example, DAGs $A\rightarrow Y$ and $A\leftarrow Y$ are represented by CPDAG $A-Y$. To slightly abuse the notation, for a distribution $Q$, we write $Q\in[\mathcal{C}]$ if $Q$ factorizes according to some DAG $\mathcal{D}\in[\mathcal{C}]$; see Lauritzen (1996, §3.2.2). There are various structure learning algorithms that can be used to uncover CPDAG $\mathcal{C}$ from observational data. Some well-known examples are the PC algorithm (Spirtes et al., 2000) and the greedy equivalence search (Chickering, 2002). Choosing an appropriate algorithm for the dataset at hand is beyond the scope of this paper; the reader is referred to Drton and Maathuis (2017, §4) for a recent overview. MPDAGs Certain background knowledge, if present, can be used to further orient some undirected edges in a CPDAG $\mathcal{C}$. Typically, knowledge of temporal orderings can inform the orientation of certain undirected edges; see Spirtes et al. (2000, §5.8.4) for an example. Adding these background-knowledge orientations and the additionally implied orientations based on the orientation rules of Meek (1995) to $\mathcal{C}$ results in a maximally oriented partially directed acyclic graph (MPDAG) $\mathcal{G}$. See Fig. C.1 and Algorithm 1 in Appendix C. MPDAGs are a rather general class of graphs that subsumes both DAGs and CPDAGs. An MPDAG $\mathcal{G}$ represents a restricted Markov equivalence class of DAGs, which we also denote by $[\mathcal{G}]$. Analogously to the case of a CPDAG, $i\rightarrow j$ in $\mathcal{G}$ implies $i\rightarrow j$ is in every $\mathcal{D}\in[\mathcal{G}]$, and $i-j$ in $\mathcal{G}$ implies there exist $\mathcal{D}_{1},\mathcal{D}_{2}\in[\mathcal{G}]$ such that $i\rightarrow j$ in $\mathcal{D}_{1}$ but $i\leftarrow j$ in $\mathcal{D}_{2}$. For the rest of the paper, we will assume that we have access to an MPDAG $\mathcal{G}$ that represents our structural knowledge about the underlying DAG $\mathcal{D}$. That is, $$\text{causal DAG }\mathcal{D}\in[\mathcal{G}],\quad\mathcal{G}\text{ is an MPDAG},$$ (5) where $[\mathcal{G}]$ represents a collection of DAGs that are Markov equivalent, but can be strictly smaller than the corresponding Markov equivalence class due to background knowledge. 3.4 Causal effect identification Throughout the paper we will use the following notation. Given treatment variables $X_{A}$ and an outcome variable $X_{Y}$ such that $Y\notin A$, we are interested in learning the total causal effect $\tau_{AY}$. We assume that we have access to an MPDAG $\mathcal{G}$, and to observational data that are generated as iid samples from a linear SEM defined by Eqs. 2 and 3, where the causal DAG $\mathcal{D}$ is in $[\mathcal{G}]$. 
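To make this sampling assumption concrete, here is a minimal sketch (ours, with made-up coefficients; not taken from the paper) of how such observational data can be generated via $X^{(i)}=(I-\Gamma)^{-\top}\epsilon^{(i)}$ from Section 3.1, for a hypothetical causal DAG $1\rightarrow 2\rightarrow 3$:

```r
# Sketch of the data-generating process of Eqs. 2-3 (hypothetical example).
set.seed(1)
p <- 3
n <- 10000
Gamma <- matrix(0, p, p)
Gamma[1, 2] <- 0.8    # edge 1 -> 2, made-up coefficient gamma_12
Gamma[2, 3] <- -0.5   # edge 2 -> 3, made-up coefficient gamma_23
# Independent, zero-mean errors with positive variance; not Gaussian here.
eps <- matrix(runif(n * p, min = -1, max = 1), nrow = n, ncol = p)
# Row form of X^(i) = (I - Gamma)^{-T} eps^(i): X = eps %*% (I - Gamma)^{-1}.
X <- eps %*% solve(diag(p) - Gamma)
cov(X)   # sample covariance of the observed distribution P(X)
```

The estimators considered in this paper depend on such data only through the sample covariance (cf. Section 6), never through $\Gamma$ itself.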
Before estimation can be performed, we need to make sure that $\tau_{AY}$ can be identified from observational data. That is, we need to ensure that $\tau_{AY}$ can be expressed as a functional of the observed distribution that is the same for every DAG in $[\mathcal{G}]$. We have the following graphical criterion. Theorem 1 (Perković, 2020). The total causal effect $\tau_{AY}$ of $X_{A}$ on $X_{Y}$ is identified given an MPDAG $\mathcal{G}$ if and only if there is no proper, possibly causal path from $A$ to $Y$ in $\mathcal{G}$ that starts with an undirected edge. Theorem 1 is Proposition 3.2 of Perković (2020). This result does not require that the data is generated by a linear SEM. However, Perković (2020) proves that when the criterion fails, then two linear SEMs with Gaussian errors can be constructed such that their observed distributions coincide but their $\tau_{AY}$’s are different. Hence, even if we restrict ourselves to linear SEMs, Theorem 1 still holds. A few terms need some explanation. A path from $A$ to $Y$ in $\mathcal{G}$ is a sequence of distinct vertices $\langle v_{1},\dots,v_{k}\rangle$ for $k>1$ with $v_{1}\in A$ and $v_{k}=Y$, such that every pair of successive vertices are adjacent in $\mathcal{G}$. The path is proper when only its first vertex is in $A$. The path is possibly causal if no edge $v_{l}\leftarrow v_{r}$ is in $\mathcal{G}$ for $1\leq l<r\leq k$. The reader is referred to Appendix C for more graphical preliminaries. When $\mathcal{G}$ satisfies Theorem 1 relative to vertex sets $A$ and $Y$, the interventional distribution $P(X_{Y}|\mathrm{do}(X_{A}=x_{A}))$, and hence the total effect, can be computed from the observed distribution $P(X)$. To express the identification formula, we require the following concepts. 3.4.1 Buckets and bucket decomposition Let $\mathcal{G}=(V,E,U)$ be a partially directed graph, where $V$ is the set of vertices, and $E$ and $U$ are sets of directed and undirected edges respectively. Let $B_{1},\dots,B_{K}$ be the maximal connected components of the undirected graph $\mathcal{G}_{U}:=(V,\emptyset,U)$. Then $V=B_{1}\,\dot{\cup}\dots\dot{\cup}\,B_{K}$, where symbol $\dot{\cup}$ denotes disjoint union. Note that all the directed edges within each $B_{i}$ are due to background knowledge. If we ignore the distinction between directed and undirected edges, then the subgraph induced by each $B_{i}$ is chordal (Andersson et al., 1997, §4). Suppose the connected components are ordered such that $$i\rightarrow j\in E,\ i\in B_{i},\ j\in B_{j}\quad\Rightarrow\quad i<j.$$ (6) One can show that such a partial causal ordering always exists, though it may not be unique; see Algorithm 2 in Appendix C to obtain such an ordering. Our result does not depend on the particular choice of partial causal ordering. We call $B_{1},\dots,B_{K}$ the bucket decomposition of $V$ and call each $B_{k}$ for $k=1,\dots,K$ a bucket; see Fig. 6.1(a) for an example. If it is clear which graph $\mathcal{G}$ is being referred to, we will shorten $\operatorname{Pa}(j,\mathcal{G})$ as $\operatorname{Pa}(j)$ to reduce clutter. For a set of vertices $C$ in $\mathcal{G}$, we use $\operatorname{Pa}(C):=\cup_{i\in C}\operatorname{Pa}(i)\setminus C$ to denote the set of their external parents. Clearly, $\operatorname{Pa}(B_{k})\subseteq B_{[k-1]}$, where $B_{[k-1]}:=B_{1}\cup\dots\cup B_{k-1}$. Lemma 1. Let $i$ and $j$ be two distinct vertices in MPDAG $\mathcal{G}=(V,E,U)$ such that $i\rightarrow j\in E$. 
Suppose that there is no undirected path from $i$ to $j$ in $\mathcal{G}$. If there is a vertex $k$ and an undirected path $j-\dots-k$ in $\mathcal{G}$, then $i\rightarrow k\in E$. By definition of the parent set above we have that $\operatorname{Pa}(B_{k})=\cup_{i\in B_{k}}\operatorname{Pa}(i)\setminus B_{k}$, $k=1,\dots,K$. However, since a bucket $B_{k}$ is a maximal subset of $V$ that is connected by undirected edges in $\mathcal{G}$, Lemma 1 implies the following important property. Corollary 1 (Restrictive property). Let $B_{1},\dots,B_{K}$ be the bucket decomposition of $V$ in MPDAG $\mathcal{G}=(V,E,U)$. Then, all vertices in the same bucket have the same set of external parents, namely $$\operatorname{Pa}(B_{k})=\operatorname{Pa}(i)\setminus B_{k},\quad\text{for any }i\in B_{k},\,k=1,\dots,K.$$ The causal identification formula for $P(X_{Y}|\mathrm{do}(X_{A}=x_{A}))$ of Perković (2020) relies on a decomposition of certain ancestors of $Y$ in MPDAG $\mathcal{G}$ according to the buckets. We call vertex $i$ an ancestor of vertex $j$ in $\mathcal{G}$ if there exists a directed path $i\to\dots\to j$ in $\mathcal{G}$; we use the convention that $j$ is an ancestor of itself. We denote the set of ancestors of $j$ in $\mathcal{G}$ as $\operatorname{An}(j,\mathcal{G})$, or shortened as $\operatorname{An}(j)$. Let $\mathcal{G}_{V\setminus A}=(V\setminus A,E^{\prime},U^{\prime})$ denote the subgraph of $\mathcal{G}$ induced by the vertices $V\setminus A$, where $E^{\prime}$ includes those edges in $E$ that are between vertices in $V\setminus A$, and similarly for $U^{\prime}$. Consider the set of ancestors of $Y$ in $\mathcal{G}_{V\setminus A}$, denoted as $$D:=\operatorname{An}(Y,\mathcal{G}_{V\setminus A}).$$ (7) The bucket decomposition $D_{1},\dots,D_{K}$ of $D$, induced by the bucket decomposition of $V$, is simply $$D=\dot{\bigcup}_{k=1}^{K}D_{k},\quad D_{k}=D\cap B_{k},\quad k=1,\dots,K.$$ (8) Lemma 2. When the criterion in Theorem 1 is satisfied, we have $\operatorname{Pa}(D_{k},\mathcal{G})=\operatorname{Pa}(B_{k},\mathcal{G})$ for every nonempty $D_{k}$. Proofs of Lemmas 1 and 2 are left to Appendix B. Theorem 2 (Perković, 2020). Suppose the criterion in Theorem 1 is satisfied for $A,Y$ in MPDAG $\mathcal{G}=(V,E,U)$ such that $Y\notin A$. Let $P(X)$ be the observed distribution. Let $D=\operatorname{An}(Y,\mathcal{G}_{V\setminus A})$ and $D_{1},\dots,D_{K}$ be the bucket decomposition of $D$ as in Eq. 8. Then the interventional distribution $P(X_{Y}|\mathrm{do}(X_{A}=x_{A}))$ can be identified as $$P(X_{Y}|\mathrm{do}(X_{A}=x_{A}))=\bigintsss\left\{\prod_{k=1}^{K}P\left(X_{D_{k}}|X_{\operatorname{Pa}(D_{k})}\right)\right\}\mathop{}\!\mathrm{d}X_{D\setminus Y}$$ (9) for values $X_{\operatorname{Pa}(D_{k})}$ in agreement with $x_{A}$, where $P\left(X_{D_{k}}|X_{\operatorname{Pa}(D_{k})}\right)\equiv 1$ if $D_{k}=\emptyset$. The expression in Eq. 9 above is a generalization of the truncated factorization Eq. 4 from DAGs to MPDAGs. Theorem 2 holds generally even when an underlying linear SEM is not assumed. 4 Block-recursive representation In this section, we express the observed distribution $P(X)$ induced by a linear SEM compatible with MPDAG $\mathcal{G}=(V,E,U)$ in a block-recursive form. Each block corresponds to a bucket in the bucket decomposition of $V$. Such a reparameterization is necessitated by the fact that the causal ordering of $\mathcal{D}$ is unknown, whereas the buckets can be arranged into a valid partial causal ordering as in Eq. 6. 
We will use this representation to compute an estimator for the total causal effect. Recall that $\mathcal{P}_{\mathcal{D}}$ denotes the family of laws of $X$ arising from a linear SEM Eqs. 2 and 3 compatible with DAG $\mathcal{D}$. Let $\mathcal{P}_{\mathcal{G}}:=\cup_{\mathcal{D}\in[\mathcal{G}]}\mathcal{P}_{\mathcal{D}}$, which denotes the family of laws of $X$ arising from a linear SEM compatible with a DAG in $[\mathcal{G}]$. Proposition 1 (Block-recursive form). Let $\mathcal{D}$ be the causal DAG associated with the linear SEM and $\mathcal{G}$ an MPDAG such that $\mathcal{D}\in[\mathcal{G}]$. Further, let $B_{1},\dots,B_{K}$ be the bucket decomposition of $V$ in $\mathcal{G}$. Then the linear SEM Eqs. 2 and 3 can be rewritten as $$X=\Lambda^{\top}X+\varepsilon,$$ for some matrix of coefficients $\Lambda=(\lambda_{ij})\in\mathbb{R}^{|V|\times|V|}$ and random vector $\varepsilon=(\varepsilon_{i})\in\mathbb{R}^{|V|}$ such that $$\displaystyle j\in B_{l},\ i\notin\operatorname{Pa}(B_{l},\mathcal{G})\quad\Rightarrow\quad\lambda_{ij}=0,$$ (10) $$\displaystyle\operatorname{\mathbb{E}}\varepsilon=0,\quad\operatorname{\mathbb{E}}\varepsilon_{B_{k}}\varepsilon_{B_{k}}^{\top}\succ\bm{0},\ (k=1,\dots,K),\quad\varepsilon_{B_{1}},\dots,\varepsilon_{B_{K}}\text{ are mutually independent},$$ (11) and $$\text{law of }\left(\varepsilon_{B_{k}}\right)\in\mathcal{P}_{\mathcal{G}_{B_{k}}},\quad k=1,\dots,K,$$ (12) where $\mathcal{G}_{B_{k}}$ is the subgraph of $\mathcal{G}$ induced by $B_{k}$. Note that in contrast to symbol $\epsilon$ used in Eqs. 2 and 3, symbol $\varepsilon$ is used here to denote the errors in the block-recursive form. The coordinates within each $\varepsilon_{B_{k}}$ may be dependent. Proof. For $k=2,\dots,K$, by Eq. 2 and the restrictive property (Corollary 1), we have $$X_{B_{k}}=\Gamma^{\top}_{\operatorname{Pa}(B_{k}),B_{k}}X_{\operatorname{Pa}(B_{k})}+\Gamma_{B_{k}}^{\top}X_{B_{k}}+\epsilon_{B_{k}},$$ where $\operatorname{Pa}(B_{k})=\operatorname{Pa}(B_{k},\mathcal{G})$. The expression can be rewritten as $$\begin{split}\displaystyle X_{B_{k}}&\displaystyle=\left(I-\Gamma_{B_{k}}\right)^{-\top}\Gamma^{\top}_{\operatorname{Pa}(B_{k}),B_{k}}X_{\operatorname{Pa}(B_{k})}+\left(I-\Gamma_{B_{k}}\right)^{-\top}\epsilon_{B_{k}}\\ &\displaystyle=\Lambda_{\operatorname{Pa}(B_{k}),B_{k}}^{\top}X_{\operatorname{Pa}(B_{k})}+\varepsilon_{B_{k}},\end{split}$$ where $\varepsilon_{B_{k}}:=\left(I-\Gamma_{B_{k}}\right)^{-\top}\epsilon_{B_{k}}$ for $k=1,\dots,K$ (note that $X_{B_{1}}=\varepsilon_{B_{1}}$). Additionally, $\Lambda_{\operatorname{Pa}(B_{k}),B_{k}}=\Gamma_{\operatorname{Pa}(B_{k}),B_{k}}\left(I-\Gamma_{B_{k}}\right)^{-1}$ for $k=2,\dots,K$. Matrix $\Lambda\in\mathbb{R}^{|V|\times|V|}$ in the statement of the proposition is defined by blocks $\Lambda_{\operatorname{Pa}(B_{k}),B_{k}}$ for $k=2,\dots,K$ and zero entries otherwise. Therefore, $\lambda_{ij}=0$ if $j\in B_{l}$ and $i\notin\operatorname{Pa}(B_{l})$ for some $l=1,\dots,K$. Hence, by putting the blocks together, the model can be written as $X=\Lambda^{\top}X+\varepsilon$. The “new” errors $\varepsilon$ satisfy $$\varepsilon_{B_{k}}=\Gamma_{B_{k}}^{\top}\varepsilon_{B_{k}}+\epsilon_{B_{k}},\quad k=1,\dots,K.$$ It then follows from Eqs. 2 and 3 that for every $k$, $$\text{law of }\varepsilon_{B_{k}}\in\mathcal{P}_{\mathcal{D}_{B_{k}}}\subset\mathcal{P}_{\mathcal{G}_{B_{k}}},$$ since $\mathcal{D}\in[\mathcal{G}]$.
Moreover, for every $k$, $$\operatorname{\mathbb{E}}\varepsilon_{B_{k}}=0,\quad\operatorname{\mathbb{E}}% \varepsilon_{B_{k}}\varepsilon_{B_{k}}^{\top}=(I-\Gamma_{B_{k}})^{-\top}% \operatorname{\mathbb{E}}\epsilon_{B_{k}}\epsilon_{B_{k}}^{\top}(I-\Gamma_{B_{% k}})^{-1}\succ\bm{0},$$ where both $(I-\Gamma_{B_{k}})$ and $\operatorname{\mathbb{E}}\epsilon_{B_{k}}\epsilon_{B_{k}}^{\top}$ are full rank, because $\Gamma_{B_{k}}$ can be permuted into an upper-triangular matrix and $\operatorname{\mathbb{E}}\epsilon\epsilon^{\top}\succ\bm{0}$ by Eq. 3. ∎ Corollary 2. Under the same conditions as Proposition 1, it holds that $$\begin{split}\displaystyle X_{B_{1}}&\displaystyle=\varepsilon_{B_{1}},\\ \displaystyle X_{B_{k}}&\displaystyle=\Lambda^{\top}_{\operatorname{Pa}(B_{k})% ,B_{k}}X_{\operatorname{Pa}(B_{k})}+\varepsilon_{B_{k}},\quad\varepsilon_{B_{k% }}\mathrel{\text{\scalebox{1.07}{$\perp\mskip-10.0mu \perp$}}}X_{\operatorname% {Pa}(B_{k})},\quad k=2,\dots,K,\end{split}$$ (13) where $\operatorname{Pa}(B_{k})=\operatorname{Pa}(B_{k},\mathcal{G})$. Next, we show that if the total causal effect $\tau_{AY}$ is identifiable from MPDAG $\mathcal{G}$ (Theorem 1), then it can be calculated from $\Lambda$ in the block-recursive representation of Proposition 1. Therefore, the distribution of $\varepsilon$ is a nuisance relative to estimating $\tau_{AY}$. Proposition 2. Suppose the criterion in Theorem 1 is satisfied for $A,Y$ in MPDAG $\mathcal{G}=(V,E,U)$ such that $Y\notin A$. Let $\Lambda$ be the block-recursive coefficient matrix given by Proposition 1. The total causal effect of $X_{A}$ on $X_{Y}$ is identified as $$\tau_{AY}=\Lambda_{A,D}\left[(I-\Lambda_{D,D})^{-1}\right]_{D,Y},$$ (14) where $D=\operatorname{An}(Y,\mathcal{G}_{V\setminus A})$ and the last subscript denotes the column corresponding to $Y\in D$. Proof. We derive this result using Theorem 2. Recall that $D_{1},\dots,D_{K}$ is a partition of $D$ induced by the bucket decomposition $B_{1},\dots,B_{K}$ of $V$ in the sense that $D_{k}=D\cap B_{k}$ for $k=1,\dots,K$. When $D_{k}=\emptyset$, we use the convention that $P(X_{D_{k}}|X_{\operatorname{Pa}(D_{k})})\equiv 1$. By definition of $D=\operatorname{An}(Y,\mathcal{G}_{V\setminus A})$ and Eq. 6, observe that a vertex in $\operatorname{Pa}(D_{k})=\operatorname{Pa}(D_{k},\mathcal{G})$ is either in $D_{1}\cup\dots\cup D_{k-1}$ or in $A$. Let $F_{k}:=A\cap\operatorname{Pa}(D_{k})$. In Eq. 9, we note that the joint interventional distribution of $X_{D}$ is given by $$P(X_{D}|\mathrm{do}(X_{A}=x_{A}))=\prod_{k=1}^{K}P(X_{D_{k}}|X_{\operatorname{% Pa}(D_{k})})=\prod_{k=1}^{K}P(X_{D_{k}}|X_{\operatorname{Pa}(D_{k})\setminus F% _{k}},X_{F_{k}}=x_{F_{k}}),$$ where $x_{F_{k}}$ is fixed by the $\text{do}(X_{A}=x_{A})$ operation. Further, fix a factor $i\in\{1,\dots,K\}$. By Lemma 2, $\operatorname{Pa}(D_{i})=\operatorname{Pa}(B_{i})$. By Eq. 
13 and $\varepsilon_{D_{i}}\mathrel{\text{\scalebox{1.07}{$\perp\mskip-10.0mu \perp$}}% }X_{\operatorname{Pa}(B_{i})}$, we have $$\begin{split}\displaystyle X_{D_{i}}\mid\left\{X_{\operatorname{Pa}(D_{i})% \setminus F_{i}},X_{F_{i}}=x_{F_{i}}\right\}&\displaystyle=_{d}\Lambda_{% \operatorname{Pa}(D_{i})\setminus F_{i},D_{i}}^{\top}X_{\operatorname{Pa}(D_{i% })\setminus F_{i}}+\Lambda_{F_{i},D_{i}}x_{F_{i}}+\varepsilon_{D_{i}}\\ &\displaystyle=\Lambda_{\operatorname{Pa}(D_{i})\cap D,D_{i}}^{\top}X_{% \operatorname{Pa}(D_{i})\cap D}+\Lambda_{\operatorname{Pa}(D_{i})\cap A,D_{i}}% x_{\operatorname{Pa}(D_{i})\cap A}+\varepsilon_{D_{i}}\end{split}$$ The fact that the display above holds for every $i=1,\dots,K$ implies that the joint interventional distribution $P(X_{D}|\mathrm{do}(X_{A}=x_{A}))$ satisfies $$X_{D}=\Lambda_{D,D}^{T}X_{D}+\Lambda_{A,D}^{\top}x_{A}+\varepsilon_{D}.$$ It follows that $X_{D}=(I-\Lambda_{D,D})^{-\top}(\Lambda_{A,D}^{\top}x_{A}+\varepsilon_{D})$ and hence $$\operatorname{\mathbb{E}}[X_{D}\mid\text{do}(X_{A}=x_{A})]=(I-\Lambda_{D,D})^{% -\top}\Lambda_{A,D}^{\top}x_{A}.$$ Since $Y\in D$, by Definition 1 we have $$\tau_{AY}=\frac{\partial}{\partial x_{A}}\operatorname{\mathbb{E}}[X_{Y}\mid% \text{do}(X_{A}=x_{A})]=\Lambda_{A,D}\left[(I-\Lambda_{D,D})^{-1}\right]_{D,Y}.$$ ∎ We say vertex $j$ is a possible descendant of $i$, denoted as $j\in\operatorname{PossDe}(i)$, if there exists a possibly causal path from $i$ to $j$. For a set of vertices $A$, define $\operatorname{PossDe}(A):=\cup_{i\in A}\operatorname{PossDe}(i)$. See Appendix C for more details. Corollary 3. If $Y\notin\operatorname{PossDe}(A)$, then $\tau_{AY}=0$. Proof. Since $D=\operatorname{An}(Y,\mathcal{G}_{V\setminus A})$ and $Y\notin\operatorname{PossDe}(A)$, $\Lambda_{A,D}=\bm{0}$. ∎ 5 Recursive least squares Consider the special case when the errors in the linear SEM Eq. 1 are jointly Gaussian. In this case, by the standard maximum likelihood theory, the Cramér–Rao bound is achieved by the maximum likelihood estimator (MLE) of the total causal effect, which can be obtained by plugging in the MLE for $\Lambda$ in the block-recursive form (Proposition 1) into the formula Eq. 14. We now compute the MLE for $\Lambda$ given an MPDAG $\mathcal{G}$. When $\epsilon$ is multivariate Gaussian, the block-recursive form in Proposition 1 is a linear Gaussian model parameterized by $\{(\Lambda_{k})_{k=2}^{K},(\Omega_{k})_{k=1}^{K}\}$, where $\Lambda_{k}:=\Lambda_{\operatorname{Pa}(B_{k}),B_{k}}$ and $\Omega_{k}$ is the covariance for $\varepsilon_{B_{k}}$. Because $\varepsilon$ are independent between blocks (Proposition 1), the likelihood factorizes as $$\mathcal{L}((\Lambda_{k})_{k},(\Omega_{k})_{k})=\prod_{k=1}^{K}\mathcal{N}% \left(X_{B_{k}}-\Lambda^{\top}_{k}X_{\operatorname{Pa}(B_{k})};\bm{0},\Omega_{% k}\right).$$ (15) Denote the MLE of $\Lambda$ by $\widehat{\Lambda}^{\mathcal{G}}$, which consists of blocks $(\widehat{\Lambda}_{k}^{\mathcal{G}})_{k=2}^{K}$ and zero values elsewhere, and the MLE of $\Omega$ by $\widehat{\Omega}^{\mathcal{G}}=(\widehat{\Omega}_{k}^{\mathcal{G}})_{k=1}^{K}$. The superscripts highlight the dependence on MPDAG $\mathcal{G}$. The MLE maximizes $\mathcal{L}((\Lambda_{k})_{k},(\Omega_{k})_{k})$ subject to Eq. 12, namely $$\mathcal{N}(\bm{0},\Omega_{k})\in\mathcal{P}_{\mathcal{G}_{B_{k}}},\quad k=1,% \dots,K,$$ where $\mathcal{G}_{B_{k}}$ is the subgraph of $\mathcal{G}$ induced by $B_{k}$. 
This further translates to a set of algebraic constraints on $(\Omega_{k})_{k=1}^{K}$, namely for $k=1,\dots,K$, $$\det\left[(\Omega_{k})_{\{i\}\cup C,\{j\}\cup C}\right]=0,\ \text{if }\text{$i% $ and $j$ are d-separated by $C$ in $\mathcal{G}_{B_{k}}$};$$ (16) see, e.g., Drton et al. (2008, §3.1). Although the constraints Eq. 16 may seem daunting, we will show that they do not affect the MLE for $\Lambda$. Let the sample covariance matrix be computed with respect to mean zero, i.e., $$\widehat{\Sigma}^{(n)}:=\frac{1}{n}\sum_{i=1}^{n}X^{(i)}X^{(i)\top},$$ (17) where $n$ is the sample size, and the superscripts are reserved to index samples. To reduce clutter, for a set of indices $C$, we often abbreviate $\Sigma_{C,C}$ as $\Sigma_{C}$. Lemma 1. Suppose $X^{(i)}:i=1,\dots,n$ is generated iid from a linear SEM Eqs. 2 and 3 associated with an unknown causal DAG $\mathcal{D}$. Suppose the error $\epsilon$ is distributed as multivariate Gaussian. Suppose $\mathcal{D}\in[\mathcal{G}]$ for a known MPDAG $\mathcal{G}$. Let $\widehat{\Sigma}^{(n)}$ be the sample covariance as defined in Eq. 17. The MLE for $\Lambda_{k}=\Lambda_{\operatorname{Pa}(B_{k}),B_{k}}$ in the block-recursive form is given by $$\widehat{\Lambda}_{k}^{\mathcal{G}}=\left(\widehat{\Sigma}^{(n)}_{% \operatorname{Pa}(B_{k})}\right)^{-1}\widehat{\Sigma}^{(n)}_{\operatorname{Pa}% (B_{k}),B_{k}},\quad k=2,\dots,K.$$ (18) Proof. By factorization in Eq. 15, MLE $(\widehat{\Lambda}_{k}^{\mathcal{G}},\widehat{\Omega}_{k}^{\mathcal{G}})$ is the maximizer of log-likelihood $$\begin{split}&\displaystyle\quad\ell_{n}(\Lambda_{k},\Omega_{k})\\ &\displaystyle=-\frac{1}{2}\sum_{i=1}^{n}\left(X_{B_{k}}^{(i)}-\Lambda_{k}^{% \top}X_{\operatorname{Pa}(B_{k})}^{(i)}\right)^{\top}\Omega_{k}^{-1}\left(X_{B% _{k}}^{(i)}-\Lambda_{k}^{\top}X_{\operatorname{Pa}(B_{k})}^{(i)}\right)-\frac{% n}{2}\log\det(\Omega_{k})\\ &\displaystyle=-\frac{1}{2}\operatorname{\mathrm{Tr}}\left(\sum_{i=1}^{n}% \Omega_{k}^{-1}(X_{B_{k}}^{(i)}-\Lambda_{k}^{\top}X_{\operatorname{Pa}(B_{k})}% ^{(i)})(X_{B_{k}}^{(i)}-\Lambda_{k}^{\top}X_{\operatorname{Pa}(B_{k})}^{(i)})^% {\top}\right)-\frac{n}{2}\log\det(\Omega_{k}),\end{split}$$ subject to Eq. 16. Taking a derivative with respect to $\Lambda_{k}\in\mathbb{R}^{|\operatorname{Pa}(B_{k})|\times|B_{k}|}$, we have $$\frac{\partial\ell_{n}(\Lambda_{k},\Omega_{k})}{\partial\Lambda_{k}}=-2\sum_{i% =1}^{n}X_{\operatorname{Pa}(B_{k})}^{(i)}X_{B_{k}}^{(i)\top}\Omega_{k}^{-1}+2% \sum_{i=1}^{n}X_{\operatorname{Pa}(B_{k})}^{(i)}X_{\operatorname{Pa}(B_{k})}^{% (i)\top}\Lambda_{k}\Omega_{k}^{-1}.$$ For any positive definite $\Omega_{k}$ satisfying Eq. 16, setting the derivative ${\ell_{n}(\Lambda_{k},\Omega_{k})}/{\partial\Lambda_{k}}$ to zero yields the estimate $$\widehat{\Lambda}_{k}^{\mathcal{G}}=\left(\frac{1}{n}\sum_{i=1}^{n}X_{% \operatorname{Pa}(B_{k})}^{(i)}X_{\operatorname{Pa}(B_{k})}^{(i)\top}\right)^{% -1}\left(\frac{1}{n}\sum_{i=1}^{n}X_{\operatorname{Pa}(B_{k})}^{(i)}X_{B_{k}}^% {(i)\top}\right)=\left(\widehat{\Sigma}_{\operatorname{Pa}(B_{k})}^{(n)}\right% )^{-1}\widehat{\Sigma}_{\operatorname{Pa}(B_{k}),B_{k}}^{(n)}.$$ ∎ Remark 1. Because of the restrictive property (Corollary 1), each $\widehat{\Lambda}_{k}^{\mathcal{G}}$ is computed by optimizing over the space of $|\operatorname{Pa}(B_{k})|\times|B_{k}|$ matrices and the resulting MLE takes the simple form as above; see also Anderson and Olkin (1985, §5) and Amemiya (1985, §6.4). 
However, such a simple form is unavailable in general, when the zero constraints on $\Lambda$ do not obey the restrictive property, even if we ignore the algebraic constraints Eq. 16 on $\Omega$. In fact, the likelihood function can be multimodal; see also Drton and Richardson (2004); Drton (2006); Drton et al. (2009) on seemingly unrelated regressions. Since $\widehat{\Lambda}^{\mathcal{G}}$ is obtained by simply regressing each $B_{i}$ onto $\operatorname{Pa}(B_{i},\mathcal{G})$ using ordinary least squares, we call this specific recursive least squares $\mathcal{G}$-regression. The resulting MLE for an identified total causal effect is a plugin estimator using the formula in Proposition 2. Definition 1 ($\mathcal{G}$-regression estimator). Suppose $X^{(i)}:i=1,\dots,n$ is generated iid from a linear SEM Eqs. 2 and 3 associated with an unknown causal DAG $\mathcal{D}$. Suppose $\mathcal{D}\in[\mathcal{G}]$ for a known MPDAG $\mathcal{G}$. Further, suppose for $A\subset V$, $Y\in V\setminus A$, $\tau_{AY}$ is identified under the criterion of Theorem 1. The $\mathcal{G}$-regression estimator for the total causal effect $\tau_{AY}$ is defined as $$\widehat{\tau}_{AY}^{\mathcal{G}}=\widehat{\Lambda}_{A,D}^{{\mathcal{G}}}\left% [(I-\widehat{\Lambda}_{D,D}^{{\mathcal{G}}})^{-1}\right]_{D,Y},$$ (19) where $\widehat{\Lambda}^{\mathcal{G}}$ is given by Lemma 1. 6 Efficiency theory In this section, we establish the asymptotic efficiency of our $\mathcal{G}$-regression estimator, when the errors in the generating linear SEM are not necessarily Gaussian, among a reasonably large class of estimators—all regular estimators that only depend on the sample covariance. This class of estimators, despite not covering all the estimators considered in the standard semiparametric efficiency theory, includes many in the literature, such as covariate adjustment (Henckel et al., 2019; Witte et al., 2020), recursive least squares (Gupta et al., 2020; Nandy et al., 2017), and modified Cholesky decomposition of the sample covariance (Nandy et al., 2017). Definition 1. Consider an estimator $\widehat{\theta}_{n}$ of $\theta$, $\theta\in\mathbb{R}^{k}$. We say that the asymptotic covariance of $\widehat{\theta}_{n}$ is $S$, and write $\operatorname{\mathrm{acov}}\widehat{\theta}_{n}=S$, if $\sqrt{n}(\widehat{\theta}_{n}-\theta)\rightarrow_{d}\mathcal{N}(\bm{0},S)$. When $k=1$, we write $\operatorname{\mathrm{avar}}\widehat{\theta}_{n}$ for asymptotic variance. For real valued symmetric matrices $A,B$, we say $A\succeq B$ if $A-B$ is positive semidefinite. We now state our main result. Theorem 3 (Asymptotic efficiency of the $\mathcal{G}$-regression estimator). Suppose data is generated iid from a linear SEM Eqs. 2 and 3 associated with an unknown causal DAG $\mathcal{D}$. Suppose $\mathcal{D}\in[\mathcal{G}]$ for a known MPDAG $\mathcal{G}$. Further, suppose for $A\subset V$, $Y\in V\setminus A$, $\tau_{AY}$ is identified under the criterion of Theorem 1. Let $\widehat{\tau}_{AY}^{\mathcal{G}}$ be the $\mathcal{G}$-regression estimator of $\tau_{AY}$ (Definition 1). Consider any consistent estimator $\widehat{\tau}_{AY}=\widehat{\tau}_{AY}(\widehat{\Sigma}^{(n)})$ that is a differentiable function of the sample covariance. It holds that $$\operatorname{\mathrm{acov}}\left(\widehat{\tau}_{AY}\right)\succeq% \operatorname{\mathrm{acov}}\left(\widehat{\tau}_{AY}^{\mathcal{G}}\right).$$ It is clear from definitions that both $\widehat{\tau}_{AY}^{\mathcal{G}}$ and $\widehat{\tau}_{AY}$ are asymptotically linear. 
Therefore, their asymptotic covariances are well-defined. To prove Theorem 3, it suffices to show that for every $w\in\mathbb{R}^{|A|}$ $$\operatorname{\mathrm{avar}}\left(w^{\top}\widehat{\tau}_{AY}\right)\geq\operatorname{\mathrm{avar}}\left(w^{\top}\widehat{\tau}_{AY}^{\mathcal{G}}\right).$$ To this end, for any fixed $w\in\mathbb{R}^{|A|}$ we define $\tau_{w}$ as $$\tau_{w}:=w^{\top}\tau_{AY}=\tau_{w}(\Lambda),$$ (20) which is a smooth function of $\Lambda$. The corresponding $\mathcal{G}$-regression estimator $\widehat{\tau}_{w}^{\mathcal{G}}:=w^{\top}\widehat{\tau}_{AY}^{\mathcal{G}}=\tau_{w}(\widehat{\Lambda}^{\mathcal{G}})$ is still a plugin estimator (now of $\tau_{w}$). Additionally, for a consistent estimator $\widehat{\tau}_{AY}$ of $\tau_{AY}$, the corresponding $\widehat{\tau}_{w}:=w^{\top}\widehat{\tau}_{AY}=\widehat{\tau}_{w}(\widehat{\Sigma}^{(n)})$ is a consistent estimator of $\tau_{w}$, in the form of a differentiable function of the sample covariance. It suffices to show $\operatorname{\mathrm{avar}}\widehat{\tau}_{w}\geq\operatorname{\mathrm{avar}}\widehat{\tau}_{w}^{\mathcal{G}}$ for every $w\in\mathbb{R}^{|A|}$. The rest of this section is devoted to proving Theorem 3. First, we introduce graph $\bar{\mathcal{G}}$ as a saturated version of $\mathcal{G}$ (Proposition 3). In Section 6.1, we show that $\mathcal{G}$-regression with $\mathcal{G}$ replaced by $\bar{\mathcal{G}}$, aptly named $\bar{\mathcal{G}}$-regression, is a diffeomorphism between the space of covariance matrices and the space of parameters. In Section 6.2, we characterize the class of estimators relative to which $\mathcal{G}$-regression is optimal. To prove Theorem 3, we establish an efficiency bound for this class of estimators in Section 6.4 and verify that $\mathcal{G}$-regression achieves this bound in Section 6.5. Some of the proofs are left to Appendix A. See also Fig. A.1 for an overview of the dependency structure of our results in this section. 6.1 $\bar{\mathcal{G}}$-regression as a diffeomorphism Proposition 3 (Saturated MPDAG $\bar{\mathcal{G}}$). For MPDAG $\mathcal{G}=(V,E,U)$, an associated saturated MPDAG is $\bar{\mathcal{G}}=(V,\bar{E},U)$, such that $\operatorname{Pa}(B_{k},\mathcal{\bar{G}})=B_{[k-1]}$ for $k=2,\dots,K$, where $(B_{1},\dots,B_{K})$ is a bucket decomposition of $V$ in both $\mathcal{G}$ and $\bar{\mathcal{G}}$. The proof can be found in Appendix B. In words, to create the saturated MPDAG $\bar{\mathcal{G}}$, we add all the possible directed edges between buckets $B_{1},\dots,B_{K}$ subject to the ordering $B_{1},\dots,B_{K}$. By construction, $\bar{\mathcal{G}}$ also satisfies the restrictive property in Corollary 1. See Fig. 6.1 for an example. In the following, we introduce $\bar{\mathcal{G}}$-regression as a technical tool for establishing a diffeomorphism between the space of sample covariance matrices and the space of parameters in our semiparametric model. This link is the key to analyzing the efficiency of the estimators under consideration. Recall that $\mathcal{P}_{\mathcal{G}}$ is the set of observed distributions generated by some linear SEM associated with a causal DAG $\mathcal{D}\in[\mathcal{G}]$, which is characterized by Proposition 1. More explicitly, let $Q_{k}$ be the law of $\varepsilon_{B_{k}}$ for $k=1,\dots,K$.
The set of laws is explicitly prescribed as $$\mathcal{P}_{\mathcal{G}}=\left\{Q_{1}(X_{B_{1}})\prod_{k=2}^{K}Q_{k}\left(X_{B_{k}}-\Lambda_{B_{[k-1]},B_{k}}^{\top}X_{B_{[k-1]}}\right):Q_{k}\in\mathcal{P}_{\mathcal{G}_{B_{k}}},\ i\rightarrow j\text{ not in }\mathcal{G}\Rightarrow\lambda_{ij}=0\right\}$$ (21) where the law is indexed by $\Lambda=(\lambda_{ij})$ and $(Q_{k})_{k=1}^{K}$. This is a semiparametric model and $(Q_{k})_{k}$ is an infinite-dimensional nuisance (van der Vaart, 2000, Chap. 25). Consider the set of laws $\mathcal{P}_{\bar{\mathcal{G}}}$ associated with the saturated graph. Let $\Omega_{k}:=\operatorname{\mathbb{E}}_{Q_{k}}\varepsilon\varepsilon^{\top}$ be the covariance of $Q_{k}$ for $k=1,\dots,K$. Let $\mathbb{R}_{\text{PD}}^{{n}\times{n}}$ denote the set of $n\times n$ symmetric, positive definite matrices. By our assumption, $\Omega_{k}\in\mathbb{R}_{\text{PD}}^{{|B_{k}|}\times{|B_{k}|}}$. Also, consider the coefficients $\Lambda=(\lambda_{ij})$ such that $\lambda_{ij}\neq 0$ only if $i\rightarrow j$ in $\bar{\mathcal{G}}$, or equivalently, $i\in B_{l}$ and $j\in B_{m}$ for $l<m$. Then, the covariance of $X$, denoted as $\Sigma$, under any $P\in\mathcal{P}_{\bar{\mathcal{G}}}$ is determined from $(\Omega_{k})_{k}$ and $\Lambda$. Let us write this covariance map as $$\Sigma=\phi_{\bar{\mathcal{G}}}\left((\Lambda_{k})_{k=2}^{K},(\Omega_{k})_{k=1}^{K}\right),$$ where $\Lambda_{k}=\Lambda_{B_{[k-1]},B_{k}}$ is of dimension $(|B_{1}|+\dots+|B_{k-1}|)\times|B_{k}|$. It follows from Corollary 2 that the covariance map $\phi_{\bar{\mathcal{G}}}$ is explicitly given by $$\Sigma_{B_{1}}=\Omega_{1},\quad\Sigma_{B_{k}}=\Lambda_{k}^{\top}\Sigma_{B_{[k-1]}}\Lambda_{k}+\Omega_{k},\quad\Sigma_{B_{[k-1]},B_{k}}=\Sigma_{B_{[k-1]}}\Lambda_{k},\quad k=2,\dots,K.$$ (22) Further, the covariance map $\phi_{\bar{\mathcal{G}}}$ is a diffeomorphism between its domain and the set of $|V|\times|V|$ positive definite matrices. Lemma 1. Covariance map $\phi_{\bar{\mathcal{G}}}$ given by Eq. 22 is invertible. Further, $\left((\Lambda_{k})_{k=2}^{K},(\Omega_{k})_{k=1}^{K}\right)\leftrightarrow\Sigma$ given by $\phi_{\bar{\mathcal{G}}}$ and its inverse $\phi^{-1}_{\bar{\mathcal{G}}}$ is a diffeomorphism between $\left(\prod_{k=2}^{K}\mathbb{R}^{(|B_{1}|+\dots+|B_{k-1}|)\times|B_{k}|}\right)\times\left(\prod_{k=1}^{K}\mathbb{R}_{\text{PD}}^{{|B_{k}|}\times{|B_{k}|}}\right)$ and $\mathbb{R}_{\text{PD}}^{{|V|}\times{|V|}}$. Proof. By definition, covariance map $\phi_{\bar{\mathcal{G}}}$ is differentiable. To show diffeomorphism, we need to show that $\phi_{\bar{\mathcal{G}}}^{-1}(\Sigma)$ exists for every $\Sigma\in\mathbb{R}_{\text{PD}}^{{|V|}\times{|V|}}$ and that $\phi_{\bar{\mathcal{G}}}^{-1}$ is differentiable. For any positive definite $\Sigma$, the inverse covariance map $\phi_{\bar{\mathcal{G}}}^{-1}(\Sigma)$ is explicitly given by $$\Lambda_{k}=\left(\Sigma_{B_{[k-1]}}\right)^{-1}\Sigma_{B_{[k-1]},B_{k}},\quad k=2,\dots,K,$$ (23) and $$\Omega_{k}=\Sigma_{B_{k}\cdot B_{[k-1]}}=\Sigma_{B_{k}}-\Sigma_{B_{[k-1]},B_{k}}^{\top}\Sigma_{B_{[k-1]}}^{-1}\Sigma_{B_{[k-1]},B_{k}},\quad k=1,\dots,K,$$ (24) where $\Sigma_{B_{k}\cdot B_{[k-1]}}$ is the Schur complement of block $B_{k}$ with respect to block $B_{[k-1]}$. Because $\Sigma$ is positive definite, Schur complement $\Omega_{k}$ is also positive definite (Horn and Johnson, 2012, page 495). Clearly, the map $\phi_{\bar{\mathcal{G}}}^{-1}(\cdot)$ is differentiable. ∎ By Eqs.
23 and 24, $\Lambda_{k}$ is the matrix of population least squares coefficients in a regression of $X_{B_{k}}$ onto $X_{B_{1}\cup\dots\cup B_{k-1}}$ according to $\bar{\mathcal{G}}$, and $\Omega_{k}$ is the corresponding covariance of regression residuals. Hence, $\phi^{-1}_{\bar{\mathcal{G}}}(\Sigma)$ is called “$\bar{\mathcal{G}}$-regression”. Remark 2. In the special case when $\mathcal{G}$ is a DAG such that every bucket $B_{i}$ is a singleton, Lemma 1 reduces to $(\Lambda,\omega)\leftrightarrow\Sigma$ given by $(\phi_{\bar{\mathcal{G}}},\phi_{\bar{\mathcal{G}}}^{-1})$ being a diffeomorphism between $$\left\{\Lambda\in\mathbb{R}^{|V|\times|V|}:\text{$\Lambda$ is upper-triangular% }\right\}\times\left\{\omega\in\mathbb{R}^{|V|}:\omega_{i}>0,\,i=1,\dots,|V|% \right\}\,\longleftrightarrow\,\mathbb{R}_{\text{PD}}^{{|V|}\times{|V|}}.$$ The covariance map is $\Sigma=\phi_{\bar{\mathcal{G}}}(\Lambda,\omega)=(I-\Lambda)^{-\top}% \operatorname{\mathrm{diag}}(\omega)(I-\Lambda)^{-1}$, and the inverse covariance map $\phi_{\bar{\mathcal{G}}}^{-1}$ is given by the unique LDL decomposition of $\Sigma^{-1}$. Lemma 1 is a generalization of Drton (2018, Theorem 7.2). 6.2 Covariance-based, consistent estimators We now characterize the class of estimators relative to which the optimality of our estimator is established. Recall that under $P\in\mathcal{P}_{\mathcal{G}}$, $\widehat{\Sigma}=\widehat{\Sigma}^{(n)}$ is the sample covariance, $\Sigma$ is the population covariance and $\tau_{w}=w^{\top}\tau_{AY}$. We assume that $n>\max_{k}\{|B_{k}|+|\operatorname{Pa}(B_{k},\mathcal{G})|\}$ such that $\widehat{\Sigma}^{(n)}$ is positive definite almost surely (Drton and Eichler, 2006, Sec. 3.1). For simplicity, the superscript $(n)$ is often omitted. Definition 2. The class of estimators for $\tau_{w}$ under consideration is $$\displaystyle\mathcal{T}_{w}:=\bigg{\{}\widehat{\tau}_{w}\left(\widehat{\Sigma% }^{(n)}\right):\mathbb{R}_{\text{PD}}^{{|V|}\times{|V|}}\rightarrow\mathbb{R}:% \\ \displaystyle\widehat{\tau}_{w}\text{ differentiable},\ \widehat{\tau}_{w}(% \widehat{\Sigma}^{(n)})\rightarrow_{p}\tau_{w}(P)\text{ as $n\rightarrow\infty% $ under every $P\in\mathcal{P}_{\mathcal{G}}$}\bigg{\}}.$$ (25) By definition, in particular, $\mathcal{T}_{w}$ includes all regular estimators computable with least squares operations. It also includes shrinkage estimators such as ridge or lasso regression whose shrinkage parameter does not depend on $n$. Characterizing $\mathcal{T}_{w}$ Let $(\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}})_{k=2}^{K},(\widehat{\Omega}_{k}^{% \bar{\mathcal{G}}})_{k=1}^{K}$ be the image of $\widehat{\Sigma}$ under $\phi_{\bar{\mathcal{G}}}^{-1}$. Recall that $(\Lambda_{k})_{k=2}^{K},(\Omega_{k})_{k=1}^{K}$ is the image of $\Sigma$ under $\phi_{\bar{\mathcal{G}}}^{-1}$. For a matrix $C$, let $\operatorname{vec}C$ denote vectorizing $C$ by concatenating its columns. 
Each $\operatorname{vec}\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}}$ can be split by coordinates into vectors $$\widehat{\Lambda}_{k,\mathcal{G}}^{\bar{\mathcal{G}}}=\left(\widehat{\lambda}^% {\bar{\mathcal{G}}}_{ij}:j\in B_{k},i\in\operatorname{Pa}(B_{k},\mathcal{G})% \right),\quad\widehat{\Lambda}_{k,\mathcal{G}^{c}}^{\bar{\mathcal{G}}}=\left(% \widehat{\lambda}^{\bar{\mathcal{G}}}_{ij}:j\in B_{k},i\in\operatorname{Pa}(B_% {k},\bar{\mathcal{G}})\setminus\operatorname{Pa}(B_{k},\mathcal{G})\right),$$ (26) where $\left(\widehat{\Lambda}_{k,\mathcal{G}}^{\bar{\mathcal{G}}}\right)_{k}$ corresponds to between-bucket edges in $\mathcal{G}$ and $\left(\widehat{\Lambda}_{k,\mathcal{G}^{c}}^{\bar{\mathcal{G}}}\right)_{k}$ corresponds to between-bucket edges in $\bar{\mathcal{G}}$ but not in $\mathcal{G}$. In the example of Fig. 6.1, we have $\widehat{\Lambda}_{2,\mathcal{G}}^{\bar{\mathcal{G}}}=(\widehat{\lambda}^{\bar% {\mathcal{G}}}_{12},\widehat{\lambda}^{\bar{\mathcal{G}}}_{13},\widehat{% \lambda}^{\bar{\mathcal{G}}}_{14})^{\top}$, $\widehat{\Lambda}_{3,\mathcal{G}}^{\bar{\mathcal{G}}}=(\widehat{\lambda}^{\bar% {\mathcal{G}}}_{45},\widehat{\lambda}^{\bar{\mathcal{G}}}_{46})^{\top}$ and $\widehat{\Lambda}_{2,\mathcal{G}^{c}}^{\bar{\mathcal{G}}}=\texttt{NULL}$, $\widehat{\Lambda}_{3,\mathcal{G}^{c}}^{\bar{\mathcal{G}}}=(\widehat{\lambda}^{% \bar{\mathcal{G}}}_{15},\widehat{\lambda}^{\bar{\mathcal{G}}}_{16},\widehat{% \lambda}^{\bar{\mathcal{G}}}_{25},\widehat{\lambda}^{\bar{\mathcal{G}}}_{26},% \widehat{\lambda}^{\bar{\mathcal{G}}}_{35},\widehat{\lambda}^{\bar{\mathcal{G}% }}_{36})^{\top}$. Similarly, $\operatorname{vec}\Lambda_{k}$ can be split into $\Lambda_{k,\mathcal{G}}$ and $\Lambda_{k,\mathcal{G}^{c}}$ for $k=2,\dots,K$. The following lemma directly follows from Definition 2 and Lemma 1. Lemma 2. An estimator $\widehat{\tau}_{w}\in\mathcal{T}_{w}$ can be written as $$\widehat{\tau}_{w}\left(\widehat{\Sigma}^{(n)}\right)=\widehat{\tau}_{w}\left(% (\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}})_{k=2}^{K},\,(\widehat{% \Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}})_{k=2}^{K},\,(\widehat{\Omega% }^{\bar{\mathcal{G}}}_{k})_{k=1}^{K}\right)$$ for function $\widehat{\tau}_{w}\left((\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}}% )_{k=2}^{K},\,(\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}})_{k=2% }^{K},\,(\widehat{\Omega}^{\bar{\mathcal{G}}}_{k})_{k=1}^{K}\right)$ that is differentiable in its arguments. The consistency of $\widehat{\tau}_{w}$ implies the following two results. Lemma 3. For any $\widehat{\tau}_{w}\in\mathcal{T}_{w}$, it holds that $$\widehat{\tau}_{w}\left((\Lambda_{k,\mathcal{G}})_{k=2}^{K},(\bm{0})_{k=2}^{K}% ,(\Omega_{k})_{k=1}^{K}\right)\equiv\tau_{w}\left((\Lambda_{k,\mathcal{G}})_{k% =2}^{K}\right)$$ (27) for all $(\Lambda_{k,\mathcal{G}})_{k}$ and all positive definite $(\Omega_{k})_{k}$. Proof. Under any $P\in\mathcal{P}_{\mathcal{G}}$, since $\widehat{\Sigma}\rightarrow_{p}\Sigma$ as $n\rightarrow\infty$ by the law of large numbers, by Lemma 1 and the continuous mapping theorem (van der Vaart, 2000, page 11), we have $\widehat{\Lambda}_{k,\mathcal{G}}^{\bar{\mathcal{G}}}\rightarrow_{p}\Lambda_{k% ,\mathcal{G}}$, $\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}}\rightarrow_{p}\bm{0}$ and $\widehat{\Omega}^{\bar{\mathcal{G}}}_{k}\rightarrow_{p}\Omega_{k}$ for $k=2,\dots,K$. 
By Lemma 2 and continuous mapping again, $\widehat{\tau}_{w}\rightarrow_{p}\widehat{\tau}_{w}\left((\Lambda_{k,\mathcal{% G}})_{k=2}^{K},(\bm{0})_{k=2}^{K},(\Omega_{k})_{k=1}^{K}\right)$. The result then follows from the consistency of $\widehat{\tau}_{w}$ under every $P\in\mathcal{P}_{\mathcal{G}}$. ∎ Corollary 4. For $\widehat{\tau}_{w}\in\mathcal{T}_{w}$, at any $((\Lambda_{k,\mathcal{G}})_{k=2}^{K},(\bm{0})_{k=2}^{K},(\Omega_{k})_{k=1}^{K})$, it holds that $$\frac{\partial\widehat{\tau}_{w}}{\partial\Lambda_{k,\mathcal{G}}}=\frac{% \partial\tau_{w}}{\partial\Lambda_{k,\mathcal{G}}}\ (k=2,\dots,K),\quad\frac{% \partial\widehat{\tau}_{w}}{\partial\Omega_{k}}=\bm{0}\ (k=1,\dots,K).$$ (28) Proof. Let symbol $\langle\cdot,\cdot\rangle$ denote inner product. Since $\widehat{\tau}_{w}$ is differentiable (Lemma 2), by a Taylor expansion at $((\Lambda_{k,\mathcal{G}})_{k=2}^{K},(\bm{0})_{k=2}^{K},(\Omega_{k})_{k=1}^{K})$, we have $$\begin{split}&\displaystyle\quad\widehat{\tau}_{w}\left((\Lambda_{k,\mathcal{G% }}+\Delta\Lambda_{k,\mathcal{G}})_{k=2}^{K},(\bm{0})_{k=2}^{K},(\Omega_{k}+% \Delta\Omega_{k})_{k=1}^{K}\right)-\widehat{\tau}_{w}\left((\Lambda_{k,% \mathcal{G}})_{k=2}^{K},(\bm{0})_{k=2}^{K},(\Omega_{k})_{k=1}^{K}\right)\\ &\displaystyle=\sum_{k=2}^{K}\left(\left\langle\frac{\partial\widehat{\tau}_{w% }}{\partial\Lambda_{k,\mathcal{G}}},\Delta\Lambda_{k,\mathcal{G}}\right\rangle% +o(\|\Delta\Lambda_{k,\mathcal{G}}\|)\right)+\sum_{k=1}^{K}\left(\left\langle% \frac{\partial\widehat{\tau}_{w}}{\partial\Omega_{k}},\Delta\Omega_{k}\right% \rangle+o(\|\Delta\Omega_{k}\|)\right),\end{split}$$ which by Lemma 3 must equal $\tau_{w}((\Lambda_{k,\mathcal{G}}+\Delta\Lambda_{k,\mathcal{G}})_{k=2}^{K})-% \tau_{w}((\Lambda_{k,\mathcal{G}})_{k=2}^{K})$. The result then follows from the differentiability of $\tau_{w}(\cdot)$ and the definition of derivatives. ∎ Note that Corollary 4 is similar to the conditions imposed on influence functions in standard semiparametric efficiency theory; see, e.g., Tsiatis (2006, Corollary 1, §3.1). However, the gradients $\partial\widehat{\tau}_{w}/\partial\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,% \mathcal{G}^{c}}$ for $k=2,\dots,K$ are free to vary because $\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}}\rightarrow_{p}\bm{0}$. That is, an estimator $\widehat{\tau}_{w}\in\mathcal{T}_{w}$ can take arbitrary values as its second argument varies in the vicinity of zero, as long as differentiability is maintained. 6.3 Asymptotic covariance of least squares coefficients We use this section to derive some asymptotic results that will be used to prove Theorem 3. Consider a vertex $j\in B_{k}$ for $k\in\{2,\dots,K\}$ and a set of vertices $C$ such that $\operatorname{Pa}(B_{k},\mathcal{G})\subseteq C\subseteq\operatorname{Pa}(B_{k% },\bar{\mathcal{G}})$. Let $\widehat{\lambda}_{C,j}^{(n)}\in\mathbb{R}^{|C|}$ be the least squares coefficients from regressing $X_{j}$ onto $X_{C}$ under sample size $n$. Let $\lambda_{C,j}$ be the corresponding true edge coefficient vector from $\Lambda$ in Proposition 1. Then $\lambda_{C,j}$ has non-zero coordinates only for those indices in $\operatorname{Pa}(B_{k},\mathcal{G})$. Because $X_{j}=\lambda_{C,j}^{\top}X_{C}+\varepsilon_{j}$ with $\varepsilon_{j}\mathrel{\text{\scalebox{1.07}{$\perp\mskip-10.0mu \perp$}}}X_{C}$ by Corollary 2, we have $\widehat{\lambda}_{C,j}^{(n)}\rightarrow_{p}\lambda_{C,j}$ under every $P\in\mathcal{P}_{\mathcal{G}}$. Moreover, we have the following asymptotic linear expansion. Lemma 4. 
Let $j$ be a vertex in bucket $B_{k}$ for $k\in\{2,\dots,K\}$. Let $C$ be a set of vertices such that $\operatorname{Pa}(B_{k},\mathcal{G})\subseteq C\subseteq\operatorname{Pa}(B_{k% },\bar{\mathcal{G}})$. Under any $P\in\mathcal{P}_{\mathcal{G}}$, it holds that $$\widehat{\lambda}_{C,j}^{(n)}-\lambda_{C,j}=\frac{1}{n}\sum_{i=1}^{n}(\Sigma_{% C})^{-1}X_{C}^{(i)}\varepsilon_{j}^{(i)}+O_{p}(n^{-1}),$$ where $\Sigma=\operatorname{\mathbb{E}}_{P}XX^{\top}$, $\widehat{\lambda}_{C,j}^{(n)}$ is the vector of least squares coefficients from regressing $X_{j}$ onto $X_{C}$ under sample size $n$, and $\lambda_{C,j}$ is the vector of true coefficients in Proposition 1. We now use Lemma 4 to obtain the covariance structure of $\bar{\mathcal{G}}$-regression coefficients $(\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}})_{k=2}^{K}$. Recall that $\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}}\in\mathbb{R}^{|B_{[k-1]}|\times|B_{k% }|}$ with $B_{[k-1]}=B_{1}\cup\dots\cup B_{k-1}$ and $$\left((\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}})_{k=2}^{K},(\widehat{\Omega}_% {k}^{\bar{\mathcal{G}}})_{k=1}^{K}\right)=\phi_{\bar{\mathcal{G}}}^{-1}\left(% \widehat{\Sigma}^{(n)}\right),$$ as given by Eqs. 23 and 24. For matrices $A\in\mathbb{R}^{m\times n},B\in\mathbb{R}^{p\times q}$, the Kronecker product $A\otimes B$ is an $mp\times nq$ matrix given by $$A\otimes B=\begin{pmatrix}a_{11}B&\cdots&a_{1n}B\\ \vdots&\ddots&\vdots\\ a_{m1}B&\cdots&a_{mn}B\end{pmatrix}.$$ Lemma 5. Let $(\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}})_{k=2}^{K}$ be the $\bar{\mathcal{G}}$-regression coefficients under sample size $n$. Under any $P\in\mathcal{P}_{\mathcal{G}}$, it holds that $$\sqrt{n}\begin{pmatrix}\operatorname{vec}(\widehat{\Lambda}_{2}^{\bar{\mathcal% {G}}}-\Lambda_{2})\\ \vdots\\ \operatorname{vec}(\widehat{\Lambda}_{K}^{\bar{\mathcal{G}}}-\Lambda_{K})\end{% pmatrix}\rightarrow_{d}\mathcal{N}\left(\bm{0},\,\operatorname{\mathrm{diag}}% \left\{\Omega_{2}\otimes\left(\Sigma_{B_{[1]}}\right)^{-1},\dots,\Omega_{K}% \otimes\left(\Sigma_{B_{[K-1]}}\right)^{-1}\right\}\right).$$ Remark 3. $\sqrt{n}\operatorname{vec}(\widehat{\Lambda}_{k}^{(n)}-\Lambda_{k})\rightarrow% _{d}\mathcal{N}\left(\bm{0},\,\Omega_{k}\otimes\left(\Sigma_{B_{[k-1]}}\right)% ^{-1}\right)$ is equivalent to $$\sqrt{n}(\widehat{\Lambda}_{k}^{(n)}-\Lambda_{k})\rightarrow_{d}\mathcal{MN}% \left(\bm{0},\left(\Sigma_{B_{[k-1]}}\right)^{-1},\Omega_{k}\right),$$ where the RHS is a centered matrix normal distribution with row covariance $(\Sigma_{B_{[k-1]}})^{-1}$ and column covariance $\Omega_{k}$; see Dawid (1981). Similarly, we can compute the asymptotic covariance of the $\mathcal{G}$-regression coefficients. To obtain the result below, we rely on the restrictive property of $\mathcal{G}$ (Corollary 1). Lemma 6. Let $(\widehat{\Lambda}_{k}^{\mathcal{G}})_{k=2}^{K}$ be the $\mathcal{G}$-regression coefficients as defined in Lemma 1 under sample size $n$. Under any $P\in\mathcal{P}_{\mathcal{G}}$, it holds that $$\sqrt{n}\begin{pmatrix}\operatorname{vec}(\widehat{\Lambda}_{2}^{\mathcal{G}}-% \Lambda_{2})\\ \vdots\\ \operatorname{vec}(\widehat{\Lambda}_{K}^{\mathcal{G}}-\Lambda_{K})\end{% pmatrix}\rightarrow_{d}\mathcal{N}\left(\bm{0},\,\operatorname{\mathrm{diag}}% \left\{\Omega_{2}\otimes\left(\Sigma_{\operatorname{Pa}(B_{2},\mathcal{G})}% \right)^{-1},\dots,\Omega_{K}\otimes\left(\Sigma_{\operatorname{Pa}(B_{K},% \mathcal{G})}\right)^{-1}\right\}\right).$$ 6.4 Efficiency bound We first notice a simple fact of the quadratic form and a property of the Kronecker product. Lemma 7. 
Let $S\in\mathbb{R}_{\text{PD}}^{{n}\times{n}},x\in\mathbb{R}^{n}$ and suppose that $(A,B)$ is a partition of the set $\{1,\dots,n\}$. For any fixed $x_{A}$, it holds that $$x^{\top}Sx\geq x_{A}^{\top}(S_{A\cdot B})x_{A},$$ where $S_{A\cdot B}=S_{A,A}-S_{A,B}S_{B,B}^{-1}S_{B,A}$. The equality holds if and only if $x_{B}=-S_{B,B}^{-1}S_{B,A}x_{A}$. Lemma 8 (Liu (1999, Theorem 1)). Let $A\in\mathbb{R}^{m\times m}$ and $C\in\mathbb{R}^{n\times n}$ be non-singular. Suppose $\alpha\subset[m]$, $\beta\subset[n]$. Let $\alpha^{c}$, $\beta^{c}$ denote their respective complements. Let $\gamma^{c}=\{n(i-1)+j:i\in\alpha^{c},j\in\beta^{c}\}$ and $\gamma=[mn]\setminus\gamma^{c}$. We have $$A_{\alpha^{c}\cdot\alpha}\otimes C_{\beta^{c}\cdot\beta}=(A\otimes C)_{\gamma^% {c}\cdot\gamma}.$$ Lemma 9. Suppose the assumptions of Theorem 3 hold. Fix $w\in\mathbb{R}^{|A|}$ and let $\tau_{w}=w^{\top}\tau_{AY}=\tau_{w}((\Lambda_{k,\mathcal{G}})_{k=2}^{K})$ as in Eq. 20. Consider any estimator $\widehat{\tau}_{w}\in\mathcal{T}_{w}$ given by Definition 2. Then under any $P\in\mathcal{P}_{\mathcal{G}}$, it holds that $$\operatorname{\mathrm{avar}}(\widehat{\tau}_{w})\geq\sum_{k=2}^{K}h_{k}^{\top}% \Omega_{k}\otimes(\Sigma_{\operatorname{Pa}(B_{k},\mathcal{G})})^{-1}h_{k},$$ (29) where $(\Omega_{k})_{k=2}^{K}$ and $\Sigma$ are determined by $P$, and the gradient vectors $h_{k}={\partial\tau_{w}((\Lambda_{k,\mathcal{G}})_{k})}/{\partial\Lambda_{k,% \mathcal{G}}}$ for $k=2,\dots,K$ evaluated at $(\Lambda_{k,\mathcal{G}})_{k}$ are determined by $\tau_{w}(\cdot)$ and $P$. Proof. By Lemma 2, estimator $\widehat{\tau}_{w}\in\mathcal{T}_{w}$ can be written as $$\widehat{\tau}_{w}=\widehat{\tau}_{w}\left((\widehat{\Lambda}^{\bar{\mathcal{G% }}}_{k,\mathcal{G}})_{k=2}^{K},\ (\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,% \mathcal{G}^{c}})_{k=2}^{K},\ (\widehat{\Omega}_{k}^{\bar{\mathcal{G}}})_{k=1}% ^{K}\right),$$ where the arguments correspond to the image of $\widehat{\Sigma}$ under $\phi_{\bar{\mathcal{G}}}^{-1}$; see Eq. 26. Estimator $\widehat{\tau}_{w}\in\mathcal{T}_{w}$ is asymptotically normal. By the delta method (Shorack, 2000, Sec 11.2), we have $$\operatorname{\mathrm{avar}}(\widehat{\tau}_{w})=\begin{pmatrix}\partial% \widehat{\tau}_{w}/\partial({\Lambda}_{k,\mathcal{G}})_{k=2}^{K}\\ \partial\widehat{\tau}_{w}/\partial({\Lambda}_{k,\mathcal{G}^{c}})_{k=2}^{K}\\ \partial\widehat{\tau}_{w}/\partial({\Omega}_{k})_{k=1}^{K}\end{pmatrix}^{\top% }\operatorname{\mathrm{acov}}\begin{Bmatrix}\operatorname{vec}\,(\widehat{% \Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}})_{k=2}^{K}\\ \operatorname{vec}\,(\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}}% )_{k=2}^{K}\\ \operatorname{vec}\,(\widehat{\Omega}^{\bar{\mathcal{G}}}_{k})_{k=1}^{K}\end{% Bmatrix}\begin{pmatrix}\partial\widehat{\tau}_{w}/\partial({\Lambda}_{k,% \mathcal{G}})_{k=2}^{K}\\ \partial\widehat{\tau}_{w}/\partial({\Lambda}_{k,\mathcal{G}^{c}})_{k=2}^{K}\\ \partial\widehat{\tau}_{w}/\partial({\Omega}_{k})_{k=1}^{K}\end{pmatrix},$$ where the partial derivatives of $\widehat{\tau}_{w}(\cdot)$ are evaluated at $\left(({\Lambda}_{k,\mathcal{G}})_{k=2}^{K},\,(\bm{0})_{k=2}^{K},\,({\Omega}_{% k})_{k=1}^{K})\right)$, the image of $\Sigma$ under $\phi_{\bar{\mathcal{G}}}^{-1}$. 
Using $\partial\widehat{\tau}_{w}/\partial\Omega_{k}=\bm{0}$ for $k=1,\dots,K$ from Corollary 4, it follows that $$\begin{split}\displaystyle\operatorname{\mathrm{avar}}(\widehat{\tau}_{w})&% \displaystyle=\begin{pmatrix}\partial\widehat{\tau}_{w}/\partial({\Lambda}_{k,% \mathcal{G}})_{k=2}^{K}\\ \partial\widehat{\tau}_{w}/\partial({\Lambda}_{k,\mathcal{G}^{c}})_{k=2}^{K}% \end{pmatrix}^{\top}\operatorname{\mathrm{acov}}\begin{Bmatrix}\operatorname{% vec}\,(\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}})_{k=2}^{K}\\ \operatorname{vec}\,(\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}}% )_{k=2}^{K}\end{Bmatrix}\begin{pmatrix}\partial\widehat{\tau}_{w}/\partial({% \Lambda}_{k,\mathcal{G}})_{k=2}^{K}\\ \partial\widehat{\tau}_{w}/\partial({\Lambda}_{k,\mathcal{G}^{c}})_{k=2}^{K}% \end{pmatrix}\\ &\displaystyle=\sum_{k=2}^{K}\begin{pmatrix}\partial\widehat{\tau}_{w}/% \partial{\Lambda}_{k,\mathcal{G}}\\ \partial\widehat{\tau}_{w}/\partial{\Lambda}_{k,\mathcal{G}^{c}}\end{pmatrix}^% {\top}\operatorname{\mathrm{acov}}\begin{Bmatrix}\widehat{\Lambda}^{\bar{% \mathcal{G}}}_{k,\mathcal{G}}\\ \widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}}\end{Bmatrix}\begin{% pmatrix}\partial\widehat{\tau}_{w}/\partial{\Lambda}_{k,\mathcal{G}}\\ \partial\widehat{\tau}_{w}/\partial{\Lambda}_{k,\mathcal{G}^{c}}\end{pmatrix},% \end{split}$$ where we have used the block-diagonal structure of the asymptotic covariance from Lemma 5. Let $$S^{(k)}:=\operatorname{\mathrm{acov}}\begin{Bmatrix}\widehat{\Lambda}^{\bar{% \mathcal{G}}}_{k,\mathcal{G}}\\ \widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}}\end{Bmatrix},\quad k% =2,\dots,K,$$ which equals $$S^{(k)}=\Omega_{k}\otimes\left(\Sigma_{B_{[k-1]}}\right)^{-1},\quad k=2,\dots,K,$$ by Lemma 5. From Corollary 4, note that $\partial\widehat{\tau}_{w}/\partial({\Lambda}_{k,\mathcal{G}})_{k}\equiv h_{k}$ is fixed for $k=2,\dots,K$. Then, Lemma 7 yields the lower bound $$\operatorname{\mathrm{avar}}(\widehat{\tau}_{w})\geq\sum_{k=2}^{K}h_{k}^{\top}% S^{(k)}_{\mathcal{G}\cdot\mathcal{G}^{c}}h_{k},$$ where indices $\mathcal{G}$ and $\mathcal{G}^{c}$ correspond to the coordinates in $\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}}$ and $\widehat{\Lambda}^{\bar{\mathcal{G}}}_{k,\mathcal{G}^{c}}$ respectively. Indices $\mathcal{G}$ correspond to $\{(i,j):j\in B_{k},i\in\operatorname{Pa}(B_{k},\mathcal{G})\}$; by construction of $\bar{\mathcal{G}}$, indices $\mathcal{G}^{c}$ correspond to $\{(i,j):j\in B_{k},i\in\operatorname{Pa}(B_{k},\bar{\mathcal{G}})\setminus% \operatorname{Pa}(B_{k},\mathcal{G})\}$. Now, to abuse the notation slightly, we apply Lemma 8 with $$A=\Omega_{k},\quad C=(\Sigma_{B_{[k-1]}})^{-1},\quad\alpha=\emptyset,\quad% \beta=\operatorname{Pa}(B_{k},\bar{\mathcal{G}})\setminus\operatorname{Pa}(B_{% k},\mathcal{G}),$$ such that $$\alpha^{c}=\{1,\dots,|B_{k}|\},\quad\beta^{c}=\operatorname{Pa}(B_{k},\mathcal% {G}),\quad\gamma=\mathcal{G}^{c},\quad\gamma^{c}=\mathcal{G}.$$ We obtain $$S^{(k)}_{\mathcal{G}\cdot\mathcal{G}^{c}}=\Omega_{k}\otimes\left[(\Sigma_{B_{[% k-1]}})^{-1}\right]_{\beta^{c}\cdot\beta}=\Omega_{k}\otimes\left(\Sigma_{% \operatorname{Pa}(B_{k},\mathcal{G})}\right)^{-1},$$ where the last step follows from $(H^{-1})_{\beta^{c}\cdot\beta}=(H_{\beta^{c},\beta^{c}})^{-1}$ (Horn and Johnson, 2012, §0.8). ∎ 6.5 Efficiency of $\mathcal{G}$-regression estimator In Section 5, we have seen that when the errors are Gaussian, the $\mathcal{G}$-regression plugin is the MLE and hence achieves the efficiency bound. 
Here, we show that this is still true relative to the class of estimators we consider, even though the errors are not necessarily Gaussian. We verify that $\widehat{\tau}_{w}^{\mathcal{G}}=w^{\top}\widehat{\tau}_{AY}^{\mathcal{G}}$ achieves the efficiency bound above. Lemma 10. Let $\widehat{\tau}_{w}^{\mathcal{G}}:=w^{\top}\widehat{\tau}_{AY}^{\mathcal{G}}$, where $\widehat{\tau}_{AY}^{\mathcal{G}}$ is the $\mathcal{G}$-regression estimator (Definition 1). Under the same assumptions as Lemma 9, it holds that $\widehat{\tau}_{w}^{\mathcal{G}}\in\mathcal{T}_{w}$ and $\widehat{\tau}_{w}^{\mathcal{G}}$ achieves the efficiency bound in Eq. 29 under every $P\in\mathcal{P}_{\mathcal{G}}$. Proof. By Definition 1, $\widehat{\tau}_{w}^{\mathcal{G}}\in\mathcal{T}_{w}$. Further, note that $$\widehat{\tau}_{w}^{\mathcal{G}}=\tau_{w}\left((\widehat{\Lambda}_{k}^{% \mathcal{G}})_{k=2}^{K}\right),$$ where $(\widehat{\Lambda}_{k}^{\mathcal{G}})_{k=2}^{K}$ are the $\mathcal{G}$-regression coefficients in Eq. 18. Under any $P\in\mathcal{P}_{\mathcal{G}}$, we now verify that $\operatorname{\mathrm{avar}}\widehat{\tau}_{\mathcal{G}}$ matches the RHS of Eq. 29. By the delta method (Shorack, 2000, Sec 11.2), we have $$\begin{split}\displaystyle\operatorname{\mathrm{avar}}\widehat{\tau}_{w}^{% \mathcal{G}}&\displaystyle=\left(\partial\tau_{w}/\partial\operatorname{vec}\,% (\Lambda_{k})_{k=2}^{K}\right)^{\top}\operatorname{\mathrm{acov}}\left\{% \operatorname{vec}\,(\widehat{\Lambda}_{k}^{\mathcal{G}})_{k=2}^{K}\right\}% \left(\partial\tau_{w}/\partial\operatorname{vec}\,(\Lambda_{k})_{k=2}^{K}% \right)\\ &\displaystyle\stackrel{{\scriptstyle\text{(i)}}}{{=}}\sum_{k=2}^{K}\left(% \partial\tau_{w}/\partial\operatorname{vec}\Lambda_{k}\right)^{\top}% \operatorname{\mathrm{acov}}\left\{\operatorname{vec}\widehat{\Lambda}_{k}^{% \mathcal{G}}\right\}\left(\partial\tau_{w}/\partial\operatorname{vec}\Lambda_{% k}\right)\\ &\displaystyle\stackrel{{\scriptstyle\text{(ii)}}}{{=}}\sum_{k=2}^{K}\left(% \partial\tau_{w}/\partial\operatorname{vec}\Lambda_{k}\right)^{\top}\Omega_{k}% \otimes\left(\Sigma_{\operatorname{Pa}(B_{k},\mathcal{G})}\right)^{-1}\left(% \partial\tau_{w}/\partial\operatorname{vec}\Lambda_{k}\right),\end{split}$$ which equals the RHS of Eq. 29. The partial derivatives of $\tau_{w}(\cdot)$ are evaluated at $(\Lambda_{k})_{k=2}^{K}$. Step (i) follows from the block-diagonal structure of the asymptotic covariance of $\widehat{\Lambda}_{\mathcal{G}}$ given by Lemma 6, and (ii) follows from the same lemma. ∎ Finally, we complete the proof of our main result. Proof of Theorem 3. Fix any $P\in\mathcal{P}_{\mathcal{G}}$. It suffices to show that for every $w\in\mathbb{R}^{|A|}$, $$w^{\top}\operatorname{\mathrm{acov}}(\widehat{\tau}_{AY})w\geq w^{\top}% \operatorname{\mathrm{acov}}(\widehat{\tau}_{AY}^{\mathcal{G}})w,$$ or equivalently $$\operatorname{\mathrm{avar}}\left(w^{\top}\widehat{\tau}_{AY}\right)\geq% \operatorname{\mathrm{avar}}\left(w^{\top}\widehat{\tau}_{AY}^{\mathcal{G}}% \right).$$ This is true because for every $\widehat{\tau}_{AY}$ in consideration, $\widehat{\tau}_{w}:=w^{\top}\widehat{\tau}_{AY}\in\mathcal{T}_{w}$ and hence $\widehat{\tau}_{w}$ is subject to the lower bound in Lemma 9. Meanwhile, by Lemma 10, such a lower bound is achieved by $\widehat{\tau}_{w}^{\mathcal{G}}=w^{\top}\widehat{\tau}_{AY}^{\mathcal{G}}$. The proof is complete because the choice of $w$ is arbitrary. ∎ Remark 4. For Theorem 3 to hold, the independence error assumption Eq. 
3 of the underlying linear SEM cannot be relaxed to uncorrelated errors. This comes from inspecting the proof of Lemma 5 in Appendix A. To show that the $\bar{\mathcal{G}}$-regression coefficients are asymptotically independent across buckets, the independence of errors is used to establish that for $2\leq k<k^{\prime}\leq K$, $j\in B_{k}$, $j^{\prime}\in B_{k^{\prime}}$, $\operatorname{\mathrm{cov}}(\varepsilon_{j}X_{B_{[k-1]}},\varepsilon_{j^{\prime}}X_{B_{[k^{\prime}-1]}})=\bm{0}$. Suppose for now $\{\epsilon_{i}:i\in V\}$ are only uncorrelated and hence $\{\varepsilon_{B_{k}}:k=1,\dots,K\}$ are only uncorrelated across buckets. Further, suppose $B_{1}=\{1\},B_{2}=\{2\},B_{3}=\{3\}$ with $j=k=2$ and $j^{\prime}=k^{\prime}=3$. Then, we have $$\begin{split}\displaystyle\operatorname{\mathrm{cov}}\left(\varepsilon_{j}X_{B_{[k-1]}},\varepsilon_{j^{\prime}}X_{B_{[k^{\prime}-1]}}\right)&\displaystyle=\operatorname{\mathrm{cov}}\left(\varepsilon_{2}\varepsilon_{1},\varepsilon_{3}(\varepsilon_{1},\gamma_{12}\varepsilon_{1}+\varepsilon_{2})^{\top}\right)\\ &\displaystyle=\operatorname{\mathbb{E}}[\varepsilon_{1}\varepsilon_{2}(\varepsilon_{1}\varepsilon_{3},\gamma_{12}\varepsilon_{1}\varepsilon_{3}+\varepsilon_{2}\varepsilon_{3})^{\top}],\end{split}$$ which may be non-zero. 7 Numerical Results In this section, the finite-sample performance of $\mathcal{G}$-regression is evaluated against contending estimators. We use simulations and an in silico dataset for predicting expression levels in gene knockout experiments. All the numerical experiments were conducted with R v3.6, package pcalg v2.6 (Kalisch et al., 2012) and our package eff2 v0.1. 7.1 Simulations We compare the performance of $\mathcal{G}$-regression to several contending estimators under finite samples. We roughly follow the simulation setup of Henckel et al. (2019); Witte et al. (2020). First, we draw a random undirected graph from the Erdős-Rényi model with average degree $k$, where $k$ is drawn from $\{2,3,4,5\}$ uniformly at random. The graph is converted to a DAG $\mathcal{D}$ with a random causal ordering and the corresponding CPDAG $\mathcal{G}$ is recorded. Then we fix a linear SEM by drawing $\gamma_{ij}$ uniformly from $[-2,-0.1]\cup[0.1,2]$ and choosing the error distribution uniformly at random from the following:
1. $\epsilon_{i}\sim\mathcal{N}(0,v_{i})$ with $v_{i}\sim\mathrm{unif}(0.5,6)$,
2. $\epsilon_{i}/\sqrt{v_{i}}\sim t_{5}$ with $v_{i}\sim\mathrm{unif}(0.5,1.5)$,
3. $\epsilon_{i}\sim\text{logistic}(0,s_{i})$ with $s_{i}\sim\mathrm{unif}(0.4,0.7)$,
4. $\epsilon_{i}\sim\mathrm{unif}(-a_{i},a_{i})$ with $a_{i}\sim\mathrm{unif}(1.2,2.1)$.
We generate $n$ iid samples from the model (a simplified sketch of this data-generating step is given after the list of contending estimators below). Treatments $A$ of a fixed size are randomly selected from the set of vertices with non-empty descendant sets, and $Y$ is selected randomly from their descendants; the drawing is repeated until $\tau_{AY}$ is identified from $\mathcal{G}$ according to the criterion of Theorem 1. Finally, the data and graph $\mathcal{G}$ are provided to each estimator of $\tau_{AY}$. We compare to the following three estimators:
• adj.O: optimal adjustment estimator (Henckel et al., 2019),
• IDA.M: joint-IDA estimator based on modifying Cholesky decompositions (Nandy et al., 2017),
• IDA.R: joint-IDA estimator based on recursive regressions (Nandy et al., 2017).
They are implemented in R package pcalg. The two joint-IDA estimators use the parents of treatment variables to estimate a causal effect.
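The sketch below illustrates one replication of the data-generating step described above. It is a simplified, hypothetical rendering rather than our actual simulation code: the causal ordering $1<2<\dots<p$ is built in (instead of converting an undirected Erdős-Rényi graph with a random ordering), only the data matrix is produced (recording the CPDAG and selecting $A$, $Y$ are omitted), and the error family is drawn once per data set, a choice the description above leaves unspecified.

```r
# Hypothetical sketch of one simulation replication (data generation only).
set.seed(1)
p <- 20; n <- 1000
deg  <- sample(2:5, 1)                                  # expected degree
skel <- matrix(rbinom(p^2, 1, deg / (p - 1)), p, p)     # Erdos-Renyi edge indicators
skel[lower.tri(skel, diag = TRUE)] <- 0                 # keep i -> j only for i < j
Gamma <- skel * sample(c(-1, 1), p^2, TRUE) * runif(p^2, 0.1, 2)
fam <- sample(4, 1)                                     # error family, drawn once here
E <- sapply(seq_len(p), function(i) switch(fam,
  rnorm(n, 0, sqrt(runif(1, 0.5, 6))),                  # 1: Gaussian
  sqrt(runif(1, 0.5, 1.5)) * rt(n, df = 5),             # 2: scaled t_5
  rlogis(n, 0, runif(1, 0.4, 0.7)),                     # 3: logistic
  runif(n, -1, 1) * runif(1, 1.2, 2.1)))                # 4: uniform on (-a_i, a_i)
X <- E %*% solve(diag(p) - Gamma)                       # rows of X solve X = Gamma^T X + eps
```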
Both joint-IDA estimators reduce to the IDA estimator of Maathuis et al. (2009) when $|A|=1$. Admittedly, compared to $\mathcal{G}$-regression and adj.O, the joint-IDA estimators require less knowledge about the graph, namely only $\operatorname{Pa}(i)$ for each $i\in A$. For each estimator $\widehat{\tau}_{AY}$, we compute its squared error $\|\widehat{\tau}_{AY}-\tau_{AY}\|_{2}^{2}$. Dividing $\|\widehat{\tau}_{AY}-\tau_{AY}\|_{2}^{2}$ by the squared error of $\mathcal{G}$-regression, we obtain the relative squared error of each contending estimator. We consider $|A|\in\{1,2,3,4\}$, $|V|\in\{20,50,100\}$ and $n\in\{100,1000\}$; each configuration of $(|A|,|V|,n)$ is replicated 1,000 times. Fig. 7.1 shows the distributions of relative squared errors. In Table 7.1, we summarize the relative errors with their geometric mean and median. Our estimator dominates all the contending estimators in all cases, and the improvement gets larger as $|A|$ gets bigger. Even though adj.O achieves the minimal asymptotic variance among all adjustment estimators, its relative squared error can still be several times larger than that of our estimator. In general, the IDA estimators perform poorly. Moreover, the results in Table 7.1 are computed only from the replications where a contending estimator exists. As mentioned in the Introduction, unlike $\mathcal{G}$-regression, none of the contending estimators is guaranteed to exist for every identified effect under joint intervention (adj.O always exists for point interventions); see Table 7.2 for the percentages of instances that are not estimable by contending estimators, even though the effect is identified by Theorem 1 and hence estimable by $\mathcal{G}$-regression. In Appendix D, we report additional simulation results where the CPDAG is estimated with the greedy equivalence search algorithm (Chickering, 2002) and provided to the estimators. The improvements are more modest but still typically severalfold. 7.2 Predicting double knockouts in DREAM4 data The DREAM4 in silico network challenge dataset (Marbach et al., 2009b) provides a benchmark for evaluating the reverse engineering of gene regulation networks. Here we use the 5th Size10 dataset (Marbach et al., 2009a) as our example, which is a small network of 10 genes. Fig. 7.2 shows the true gene regulation network, which was constructed based on networks of living organisms. A stochastic differential equation model was used to generate the data under wild type (steady state), perturbed steady state and knockout interventions. A task in the challenge is to use data under wild type and perturbed steady state (both are observational data) to predict the steady state expression levels under 5 different joint interventions, each of which knocks out a pair of genes. For our purpose, we also use the true network as input. However, the true network contains one cycle (other networks in DREAM4 contain more than one cycle). In the following, we remove one edge in the cycle and provide the resulting DAG to the estimators. Necessarily, the causal DAG is misspecified. Results are reported under 4 different edge removals. Unfortunately, the wild type data consists of only one sample. To estimate the observational covariance, we use the perturbed steady state data, which consists of 5 segments of time series. A sample covariance is computed from each segment, and the final estimate is taken as their average.
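For concreteness, the sketch below shows a minimal version of the $\mathcal{G}$-regression plug-in used in the comparisons of this section, assuming the bucket decomposition, the external parent sets and the set $D$ have already been read off the graph. The function and argument names are ours and do not reflect the interface of package eff2; the covariance argument plays the role of $\widehat{\Sigma}^{(n)}$ in Eq. 18 and could be, for example, the averaged segment covariance just described.

```r
# Minimal sketch of the G-regression plug-in, Eqs. 18 and 19 (names are illustrative).
# Sigma: estimate of E[X X^T];  B: list of buckets;  PaB: list of external parent sets;
# A: treatment vertices;  D: An(Y, G_{V \ A});  Y: outcome vertex (Y must lie in D).
g_regression_tau <- function(Sigma, B, PaB, A, D, Y) {
  p <- nrow(Sigma)
  Lambda <- matrix(0, p, p)
  for (k in seq_along(B)) {                       # Eq. 18, one block per bucket
    pa <- PaB[[k]]
    if (length(pa) > 0)
      Lambda[pa, B[[k]]] <- solve(Sigma[pa, pa, drop = FALSE],
                                  Sigma[pa, B[[k]], drop = FALSE])
  }
  ID <- diag(length(D))                           # Eq. 19: plug Lambda-hat into Eq. 14
  Lambda[A, D, drop = FALSE] %*% solve(ID - Lambda[D, D])[, which(D == Y)]
}
```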
For a double knockout of genes $(i,j)$, we use $\mathcal{G}$-regression to estimate the joint-intervention effect of $A=(i,j)$ on every other gene. The effect is identified because the DAG is given. For gene $k$, let $s_{k}$ and $s_{k}^{(ij)}$ respectively denote its expression level under wild type and double knockout of genes $(i,j)$. The expression level under double knockout is predicted as $$\widehat{s}_{k}^{(ij)}=\begin{cases}s_{k}-(s_{i},s_{j})^{\top}\widehat{\tau}_{ij,k},&k\notin\{i,j\}\\ 0,&k\in\{i,j\}\end{cases}.$$ The performance is evaluated with normalized squared error $$\mathcal{E}=\frac{\sum_{(i,j)\in\mathcal{A}}\sum_{k=1}^{10}(\widehat{s}_{k}^{(ij)}-s_{k}^{(ij)})^{2}}{\sum_{(i,j)\in\mathcal{A}}\sum_{k=1}^{10}(s_{k}^{(ij)})^{2}},$$ where $\mathcal{A}=\{(6,8),(7,8),(8,10),(8,5),(8,9)\}$ consists of 5 double knockouts available in the dataset. For comparison, we also evaluate the performance of adj.O (optimal adjustment, Henckel et al. (2019)) and IDA.R (joint-IDA based on recursive regressions, Nandy et al. (2017)); IDA.R is chosen because it outperforms IDA.M according to Section 7.1. Unfortunately, adj.O is not able to estimate the effect on every $k$, so a modified metric $\mathcal{E}^{\ast}$ is computed by summing only over the estimable $k$’s; the same metric $\mathcal{E}^{\ast}$ of $\mathcal{G}$-regression is also computed for comparison. As a baseline, we also compute $\mathcal{E}$ from naively estimating $s_{k}^{(ij)}$ with just $s_{k}$. Table 7.3 reports the results, where the column ‘$\nexists$ adj.O’ lists the percentage of effects not estimable by the adjustment estimator. In almost all cases, $\mathcal{G}$-regression dominates all the contending estimators. In this example, even though both the causal graph and the linear SEM are misspecified, our estimator still provides some practical benefit. 8 Discussion We have proposed $\mathcal{G}$-regression based on recursive least squares to estimate a total causal effect from observational data, under linearity and causal sufficiency assumptions. $\mathcal{G}$-regression is applicable to estimating every identified total effect, under either point intervention or joint intervention. Further, via a new semiparametric efficiency theory, we have shown that the estimator achieves the efficiency bound within a restricted, yet reasonably large, class of estimators, including covariate adjustment and other regular estimators based on the sample covariance. To construct confidence intervals and conduct hypothesis tests, the bootstrap can easily be applied to estimate the asymptotic covariance of our estimator. This is implemented in R package eff2. One may wonder, within the class of all regular estimators, if a (globally) semiparametric efficient estimator can be constructed for this problem. Ignoring the conditional independence constraints Eq. 12 in the blockwise error distributions, the model Eq. 21 is a generalized, multivariate location-shift regression model; see also Tsiatis (2006, §5.1) and Bickel et al. (1993, §4.3). While it is theoretically possible to construct such an estimator by first estimating the error score and then solving the associated estimating equations (Bickel et al., 1993, §7.8), the resulting estimator tends to be too complicated and unstable for practical purposes unless $n$ is very large (Tsiatis, 2006, page 111). The proposal of Tsiatis (2006) is to instead develop a locally efficient estimator by postulating an error distribution.
Our result justifies postulating Gaussian errors if only the first two sample moments are utilized. Another possibility for handling the nuisance multivariate error distribution is via distribution-free multivariate ranks, namely R-estimation along the lines of Hallin et al. (2019), which we leave for future work. We have seen that the conditional independence constraints in Eq. 12 play no role for the restricted class of estimators considered; this feature holds under the restrictive property (Corollary 1). Recently, Rotnitzky and Smucler (2019) found that when estimating the effect of a point intervention, under certain graphical conditions, covariate adjustment can still achieve the semiparametric efficiency bound, although it ignores the additional conditional independences obeyed by the observed distribution. It is thus worth investigating whether the aforementioned phenomenon continues to hold in semiparametric estimation beyond the linear SEM. Appendix A Proofs for asymptotic efficiency A.1 Proof of Lemma 4 Proof. For simplicity, we drop the superscripts in $\widehat{\Sigma}^{(n)}$ and $\widehat{\lambda}^{(n)}$. Since $j\in B_{k}$ and $\operatorname{Pa}(B_{k},\mathcal{G})\subseteq C\subseteq\operatorname{Pa}(B_{k},\bar{\mathcal{G}})$, we have $$\begin{split}\widehat{\lambda}_{C,j}-\lambda_{C,j}&=(\widehat{\Sigma}_{C})^{-1}\widehat{\Sigma}_{C,j}-(\Sigma_{C})^{-1}\Sigma_{C,j}\\ &=\left(\Sigma_{C}+\widehat{\Sigma}_{C}-\Sigma_{C}\right)^{-1}\left(\widehat{\Sigma}_{C,j}-\Sigma_{C,j}\right)+\left((\widehat{\Sigma}_{C})^{-1}-(\Sigma_{C})^{-1}\right)\Sigma_{C,j}.\end{split}$$ We compute the two terms separately. The first term becomes $$\begin{split}\left(\Sigma_{C}+\widehat{\Sigma}_{C}-\Sigma_{C}\right)^{-1}\left(\widehat{\Sigma}_{C,j}-\Sigma_{C,j}\right)&=\left(\Sigma_{C}+O_{p}(n^{-1/2})\right)^{-1}\left(\widehat{\Sigma}_{C,j}-\Sigma_{C,j}\right)\\ &=(\Sigma_{C})^{-1}\left(\widehat{\Sigma}_{C,j}-\Sigma_{C,j}\right)+O_{p}(n^{-1}),\end{split}$$ where we used the fact that $\Sigma_{C}$ is positive definite (Lemma 1) and that $\|\widehat{\Sigma}_{C,j}-\Sigma_{C,j}\|=O_{p}(n^{-1/2})$ and $\|\widehat{\Sigma}_{C}-\Sigma_{C}\|_{2}=O_{p}(n^{-1/2})$ by the central limit theorem.
In the second term, $$\begin{split}\displaystyle(\widehat{\Sigma}_{C})^{-1}-(\Sigma_{C})^{-1}&% \displaystyle=\left(\Sigma_{C}+\widehat{\Sigma}_{C}-\Sigma_{C}\right)^{-1}-(% \Sigma_{C})^{-1}\\ &\displaystyle=\left[I-\left(I-(\Sigma_{C})^{-1}\widehat{\Sigma}_{C}\right)% \right]^{-1}(\Sigma_{C})^{-1}-(\Sigma_{C})^{-1}.\end{split}$$ Since $\|I-(\Sigma_{C})^{-1}\widehat{\Sigma}_{C}\|_{2}=O_{p}(n^{-1/2})$, using Neumann series $(I-H)^{-1}=I+H+H^{2}+\dots$ for $H=I-(\Sigma_{C})^{-1}\widehat{\Sigma}_{C}$ with $\|H\|_{2}\rightarrow_{p}0<1$, we have $$\begin{split}\displaystyle(\widehat{\Sigma}_{C})^{-1}-(\Sigma_{C})^{-1}&% \displaystyle=\left[I+H+O_{p}(n^{-1})\right](\Sigma_{C})^{-1}-(\Sigma_{C})^{-1% }\\ &\displaystyle=H(\Sigma_{C})^{-1}+O_{p}(n^{-1})\\ &\displaystyle=\left[I-(\Sigma_{C})^{-1}\widehat{\Sigma}_{C}\right](\Sigma_{C}% )^{-1}+O_{p}(n^{-1}).\end{split}$$ Combining the two terms, we obtain $$\begin{split}\displaystyle\widehat{\lambda}_{C,j}-\lambda_{C,j}&\displaystyle=% (\Sigma_{C})^{-1}\left(\widehat{\Sigma}_{C,j}-\Sigma_{C,j}\right)+\left[I-(% \Sigma_{C})^{-1}\widehat{\Sigma}_{C}\right](\Sigma_{C})^{-1}\Sigma_{C,j}+O_{p}% (n^{-1})\\ &\displaystyle=(\Sigma_{C})^{-1}\widehat{\Sigma}_{C,j}-(\Sigma_{C})^{-1}\Sigma% _{C,j}+(\Sigma_{C})^{-1}\Sigma_{C,j}-(\Sigma_{C})^{-1}\widehat{\Sigma}_{C}(% \Sigma_{C})^{-1}\Sigma_{C,j}+O_{p}(n^{-1})\\ &\displaystyle\stackrel{{\scriptstyle\text{(i)}}}{{=}}(\Sigma_{C})^{-1}\left(% \widehat{\Sigma}_{C,j}-\widehat{\Sigma}_{C}\lambda_{C,j}\right)+O_{p}(n^{-1})% \\ &\displaystyle=\frac{1}{n}\sum_{i=1}^{n}(\Sigma_{C})^{-1}\left[X_{j}^{(i)}X_{C% }^{(i)}-X_{C}^{(i)}X_{C}^{(i)\top}\lambda_{C,j}\right]+O_{p}(n^{-1})\\ &\displaystyle=\frac{1}{n}\sum_{i=1}^{n}(\Sigma_{C})^{-1}X_{C}^{(i)}\left(X_{j% }^{(i)}-\lambda_{C,j}^{\top}X_{C}^{(i)}\right)+O_{p}(n^{-1})\\ &\displaystyle\stackrel{{\scriptstyle\text{(ii)}}}{{=}}\frac{1}{n}\sum_{i=1}^{% n}(\Sigma_{C})^{-1}X_{C}^{(i)}\left(X_{j}^{(i)}-\lambda_{\operatorname{Pa}(B_{% k},\mathcal{G}),j}^{\top}X_{\operatorname{Pa}(B_{k},\mathcal{G})}^{(i)}\right)% +O_{p}(n^{-1})\\ &\displaystyle=\frac{1}{n}\sum_{i=1}^{n}(\Sigma_{C})^{-1}X_{C}^{(i)}% \varepsilon_{j}^{(i)}+O_{p}(n^{-1}),\end{split}$$ where (i) uses $\lambda_{C,j}=(\Sigma_{C})^{-1}\Sigma_{C,j}$ and (ii) follows from Proposition 1 and $\operatorname{Pa}(B_{k},\mathcal{G})\subseteq C\subseteq\operatorname{Pa}(B_{k% },\bar{\mathcal{G}})$. ∎ A.2 Proof of Lemma 5 Proof. For each $k=2,\dots,K$, note that for $C=\operatorname{Pa}(B_{k},\bar{\mathcal{G}})=B_{[k-1]}$, $\operatorname{vec}\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}}=(\widehat{\lambda}% ^{(n)}_{C,j})_{j\in B_{k}}$ by concatenation. By Lemma 4, we have the following asymptotic linear expansion $$\widehat{\lambda}_{B_{[k-1]},j}^{(n)}-\lambda_{B_{[k-1]},j}=\frac{1}{n}\sum_{i% =1}^{n}\left(\Sigma_{B_{[k-1]}}\right)^{-1}X_{B_{[k-1]}}^{(i)}\varepsilon_{j}^% {(i)}+O_{p}(n^{-1}).$$ (30) By the central limit theorem, $$\sqrt{n}\begin{pmatrix}\operatorname{vec}(\widehat{\Lambda}_{2}^{\bar{\mathcal% {G}}}-\Lambda_{2})\\ \vdots\\ \operatorname{vec}(\widehat{\Lambda}_{K}^{\bar{\mathcal{G}}}-\Lambda_{K})\end{pmatrix}$$ converges to a centered multivariate normal distribution. Further, we claim that the asymptotic covariance must be block-diagonal according to $k=2,\dots,K$. To see this, take $k<k^{\prime}$, $j\in B_{k}$, $j^{\prime}\in B_{k^{\prime}}$ and let $C=B_{[k-1]}$, $C^{\prime}=B_{[k^{\prime}-1]}$. Using Eq. 
30, we have $$\begin{split}&\quad\lim_{n\rightarrow\infty}n\operatorname{\mathbb{E}}\left(\widehat{\lambda}_{C,j}^{(n)}-\lambda_{C,j}\right)\left(\widehat{\lambda}_{C^{\prime},j^{\prime}}^{(n)}-\lambda_{C^{\prime},j^{\prime}}\right)^{\top}\\ &=(\Sigma_{C})^{-1}\operatorname{\mathrm{cov}}(\varepsilon_{j}X_{C},\varepsilon_{j^{\prime}}X_{C^{\prime}})(\Sigma_{C^{\prime}})^{-1}\\ &=(\Sigma_{C})^{-1}\left\{\operatorname{\mathbb{E}}\left[\varepsilon_{j}\varepsilon_{j^{\prime}}X_{C}X_{C^{\prime}}^{\top}\right]-\operatorname{\mathbb{E}}\left[\varepsilon_{j}X_{C}\right]\operatorname{\mathbb{E}}\left[\varepsilon_{j^{\prime}}X_{C^{\prime}}^{\top}\right]\right\}(\Sigma_{C^{\prime}})^{-1}.\end{split}$$ In the expression above, because $\varepsilon_{B_{k}}\perp\!\!\!\perp X_{B_{[k-1]}}$ and $\varepsilon_{B_{k^{\prime}}}\perp\!\!\!\perp X_{B_{[k^{\prime}-1]}}$ by Corollary 2 and $j\in B_{k}$, $j^{\prime}\in B_{k^{\prime}}$ for $k<k^{\prime}$, we have $\operatorname{\mathbb{E}}\left[\varepsilon_{j}\varepsilon_{j^{\prime}}X_{C}X_{C^{\prime}}^{\top}\right]=\operatorname{\mathbb{E}}\varepsilon_{j^{\prime}}\operatorname{\mathbb{E}}\left[\varepsilon_{j}X_{C}X_{C^{\prime}}^{\top}\right]=\bm{0}$, $\operatorname{\mathbb{E}}\varepsilon_{j}X_{C}=\bm{0}$ and $\operatorname{\mathbb{E}}\varepsilon_{j^{\prime}}X_{C^{\prime}}=\bm{0}$. It follows that the display above evaluates to $\bm{0}$ and hence the asymptotic covariance matrix is block-diagonal. It remains to be shown that $\operatorname{\mathrm{acov}}\operatorname{vec}(\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}}-\Lambda_{k})=\Omega_{k}\otimes(\Sigma_{B_{[k-1]}})^{-1}$ for $k=2,\dots,K$. Fix $k$, take any two distinct $j,j^{\prime}\in B_{k}$ and let $C=B_{[k-1]}$. Again using Eq. 30, we have $$\operatorname{\mathrm{acov}}\begin{pmatrix}\widehat{\lambda}^{(n)}_{C,j}\\ \widehat{\lambda}^{(n)}_{C,j^{\prime}}\end{pmatrix}=\begin{pmatrix}H&F\\ F^{\top}&D\end{pmatrix},$$ where $$\begin{split}H&=(\Sigma_{C})^{-1}\operatorname{\mathrm{cov}}(\varepsilon_{j}X_{C},\varepsilon_{j}X_{C})(\Sigma_{C})^{-1}=\operatorname{\mathrm{var}}(\varepsilon_{j})(\Sigma_{B_{[k-1]}})^{-1},\\ F&=(\Sigma_{C})^{-1}\operatorname{\mathrm{cov}}(\varepsilon_{j}X_{C},\varepsilon_{j^{\prime}}X_{C})(\Sigma_{C})^{-1}=\operatorname{\mathrm{cov}}(\varepsilon_{j},\varepsilon_{j^{\prime}})(\Sigma_{B_{[k-1]}})^{-1},\\ D&=(\Sigma_{C})^{-1}\operatorname{\mathrm{cov}}(\varepsilon_{j^{\prime}}X_{C},\varepsilon_{j^{\prime}}X_{C})(\Sigma_{C})^{-1}=\operatorname{\mathrm{var}}(\varepsilon_{j^{\prime}})(\Sigma_{B_{[k-1]}})^{-1}.\end{split}$$ Noting that $\Omega_{k}=\operatorname{\mathrm{cov}}(\varepsilon_{B_{k}})$ and $\operatorname{vec}\widehat{\Lambda}_{k}^{\bar{\mathcal{G}}}=(\widehat{\lambda}^{(n)}_{C,j})_{j\in B_{k}}$, the result then follows from comparing the expressions above to the definition of the Kronecker product for every pair $j,j^{\prime}\in B_{k}$. ∎ A.3 Proof of Lemma 6 Proof. Note that by the restrictive property of $\mathcal{G}$ (Corollary 1), we have $\operatorname{vec}\widehat{\Lambda}_{k}^{\mathcal{G}}=\left(\widehat{\lambda}^{(n)}_{\operatorname{Pa}(B_{k},\mathcal{G}),j}\right)_{j\in B_{k}}$ for $k=2,\dots,K$.
Using Lemma 4 with $C=\operatorname{Pa}(B_{k},\mathcal{G})$, we have the following asymptotic linear expansion $$\widehat{\lambda}_{\operatorname{Pa}(B_{k},\mathcal{G}),j}^{(n)}-\lambda_{\operatorname{Pa}(B_{k},\mathcal{G}),j}=\frac{1}{n}\sum_{i=1}^{n}\left(\Sigma_{\operatorname{Pa}(B_{k},\mathcal{G})}\right)^{-1}X_{\operatorname{Pa}(B_{k},\mathcal{G})}^{(i)}\varepsilon_{j}^{(i)}+O_{p}(n^{-1}).$$ (31) The rest of the computation follows similarly to the proof of Lemma 5. ∎ A.4 Proof of Lemma 7 Proof. Since $S\in\mathbb{R}_{\text{PD}}^{{n}\times{n}}$, by completing the square, we have $$\begin{split}x^{\top}Sx&=x_{A}^{\top}S_{A,A}x_{A}+x_{B}^{\top}S_{B,A}x_{A}+x_{A}^{\top}S_{A,B}x_{B}+x_{B}^{\top}S_{B,B}x_{B}\\ &\qquad-x_{A}^{\top}S_{A,B}S_{B,B}^{-1}S_{B,A}x_{A}+x_{A}^{\top}S_{A,B}S_{B,B}^{-1}S_{B,A}x_{A}\\ &=x_{A}^{\top}(S_{A,A}-S_{A,B}S_{B,B}^{-1}S_{B,A})x_{A}+(x_{B}+S_{B,B}^{-1}S_{B,A}x_{A})^{\top}S_{B,B}(x_{B}+S_{B,B}^{-1}S_{B,A}x_{A})\\ &\geq x_{A}^{\top}(S_{A,A}-S_{A,B}S_{B,B}^{-1}S_{B,A})x_{A}=x_{A}^{\top}S_{A\cdot B}x_{A},\end{split}$$ where the equality holds if and only if $x_{B}=-S_{B,B}^{-1}S_{B,A}x_{A}$. ∎ Appendix B Proofs for graphical results B.1 Proof of Lemma 1 Proof. Let the undirected path between $j$ and $k$ be $p=\langle j=V_{1},\dots,V_{l}=k\rangle$ with $l>1$. First note that $i$ is not on $p$ because there is no undirected path between $i$ and $j$ in $\mathcal{G}$. Further, since $i\rightarrow j-V_{2}$ is in $\mathcal{G}$, by Meek rules R1 and R2 (Fig. C.1 in Appendix C), $i-V_{2}$ or $i\rightarrow V_{2}$ is in $\mathcal{G}$. Since, by assumption, there is no undirected path from $i$ to $j$ in $\mathcal{G}$, $i-V_{2}\notin U$. Hence, $i\rightarrow V_{2}\in E$, and if $l=2$, the statement of the lemma holds. If $l>2$, we can apply the above reasoning iteratively until we obtain $i\rightarrow V_{l}\in E$. ∎ B.2 Proof of Lemma 2 Proof. Let $l\in D_{k}$. Since $D_{k}\subseteq B_{k}$, $l\in B_{k}$. Then by Corollary 1, we have that $\operatorname{Pa}(B_{k})=\operatorname{Pa}(l)\setminus B_{k}$. Therefore, $\operatorname{Pa}(B_{k})\subseteq\cup_{j\in D_{k}}\operatorname{Pa}(j)\setminus B_{k}$ and furthermore, $\operatorname{Pa}(B_{k})\subseteq\cup_{j\in D_{k}}\operatorname{Pa}(j)\setminus D_{k}=\operatorname{Pa}(D_{k})$. Hence, it suffices to show $\operatorname{Pa}(D_{k})\subseteq\operatorname{Pa}(B_{k})$. We prove $\operatorname{Pa}(D_{k})\subseteq\operatorname{Pa}(B_{k})$ by contradiction. Suppose there exists $j\in\operatorname{Pa}(D_{k})\setminus\operatorname{Pa}(B_{k})$; since such a $j$ is a parent of some node in $D_{k}\subseteq B_{k}$ but is not in $\operatorname{Pa}(B_{k})$, it must satisfy $j\in B_{k}$. By definition $D=\operatorname{An}(Y,\mathcal{G}_{V\setminus A})$ and $D=\cup_{r=1}^{K}D_{r}$. Therefore, if $k=1$, then $j\in A$; if $k>1$, $j$ must be contained in $\cup_{r=1}^{k-1}D_{r}$ or in $A$. If $j\in A$, this leads to a contradiction with Lemma 1 in Appendix C. Suppose $k>1$ and $j\in\cup_{r=1}^{k-1}D_{r}$. Because $\cup_{r=1}^{k-1}D_{r}\subseteq\cup_{r=1}^{k-1}B_{r}$ and the buckets $\{B_{1},\dots,B_{K}\}$ are disjoint, we have $(\cup_{r=1}^{k-1}D_{r})\cap B_{k}=\emptyset$. However, this contradicts that $j\in B_{k}$. ∎ B.3 Proof of Proposition 3 Proof. By construction, the undirected component of $\bar{\mathcal{G}}$ remains the same as that of $\mathcal{G}$. Hence, $\bar{\mathcal{G}}$ has the same bucket decomposition as $\mathcal{G}$. We only need to show that $\bar{\mathcal{G}}$ is an MPDAG.
It is enough to show that the edge orientations in $\bar{\mathcal{G}}$ are closed under rules R1–R4 of Meek (1995), which are displayed in Fig. C.1 of Appendix C. Note that since $\mathcal{G}$ is an MPDAG, it is closed under R1–R4. So if any of the left-hand-side graphs in Figure C.1 is an induced subgraph of $\bar{\mathcal{G}}$, then at least one of the directed edges in that induced subgraph must have been added in the construction of $\bar{\mathcal{G}}$. Since the construction of $\bar{\mathcal{G}}$ does not involve adding directed edges within a bucket, the left-hand sides of rules R3 and R4 in Figure C.1 cannot appear as induced subgraphs of $\bar{\mathcal{G}}$. Hence, the edge orientations in $\bar{\mathcal{G}}$ are closed under rules R3 and R4. Consider the left-hand side of rule R1 in Figure C.1, $A\rightarrow B-C$, for some $A,B,C\in V$. For $A\rightarrow B-C$ to be an induced subgraph of $\bar{\mathcal{G}}$, $A\rightarrow B$ must have been added in the construction of $\bar{\mathcal{G}}$ from $\mathcal{G}$. Hence, $A$ and $B$ would need to be in different buckets of $V$ in $\mathcal{G}$. Since $B$ and $C$ are in the same bucket because of the edge $B-C$, $A\rightarrow C$ would also be added to $\mathcal{G}$ in the construction of $\bar{\mathcal{G}}$. Hence, $A\rightarrow B-C$ will also not appear as an induced subgraph of $\bar{\mathcal{G}}$, and the edge orientations in $\bar{\mathcal{G}}$ are also closed under R1. Consider the left-hand side of R2 in Figure C.1, and suppose for a contradiction that $A\rightarrow B\rightarrow C$ and $A-C$ form an induced subgraph of $\bar{\mathcal{G}}$ for some $A,B,C\in V$. Then $A\rightarrow B$, $B\rightarrow C$, or both $A\rightarrow B$ and $B\rightarrow C$, were added to $\mathcal{G}$ in the construction of $\bar{\mathcal{G}}$. Because of $A-C$, $A$ and $C$ are in the same bucket, say $B_{i}$ for some $i\in\{1,\dots,K\}$, in $\mathcal{G}$. Also, let $B\in B_{j}$. Because only directed edges between buckets are added, $i\neq j$. Now, $A\rightarrow B$ and $B\rightarrow C$ cannot both be added to $\mathcal{G}$ to construct $\bar{\mathcal{G}}$, because that would imply that $i<j$ and $j<i$. By R1, $B\rightarrow C-A$ cannot be an induced subgraph of the MPDAG $\mathcal{G}$, so $A\rightarrow B$ alone also could not be added to $\mathcal{G}$. Therefore, $B\rightarrow C$ alone was added to $\mathcal{G}$. But $C-A\rightarrow B$ is an induced subgraph of $\mathcal{G}$, so $i<j$, which contradicts the direction of $B\rightarrow C$. ∎ Appendix C Graphical preliminaries Graphs, vertices, edges A graph $\mathcal{G}=(V,F)$ consists of a set of vertices (variables) $V$ and a set of edges $F$. The graphs we consider are allowed to contain directed ($\rightarrow$) and undirected ($-$) edges and at most one edge between any two vertices. We can thus partition the set of edges $F$ into a set of directed edges $E$ and a set of undirected edges $U$, and denote the graph $\mathcal{G}=(V,F)$ as $\mathcal{G}=(V,E,U)$. The corresponding undirected graph is simply $\mathcal{G}_{U}=(V,\emptyset,U)$. Subgraphs and skeleton An induced subgraph $\mathcal{G}_{{V^{\prime}}}=(V^{\prime},F^{\prime})$ of $\mathcal{G}=(V,F)$ consists of $V^{\prime}\subseteq V$ and $F^{\prime}\subseteq F$, where $F^{\prime}$ are all edges in $F$ between vertices in $V^{\prime}$. The skeleton of a graph $\mathcal{G}=(V,F)$ is the undirected graph $(V,F^{\prime})$, where $F^{\prime}$ consists of the undirected versions of all edges in $F$. Paths.
Directed, undirected, causal, non-causal, proper paths A path $p$ from $i$ to $j$ in $\mathcal{G}$ is a sequence of distinct vertices $p=\langle i,\dots,j\rangle$ in which every pair of successive vertices is adjacent. A path consisting of undirected edges is an undirected path. A directed path from $i$ to $j$ is a path from $i$ to $j$ in which all edges are directed towards $j$, that is, $i\to\dots\to j$. We will use causal path instead of directed path when talking about causal graphs. Let $p=\langle v_{1},\dots,v_{k}\rangle$, $k>1$, be a path in $\mathcal{G}$; then $p$ is a possibly directed path (possibly causal path) if no edge $v_{i}\leftarrow v_{j}$, $1\leq i<j\leq k$, is in $\mathcal{G}$. Otherwise, $p$ is a non-causal path in $\mathcal{G}$ (see Definition 3.1 and Lemma 3.2 of Perković et al., 2017). A path from ${A}$ to ${Y}$ is proper (w.r.t. ${A}$) if only its first vertex is in ${A}$. Directed cycles A directed path from $i$ to $j$ together with the edge $j\to i$ forms a directed cycle. Colliders, shields and definite status paths If a path $p$ contains $i\rightarrow j\leftarrow k$ as a subpath, then $j$ is a collider on $p$. A path $\langle i,j,k\rangle$ is an (un)shielded triple if $i$ and $k$ are (not) adjacent. A path is unshielded if all successive triples on the path are unshielded. A node $v_{j}$ is a definite non-collider on a path $p$ if there is at least one edge out of $v_{j}$ on $p$, or if $v_{j-1}-v_{j}-v_{j+1}$ is a subpath of $p$ and $\langle v_{j-1},v_{j},v_{j+1}\rangle$ is an unshielded triple. A node is of definite status on a path if it is a collider, a definite non-collider or an endpoint on the path. A path $p$ is of definite status if every node on $p$ is of definite status. Subsequences and subpaths A subsequence of a path $p$ is obtained by deleting some nodes from $p$ without changing the order of the remaining nodes. A subsequence of a path is not necessarily a path. For a path $p=\langle v_{1},v_{2},\dots,v_{m}\rangle$, the subpath from $v_{i}$ to $v_{k}$ ($1\leq i\leq k\leq m$) is the path $p(v_{i},v_{k})=\langle v_{i},v_{i+1},\dots,v_{k}\rangle$. Ancestral relations If $i\to j$, then $i$ is a parent of $j$, and $j$ is a child of $i$. If there is a causal path from $k$ to $l$, then $k$ is an ancestor of $l$, and $l$ is a descendant of $k$. If there is a possibly causal path from $k$ to $l$, then $k$ is a possible ancestor of $l$, and $l$ is a possible descendant of $k$. We use the convention that every vertex is a descendant, ancestor, possible ancestor and possible descendant of itself. The sets of parents, ancestors, descendants and possible descendants of $i$ in $\mathcal{G}$ are denoted by $\operatorname{Pa}(i,\mathcal{G})$, $\operatorname{An}(i,\mathcal{G})$, $\operatorname{De}(i,\mathcal{G})$ and $\operatorname{PossDe}(i,\mathcal{G})$, respectively. For a set of vertices ${A}$, we let $\operatorname{Pa}({A},\mathcal{G})=(\cup_{i\in{A}}\operatorname{Pa}(i,\mathcal{G}))\setminus{A}$, whereas $\operatorname{An}({A},\mathcal{G})=\cup_{i\in{A}}\operatorname{An}(i,\mathcal{G})$, $\operatorname{De}({A},\mathcal{G})=\cup_{i\in{A}}\operatorname{De}(i,\mathcal{G})$ and $\operatorname{PossDe}({A},\mathcal{G})=\cup_{i\in{A}}\operatorname{PossDe}(i,\mathcal{G})$. DAGs, PDAGs A directed graph contains only directed edges. A partially directed graph may contain both directed and undirected edges. A directed graph without directed cycles is a directed acyclic graph $(\operatorname{DAG})$.
A partially directed acyclic graph $(\operatorname{PDAG})$ is a partially directed graph without directed cycles. Blocking and d-separation (See Definition 1.2.3 of Pearl (2009) and Lemma C.1 of Henckel et al. (2019).) Let $Z$ be a set of vertices in a PDAG $\mathcal{G}=(V,E,U)$. A definite status path $p$ is blocked by ${Z}$ if (i) $p$ contains a non-collider that is in ${Z}$, or (ii) $p$ contains a collider $C$ such that no descendant of $C$ is in ${Z}$. A definite status path that is not blocked by ${Z}$ is open given ${Z}$. If ${A},{B}$ and ${Z}$ are three pairwise disjoint sets of nodes in a PDAG $\mathcal{G}=(V,E,U)$, then ${Z}$ d-separates ${A}$ from ${B}$ in $\mathcal{G}$ if ${Z}$ blocks every definite status path between any node in ${A}$ and any node in ${B}$ in $\mathcal{G}$. CPDAGs, MPDAGs Several DAGs can encode the same d-separation relationships. Such DAGs form a Markov equivalence class, which is uniquely represented by a completed partially directed acyclic graph (CPDAG) (Meek, 1995; Andersson et al., 1997). A PDAG $\mathcal{G}=(V,E,U)$ is a maximally oriented PDAG (MPDAG) if it is closed under the orientation rules R1–R4 of Meek (1995), presented in Figure C.1. An MPDAG can alternatively be defined as any PDAG that does not contain the left-hand-side graph of any orientation rule as an induced subgraph. Both DAGs and CPDAGs are types of MPDAGs (Meek, 1995). Background knowledge and constructing MPDAGs A PDAG $\mathcal{G}^{\prime}$ is represented by another PDAG $\mathcal{G}$ (equivalently, $\mathcal{G}$ represents $\mathcal{G}^{\prime}$) if $\mathcal{G}^{\prime}$ and $\mathcal{G}$ have the same adjacencies and unshielded colliders and every directed edge $i\rightarrow j$ in $\mathcal{G}$ is also in $\mathcal{G}^{\prime}$. Let ${R}$ be a set of directed edges representing background knowledge. Algorithm 1 of Meek (1995) describes how to incorporate background knowledge ${R}$ into an MPDAG $\mathcal{G}$. If Algorithm 1 does not return a FAIL, then it returns a new MPDAG $\mathcal{G}^{\prime}$ that is represented by $\mathcal{G}$. Background knowledge ${R}$ is consistent with an MPDAG $\mathcal{G}$ if and only if Algorithm 1 does not return a FAIL (Meek, 1995). Remark 5. The MPDAG output by ConstructMPDAG($\mathcal{G}$, ${R}$) is the same regardless of the ordering of edges in ${R}$. This stems from the fact that the orientation rules of Meek (1995) are necessary and sufficient for the construction of an MPDAG given a set of adjacencies and unshielded colliders. $\mathcal{G}$ and $[\mathcal{G}]$ If $\mathcal{G}$ is an MPDAG, then $[\mathcal{G}]$ denotes the set of all DAGs represented by $\mathcal{G}$. Causal and partial causal ordering of vertices A total ordering, $<$, of vertices ${V^{\prime}}\subseteq{V}$ is consistent with a DAG $\mathcal{D}=({V,E,\emptyset})$, and is called a causal ordering of ${V^{\prime}}$, if for every pair $i,j\in{V^{\prime}}$ such that $i<j$ and $i$ and $j$ are adjacent in $\mathcal{D}$, the edge $i\rightarrow j$ is in $\mathcal{D}$. There can be more than one causal ordering of ${V^{\prime}}$ in a DAG $\mathcal{D}=({V,E,\emptyset})$. For example, in the DAG $i\leftarrow j\rightarrow k$, both orderings $j<i<k$ and $j<k<i$ are consistent. Since an MPDAG may contain undirected edges, there is generally no unique causal ordering of vertices in an MPDAG.
Instead, we define a partial causal ordering, $<$, of a vertex set $V^{\prime}$, $V^{\prime}\subseteq V$, in an MPDAG $\mathcal{G}=({V,E,U})$ as a total ordering of pairwise disjoint vertex sets ${A_{1}},\dots,{A_{k}}$, $k\geq 1$, $\cup_{i=1}^{k}{A_{i}}={V^{\prime}}$, that satisfies the following: if ${A_{i}}<{A_{j}}$ and there is an edge between $i\in{A_{i}}$ and $j\in{A_{j}}$ in $\mathcal{G}$, then $i\rightarrow j$ is in $\mathcal{G}$. Buckets and bucket decomposition Algorithm 2 describes how to obtain an ordered bucket decomposition for a set of vertices ${V}$ in an MPDAG $\mathcal{G}=(V,E,U)$. By Perković (2020, Lemma 1), the ordered list of buckets output by Algorithm 2 is a partial causal ordering of $V$ in $\mathcal{G}$. Lemma 1. (see Lemma D.1 (i) of Perković, 2020) Let $A$ and $Y$ be disjoint node sets in an MPDAG $\mathcal{G}=(V,E,U)$. Suppose that there is no proper possibly causal path from $A$ to $Y$ that starts with an undirected edge in $\mathcal{G}$, that is, suppose that the criterion in Theorem 1 is satisfied. Further, let $D=\operatorname{An}(Y,\mathcal{G}_{V\setminus A})$ and $D=\dot{\bigcup}_{i=1}^{K}D_{i}$ for $D_{i}=D\cap B_{i}$, $i=1,\dots,K$, where $B_{1},\dots,B_{K}$ is the bucket decomposition of $V$. Then for all $i\in\{1,\dots,K\}$, there is no proper possibly causal path from $A$ to $B_{i}$ that starts with an undirected edge in $\mathcal{G}$. Appendix D Additional simulation results In this section, we report additional simulation results. The setup is the same as in Section 7.1 of the main text, but we replace the true CPDAG with the CPDAG estimated by the greedy equivalence search algorithm (Chickering, 2002) based on the same sample. The relative squared errors of the contending estimators are shown in Fig. D.1 and are summarized in Table D.1. Compared to the results with the true CPDAG, the performance improvement of $\mathcal{G}$-regression is more modest but still matters in practice. The reduced improvement is due to the error in estimating the graph, which diminishes as $n$ increases. Acknowledgements The authors thank Thomas Richardson for valuable comments and discussions. The first author was supported by ONR Grant N000141912446. References Amemiya [1985] Takeshi Amemiya. Advanced Econometrics. Harvard University Press, 1985. Anderson and Olkin [1985] Theodore Wilbur Anderson and Ingram Olkin. Maximum-likelihood estimation of the parameters of a multivariate normal distribution. Linear algebra and its applications, 70:147–171, 1985. Andersson et al. [1997] Steen A. Andersson, David Madigan, and Michael D. Perlman. A characterization of Markov equivalence classes for acyclic digraphs. The Annals of Statistics, 25:505–541, 1997. Bickel et al. [1993] Peter J. Bickel, Chris A. J. Klaassen, Ya’acov Ritov, and Jon A. Wellner. Efficient and Adaptive Estimation for Semiparametric Models, volume 4. Johns Hopkins University Press, Baltimore, 1993. Bollen [1989] Kenneth A. Bollen. Structural Equations with Latent Variables. Wiley, New York, 1989. Chen et al. [2019] Wenyu Chen, Mathias Drton, and Y. Samuel Wang. On causal discovery with an equal-variance assumption. Biometrika, 106(4):973–980, 2019. Chickering [2002] David Maxwell Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3(Nov):507–554, 2002. Dawid [1981] A. Philip Dawid. Some matrix-variate distribution theory: notational considerations and a Bayesian application. Biometrika, 68(1):265–274, 1981. Drton [2006] Mathias Drton.
Computing all roots of the likelihood equations of seemingly unrelated regressions. Journal of Symbolic Computation, 41(2):245–254, 2006. Drton [2018] Mathias Drton. Algebraic problems in structural equation modeling. In The 50th Anniversary of Gröbner Bases, pages 35–86. Mathematical Society of Japan, 2018. Drton and Eichler [2006] Mathias Drton and Michael Eichler. Maximum likelihood estimation in Gaussian chain graph models under the alternative Markov property. Scandinavian Journal of Statistics, 33(2):247–257, 2006. Drton and Maathuis [2017] Mathias Drton and Marloes H. Maathuis. Structure learning in graphical modeling. Annual Review of Statistics and Its Application, 4:365–393, 2017. Drton and Richardson [2004] Mathias Drton and Thomas S. Richardson. Multimodality of the likelihood in the bivariate seemingly unrelated regressions model. Biometrika, 91(2):383–392, 2004. Drton et al. [2008] Mathias Drton, Bernd Sturmfels, and Seth Sullivant. Lectures on algebraic statistics, volume 39. Springer Science & Business Media, 2008. Drton et al. [2009] Mathias Drton, Michael Eichler, and Thomas S. Richardson. Computing maximum likelihood estimates in recursive linear models with correlated errors. Journal of Machine Learning Research, 10(81):2329–2348, 2009. Drton et al. [2011] Mathias Drton, Rina Foygel, and Seth Sullivant. Global identifiability of linear structural equation models. The Annals of Statistics, 39(2):865–886, 2011. Eigenmann et al. [2017] Marco Eigenmann, Preetam Nandy, and Marloes H. Maathuis. Structure learning of linear Gaussian structural equation models with weak edges. In Proceedings of the 33rd Annual Conference on Uncertainty in Artificial Intelligence (UAI-17), 2017. Fang and He [2020] Zhuangyan Fang and Yangbo He. IDA with background knowledge. In Proceedings of the 36th Annual Conference on Uncertainty in Artificial Intelligence (UAI-20), 2020. Gupta et al. [2020] Shantanu Gupta, Zachary C. Lipton, and David Childers. Estimating treatment effects with observed confounders and mediators. arXiv preprint arXiv:2003.11991, 2020. Hallin et al. [2019] Marc Hallin, Davide La Vecchia, and Hang Liu. Center-outward R-estimation for semiparametric VARMA models. arXiv preprint arXiv:1910.08442, 2019. Hansen [1982] Lars Peter Hansen. Large sample properties of generalized method of moments estimators. Econometrica: Journal of the Econometric Society, pages 1029–1054, 1982. Hauser and Bühlmann [2012] Alan Hauser and Peter Bühlmann. Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs. Journal of Machine Learning Research, 13:2409–2464, 2012. Henckel et al. [2019] Leonard Henckel, Emilija Perković, and Marloes H. Maathuis. Graphical criteria for efficient total effect estimation via adjustment in causal linear models. arXiv preprint arXiv:1907.02435, 2019. Horn and Johnson [2012] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2nd edition, 2012. Hoyer et al. [2008] Patrik O. Hoyer, Aapo Hyvärinen, Richard Scheines, Peter L. Spirtes, Joseph Ramsey, Gustavo Lacerda, and Shohei Shimizu. Causal discovery of linear acyclic models with arbitrary distributions. In Proceedings of the 24th Annual Conference on Uncertainty in Artificial Intelligence (UAI-08), pages 282–289, 2008. Kalisch et al. [2012] Markus Kalisch, Martin Mächler, Diego Colombo, Marloes H. Maathuis, and Peter Bühlmann. Causal inference using graphical models with the R package pcalg.
Journal of Statistical Software, 47(11):1–26, 2012. Koopmans and Reiersøl [1950] Tjalling C. Koopmans and Olav Reiersøl. The identification of structural characteristics. The Annals of Mathematical Statistics, 21(2):165–181, 1950. Kuroki and Cai [2004] Manabu Kuroki and Zhihong Cai. Selection of identifiability criteria for total effects by using path diagrams. In Proceedings of the 20th conference on Uncertainty in artificial intelligence, pages 333–340, 2004. Kuroki and Miyakawa [2003] Manabu Kuroki and Masami Miyakawa. Covariate selection for estimating the causal effect of control plans by using causal diagrams. Journal of the Royal Statistical Society. Series B. Statistical Methodology, 65(1):209–222, 2003. Kuroki and Nanmo [2020] Manabu Kuroki and Hisayoshi Nanmo. Variance formulas for estimated mean response and predicted response with external intervention based on the back-door criterion in linear structural equation models. AStA Advances in Statistical Analysis, pages 1–19, 2020. Lauritzen [1996] Steffen L. Lauritzen. Graphical Models. Oxford University Press, New York, 1996. Liu [1999] Jianzhou Liu. Some Löwner partial orders of Schur complements and Kronecker products of matrices. Linear algebra and its applications, 291(1-3):143–149, 1999. Maathuis and Colombo [2015] Marloes H. Maathuis and Diego Colombo. A generalized back-door criterion. The Annals of Statistics, 43(3):1060–1088, 2015. Maathuis et al. [2009] Marloes H. Maathuis, Markus Kalisch, and Peter Bühlmann. Estimating high-dimensional intervention effects from observational data. The Annals of Statistics, 37(6A):3133–3164, 2009. Marbach et al. [2009a] Daniel Marbach, Thomas Schaffter, Dario Floreano, Robert J. Prill, and Gustavo Stolovitzky. The DREAM4 in-silico network challenge. Draft, version 0.3 http://gnw.sourceforge.net/resources/DREAM4%20in%20silico%20challenge.pdf, 2009a. Marbach et al. [2009b] Daniel Marbach, Thomas Schaffter, Claudio Mattiussi, and Dario Floreano. Generating realistic in silico gene networks for performance assessment of reverse engineering methods. Journal of Computational Biology, 16(2):229–239, 2009b. Meek [1995] Christopher Meek. Causal inference and causal explanation with background knowledge. In Proceedings of the 11th Annual Conference on Uncertainty in Artificial Intelligence (UAI-95), pages 403–410, 1995. Nandy et al. [2017] Preetam Nandy, Marloes H. Maathuis, and Thomas S. Richardson. Estimating the effect of joint interventions from observational data in sparse high-dimensional settings. The Annals of Statistics, 45(2):647–674, 2017. Pearl [1993] Judea Pearl. Comment: graphical models, causality and intervention. Statistical Science, 8(3):266–269, 1993. Pearl [1995] Judea Pearl. Causal diagrams for empirical research. Biometrika, 82(4):669–688, 1995. Pearl [2009] Judea Pearl. Causality. Cambridge University Press, Cambridge, 2nd edition, 2009. Pearl and Verma [1995] Judea Pearl and Thomas S. Verma. A theory of inferred causation. In Studies in Logic and the Foundations of Mathematics, volume 134, pages 789–811. Elsevier, 1995. Perković [2020] Emilija Perković. Identifying causal effects in maximally oriented partially directed acyclic graphs. In Proceedings of the 36th Annual Conference on Uncertainty in Artificial Intelligence (UAI-20), 2020. Perković et al. [2015] Emilija Perković, Johannes Textor, Markus Kalisch, and Marloes H. Maathuis. A complete generalized adjustment criterion. 
In Proceedings of the 31st Annual Conference on Uncertainty in Artificial Intelligence (UAI-15), pages ID–155, 2015. Perković et al. [2017] Emilija Perković, Markus Kalisch, and Marloes H. Maathuis. Interpreting and using CPDAGs with background knowledge. In Proceedings of the 33rd Annual Conference on Uncertainty in Artificial Intelligence (UAI-17), 2017. Perković et al. [2018] Emilija Perković, Johannes Textor, Markus Kalisch, and Marloes H. Maathuis. Complete graphical characterization and construction of adjustment sets in Markov equivalence classes of ancestral graphs. Journal of Machine Learning Research, 18(220):1–62, 2018. Peters and Bühlmann [2014] Jonas Peters and Peter Bühlmann. Identifiability of Gaussian structural equation models with equal error variances. Biometrika, 101(1):219–228, 2014. R Core Team [2020] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2020. URL https://www.R-project.org/. Robins [1986] James M. Robins. A new approach to causal inference in mortality studies with a sustained exposure period-application to control of the healthy worker survivor effect. Mathematical Modelling, 7:1393–1512, 1986. Rothenhäusler et al. [2018] Dominik Rothenhäusler, Jan Ernest, and Peter Bühlmann. Causal inference in partially linear structural equation models: identifiability and estimation. Annals of Statistics, 46:2904–2938, 2018. Rotnitzky and Smucler [2019] Andrea Rotnitzky and Ezequiel Smucler. Efficient adjustment sets for population average treatment effect estimation in non-parametric causal graphical models. arXiv preprint arXiv:1912.00306, 2019. Sargan [1958] John D. Sargan. The estimation of economic relationships using instrumental variables. Econometrica: Journal of the Econometric Society, pages 393–415, 1958. Scheines et al. [1998] Richard Scheines, Peter Spirtes, Clark Glymour, Christopher Meek, and Thomas Richardson. The TETRAD project: constraint based aids to causal model specification. Multivariate Behavioral Research, 33(1):65–117, 1998. Shimizu et al. [2006] Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7(Oct):2003–2030, 2006. Shorack [2000] Galen R. Shorack. Probability for Statisticians. Springer, 2000. Shpitser et al. [2010] Ilya Shpitser, Tyler VanderWeele, and James M. Robins. On the validity of covariate adjustment for estimating causal effects. In Proceedings of the 26th Annual Conference on Uncertainty in Artificial Intelligence (UAI-10), pages 527–536, 2010. Smucler et al. [2020] Ezequiel Smucler, Facundo Sapienza, and Andrea Rotnitzky. Efficient adjustment sets in causal graphical models with hidden variables. arXiv preprint arXiv:2004.10521, 2020. Spirtes et al. [2000] Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search. MIT Press, Cambridge, MA, 2nd edition, 2000. Strotz and Wold [1960] Robert H. Strotz and H. O. A. Wold. Recursive vs. nonrecursive systems: An attempt at synthesis (part I of a triptych on causal chain systems). Econometrica, 28(2):417–427, 1960. Sullivant et al. [2010] Seth Sullivant, Kelli Talaska, and Jan Draisma. Trek separation for Gaussian graphical models. The Annals of Statistics, 38(3):1665–1685, 2010. Tsiatis [2006] Anastasios Tsiatis. Semiparametric Theory and Missing Data. Springer, New York, 2006.
van der Vaart [2000] Aad W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 2000. Wang et al. [2017] Yuhao Wang, Liam Solus, Karren Dai Yang, and Caroline Uhler. Permutation-based causal inference algorithms with interventions. In Advances in Neural Information Processing Systems 30, pages 5822–5831. 2017. Witte et al. [2020] Janine Witte, Leonard Henckel, Marloes H. Maathuis, and Vanessa Didelez. On efficient adjustment in causal graphs. arXiv preprint arXiv:2002.06825, 2020. Wright [1921] Sewall Wright. Correlation and causation. Journal of Agricultural Research, 20:557–585, 1921. Wright [1934] Sewall Wright. The method of path coefficients. The Annals of Mathematical Statistics, 5(3):161–215, 1934.
Explainable AI in Orthopedics: Challenges, Opportunities, and Prospects Soheyla Amirian, PhD University of Georgia amirian@uga.edu    Luke A. Carlson, MS University of Pittsburgh lac249@pitt.edu    Matthew F. Gong, MD University of Pittsburgh gongm2@upmc.edu    Ines Lohse, PhD University of Pittsburgh inl22@pitt.edu    Kurt R. Weiss, MD University of Pittsburgh krw13@pitt.edu    Johannes F. Plate, MD, PhD University of Pittsburgh johannes.plate@pitt.edu    Ahmad P. Tafti, PhD University of Pittsburgh tafti.ahmad@pitt.edu Abstract While artificial intelligence (AI) has found many successful applications in various domains, its adoption in healthcare lags behind other high-stakes settings. Several factors contribute to this slower uptake, including regulatory frameworks, patient privacy concerns, and data heterogeneity. However, one significant challenge that impedes the implementation of AI in healthcare, particularly in orthopedics, is the lack of explainability and interpretability of AI models. Addressing the challenge of explainable AI (XAI) in orthopedics requires developing AI models and algorithms that prioritize transparency and interpretability, allowing clinicians, surgeons, and patients to understand the contributing factors behind any AI-powered predictive or descriptive model. The current contribution outlines several key challenges and opportunities that manifest in XAI in orthopedic practice. This work emphasizes the need for interdisciplinary collaborations between AI practitioners, orthopedic specialists, and regulatory entities to establish standards and guidelines for the adoption of XAI in orthopedics. Index Terms: Explainable AI, XAI, Explainable Machine Learning, AI-Powered Healthcare, Health Informatics I Introduction While AI has shown promise in various real-world scenarios, its adoption in healthcare, particularly in orthopedics, is hindered by its lack of explainability and interpretability. Explainability mainly refers to the ability to uncover the AI black box in a way that allows end users, including clinicians, physicians, healthcare practitioners, and patients, to clearly understand how AI algorithms make decisions. This helps end users to trust and comprehend the reasoning behind AI systems, unlocking the full potential of AI for orthopedic care and for patient and clinical outcomes [1]. Interpretability, similarly, refers to the ability to interpret the underlying mechanisms and features that contribute to AI-generated results [1]. From a technical perspective, XAI methods can be categorized into three mechanisms: (1) local explanation, (2) global explanation, and (3) counterfactual explanation [2, 3, 4, 5]. Local explanation methods explain the decisions of AI models by focusing on individual data points. This can be helpful for understanding why a particular patient is classified in a certain way. For example, a local explanation method could show which features of a patient’s EHRs were most important in leading the AI model to make a particular diagnosis (e.g., infection) [2, 3, 6]. Global explanation methods explain the decisions of an AI model by looking at the model as a whole. This can be helpful for understanding how the model makes decisions and for identifying potential biases in the model and data.
For example, a global explanation method could illustrate which features are most important in the model’s decision-making process, and whether these features are distributed evenly across different groups of patients [6, 7]. Counterfactual explanation methods generate explanations by providing alternative scenarios that would lead to different outcomes. This can be helpful for understanding how sensitive and robust the model is to different feature sets. For example, a counterfactual explanation for a patient’s diagnosis could show how the diagnosis would have changed if the patient’s gender, race, ethnicity, BMI, and/or blood type had been different. There are a number of compelling reasons why XAI is important in orthopedics. First, XAI plays a pivotal role in establishing trust between patients and healthcare providers. When patients can understand the reasoning behind an AI model’s decision regarding their care, it enhances transparency and fosters trust in the healthcare process. By engaging patients in the decision-making process, XAI empowers them to actively participate in their own healthcare management and to adhere to the recommended therapy plans. Second, XAI aligns with regulatory requirements and ethical considerations in healthcare. Many regulatory frameworks, such as the General Data Protection Regulation (GDPR) in Europe [8], emphasize the need for transparent AI and XAI systems. By incorporating XAI, healthcare organizations ensure compliance with these regulations and adhere to ethical principles, such as providing justifications for the decisions made by AI models. Third, XAI facilitates the validation and adoption of AI models in clinical practice. By providing interpretable explanations, it allows healthcare professionals to validate the model’s recommendations and verify that they are consistent with their own clinical knowledge and experience. This validation process helps build confidence in the AI model’s capabilities and supports its integration and uptake into existing clinical workflows. Fourth, XAI helps to identify biases by providing explanations for the model’s decisions. This can make it easier to trace the factors contributing to biased outcomes, such as the data used to train and validate the model or the way the model is built. With this knowledge, healthcare practitioners can take corrective measures to mitigate bias, ensuring fair and equitable healthcare for all patients, for example across different gender and racial groups. Finally, XAI enables healthcare providers and AI developers to gain a deeper understanding of how AI models are functioning and to identify areas for improvement. By unraveling the inner workings of the model, researchers can identify strengths, weaknesses, and limitations, allowing them to refine the model’s performance, availability, accuracy, and reliability. The integration of XAI in healthcare holds significant potential for enhancing trust, mitigating bias, optimizing model performance, ensuring regulatory compliance, improving patient and clinical outcomes, facilitating clinical validation, and disseminating research findings. By implementing XAI, orthopedic centers can foster transparency, fairness, and effectiveness in the utilization of AI systems, thereby driving improved patient outcomes and advancing care. This contribution outlines several key challenges and opportunities of XAI in orthopedics (see Figure 1). The organization of this work is as follows.
Section II reviews the literature and highlights the recent advances in AI explainability and its applications in healthcare and orthopedics. Sections III and IV list potential opportunities and challenges in this domain, respectively. Further discussions are presented in Section V. II Background In this section, we explore the burgeoning field of XAI. With the rapid progress and growing interest in XAI, it has become imperative to provide thorough overviews of the latest advancements in state-of-the-art XAI. To address this need, we meticulously examined a selection of research papers that delve into the intricacies of XAI, shedding light on the methodologies, techniques, and applications in various domains. Karim et al. [9] presented DeepKneeExplainer, a novel method for explainable knee osteoarthritis (OA) diagnosis using radiographs and MRIs. Through experiments on multicenter osteoarthritis study (MOST) cohorts, their approach demonstrates remarkable classification accuracy, outperforming comparable state-of-the-art techniques. The use of deep-stacked transformation chains ensures robustness against potential noise and artifacts in test cohorts. Additionally, WeightWatcher is applied to address model selection bias. The authors proposed integrating this approach into clinical settings to enhance domain generalization for biomedical images. Although promising, further clinical experiments and improvements are necessary. The paper encourages the adoption of explainable methods and DNN-based analytic pipelines in clinical practice to promote AI-assisted applications. Subsequently, Kokkotis et al. [10] focused on Knee Osteoarthritis (KOA), aiming to reduce diagnostic errors by providing reliable tools for diagnosis. Using multidimensional data from the Osteoarthritis Initiative database, the researchers proposed a robust Feature Selection (FS) methodology based on fuzzy logic. They emphasized the need for explainability analysis using SHAP to understand the model’s decision-making process and the impact of selected features. Overall, the proposed methodology offered an approach for identifying informative risk factors in KOA diagnosis, with potential applications in other medical domains. Mohseni et al. [11] presented a comprehensive survey and framework aimed at facilitating the sharing of knowledge and experiences in the design and evaluation of XAI systems across various disciplines. The framework provides a categorization of XAI design goals and evaluation methods, mapping them to different user groups of XAI systems. They proposed a step-by-step design process and evaluation guidelines to assist multidisciplinary XAI teams in their iterative design and evaluation cycles. They emphasize the importance of using appropriate evaluation measures for different design goals and advocate for a balance between qualitative and quantitative methods during the design process. The framework addresses the overlap of XAI goals among different research disciplines and highlights the significance of considering user interactions and long-term evaluation in XAI system design. Wen Loh et al. [12] discussed the importance of XAI in healthcare to build trust in AI models and encourage their practical use. They focused on various XAI techniques used in healthcare applications and surveyed 99 articles from highly credible journals (Q1) covering techniques such as SHAP, LIME, GradCAM, LRP, and others. 
They identified areas in healthcare that require more attention from the XAI research community, specifically detecting abnormalities in 1D biosignals and identifying key text in clinical notes. The most widely used XAI techniques are SHAP for explaining clinical features and GradCAM for providing visual explanations of medical images. The work concluded that a holistic cloud system for smart cities can significantly advance healthcare by promoting the use and improvement of XAI in the industry. As the application of AI continues to expand, particularly in critical domains like healthcare and medical imaging diagnosis, the demand for interpretable and transparent AI models becomes paramount. Patrício et al. [13] provided a comprehensive survey of XAI applied to medical imaging diagnosis. They addressed the lack of interpretability in deep learning models and the need for XAI to explain the decision-making process. Their study covered various XAI techniques, including visual, textual, example-based, and concept-based methods. They also discussed existing medical imaging datasets and evaluation metrics for explanations. They emphasized the importance of inherently interpretable models and textual explanations. Challenges in medical image interpretability were identified, including the need for larger datasets and the use of self-supervised learning. They highlighted the importance of objective metrics for evaluating explanations and the potential of using Transformers for report generation in medical imaging. Agarwal et al. [14] introduced OpenXAI. OpenXAI is an open-source framework designed to evaluate and benchmark post hoc explanation methods systematically. It provides a collection of real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, along with twenty two quantitative metrics to assess the faithfulness, stability, and fairness of these methods. The framework includes XAI leaderboards, enabling easy comparison of explanation methods across various datasets and models. OpenXAI promotes transparency, reproducibility, and standardization in benchmarking and simplifying the evaluation process for researchers and practitioners. It supports extensibility, allowing users to incorporate custom methods and datasets. The framework aims to ensure reliable post hoc explanations for decision-makers in critical applications. As a case study, we reference the work of Littlefield et al. [15], which introduced an explainable deep few-shot learning model. This model successfully identified and delineated the knee joint area in plain knee radiographs by utilizing a limited number of manually annotated radiographs. In this section, we provided a comprehensive exploration of XAI and its applications in various domains, particularly in medical imaging diagnosis and healthcare. The reviewed papers highlighted the importance of XAI in enhancing model interpretability, promoting trust, and encouraging the practical use of AI systems. The advancements in XAI showcased in the reviewed papers demonstrate the growing interest and potential of this field in improving decision-making processes and facilitating AI adoption in critical applications. III Opportunities The incorporation of XAI in orthopedics has several benefits. Here, we discuss potential opportunities for using XAI in orthopedics. 
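Before detailing these opportunities, a brief sketch may help make the attribution-style explanations discussed above concrete. The following Python example fits a toy classifier on hypothetical tabular patient features and computes a simple global feature attribution via permutation importance; the feature names, data, and model are placeholders and are not drawn from any of the systems reviewed in Section II.

```python
# Minimal sketch of a global feature-attribution explanation (permutation
# importance). All data, feature names, and the model are hypothetical
# placeholders used for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "BMI", "joint_space_width", "osteophyte_score"]
X = rng.normal(size=(500, len(feature_names)))
# Toy outcome that depends mainly on the two imaging-derived features.
y = (X[:, 2] - 0.5 * X[:, 3] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance: the drop in accuracy when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>20s}: {importance:.3f}")
```

Local explanations of the kind described in the Introduction (for example, SHAP values for a single patient) can be produced in an analogous way, with the same caveat: such outputs are only as trustworthy as the underlying data and model.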
III-A Diagnostic Imaging AI models have provided promising results in attribution-based diagnostics of orthopedic conditions through the analysis of medical images, including X-rays, CT scans, and MRIs. These models are trained and validated on extensive imaging datasets and patient data, enabling them to learn and recognize patterns associated with various complications, such as bone lucency, bone loosening, and bone deformity. By offering interpretable justifications for AI decisions, XAI can assist in making these models more transparent and intelligible. These methods aid in bridging the comprehension gap between model output and end users by offering explanations, such as identifying the precise region of interest (ROI) or the elements of the image and/or imaging biomarkers that contributed to the diagnosis. This can facilitate more informed and shared decision-making about patient care by helping healthcare providers, patients, and physicians better understand the rationale behind AI models [16, 17, 18]. III-B Imaging Registry XAI is of paramount importance in establishing an imaging registry in orthopedics by providing explainable analysis of medical images, standardizing data analysis, and making the data more accessible and usable. XAI can help with automatically and objectively extracting relevant information from imaging data, identifying patterns, and categorizing various orthopedic complications in a consistent manner, which will ensure uniformity and reliability in the analysis of imaging data within the imaging registry [19, 20, 21]. Moreover, XAI algorithms can also be used to detect and correct errors in imaging data, ensuring that the data are accurate and consistent. III-C Surgical Guidance XAI can assist surgical guidance in orthopedics by providing surgeons with interpretable explanations for the decisions made by AI-powered surgical guidance systems. This capability enables surgeons to comprehend the underlying rationale behind the system’s recommendations, fostering trust and ultimately leading to improved surgical outcomes. For instance, XAI may be used to explain why a surgical strategy is recommended by an AI-powered model: a summary of the considerations that went into a surgeon’s choice, such as the patient’s medical history and imaging biomarkers, might be given by the system. This would allow surgeons to understand the system’s reasoning and to make informed decisions about whether to follow AI recommendations [22, 23]. Furthermore, XAI could be used to identify potential risks associated with a surgical plan. The system could highlight areas of the patient’s anatomy that are at risk of injury, or it could identify potential complications (e.g., fracture) that could arise from the surgery. Presumably, this would allow surgeons to take steps to reduce these risks and to improve the safety of the procedure. III-D Rehabilitation and Therapy For orthopedic patients, XAI can be used to create individualized rehabilitation and therapy plans. XAI can help therapists customize treatment plans and track the success of interventions by offering interpretable insights into the patient’s progress, areas that need improvement, and suggested exercises. This enables personalized treatment plans, transparent progress monitoring, increased patient engagement, and decision support for therapists. Therapists can maximize treatment results and raise the standard of care given to these patients by incorporating XAI into orthopedic rehabilitation techniques [24, 25, 26].
III-E Precision Medicine By providing explanations regarding the contributions of discrete variables, including patient demographics, patient medical history, EHRs, and imaging data, XAI empowers orthopedic specialists to understand the reasons behind treatment recommendations while considering various patient-specific factors. It thus assists orthopedic surgeons in creating personalized treatment plans tailored to individual patients, optimizing the chances of successful patient outcomes [27, 28]. By augmenting the physician’s expertise with transparent explanations, XAI supports more informed and confident diagnosis and treatment decisions, ultimately leading to improved patient and clinical outcomes. III-F Health Equity In orthopedics, XAI can help identify and address health inequities. XAI can assist in identifying systemic problems, socioeconomic determinants of health, and obstacles to fair care by utilizing large-scale datasets and generating explanations for differences in treatment outcomes across various populations [29]. This knowledge can inform interventions and policies aimed at reducing disparities and improving health equity, ensuring that decisions are made with fairness, equity, and optimal patient well-being in mind [30, 31, 32]. III-G Standardization XAI can improve the standardization and consistency of orthopedic care. By clarifying pre- and post-operative recommendations, XAI helps ensure that consistent decisions are made for similar cases regardless of healthcare provider, computer system, or facility. By explaining the rationale behind treatment recommendations, XAI models share the information and decision-making processes gathered by leading experts. This helps disseminate best practices and standardize care across healthcare settings, minimizing unnecessary variation in care and ensuring consistent access to care regardless of provider expertise [33, 34, 35]. III-H Orthopedics Research XAI contributes to orthopedic research by analyzing large-scale datasets and scientific literature, identifying patterns, and generating explanations for observed correlations. This will help researchers uncover new insights, understand disease mechanisms and treatment strategies, and discover potential risk factors or treatment approaches. III-I Administrative Processes and Clinical Documentation XAI can be used to automate administrative tasks, improve the accuracy and quality of clinical documentation, and optimize administrative workflows. Using AI with explainable capabilities, XAI systems can handle routine administrative processes, freeing up valuable time for healthcare professionals to focus on more critical patient care tasks. They can also provide real-time recommendations and clarifications to clinicians, promoting completeness, accuracy, and adherence to coding and documentation guidelines. In addition, XAI models generate explanations for the underlying factors that affect workflow efficiency, enabling healthcare managers to identify areas for improvement, streamline processes, and allocate resources efficiently [36, 37, 38, 39]. III-J Resource Allocation XAI assists in resource allocation and capacity planning by analyzing large-scale historical data and generating explanations for resource utilization patterns. This helps healthcare administrators make informed decisions about staffing levels, beds, equipment allocation, and facility utilization, ensuring efficient use of resources and maintaining optimal patient care standards [40].
XAI can also be utilized to analyze data on patient wait times and to identify areas where wait times can be reduced. This information can then be used to make changes to the clinical workflow or to reallocate resources. IV Challenges This section discusses the most notable challenges that can be a barrier to the adoption of XAI in orthopedics. IV-A AI Model Explainability AI explainability may come as a challenge by itself. For example, advanced deep neural network models can be difficult to explain. Extracting explanations and interpretation from those advanced AI models poses a challenge, as their internal workings may involve numerous hyper-parameters, several layers, and intricate interactions, which can make it harder for orthopedic specialists to understand how the models make decisions [7, 41, 6]. On the other hand, there is often a trade-off between the interpretability, transparency, and performance of AI models. More intuitive models, such as decision trees or rule-based systems, may sacrifice some predictive accuracy and precision compared to complex models. In healthcare, where accurate predictions are important, but understanding the rationale behind predictions is equally important, striking the right balance between explainability and model performance is critical. It should be emphasized here that the development of methods for accurate interpretation of complex AI models in orthopedics is still an active area of research. IV-B Data Availability and Quality The quality, quantity, and availability of data is a major challenge for XAI in orthopedics. Orthopedic data is often complex, very large-scale, and heterogeneous, and it can be difficult to obtain accurate and complete data. Ensuring diverse, unbiased, and high-quality data is a critical challenge to train, validate, and test XAI systems for orthopedic healthcare. IV-C Regulatory Frameworks Regulatory frameworks and policy-makers may not yet have established clear guidelines or standards specifically addressing XAI in healthcare [42], including orthopedics. Since the use of XAI in healthcare is poorly regulated by now, it will be difficult to develop and deploy XAI models in orthopedics. Even existing regulations, such as data protection and privacy, may need to be applied to the unique context of XAI in orthopedics. Furthermore, different countries may have disparate regulatory frameworks governing AI in healthcare. This can make it difficult for organizations operating across borders, as they have to comply with multiple regulations, which can lead to uneven enforcement. This will certainly create uncertainty for healthcare organizations and developers about compliance requirements and best practices. IV-D Ethical and Legal Considerations Implementing XAI in orthopedics requires addressing a number of ethical and legal considerations, such as patient privacy, informed consent, and the potential for unintended consequences or misuse of AI technology [33, 43, 44]. For example, XAI models often utilize sensitive patient data, such as medical images, EHRs, and clinical notes. It is important to ensure that this data is protected and that patients’ privacy is fully respected. Additionally, XAI models can be complex and it is important to be aware of the potential for unintended consequences or misuse. For example, these AI models could be used to discriminate against certain patient populations or to make biased decisions. 
IV-E Precision Medicine While XAI can advance personalized medicine and care, AI models trained on very large-scale datasets can struggle to provide personalized explanations for individual patients. Each patient’s unique characteristics, demographics, medical history, and comorbidities may not fully overlap with the training data, making it difficult to develop meaningful explanations at the instance level. This is because AI models are trained on large datasets of patient data, but this data may not be representative of all patients. For example, the training data may not include patients with rare diseases (e.g., Osteosarcoma), or patients from different cultures or marginal groups. As a result, the AI model may not be able to make accurate predictions or provide relevant explanations for some patients. IV-F Implementation and Uptake Integrating XAI into the current clinical workflow presents a significant challenge. Healthcare providers ask for user-friendly interfaces and toolsets that deliver explanations in a clear and comprehensible fashion, without imposing additional complexity on their already demanding daily routines. The integration process entails addressing multiple challenges, including developing intuitive interfaces, providing appropriate training to healthcare professionals, and ensuring the availability of necessary resources for successful implementation and long-term maintenance of the technology. V Discussion, Conclusion, and Outlook In orthopedics, artificial intelligence (AI) is expanding quickly and has a lot of potential to help with diagnosis, prognosis, therapy, and rehabilitation. Explainable AI (XAI), which offers justifications and interpretations for AI models, is now one of the most promising areas of AI research in orthopedics. This is crucial for orthopedics because it enables surgeons, medical professionals, stakeholders, and -most importantly- patients to comprehend how AI-powered mechanisms make decisions, and to trust the AI’s predictive and descriptive results. In this paper, we have identified the most notable opportunities and challenges associated with implementing XAI in orthopedics. First, we believe multidisciplinary collaboration is essential for the successful uptake of XAI in orthopedics. Healthcare professionals, researchers, AI scientists, industry stakeholders, policy-makers, and regulatory entities should work collaboratively to address challenges, share expertise, and implement best practices, to mainly develop clear and adaptable frameworks that address the unique aspects of XAI in orthopedics. In this way, healthcare professionals and industry stakeholders will provide insights into the clinical needs of patients and the challenges of using XAI in clinical settings, while policy-makers will establish policies that support the utilization and development of XAI in healthcare, and in particular in orthopedics. On the other side, AI scientists will build XAI-powered systems that are trustworthy, reliable, and explainable, and regulatory entities will be developing guidelines and instructions for the use of XAI in orthopedics. To promote the adoption of XAI in orthopedics, it is essential to illustrate its value to stakeholders. This can be done by showcasing successful use cases and providing evidence of improved diagnostic and prognostic accuracy, treatment outcomes, and patient satisfaction resulting from the adoption of XAI. 
User-centric design methodologies will be essential to ensure seamless integration of XAI systems into the workflow of healthcare providers. By involving healthcare providers and end-users, such as physicians and patients in the design process, it will be possible to establish computational frameworks that meet their needs, provide clear and actionable explanations, and enhance shared decision-making. To achieve this, we recommend an incremental approach, starting with small-scale pilot projects, to first assess the feasibility, effectiveness, and user acceptance of XAI in specific orthopedic settings (e.g., pain progression analysis using multi-modal data). Then we recommend regularly collecting feedback from end-users and addressing identified issues or concerns to make continuous improvements. Gradually, by expanding XAI implementation based on lessons learned, we will ensure a well-organized transition and a higher success rate. We also believe in comprehensive and continuous training for healthcare professionals adopting XAI. Equipping them with a clear understanding of the XAI concepts, benefits, and limitations will help them to effectively interpret and utilize the explanations generated automatically and objectively by AI models. This training builds confidence, promotes acceptance, and facilitates the successful integration of XAI into clinical practice. Furthermore, the safe and ethical deployment of XAI in orthopedic environments requires careful consideration. This entails adhering to patient privacy and data protection laws, continuous supervision, ongoing monitoring, ensuring equality and fairness in AI models, valuing patient autonomy and informed consent, giving patients the confidence to make informed health choices, and setting up principles and procedures for ethical XAI usage. In conclusion, XAI is rapidly transforming the healthcare industry, and orthopedics is no exception. XAI-powered tool sets have the potential to improve patient and clinical outcomes in a number of ways. While implementing XAI in orthopedics presents various opportunities and challenges, adopting a collaborative approach, emphasizing user-centric methodology, offering comprehensive training, addressing ethical considerations, highlighting value, conducting small-scale pilot projects, and enabling continuous evaluation and monitoring are key components for success. Our future work focuses on developing advanced techniques for AI explainability in orthopedics, including exploring novel AI explainability methods for interpreting complex AI models, enhancing the transparency of deep learning-powered algorithms, and designing visualization strategies to provide more actionable explanations in different orthopedic settings, benchmarking AI explanation methods using evaluation metrics. Acknowledgment The authors declare that they have no competing interests. References [1] Mayuri Mehta, Vasile Palade, and Indranath Chatterjee, Explainable AI: Foundations, Methodologies and Applications, vol. 232, Springer Nature, 2022. [2] Scott M Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee, “From local explanations to global understanding with explainable ai for trees,” Nature machine intelligence, vol. 2, no. 1, pp. 56–67, 2020. 
[3] Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, and Franco Turini, “Meaningful explanations of black box ai decision systems,” in Proceedings of the AAAI conference on artificial intelligence, 2019, vol. 33, pp. 9780–9784. [4] Arash Shaban-Nejad, Martin Michalowski, John S Brownstein, and David L Buckeridge, “Guest editorial explainable ai: towards fairness, accountability, transparency and trust in healthcare,” IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 7, pp. 2374–2375, 2021. [5] Feiyu Xu, Hans Uszkoreit, Yangzhou Du, Wei Fan, Dongyan Zhao, and Jun Zhu, “Explainable ai: A brief survey on history, research areas, approaches and challenges,” in Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II 8. Springer, 2019, pp. 563–574. [6] Julia Amann, Alessandro Blasimme, Effy Vayena, Dietmar Frey, and Vince I Madai, “Explainability for artificial intelligence in healthcare: a multidisciplinary perspective,” BMC medical informatics and decision making, vol. 20, no. 1, pp. 1–9, 2020. [7] Deepti Saraswat, Pronaya Bhattacharya, Ashwin Verma, Vivek Kumar Prasad, Sudeep Tanwar, Gulshan Sharma, Pitshou N Bokoro, and Ravi Sharma, “Explainable ai for healthcare 5.0: opportunities and challenges,” IEEE Access, 2022. [8] General Data Protection Regulation, “General data protection regulation (gdpr),” Intersoft Consulting, Accessed in October, vol. 24, no. 1, 2018. [9] Md Rezaul Karim, Jiao Jiao, Till Doehmen, Michael Cochez, Oya Beyan, Dietrich Rebholz-Schuhmann, and Stefan Decker, “Deepkneeexplainer: explainable knee osteoarthritis diagnosis from radiographs and magnetic resonance imaging,” IEEE Access, vol. 9, pp. 39757–39780, 2021. [10] Christos Kokkotis, Charis Ntakolia, Serafeim Moustakidis, Giannis Giakas, and Dimitrios Tsaopoulos, “Explainable machine learning for knee osteoarthritis diagnosis based on a novel fuzzy feature selection methodology,” Physical and Engineering Sciences in Medicine, vol. 45, no. 1, pp. 219–229, 2022. [11] Sina Mohseni, Niloofar Zarei, and Eric D Ragan, “A multidisciplinary survey and framework for design and evaluation of explainable ai systems,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 11, no. 3-4, pp. 1–45, 2021. [12] Hui Wen Loh, Chui Ping Ooi, Silvia Seoni, Prabal Datta Barua, Filippo Molinari, and U Rajendra Acharya, “Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022),” Computer Methods and Programs in Biomedicine, p. 107161, 2022. [13] Cristiano Patrício, João C Neves, and Luís F Teixeira, “Explainable deep learning methods in medical diagnosis: a survey,” arXiv preprint arXiv:2205.04766, 2022. [14] Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju, “Openxai: Towards a transparent evaluation of post hoc model explanations,” arXiv preprint arXiv:2206.11104, 2022. [15] Nickolas Littlefield, Hamidreza Moradi, Soheyla Amirian, Hilal Maradit Kremers, Johannes F. Plate, and Ahamd P. Tafti, “Enforcing explainable deep few-shot learning to analyze plain knee radiographs: Data from the osteoarthritis initiative,” IEEE ICHI, 2023. 
[16] Robert JH Miller, Keiichiro Kuronuma, Ananya Singh, Yuka Otaki, Sean Hayes, Panithaya Chareonthaitawee, Paul Kavanagh, Tejas Parekh, Balaji K Tamarappoo, Tali Sharir, et al., “Explainable deep learning improves physician interpretation of myocardial perfusion imaging,” Journal of Nuclear Medicine, vol. 63, no. 11, pp. 1768–1774, 2022. [17] Mehmet A Gulum, Christopher M Trombley, and Mehmed Kantardzic, “A review of explainable deep learning cancer detection models in medical imaging,” Applied Sciences, vol. 11, no. 10, pp. 4573, 2021. [18] Amitojdeep Singh, Sourya Sengupta, and Vasudevan Lakshminarayanan, “Explainable deep learning models in medical image analysis,” Journal of imaging, vol. 6, no. 6, pp. 52, 2020. [19] Sajid Nazir, Diane M Dickson, and Muhammad Usman Akram, “Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks,” Computers in Biology and Medicine, p. 106668, 2023. [20] Guang Yang, Qinghao Ye, and Jun Xia, “Unbox the black-box for the medical explainable ai via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond,” Information Fusion, vol. 77, pp. 29–52, 2022. [21] Pascal Bourdon, Olfa Ben Ahmed, Thierry Urruty, Khalifa Djemal, and Christine Fernandez-Maloigne, “Explainable ai for medical imaging: Knowledge matters,” Multi-faceted Deep Learning: Models and Data, pp. 267–292, 2021. [22] Yiming Zhang, Ying Weng, and Jonathan Lund, “Applications of explainable artificial intelligence in diagnosis and surgery,” Diagnostics, vol. 12, no. 2, pp. 237, 2022. [23] S O’sullivan, M Janssen, Andreas Holzinger, Nathalie Nevejans, O Eminaga, CP Meyer, and Arkadiusz Miernik, “Explainable artificial intelligence (xai): closing the gap between image analysis and navigation in complex invasive diagnostic procedures,” World Journal of Urology, vol. 40, no. 5, pp. 1125–1134, 2022. [24] Luiz Felipe de Camargo, Diego Roberto Colombo Dias, and José Remo Ferreira Brega, “Implementation of explainable artificial intelligence: Case study on the assessment of movements to support neuromotor rehabilitation,” in International Conference on Computational Science and Its Applications. Springer, 2023, pp. 564–580. [25] Marialuisa Gandolfi, Ilaria Boscolo Galazzo, Rudy Gasparin Pavan, Federica Cruciani, Nicola Vale, Alessandro Picelli, Silvia Francesca Storti, Nicola Smania, and Gloria Menegaz, “explainable ai allows predicting upper limb rehabilitation outcomes in sub-acute stroke patients,” IEEE Journal of Biomedical and Health Informatics, vol. 27, no. 1, pp. 263–273, 2022. [26] Dragan Misic and Milan Zdravkovic, “Overview of ai-based approaches to remote monitoring and assistance in orthopedic rehabilitation,” in Personalized Orthopedics: Contributions and Applications of Biomedical Engineering, pp. 535–553. Springer, 2022. [27] Maged N Kamel Boulos and Peng Zhang, “Digital twins: from personalised medicine to precision public health,” Journal of personalized medicine, vol. 11, no. 8, pp. 745, 2021. [28] Marian Gimeno, Edurne San José-Enériz, Sara Villar, Xabier Agirre, Felipe Prosper, Angel Rubio, and Fernando Carazo, “Explainable artificial intelligence for precision medicine in acute myeloid leukemia,” Frontiers in Immunology, vol. 13, pp. 977358, 2022. [29] Carl Thomas Berdahl, Lawrence Baker, Sean Mann, Osonde Osoba, and Federico Girosi, “Strategies to improve the impact of artificial intelligence on health equity: Scoping review,” JMIR AI, vol. 2, pp. e42936, 2023. 
[30] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al., “Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai,” Information fusion, vol. 58, pp. 82–115, 2020. [31] Kiana Alikhademi, Brianna Richardson, Emma Drobina, and Juan E Gilbert, “Can explainable ai explain unfairness? a framework for evaluating explainable ai,” arXiv preprint arXiv:2106.07483, 2021. [32] Arun Rai, “Explainable ai: From black box to glass box,” Journal of the Academy of Marketing Science, vol. 48, pp. 137–141, 2020. [33] AS Albahri, Ali M Duhaim, Mohammed A Fadhel, Alhamzah Alnoor, Noor S Baqer, Laith Alzubaidi, OS Albahri, AH Alamoodi, Jinshuai Bai, Asma Salhi, et al., “A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion,” Information Fusion, 2023. [34] Giovanni Cinà, Tabea Röber, Rob Goedhart, and Ilker Birbil, “Why we do need explainable ai for healthcare,” arXiv preprint arXiv:2206.15363, 2022. [35] Subrato Bharati, M Rubaiyat Hossain Mondal, and Prajoy Podder, “A review on explainable artificial intelligence for healthcare: Why, how, and when?,” IEEE Transactions on Artificial Intelligence, 2023. [36] Erico Tjoa and Cuntai Guan, “A survey on explainable artificial intelligence (xai): Toward medical xai,” IEEE transactions on neural networks and learning systems, vol. 32, no. 11, pp. 4793–4813, 2020. [37] Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, Alan Perotti, and Salvatore Rinzivillo, “Co-design of human-centered, explainable ai for clinical decision support,” ACM Transactions on Interactive Intelligent Systems, 2023. [38] Milad Moradi and Matthias Samwald, “Deep learning, natural language processing, and explainable artificial intelligence in the biomedical domain,” arXiv preprint arXiv:2202.12678, 2022. [39] Tin Kam Ho, Yen-Fu Luo, and Rodrigo Capobianco Guido, “Explainability of methods for critical information extraction from clinical documents: A survey of representative works,” IEEE Signal Processing Magazine, vol. 39, no. 4, pp. 96–106, 2022. [40] Lichin Chen, Yu Tsao, and Ji-Tian Sheu, “Using deep learning and explainable artificial intelligence in patients’ choices of hospital levels,” arXiv preprint arXiv:2006.13427, 2020. [41] Mobeen Nazar, Muhammad Mansoor Alam, Eiad Yafi, and Mazliham Mohd Su’ud, “A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques,” IEEE Access, vol. 9, pp. 153316–153348, 2021. [42] Marzyeh Ghassemi, Luke Oakden-Rayner, and Andrew L Beam, “The false hope of current approaches to explainable artificial intelligence in health care,” The Lancet Digital Health, vol. 3, no. 11, pp. e745–e750, 2021. [43] Thomas Ploug and Søren Holm, “The four dimensions of contestable ai diagnostics-a patient-centric approach to explainable ai,” Artificial Intelligence in Medicine, vol. 107, pp. 101901, 2020. [44] Amina Adadi and Mohammed Berrada, “Explainable ai for healthcare: from black box to interpretable models,” in Embedded Systems and Artificial Intelligence: Proceedings of ESAI 2019, Fez, Morocco. Springer, 2020, pp. 327–337.
A closed form scale bound for the $(\epsilon,\delta)$-differentially private Gaussian Mechanism valid for all privacy regimes (This work has been funded in part by Innlandet Fylkeskommune, as well as Research Council of Norway grants 308904 and 288856. Thanks to Stephan Dreiseitl for helpful discussions.)

Staal A. Vinterbo (Department of Information Security and Communication Technology, Norwegian University of Science and Technology)

Abstract

The standard closed form lower bound on $\sigma$ for providing $(\epsilon,\delta)$-differential privacy by adding zero mean Gaussian noise with variance $\sigma^{2}$ is $\sigma>\Delta\sqrt{2}\epsilon^{-1}\sqrt{\log\left(5/4\delta^{-1}\right)}$ for $\epsilon\in(0,1)$. We present a similar closed form bound $\sigma\geq\Delta(\sqrt{2}\epsilon)^{-1}\left(\sqrt{z}+\sqrt{z+\epsilon}\right)$ for $z=-\log\left(\delta\left(2-\delta\right)\right)$ that is valid for all $\epsilon>0$ and is always lower (better) for $\epsilon<1$ and $\delta\leq 0.946$. Both bounds are based on fulfilling a particular sufficient condition. For $\delta<1$, we present an analytical bound that is optimal for this condition and is necessarily larger than $\Delta/\sqrt{2\epsilon}$.

1 Introduction

Differential privacy \autociteDwork2006a is an emerging standard for individual data privacy. In essence, differential privacy is a bound on any belief update about an individual upon receiving a result of a differentially private randomized computation. Critical for the utility of such results is minimizing the random perturbation required for a given level of privacy. In Theorem 5 we present a closed form bound on the amount of perturbation needed for privacy in the Gaussian Mechanism. This bound improves on the current closed form standard in two ways: smaller perturbation and wider applicability. Both our bound and the current standard bound are based on fulfilling a certain sufficient condition for privacy. In Lemma 3 we also describe the smallest possible perturbation for this condition.

Formally, let a database $d$ be a collection of record values from some set $V$. Two databases $d$ and $d^{\prime}$ are neighboring if one can be obtained from the other by adding one record. Let $\mathcal{N}$ be the set of all pairs of neighboring databases. Then, following Dwork et al. \autociteDwork2006a,our-data-ourselves-privacy-via-distributed-noise-generation, we define differential privacy as follows.

Definition 1 ($(\epsilon,\delta)$-differential privacy [Dwork2006a, our-data-ourselves-privacy-via-distributed-noise-generation]). A randomized algorithm $M$ is called $(\epsilon,\delta)$-differentially private if for any measurable set $S$ of possible outputs and all $(d,d^{\prime})\in\mathcal{N}$
$$\Pr(M(d)\in S)\leq e^{\epsilon}\Pr(M(d^{\prime})\in S)+\delta,$$
where the probabilities are over randomness used in $M$. By $\epsilon$-differential privacy we mean $(\epsilon,0)$-differential privacy.

A standard mechanism for achieving $(\epsilon,\delta)$-differential privacy is that of adding zero mean Gaussian noise to a statistic, called the Gaussian Mechanism. A primary reason for the popularity of the Gaussian Mechanism is that the Gaussian distribution is closed under addition.
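As a concrete illustration of the mechanism itself, here is a minimal sketch (assuming NumPy) of releasing a real-valued statistic with zero mean Gaussian noise of scale $\sigma$; the query value and parameters are illustrative placeholders, and choosing $\sigma$ is exactly the question addressed in the rest of this note.

```python
# Minimal sketch of the Gaussian Mechanism: release q(d) + sigma * Z with Z ~ N(0, 1).
# The query value and parameters below are illustrative placeholders.
import numpy as np

def gaussian_mechanism(true_value: float, sigma: float, rng: np.random.Generator) -> float:
    """Return a noisy release of a real-valued statistic q(d)."""
    return true_value + sigma * rng.standard_normal()

rng = np.random.default_rng(0)
count = 42.0  # e.g. a counting query with sensitivity Delta = 1
print(gaussian_mechanism(count, sigma=5.0, rng=rng))
```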
However, Gaussian noise requires $\delta>0$, which represents a relaxation of the stronger $(\epsilon,0)$-differential privacy that is not uncontroversial \autocitemcsherryHowManySecrets2017. On the positive side, a non-zero $\delta$ allows, among others, for better composition properties than $(\epsilon,0)$-differential privacy \autocite7883827. The exploitation of the composition benefits of using Gaussian noise can be observed in an application to deep learning by Abadi et al.\autociteabadiDeepLearningDifferential2016. To achieve $(\epsilon,\delta)$-differential privacy, the variance $\sigma^{2}$ is carefully tuned taking into account the sensitivity $\Delta$ of the statistic, i.e., the maximum change in the statistic resulting from adding or removing any individual record from any database. Of prime importance is to minimize $\sigma$ while still achieving $(\epsilon,\delta)$-differential privacy as higher $\sigma$ generally decreases the utility of the now noisy statistic. A well known sufficient condition for $(\epsilon,\delta)$-differential privacy when adding Gaussian noise that has been described as folklore \autocitedworkAlgorithmicFoundationsDifferential2014 is $$\displaystyle\Pr\left(|Z|>\frac{\sigma\epsilon}{\Delta}-\frac{\Delta}{2\sigma}% \right)\leq\delta.$$ (1) In the following, we are interested in closed form expressions relating $\sigma$, to $\epsilon$, $\delta$, and $\Delta$ that fulfil the criterion above. The simple closed form relationships we are after can be implemented using simple algorithms with low implementation and computational complexity. The benefit of this is a lower potential for errors, as well as decreasing power consumption in low power devices whenever the alternative is using iterative numerical algorithms to compute analytical solutions. Furthermore, closed form relationships can be potentially useful in the analysis of larger systems where the Gaussian Mechanism is a component. In their Theorem A.1 \autocitedworkAlgorithmicFoundationsDifferential2014, Dwork and Roth derive a closed form relationship between $\sigma$, $\epsilon$, $\delta$, and $\Delta$ by substituting the tail bound $\Pr(|Z|>x)\leq 2\phi(x)/x$, where $\phi$ is the standard Gaussian density, into (1) and subsequently manipulating the result to determine that $(\epsilon,\delta)$-differential privacy is achieved for $\epsilon\in(0,1)$ if $$\displaystyle\sigma>s(\epsilon,\delta,\Delta)={\frac{{\Delta}\,\sqrt{2}}{{% \epsilon}}\sqrt{\log\left({\frac{5}{4\,{\delta}}}\right)}}.$$ (2) The above bound (2) is essentially the standard closed form used for the Gaussian Mechanism, and we will refer to it as such in the following. Notably, the restriction $\epsilon\in(0,1)$ can present non-obvious pitfalls in addition to the explicit restriction to privacy regimes with $\epsilon<1$. For example consider the representation of $\epsilon$ as a function derived from the standard bound (2) (ignoring strict inequalities) $$\displaystyle\epsilon(\delta,\sigma,\Delta)$$ $$\displaystyle={\frac{{\Delta}\,\sqrt{2}}{{\sigma}}\sqrt{\log\left({\frac{5}{4% \,{\delta}}}\right)}}.$$ (3) A use of the above function can, for example, be found in \autocitepmlr-v89-wang19b, Section 4. As the magnitude of $\delta$ is associated with the failure of guaranteeing strong $\epsilon$-differential privacy, it is usually stated that $\delta$ should be cryptographically small. 
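Since the representation (3) is easy to misuse, the following minimal sketch simply evaluates it (names are illustrative); as discussed next, even a moderately large $\delta$ can push the implied $\epsilon$ above 1.

```python
# Minimal sketch evaluating epsilon as a function of (delta, sigma, Delta), Eq. (3).
import numpy as np

def epsilon_from_standard_bound(delta: float, sigma: float, sensitivity: float) -> float:
    return sensitivity * np.sqrt(2.0) / sigma * np.sqrt(np.log(5.0 / (4.0 * delta)))

# Even a moderately large delta can push epsilon past 1 (see the discussion below):
print(epsilon_from_standard_bound(delta=1e-1, sigma=1.0, sensitivity=1.0))  # > 2.24
```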
Now, the function in (3) increases as $\delta>0$ decreases, and for fixed $\sigma$ and $\Delta$, even a relatively large $\delta$ could result in $\epsilon(\delta,\sigma,\Delta)\geq 1$, which might not be obvious. For example, $\epsilon(10^{-1},1,1)>2.24$. Below in Theorem 5, we present the bound (12) that holds for all $\epsilon>0$, $1\geq\delta>0$, $\Delta>0$. Like the standard bound (2), this bound is closed form, very simple, and is based on fulfilling condition (1). We analyze this condition, present an optimal analytical bound, and show that any bound satisfying this condition must satisfy $\sigma>\frac{\Delta}{\sqrt{2\epsilon}}$ (Lemma 3). Restricted to $\epsilon\in(0,1)$, our bound (12) allows smaller $\sigma$ than the standard bound (2) whenever $\delta\leq 0.946$. 2 A few more preliminaries We briefly recapitulate known results. In the following, we will let $\Phi$ and $\phi$ denote the standard Gaussian distribution function and density, respectively. For completeness, we provide proofs in Section A. Definition 2. The global sensitivity of a real-valued function $q$ on databases is $$\Delta_{q}=\max_{(d,d^{\prime})\in\mathcal{N}}|q(d)-q(d^{\prime})|.$$ Theorem 1. Let $X$ be a random variable distributed according to density $f$, and let $q$ be a real valued function on databases with global sensitivity $\Delta$. The algorithm that outputs a variate of $q(d)+X$ is $\epsilon$-differentially private if $$\displaystyle f(x)\leq e^{\epsilon}f(y)$$ (4) for all $x,y$ such that $|x-y|\leq\Delta$. Applying the above theorem requires us to check that (4) holds for all $|x-y|\leq\Delta$. For certain densities we need only check for $|x-y|=\Delta$. Corollary 1. If $f$ in Theorem 1 is strictly positive everywhere and log-concave, then the algorithm that outputs a variate of $q(d)+X$ is $\epsilon$-differentially private for $\epsilon>0$ if for all $x$ and $s\in\{-1,+1\}$ $$\displaystyle\frac{f(x)}{f(x+s\Delta)}\leq e^{\epsilon}.$$ (5) Remark 1. The density of a Gaussian distribution is strictly positive everywhere and log-concave. Lemma 2. Let $Z$ be a random variable distributed according to the standard Gaussian distribution. Then for a real-valued function $q$ on databases with global sensitivity $\Delta$ and a database $d$, the mechanism returning a variate of $q(d)+\sigma Z$ is $(\epsilon,\delta)$-differentially private if $$\displaystyle\Pr\left(|Z|>\frac{\sigma\epsilon}{\Delta}-\frac{\Delta}{2\sigma}% \right)\leq\delta.$$ (1) 3 The theorem We are now ready to present our main contributions. Lemma 3. Let $Z$ be a random variable distributed according to the standard Gaussian distribution. Then for $\epsilon>0$, $\Delta>0$, and $\delta<1$ $$\displaystyle\Pr\left(|Z|>\frac{\sigma\epsilon}{\Delta}-\frac{\Delta}{2\sigma}% \right)\leq\delta,$$ (1) holds if and only if $\sigma\geq b$ for $$\displaystyle b={\frac{{\Delta}}{2\,{\epsilon}}\left(\Phi^{-1}\left(1-{\frac{{% \delta}}{2}}\right)+\sqrt{\left(\Phi^{-1}\left(1-{\frac{{\delta}}{2}}\right)% \right)^{2}+2\,{\epsilon}}\right)}>\frac{\Delta}{\sqrt{2\epsilon}}$$ (6) where $\Phi^{-1}$ is the standard Gaussian quantile function. Proof. Let $$v(\sigma)=\frac{\sigma\epsilon}{\Delta}-\frac{\Delta}{2\sigma}.$$ Then, requirement (1) can be written $\Pr(|Z|>v(\sigma))\leq\delta$. Since $\delta<1$, we must have that $\Pr(|Z|>v(\sigma))<1$ which is only the case if $v(\sigma)>0$. Therefore, we need only consider this case in the remainder of this proof. 
Then $$\displaystyle l(\sigma)$$ $$\displaystyle=\Pr\left(|Z|>v(\sigma)\right)=2(1-\Phi(v(\sigma)))$$ $$\displaystyle\iff$$ $$\displaystyle\Phi(v(\sigma))$$ $$\displaystyle=1-\frac{l(\sigma)}{2}$$ $$\displaystyle\iff$$ $$\displaystyle v(\sigma)$$ $$\displaystyle=\Phi^{-1}\left(1-\frac{l(\sigma)}{2}\right).$$ (7) Recall that we want to find a lower bound for $\sigma$ such that $l(\sigma)\leq\delta<1$. We note that $l(\sigma)$ is decreasing in $\sigma$ if $v$ is increasing in $\sigma$. This is the case since $$v^{\prime}(\sigma)={\frac{2\,{{\sigma}}^{2}{\epsilon}+{{\Delta}}^{2}}{2\,{% \Delta}\,{{\sigma}}^{2}}}$$ is positive for all $\sigma$ and $\Delta>0$, $\epsilon>0$. Hence, we can find the sought lower bound by solving $l(\sigma)=\delta$ for $\sigma$. We do this by substituting $\delta$ for $l(\sigma)$ in (7) and solving for $\sigma>0$, yielding $$\displaystyle\sigma$$ $$\displaystyle={\frac{{\Delta}}{2\,{\epsilon}}\left(\Phi^{-1}\left(1-{\frac{{% \delta}}{2}}\right)+\sqrt{\left(\Phi^{-1}\left(1-{\frac{{\delta}}{2}}\right)% \right)^{2}+2\,{\epsilon}}\right)}.$$ (8) We now conclude the proof by showing that the right-hand side of the equation above is larger than $\frac{\Delta}{\sqrt{2\epsilon}}$. First, we note that $z+\sqrt{z^{2}+2\alpha}$ is monotonically increasing in $z\geq 0$. This means that for $z\geq 0$ $$\displaystyle\frac{\Delta(z+\sqrt{z^{2}+2\epsilon})}{2\epsilon}$$ (9) achieves its minimum $\frac{\Delta}{\sqrt{2\epsilon}}$ at $z=0$. Noting that substituting $\Phi^{-1}(1-\delta/2)$ for $z$ in (9) yields the right-hand side of (8) and that $\delta<1$ yields $1-\delta/2>1/2$ and therefore $\Phi^{-1}(1-\delta/2)>0$. Hence, this right hand side is always larger than $\frac{\Delta}{\sqrt{2\epsilon}}$. ∎ Remark 2. A $\delta\geq 1$ eliminates any protection of privacy as the release of the original data is $(\epsilon,1)$-differentially private. In this light, Lemma 3 is a generalization and sharpening of Theorem 4 in \autocitepmlr-v80-balle18a that claims $\sigma\geq\frac{\Delta}{\sqrt{2\epsilon}}$ for the standard bound. As $\delta\to 1$ from below we have that the right side of (8) approaches $\frac{\Delta}{\sqrt{2\epsilon}}$ from above. Lemma 4. Let $\Phi^{{-1}}$ be the standard Gaussian quantile function. Then for $p\geq 1/2$ $$\displaystyle\Phi^{-1}(p)\leq\sqrt{2}\sqrt{-\log(-(2p-1)^{2}+1)}.$$ (10) Proof. It is well known that $\operatorname{erf}(x)=\operatorname{sign}(x)P(\frac{1}{2},x^{2})$, where $P$ is the regularized gamma function $P(s,x)=\frac{\gamma(s,x)}{\Gamma(s)}$ in which $\Gamma$ and $\gamma$ are the Gamma and lower incomplete Gamma functions, respectively (see, e.g., \autociteolverNISTDigitalLibrary2020 7.11.1). From \autociteolverNISTDigitalLibrary2020 (8.10.11) we have that $$(1-e^{-\alpha_{a}x})^{a}\leq P\left(a,x\right)\leq(1-e^{-\beta_{a}x})^{a}$$ for $$\displaystyle\alpha_{a}$$ $$\displaystyle=\begin{cases}1,&0<a<1,\\ d_{a},&a>1,\end{cases}$$ $$\displaystyle\beta_{a}$$ $$\displaystyle=\begin{cases}d_{a},&0<a<1,\\ 1,&a>1,\end{cases}$$ $$\displaystyle d_{a}$$ $$\displaystyle=(\Gamma\left(1+a\right))^{-1/a}.$$ Since $a=1/2$ in our case, get that for $x\geq 0$ $$\displaystyle\operatorname{erf}(x)\geq(1-e^{-x^{2}})^{1/2},$$ and consequently $$\displaystyle\operatorname{erf}^{-1}(x)\leq\sqrt{-\log(-x^{2}+1)}$$ when $x\geq 0$. As $\Phi^{-1}(p)=\sqrt{2}\operatorname{erf}^{-1}(2p-1)$, the Lemma follows by substituting the upper bound for $\operatorname{erf}^{-1}$. ∎ Theorem 5 (Gaussian mechanism $(\epsilon,\delta)$-differential privacy). 
Let $q$ be a real valued function on databases with global sensitivity $\Delta$, and let $Z$ be a standard Gaussian random variable. Then for $\delta\leq 1$ and $\epsilon>0$, the mechanism that returns a variate of $q(d)+\sigma Z$ is $(\epsilon,\delta)$-differentially private if $\sigma\geq b$ where
$$b=\frac{\Delta}{2\epsilon}\left(\Phi^{-1}\left(1-\frac{\delta}{2}\right)+\sqrt{\left(\Phi^{-1}\left(1-\frac{\delta}{2}\right)\right)^{2}+2\epsilon}\right)$$ (11)
$$\leq\frac{\Delta\sqrt{2}}{2\epsilon}\left(\sqrt{\log\left(\frac{1}{\delta\left(2-\delta\right)}\right)}+\sqrt{\log\left(\frac{1}{\delta\left(2-\delta\right)}\right)+\epsilon}\right)$$ (12)
$$\leq\frac{\Delta\sqrt{2}}{\epsilon}\sqrt{\log\left(\frac{1}{\delta\left(2-\delta\right)}\right)}+\frac{\Delta}{\sqrt{2\epsilon}},$$ (13)
where $\Phi^{-1}$ is the standard Gaussian quantile function.

Proof. Note that for $\delta=1$, any mechanism fulfils $(\epsilon,\delta)$-differential privacy, and so does the one with $\sigma\geq b$ in particular; in this case the upper bounds (12) and (13) also hold with equality. Now let $\delta<1$. Then, differential privacy and (11) follow from Lemmas 2 and 3. As $1-\delta/2\geq 1/2$, we apply Lemma 4 to get $\Phi^{-1}(1-\delta/2)\leq\sqrt{-2\log\left(\delta\left(2-\delta\right)\right)}$. Substituting this bound for $\Phi^{-1}(1-\delta/2)$ in (11) yields the bound (12) in the Theorem. Bound (13) is obtained by applying the fact that $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$ for non-negative $a,b$ to the right hand side of (12). ∎

Remark 3. Theorem 5 can be extended unchanged to the multidimensional case using the exact argument Dwork and Roth use in their proof of Theorem A.1 in their monograph \autocitedworkAlgorithmicFoundationsDifferential2014.

4 Illustrating constraints of the standard bound

Here we graphically illustrate that constraining $\epsilon$ from above for the standard bound is indeed needed, both for meeting condition (1) and for providing $(\epsilon,\delta)$-differential privacy. Recall from Lemma 2 that the sufficient condition for adding Gaussian noise to achieve $(\epsilon,\delta)$-differential privacy is
$$\Pr(|Z|>v(\sigma,2))\leq\delta$$
for the standard Gaussian variable $Z$ and
$$v(\sigma,y)=\frac{\sigma\epsilon}{\Delta}-\frac{\Delta}{y\sigma}.$$
We further have that for $s$ defined in (2),
$$w(\epsilon,\delta)=v(s(\epsilon,\delta,\Delta),2)=\frac{\sqrt{2}\left(4\log\left(\frac{5}{4\delta}\right)-\epsilon\right)}{4\sqrt{\log\left(\frac{5}{4\delta}\right)}},$$
which does not depend on $\Delta$. Let
$$g(\epsilon,\delta)=\delta-2(1-\Phi(w(\epsilon,\delta)))=\delta-\Pr(|Z|>w(\epsilon,\delta)).$$
Now, the sign of $g$ determines whether the condition (1) in Lemma 2 is met. A plot of $g(\epsilon,\delta)$ can be seen in Figure 1. Interestingly, there exist $0<\delta<1$ and $0<\epsilon<1$ such that (1) is violated, as $g(0.97,0.97)<-0.005$, suggesting that technically a constraint on $\delta$ is needed to avoid violating (1). However, as Balle et al. \autocitepmlr-v80-balle18a point out, violating (1) is not the same as violating $(\epsilon,\delta)$-differential privacy.
They show that $(\epsilon,\delta)$-differential privacy is achieved if and only if $$\displaystyle\Phi\left(\frac{\Delta}{2\sigma}-\frac{\epsilon\sigma}{\Delta}% \right)-e^{\epsilon}\Phi\left(-\frac{\Delta}{2\sigma}-\frac{\epsilon\sigma}{% \Delta}\right)\leq\delta.$$ (14) They do not provide a closed form bound based on (14) but provide a numerical algorithm to compute the smallest $\sigma>0$ for which the above holds. Substituting $s(\epsilon,\delta,\Delta)$ for $\sigma$ in the left side of (14), and subtracting this from $\delta$ yields $$d(\epsilon,\delta)=\delta-\left(\Phi(-v(s(\epsilon,\delta,\Delta),2))-e^{% \epsilon}\Phi(-v(s(\epsilon,\delta,\Delta),-2))\right),$$ which does not depend on $\Delta$. Analogous to $g$ above, the sign of $d$ determines whether (14) and $(\epsilon,\delta)$-differential privacy is violated. A plot of $d(\epsilon,\delta)$ can be seen in Figure 1. Negative values indicate failure to be $(\epsilon,\delta)$-differential privacy. The plot suggests that even if the inequality of the standard bound (2) is strict, it is safe to consider it non-strict for $\epsilon\in(0,1)$. What the plot also shows, is that the standard bound does not yield $(\epsilon,\delta)$-differential privacy for all $\epsilon>0$. 5 Comparing the two bounds Dwork and Roth developed the bound by substituting the Cramér–Chernoff style tail bound $\Pr(|Z|>x)\leq 2\phi(x)/x$ into (1) and subsequently manipulating the result to determine the bound. This differs from the approach above resulting in our bound that is based on bounding the (inverse) error function. Furthermore, the standard bound is constrained to $\epsilon\in(0,1)$, while our bound is valid for all $\epsilon>0$. We now compare these for the common interval $\epsilon\in(0,1)$. The ratio of the standard bound (2) and our bound (12) is $$\displaystyle r(\epsilon,\delta)=\frac{2\sqrt{\log\left(\frac{5}{4\delta}% \right)}}{\sqrt{\log\left(\frac{1}{\delta\left(2-\delta\right)}\right)}+\sqrt{% \log\left(\frac{1}{\delta\left(2-\delta\right)}\right)+\epsilon}}.$$ (15) A value for $r>1$ means that the standard bound is larger than ours. A plot of the ratio $r$ can be seen in Figure 2. Furthermore, the above ratio is 1 when $$\displaystyle\epsilon=\epsilon(\delta)=4\log\left(\frac{5}{4\delta}\right)-4% \sqrt{\log\left(\frac{5}{4\delta}\right)}\sqrt{\log\left(\frac{1}{\delta\left(% 2-\delta\right)}\right)}.$$ (16) A plot of $\epsilon(\delta)$ can be seen in Figure 2. The partial derivative of $r$ in (15) with respect to $\epsilon$ is $$\displaystyle-\frac{\sqrt{\log\left(\frac{5}{4\delta}\right)}}{\left(\sqrt{% \log\left(\frac{1}{\delta\left(2-\delta\right)}\right)}+\sqrt{\log\left(\frac{% 1}{\delta\left(2-\delta\right)}\right)+\epsilon}\right)^{2}\sqrt{\log\left(% \frac{1}{\delta\left(2-\delta\right)}\right)+\epsilon}}.$$ (17) This derivative is negative for $\delta>0$ and $\epsilon>0$, meaning that the ratio $r$ decreases as $\epsilon$ increases, which in turn means that the shaded area strictly under the curve of $\epsilon(\delta)$ in Figure 2 represents values $(\delta,\epsilon)$ for which $r>1$, indicating that our bound (12) allows a smaller $\sigma$ than the standard bound (2). Numerical calculation yields that $\epsilon(0.946)>1$, and looking at the curve in Figure 2 we see that $r>1$ for $\epsilon\in(0,1)$ and $\delta\in(0,0.946)$. 
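To complement this comparison, here is a minimal numerical sketch (assuming NumPy and SciPy; all names are illustrative) that evaluates the optimal bound (11), the new closed form (12), and the standard bound (2), checks the sufficient condition (1), and finds the smallest $\sigma$ satisfying the exact condition (14) by simple bisection. The bisection is a crude stand-in for the numerical algorithm of Balle et al., not a reproduction of it.

```python
# Numerical sketch of the quantities discussed above; illustrative only.
import numpy as np
from scipy.stats import norm

def sigma_optimal(eps, delta, sens):
    """Eq. (11): smallest sigma fulfilling the sufficient condition (1)."""
    q = norm.ppf(1.0 - delta / 2.0)
    return sens / (2.0 * eps) * (q + np.sqrt(q * q + 2.0 * eps))

def sigma_new(eps, delta, sens):
    """Eq. (12): closed-form bound valid for all eps > 0."""
    z = -np.log(delta * (2.0 - delta))
    return sens / (np.sqrt(2.0) * eps) * (np.sqrt(z) + np.sqrt(z + eps))

def sigma_standard(eps, delta, sens):
    """Eq. (2): the standard closed form, stated for eps in (0, 1)."""
    return sens * np.sqrt(2.0) / eps * np.sqrt(np.log(5.0 / (4.0 * delta)))

def condition_1(sigma, eps, delta, sens):
    """Sufficient condition (1): Pr(|Z| > sigma*eps/sens - sens/(2*sigma)) <= delta."""
    v = sigma * eps / sens - sens / (2.0 * sigma)
    return 2.0 * (1.0 - norm.cdf(v)) <= delta

def condition_14(sigma, eps, delta, sens):
    """Exact condition (14) of Balle et al."""
    a, b = sens / (2.0 * sigma), eps * sigma / sens
    return norm.cdf(a - b) - np.exp(eps) * norm.cdf(-a - b) <= delta

def smallest_sigma_exact(eps, delta, sens, tol=1e-9):
    """Bisection for the smallest sigma satisfying (14); a crude stand-in for
    the numerical algorithm of Balle et al."""
    lo, hi = 1e-12, 1.0
    while not condition_14(hi, eps, delta, sens):   # grow until the condition holds
        hi *= 2.0
    while hi - lo > tol:                            # bisect down to the boundary
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if condition_14(mid, eps, delta, sens) else (mid, hi)
    return hi

eps, delta, sens = 0.5, 1e-5, 1.0
for f in (sigma_optimal, sigma_new, sigma_standard):
    s = f(eps, delta, sens)
    print(f.__name__, round(s, 4), condition_1(s, eps, delta, sens))
print("exact (14):", round(smallest_sigma_exact(eps, delta, sens), 4))
```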
Inspecting $r$, we see that as $\epsilon\to 0$ we get that $r\to\rho(\delta)$ where $$\rho(\delta)=\sqrt{\frac{\log\left(\frac{5}{4\delta}\right)}{\log\left(\frac{1% }{\delta\left(2-\delta\right)}\right)}}.$$ Since $r$ is decreasing in $\epsilon$, the function $\rho(\delta)$ provides an upper bound on $r$ for a given value of $\delta$. The function $\rho$ is increasing in $0<\delta<1$ and as $\delta\to 0$ we have that $\rho\to 1$. A plot of $\rho$ can be seen in Figure 2. As $\rho(10^{-8})<1.026$, we see that for small $\delta$, the ratio $r$ is not that big. In other words, while our bound (12) is better than the standard bound for $\delta\in(0,0.946)$, it is only slightly better for $\delta$ that can be considered small. 6 Conclusion In Theorem 5 and by Remark 3 we presented a closed form lower bound (12) on $\sigma$ in terms of $\epsilon$ and $\delta$ needed to achieve $(\epsilon,\delta)$-differential privacy using the Gaussian Mechanism. Compared to the standard bound (2), our bound (12) has the following benefits: a. it is valid for all $\epsilon>0$, and b. it allows a smaller $\sigma$ whenever $\epsilon\in(0,1)$ and $\delta\in(0,0.946)$. While our bound is better for the above ranges of $\epsilon$ and $\delta$, we suggest that the main advantage of our bound is that it is valid for all $\epsilon>0$ and that it can effectively be used without loss respective to the standard bound. Like the standard bound (2), our bound (12) is based on the sufficient condition (1). Under this condition, the best possible $\sigma$ for $\delta<1$ is given by (11) and must be larger than $\frac{\Delta}{\sqrt{2\epsilon}}$ (Lemma 3). As Balle et al. \autocitepmlr-v80-balle18a demonstrate, the above condition is not necessary, and smaller $\sigma$ can be gotten through numerically optimizing (14). A question we leave unaddressed for now is whether suitable closed form bounds on $\Phi$ can be substituted into (14) to find an even better closed form bound on $\sigma$. \printbibliography [title=References] Appendix A Proofs from Section 2 Proof of Theorem 1. Let $T=(a,b)\subseteq\mathbb{R}$ for some $a<b$, let $f(x)\leq e^{\epsilon}f(y)$ for all $|x-y|\leq\Delta$, and let $|v-w|\leq\Delta$. Then, $\Pr(X+v\in T)=\int_{x\in T}f(x-v)dx\leq\int_{x\in T}e^{\epsilon}f(x-w)dy=e^{% \epsilon}\int_{x\in T}f(x-w)dy=e^{\epsilon}\Pr(X+w\in T)$ since $|v-w|\leq\Delta$ implies $|(x-v)-(x-w)|\leq\Delta$ for any $x\in T$. Since we can decompose any measurable $S$ into a countable union of disjoint open intervals $T$, we get $\Pr(X+x\in S)\leq e^{\alpha}\Pr(X+y\in S)$ for any $|x-y|\leq\Delta$. The theorem then follows from $|q(d)-q(d^{\prime})|\leq\Delta$ for any $(d,d^{\prime})\in\mathcal{N}$. ∎ Proof of Corollary 1. Since $f$ is strictly positive (and therefore also defined) everywhere, $f(x)/f(x+d)$ is well defined for any $x,y,d\in\mathbb{R}$. Furthermore, since $f$ is also log-concave, $f$ is unimodal, continuous, and for any $d>0$ ($d<0$) we have that $f(x)/f(x+d)$ is monotone and non-decreasing (non-increasing) in $x$ (see, e.g., \autocitesaumard2014logconcavity). Since $f$ is log-concave and positive everywhere, it decreases away from the mode $x_{m}$ on both sides, and $f(x+d)$ does the same for its mode $x_{m}-d$. Let $0\leq z\leq\Delta$. We first show that $f(x)/f(x+\Delta)\leq e^{\epsilon}$ implies $f(x)\leq e^{\epsilon}f(x+z)$. Let $z=0$, then since $e^{\epsilon}>1$, we have that $f(x)\leq e^{\epsilon}f(x+z)$ for all $x$. Therefore, assume $z>0$. 
Now, there exists $x^{*}\in(x_{m}-z,x_{m})$ such that $f(x)\leq f(x+z)$ for $x\leq x^{*}$ and $f(x)\geq f(x+z)$ for $x\geq x^{*}$. For $x\leq x^{*}$, it follows that $f(x)\leq e^{\epsilon}f(x+z)$. Now assume that $x>x^{*}$. For $x\geq x_{m}>x^{*}$, we have that $f$ is non-increasing. This means that $f(x+z)\geq f(x+\Delta)$, and consequently $f(x)\leq e^{\epsilon}f(x+\Delta)$ implies $f(x)\leq e^{\epsilon}f(x+z)$. Also, $f(x_{m}+z)\geq f(x_{m}+\Delta)$, which means that $f(x_{m})/f(x_{m}+z)\leq f(x_{m})/f(x_{m}+\Delta)$. Since $f(x)/f(x+z)$ is non-decreasing and for $x^{*}<x\leq x_{m}$ attains its maximum at $f(x_{m})/f(x_{m}+z)\leq f(x_{m})/f(x_{m}+\Delta)\leq e^{\epsilon}$, we must have that $f(x)\leq e^{\epsilon}f(x+z)$ also for $x^{*}<x<x_{m}$. We have now established that $f(x)/f(x+\Delta)\leq e^{\epsilon}$ implies $f(x)\leq e^{\epsilon}f(x+z)$. The case for $f(x)/f(x-\Delta)\leq e^{\epsilon}$ implying $f(x)\leq e^{\epsilon}f(x-z)$ follows from a mirrored argument. Collecting the above, we now have that $f(x)/f(x+s\Delta)\leq e^{\epsilon}$ implies $f(x)\leq e^{\epsilon}f(x+sz)$ for $0\leq z\leq\Delta$ and all $x$. Noting that $|x-y|\leq\Delta$ if and only if $y=x+sz$ for some $s\in\{-1,1\}$ and $z\in[0,\Delta]$, we obtain that $f(x)/f(x+s\Delta)\leq e^{\epsilon}$ implies $f(x)\leq e^{\epsilon}f(y)$ for all $x$ and $y$ such that $|x-y|\leq\Delta$. The corollary now follows from Theorem 1. ∎

Proof of Lemma 2. Following Corollary 1 and Remark 1, we investigate
$$\frac{\phi(x/\sigma)}{\phi((x+s\Delta)/\sigma)}\leq e^{\epsilon},$$
for $s\in\{-1,1\}$ and $\phi=\Phi^{\prime}$. The above holds as long as $|x|\leq\frac{\sigma^{2}\epsilon}{\Delta}-\frac{\Delta}{2}$. Define $S_{\text{bad}}=\{x\mid|x|>\frac{\sigma^{2}\epsilon}{\Delta}-\frac{\Delta}{2}\}$ and $S_{\text{good}}=\mathbb{R}-S_{\text{bad}}$. We now show that $\Pr(\sigma Z\in S_{\text{bad}})\leq\delta$ implies $(\epsilon,\delta)$-differential privacy of $q(d)+\sigma Z$. First, define
$$S^{+}=q(d)+S_{\text{good}},\qquad S^{-}=q(d)+S_{\text{bad}}.$$
Now, for any $S^{\prime}\subseteq S^{+}$ and neighboring database $d^{\prime}$ we have
$$\Pr(q(d)+\sigma Z\in S^{\prime})\leq e^{\epsilon}\Pr(q(d^{\prime})+\sigma Z\in S^{\prime}).$$
Then for fixed measurable $S\subset\mathbb{R}$
$$\Pr(q(d)+\sigma Z\in S\cap S^{-})\leq\Pr(q(d)+\sigma Z\in S^{-})=\Pr(\sigma Z\in S_{\text{bad}})\leq\delta,$$
$$\Pr(q(d)+\sigma Z\in S\cap S^{+})\leq e^{\epsilon}\Pr(q(d^{\prime})+\sigma Z\in S\cap S^{+})\leq e^{\epsilon}\Pr(q(d^{\prime})+\sigma Z\in S),$$
and
$$\Pr(q(d)+\sigma Z\in S)=\Pr(q(d)+\sigma Z\in S\cap S^{+})+\Pr(q(d)+\sigma Z\in S\cap S^{-})\leq\Pr(q(d)+\sigma Z\in S\cap S^{+})+\delta\leq e^{\epsilon}\Pr(q(d^{\prime})+\sigma Z\in S)+\delta.$$
For $\sigma>0$
$$\Pr\left(|\sigma Z|\leq\frac{\sigma^{2}\epsilon}{\Delta}-\frac{\Delta}{2}\right)=\Pr\left(|Z|\leq\frac{\sigma\epsilon}{\Delta}-\frac{\Delta}{2\sigma}\right),$$
from which the Lemma follows. ∎
A knee cannot have lung disease: out-of-distribution detection with in-distribution voting using the medical example of chest X-ray classification Alessandro Wollek alessandro.wollek@tum.de Munich Institute of Biomedical Engineering, Technical University of Munich Department of Informatics, Technical University of Munich Theresa Willem Institute for History and Ethics in Medicine, Technical University of Munich Munich School of Technology in Society, Technical University of Munich Michael Ingrisch Department of Radiology, University Hospital, Ludwig-Maximilians-Universität Bastian Sabel Department of Radiology, University Hospital, Ludwig-Maximilians-Universität Tobias Lasser Munich Institute of Biomedical Engineering, Technical University of Munich Department of Informatics, Technical University of Munich Abstract Deep learning models are being applied to more and more use cases with astonishing success stories, but how do they perform in the real world? To test a model, a specific cleaned data set is assembled. However, when deployed in the real world, the model will face unexpected, out-of-distribution (OOD) data. In this work, we show that the so-called “radiologist-level” CheXnet model fails to recognize all OOD images and classifies them as having lung disease. To address this issue, we propose in-distribution voting, a novel method to classify out-of-distribution images for multi-label classification. Using independent class-wise in-distribution (ID) predictors trained on ID and OOD data we achieve, on average, 99 % ID classification specificity and 98 % sensitivity, improving the end-to-end performance significantly compared to previous works on the chest X-ray 14 data set. Our method surpasses other output-based OOD detectors even when trained solely with ImageNet as OOD data and tested with X-ray OOD images. Keywords out-of-distribution detection, chest x-ray classification, outlier detection, anomaly detection 1 Introduction Modern machine learning models are achieving great successes in real world medical applications, such as diabetic retinopathy diagnosis Gulshan et al. (2016), skin cancer classification Esteva et al. (2017), or lung disease assessment Rajpurkar et al. (2017, 2018a); Majkowska et al. (2019). Due to the early and profound digitization of imaging techniques, machine learning in radiology can already show convincing successes, such as the detection of certain critical pathologies of the lung on X-ray images non-inferior to radiologists Rajpurkar et al. (2017). Considering the increasing demand for imaging while the number of radiologists remains insufficient, such and similar models can help improve medical patient care Ali et al. (2015); Idowu and Okedere (2020); Rosman et al. (2015); Rosenkrantz et al. (2016); Rimmer (2017) , e.g., by screening acquired radiographs for critical findings prior to radiologist interpretation. Then, patients with time sensitive illnesses will receive treatment earlier, potentially saving their lives. What all of these chest X-ray classifiers have seen, once trained, validated and tested, are chest X-rays of a certain type, the in-distribution (ID) images. Consequently, the features learned depend on the assumption that the input is ID. But despite the advanced level of digitization, individual workflows for creating and archiving radiological images and linking them to other patient data are subject to manual intervention by staff and are consequently prone to human error, breaking this assumption. 
Just one example would be mixed-up labelling for patients of whom X-ray images of several body parts have been taken. Consequently, images of a knee joint, for example, would be fed to a model for detecting pulmonary pathologies. Hence, when out-of-distribution (OOD) images are presented, as in the aforementioned example, erroneous and potentially patient-harming events are possible. A major problem of current deep learning models is that they make high confidence predictions when facing unexpected, OOD data Nguyen et al. (2015); Nalisnick et al. (2019); Hendrycks et al. (2021), like a knee X-ray. In our example, prioritization based on false, high-confidence OOD X-rays can lead to longer waiting times for other patients with time critical conditions, like a pneumothorax, potentially risking their lives until the error is discovered and resolved. Moreover, repeated instances of such misreporting will, if not sufficiently balanced with transparency measures, quickly lead physicians to distrust the model, eventually leading them to refrain from using it (Robinette et al., 2017; Vayena et al., 2018; Nov et al., 2021).

Therefore, in recent years, several methods Hendrycks and Gimpel (2017); Hendrycks et al. (2020); Wang et al. (2021); Hendrycks et al. (2019) have been proposed to detect OOD samples. Commonly, the OOD detector converts the output of a model to an ID probability. For example, Max. Probability Hendrycks et al. (2020) uses the highest class probability as ID probability. Another approach, proposed by Lee et al. Lee et al. (2018), models OOD data based on the smallest Mahalanobis distance between the input and a class-conditional Gaussian distribution in the latent space. So far, the problem caused by OOD data has been investigated only on toy data sets, e.g. a model trained on the CIFAR-10 data set Krizhevsky and Hinton (2009), learning to classify automobiles and trucks, is tested on the SVHN data set Netzer et al. (2011) containing house numbers. This raises the question of whether the test performance of proposed OOD detectors translates to an existing model trained on chest X-rays. Figure 1 motivates this problem: as real-world data consists of more than frontal chest X-rays, a classifier like CheXnet Rajpurkar et al. (2017) must handle OOD images safely.

In this work, we address the practical consequences of OOD data by examining the impact of non-chest radiographs on the so-called “radiologist-level” chest X-ray classifier CheXnet. The major contributions of our work are: we systematically explore the OOD detection performance of the CheXnet chest X-ray classifier on three realistic OOD data sets; we show that the benchmark performance of current OOD detection methods mostly does not translate to this domain; and we demonstrate that our proposed method in-distribution voting (IDV) improves OOD detection and generalizes to other data sets.

2 Results

CheXnet

We investigate the effect of X-ray OOD data on the performance of an existing, “radiologist-level” chest X-ray classifier. We trained the CheXnet model Rajpurkar et al. (2017) on the Chest X-ray 14 (CXR14) data set Wang et al. (2017). The data set consists of 112,120 frontal chest radiographs with 14 annotated chest pathologies (see Figure 3 c for a list of labels). The model achieved a mean area under the receiver operating characteristic curve (AUC) of 83 % when tested without OOD images, as shown in Table Appendix 3.
As an image might display signs of multiple pathologies, classifying these images is modeled as a multi-label classification task, where each class is predicted independently. Images that do not show any signs of these 14 diseases are labeled as “no finding”, which has been modeled as not predicting any of the 14 classes. The consequence is that OOD predictions are indistinguishable from “no finding” predictions, as the CheXnet model must predict zero probability for all 14 classes in both cases (a minimal sketch of this decision rule is given at the end of this subsection).

OOD Data Sets

Not every OOD sample is equally likely in a real-world scenario. The CheXnet model can encounter OOD X-ray images, as the distinction between ID and OOD X-ray images is based on manual, error-prone tagging. Photographs, on the other hand, are not part of the image processing pipeline in a radiology department and can thus be assumed not to be found in a real-world scenario. For our experiments, we selected three publicly available radiographic data sets, IRMA Deserno and Ott (2009), MURA Rajpurkar et al. (2018b), and BoneAge Halabi et al. (2019), containing images of various body parts as realistic OOD test data sets to test cross-data set generalization Torralba and Efros (2011). While the CheXnet model has been pre-trained to predict the ImageNet classes, ImageNet images are OOD with respect to the target task of chest X-ray classification, as the data set does not include chest X-rays. Therefore, we use it as an additional non-chest X-ray OOD data set, allowing us to investigate the performance of our proposed method “In-Distribution Voting” (IDV) when trained with a large-scale unrealistic OOD data set. This is relevant for use cases where no or only a few realistic OOD images are available. The OOD data sets are illustrated in Figure 2:

• IRMA: the image retrieval in medical applications data set Deserno and Ott (2009) consists of 14,410 diverse radiographic images; 12,677 are annotated according to the anatomical category, 1733 are test images without annotation.

• MURA: the musculoskeletal radiographs data set Rajpurkar et al. (2018b) consists of 40,561 radiographic images, displaying different upper extremity bones.

• BoneAge: the Bone Age data set Halabi et al. (2019) consists of 12,811 hand radiographs of children.

• ImageNet: the ImageNet data set Russakovsky et al. (2015) contains over one million web scraped photographs. The data set is often used for pre-training computer vision models.

We specifically chose publicly available data sets to ensure reproducibility of our findings, and selected multiple OOD data sets to test how the different OOD detection methods generalize to images from various sources. Further data set details are listed in Table 1 and described in Section 5.1. Our corresponding code is available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-disease.

To investigate the effect of OOD images on the CheXnet predictions, we measured how many OOD images are incorrectly classified as ID, i.e. the model predicts the presence of a disease while there is none. Here we found that the standard CheXnet model fails to reject any OOD image across all three test data sets, achieving an ID specificity of 0 % on all three X-ray OOD test data sets, as shown in Figure 3 a. These results provide evidence that the model’s prediction is (unsurprisingly) conditioned on the assumption that the input image is a chest X-ray image.
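As referenced above, the following is a minimal sketch of the per-class decision rule and the resulting ambiguity: an input whose per-class sigmoid scores all fall below their thresholds is reported as “no finding”, whether it is a healthy chest X-ray or an OOD image. The scores shown are made up for illustration and use fewer than 14 classes.

```python
# Minimal sketch of the multi-label decision rule and the "no finding" ambiguity.
import numpy as np

def multilabel_decision(probs, threshold=0.5):
    """Return indices of predicted pathologies, or 'no finding' if none clears the threshold."""
    positive = np.flatnonzero(probs >= threshold)
    return positive.tolist() if positive.size else "no finding"

# Uniformly low scores: indistinguishable from a healthy chest X-ray under this rule.
print(multilabel_decision(np.array([0.10, 0.05, 0.20])))  # -> "no finding"
```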
An acclaimed “radiologist-level” model that cannot distinguish between a chest and a knee will be experienced as unreliable and untrustworthy by physicians and therefore potentially not used at all. The ID sensitivity is 97.8 %, although the model’s predictions for “no finding” ID images and OOD images are indistinguishable. Accounting for this by removing “no finding” samples increases the sensitivity from 97.8 % to 99.7 %, as shown in Figure 3 b.
OOD Detection
Several OOD detection methods have been proposed to predict an ID probability explicitly. Hendrycks and Gimpel were among the first to address this problem Hendrycks and Gimpel (2017). They used the maximum value of the softmax prediction as ID probability (Max. Softmax). In their work, they motivate this choice by observing that the highest prediction is lower for OOD samples than for ID samples. As the softmax is commonly used for single-label classification problems, they extended the method to multi-label classification problems by taking the maximum prediction of the sigmoid (Max. Prediction) Hendrycks et al. (2020). Alternatively, they propose to use the maximum of the underlying logits, the input to the sigmoid layer (Max. Logit). Wang et al. use the label-wise energy function Liang et al. (2018) instead of the sigmoid to transform the output of the model into an ID probability score (Max. Energy) Wang et al. (2021). In the OOD literature, the OOD detection performance is typically reported using the threshold-independent area under the receiver operating characteristic curve (AUC) on an OOD data set Hendrycks and Gimpel (2017); Hendrycks et al. (2019); Wang et al. (2021). However, without specifying a threshold and reporting the OOD performance on a held-out test data set, the reported results do not answer how these methods would perform in a real-world scenario. Similar to Lee et al. Lee et al. (2018), we pick the OOD threshold at an ID classification sensitivity of 95 %. In contrast to previous works, we do so on the validation set (CXR14 + OOD) and report the performance on the test set. This allows us to measure the chest X-ray classification performance in a more realistic scenario: after removing predicted OOD samples on the test set, as shown in Figure 3 c. We tested several state-of-the-art OOD detection methods (Max. Softmax, Max. Energy, Max. Prediction and Max. Logit) to improve the ID classification performance compared to the baseline CheXnet model. Throughout our OOD experiments we use Max. Prediction as a representative of these methods, as they are all output-based and performed similarly (see Table Appendix 2 for a detailed comparison). We found that all methods failed to detect OOD images, as seen in Figure 3 a: the specificity is 0 %. Although predicting the ID probability enables the differentiation of “no finding” ID and OOD images, the ID sensitivity is only 95.6 % compared to 97.8 % without OOD detection (see Figure 3 b). Removing “no finding” samples increases the ID sensitivity to 99.3 %. Due to the lower ID classification performance, the end-to-end performance compared to the CheXnet model is marginally lower (81.6 % AUC without OOD detection compared to 81.5 % AUC with Max. Prediction), as seen in Figure 3 c. Instead of converting the model’s output to an ID prediction, Lee et al. use the activations of the model to generate class-conditional Gaussian distributions Lee et al. (2018) (Mahalanobis). Doing so, they model OOD images as unlikely activations, i.e. as having a large Mahalanobis distance to the modeled class means; a schematic sketch of this scoring rule is given below.
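The Mahalanobis-based detector of Lee et al. can be summarized with a short sketch. This is our own illustration under simplifying assumptions (single-label class assignments when fitting the Gaussians, penultimate-layer features as in Section 5.3, hypothetical function names), not the released implementation:

```python
# Sketch of Mahalanobis OOD scoring (Lee et al., 2018): fit one Gaussian per class
# with a shared covariance on ID penultimate-layer features; the OOD score of a new
# sample is its smallest Mahalanobis distance to any class mean.
import numpy as np

def fit_class_gaussians(features, class_ids, n_classes):
    # features: (n_samples, d) ID training features; class_ids: (n_samples,) integer labels
    means = np.stack([features[class_ids == c].mean(axis=0) for c in range(n_classes)])
    centered = np.concatenate([features[class_ids == c] - means[c] for c in range(n_classes)])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])  # tied covariance
    return means, np.linalg.inv(cov)

def mahalanobis_ood_score(feature, means, cov_inv):
    diffs = means - feature                                  # (n_classes, d)
    dists = np.einsum('cd,de,ce->c', diffs, cov_inv, diffs)  # squared Mahalanobis distances
    return dists.min()  # larger value = more likely OOD

# The ID/OOD cut-off on this score is then chosen so that 95 % of the ID
# validation samples are retained, as described in the text.
```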
They motivate using the Mahalanobis distance between the mean representation of a class and the input in the feature space, instead of performing OOD detection in the label space, by “label overfitting”, i.e. the observation that the model predictions are conditioned on the training labels. Detecting OOD images using the Mahalanobis distance results in a much higher ID specificity compared to the output-based methods: 98 % - 100 % across the three chest X-ray OOD test data sets with a sensitivity of 95.5 %, as seen in Figure 3 a. Removing “no finding” samples barely increases the sensitivity (96 %). As with the output-based methods, the ID threshold was set to yield 95 % sensitivity on the validation set. The strong specificity difference between the output-based methods and Mahalanobis provides evidence for the “label overfitting” hypothesis. When measuring the end-to-end performance, i.e. measuring the AUC of the disease predictions after removing predicted OOD images, the results were mixed compared to the baseline CheXnet model: for some classes the model performed better (e.g. consolidation improved from 73.9 % to 81.6 %), while for others the performance decreased significantly (e.g. emphysema dropped from 88.2 % to 81.9 %). One reason for this could be that this method chooses the highest class-wise distance as OOD score instead of considering all classes.
In-Distribution Voting
To improve the robustness of the model’s prediction and the OOD detection performance, we propose In-Distribution Voting (IDV): a sample is classified as ID if at least one class-wise prediction is above the class-wise ID threshold, as illustrated in Figure 4 (see also the sketch at the end of this subsection). As the ID threshold is determined using ID and OOD data, we can expect to have some OOD data at hand and leverage it to break the “closed world” assumption during training, i.e. to force the model not to condition its prediction on the assumption of a chest X-ray input. To do so, we adapt approaches proposed in the literature Hendrycks et al. (2019); Bevandić et al. (2019) and include OOD data in the training data set, so-called outlier exposure Hendrycks et al. (2019) or negative data Torralba and Efros (2011). Note that, while “no finding” samples have no labeled diseases, we consider them ID, as they are chest X-rays. The ID thresholds are set for each class independently to yield 95 % sensitivity on the validation data set (containing ID and OOD images). When trained with OOD data from ImageNet and IRMA, our method achieves an ID specificity of 98 % - 100 % across the X-ray OOD test data sets while having an ID sensitivity of 98.2 %, as seen in Figure 3 a and b. Removing “no finding” samples improves the sensitivity further to 99.8 %. Compared to Mahalanobis and the other methods, IDV also improves the class-wise end-to-end prediction across all classes (see Figure 3 c), with a mean AUC of 85.5 % compared to the baseline of 81.6 %. The improvement in both sensitivity and specificity compared to other output-based methods like Max. Prediction suggests that training with OOD data strongly improves the model’s OOD detection performance. This is true even when trained with mostly photographs from the ImageNet data set and tested with OOD X-ray images, as seen in Figure 3. We interpret these results as indicating that the model incorporates the existence of OOD images into its output. The consistent end-to-end prediction improvements, compared to the mixed results of Mahalanobis, demonstrate the importance of querying all classes for OOD detection instead of focusing on a single class.
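A minimal sketch of the IDV decision rule follows. It is our own illustration of the procedure described above, under one plausible reading of the per-class threshold selection (keep 95 % of the positive ID validation samples per class); function names are hypothetical and the released code may differ:

```python
# Sketch of In-Distribution Voting (IDV): per-class thresholds are chosen on a
# validation set (ID + OOD); a sample is ID if any class prediction clears its
# class-wise threshold, otherwise it is flagged as OOD.
import numpy as np

def class_wise_thresholds(val_probs, val_labels, target_sensitivity=0.95):
    # val_probs: (n_val, 14) sigmoid outputs; val_labels: (n_val, 14) binary ID labels
    n_classes = val_probs.shape[1]
    thresholds = np.zeros(n_classes)
    for c in range(n_classes):
        positives = val_probs[val_labels[:, c] == 1, c]
        # the (1 - sensitivity) quantile keeps ~95 % of the positive samples above threshold
        thresholds[c] = np.quantile(positives, 1.0 - target_sensitivity)
    return thresholds

def idv_is_in_distribution(probs, thresholds):
    # "vote": at least one class-wise prediction above its class-wise ID threshold
    return (probs >= thresholds).any(axis=1)
```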
To investigate the effect of training with OOD data on chest disease classification performance, we measured the performance on the ID test data set alone, without OOD samples (see Table Appendix 3). We can conclude that training with OOD data, as proposed in our method IDV, does not affect chest disease classification performance. In the subsequent section we analyse how different OOD data sets affect ID classification performance.
Training with OOD Data
To investigate the importance of the OOD training data set type and size, we trained the model with different configurations: with each OOD data set, and with each OOD data set limited to the size of the smallest OOD training data set (IRMA, 3,088 training images, see Table 1). Furthermore, we trained with ImageNet and IRMA combined to have both a large OOD data set and diverse OOD X-rays. (A minimal sketch of how OOD images enter the training set is given at the end of this subsection.) We found that training with any OOD data, even the photographs of ImageNet, improved the ID classification specificity compared to the baseline CheXnet model by a large margin, as seen in Figure 5 a. When OOD training and test data stem from the same data set, we see in Figure 5 a that all models achieved an ID specificity of over 98 %. On average, training with ImageNet alone performs better than training with Bone Age. Notably, the ID specificity decreases when the OOD test data set contains more categories than the OOD training data set (e.g. the lower extremities in IRMA are not part of MURA, and BoneAge contains only hands); this does not occur when training with IRMA, highlighting the importance of a diverse OOD training data set. Our results suggest that while image diversity has a strong effect on OOD detection, using any OOD data for training, e.g. ImageNet, is an improvement over training without any OOD data. We therefore conclude that using a generic OOD data set alone could improve a model’s OOD detection performance. Including domain-specific OOD images improves the ID specificity even further (cf. ImageNet vs. ImageNet + IRMA in Figure 5 a). Regarding the ID sensitivity, we found no difference across training with chest X-ray OOD data, as seen in Figure 5 b, d. When trained with ImageNet, the ID sensitivity dropped from 99.8 % to 97.2 % when the training data was reduced (3,088 samples). As the ID classification threshold was determined at 95 % validation sensitivity, this shows that all models generalize to the test data. Accounting for the OOD training data set size (fixing the training set to 3,088 samples) reduces the ID specificity, as seen in Figure 5 c. Specifically, we see that reducing the amount of ImageNet training images from 217,818 to 3,088 drastically reduces the OOD detection performance across all OOD test sets, e.g. from 94.4 % to 0.2 % on the Bone Age test data set. Still, training with IRMA and ImageNet achieved a specificity of over 90 % on all data sets, outperforming the model trained with IRMA alone by over 5 %. We conclude that, while training the model with OOD images from the test data distribution achieved the highest specificity in general, training with many ImageNet OOD images improved the specificity considerably (from 0 % to 84.1 % - 94.4 %). Furthermore, as application-specific OOD samples can be hard to obtain and the sensitivity is affected by the OOD training set size, training with a large general OOD data set (ImageNet) and a few application-specific OOD samples (IRMA) achieves better performance than training with only a small specific OOD data set.
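For concreteness, the following sketch shows one way OOD images can be added to the training set with an all-zero label vector, so that the model is trained to predict the absence of all 14 pathologies for them. It is a schematic illustration with hypothetical class and dataset names, not the released code:

```python
# Sketch: extending the ID training set with OOD images that carry an all-zero
# label vector (outlier exposure / negative data, as described in the text).
import torch
from torch.utils.data import Dataset, ConcatDataset

class ZeroLabelWrapper(Dataset):
    """Wrap an image-only OOD dataset and attach a 14-dimensional zero label."""
    def __init__(self, ood_dataset, n_classes=14):
        self.ood_dataset = ood_dataset
        self.n_classes = n_classes

    def __len__(self):
        return len(self.ood_dataset)

    def __getitem__(self, idx):
        image = self.ood_dataset[idx]          # assumed to return an already transformed tensor
        return image, torch.zeros(self.n_classes)

# Hypothetical usage with CXR14 as ID data and IRMA / ImageNet as OOD data:
# train_set = ConcatDataset([cxr14_train,
#                            ZeroLabelWrapper(irma_train),
#                            ZeroLabelWrapper(imagenet_train)])
```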
3 Discussion
Assessing whether a model’s benchmark performance translates to the intended production setting, including potential OOD data, is a necessary step before deploying a machine learning model. This is particularly important in safety-critical applications, e.g. when classifying chest X-rays to assist radiologists in diagnosing patients. Our results show that the so-called “radiologist-level” CheXnet model Rajpurkar et al. (2017) cannot handle OOD samples. A model that cannot handle OOD images, making confident predictions based on wrong evidence, will lead to a worse quality of care, erode the trust of physicians in the model’s predictions when facing ID images, and impede the potential benefits of computer-assisted diagnosis. In this work, we investigated the effect of OOD images on a “radiologist-level” chest X-ray classifier. We showed that the model, reportedly performing as well as radiologists Rajpurkar et al. (2017, 2018a), was not able to filter OOD images, leading to false positives that are obvious to a human observer. We assume its predictions are conditioned on chest X-rays, because the model was only trained on chest X-rays, leading to overconfident predictions given OOD images. As hypothesised by Lee et al. (Lee et al., 2018), this leads to an ID-overfitted output space. This interpretation explains why established output-based OOD detection methods failed in our experiments, when compared to detecting OOD samples in the feature space. Our solution, ID voting and training with OOD images, regularizes the output space and expands the model’s knowledge horizon, leading to a 100 % ID sensitivity and 98 % specificity. One reason why OOD data are rarely considered is their dependency on the intended application. We showed that including a small OOD training data set from the same data set as the OOD test data resulted in a higher specificity than a general OOD data set. While this suggests that there is no ideal application-independent OOD data set, we found that training with any OOD data improved the baseline performance considerably. Furthermore, we showed that even a few thousand OOD samples from the intended application boosted the specificity considerably. Therefore, when creating a data set to train and evaluate a model for a production setting, we recommend removing anomalies, outliers and other OOD samples with caution. Instead, including this “real-world” data not only in the training process, but also in the model validation, will lead to more robust ML models and ultimately improve clinical acceptance.
4 Conclusion
In summary, using chest X-ray classification as an example, we showed that training only on ID data results in incorrectly classifying all OOD images as ID, resulting in increased false positive rates. We demonstrated that our method, IDV, improves the model’s ID classification performance substantially even when trained with data that will not occur in the intended use case, thus making the final model more robust and significantly improving its predictive performance in a real-world scenario.
5 Methods
5.1 Data Sets
Besides the ID Chest X-ray 14 data set Wang et al. (2017), we used three OOD X-ray data sets (IRMA Deserno and Ott (2009), MURA Rajpurkar et al. (2018b), Bone Age Halabi et al. (2019)) and the ImageNet Russakovsky et al. (2015) data set. The different training, validation, and testing splits are shown in Table 1. All data sets are publicly available.
5.1.1 Chest X-ray 14 Data Set
We use the train-test split provided by the authors of the Chest X-ray 14 data set, which has non-overlapping patients. We further randomly split the provided training data set into training and validation sets, again with non-overlapping patients, resulting in 78,468 training, 11,219 validation, and 22,433 test images (see also Table 1). All three splits have a similar prevalence of class labels. In summary, the original data set is split into 70 % training, 10 % validation, and 20 % test data. We use the images labelled as “no finding” for training, as 46 % of the images are labelled as such. For these images, the model must predict the absence of all 14 pathologies.
5.1.2 Out-of-Distribution Data Sets
Because the IRMA data set is the smallest data set, we sample every OOD data set so that its test and validation split sizes match the IRMA splits.
IRMA
We only use the provided training images, as we require the IRMA labels to exclude ID chest radiographs from the data set. We remove all chest X-rays from the data set according to their anatomical code and exclude images with an anatomical code starting with 57, 75, 05, or 150, resulting in 7,720 images. We split the remaining images randomly into training, validation, and testing using a 30 % / 20 % / 50 % split to ensure enough images in the test split.
Bone Age
We randomly sample the test and validation images according to the data split sizes of the IRMA data set (772 validation images, 3,860 test images, see Table 1). The remaining 8,179 images are used for the training split, either in full or sampled according to the IRMA training set size (3,088 images).
MURA
We split the MURA data set, containing 40,561 images, similarly to the Bone Age data set: the validation and test partitions are randomly sampled, matching the size of the IRMA validation/test splits listed in Table 1. Either all remaining images or a subset sampled according to the IRMA training set size (3,088 images) is used for training.
ImageNet
Due to the size of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) data set compared to the Chest X-ray 14 data set, we use only 50 % of the 544,546 images provided in the “LOC_train_solution.csv” file for training and another 20 % for validation, see Table 1. When accounting for OOD data set sizes, we sample the training and validation sets according to the size of the IRMA splits. In both cases we sample the test set according to the IRMA test split size. All train/validation/test splits were created with non-overlapping images.
5.2 CheXnet
Following (Rajpurkar et al., 2017), we fine-tune a DenseNet-121 Huang et al. (2017) on the CXR14 data set. The model was pre-trained on ImageNet and is available on pytorch.org. For fine-tuning, we replace the last layer with a fully-connected layer with 14 outputs, matching the 14 classes of the CXR14 data set. The outputs are converted to probabilities by applying the sigmoid function. The class-wise predictive thresholds were determined by setting a chest pathology classification sensitivity of 95 % on the CXR14 validation data set only, without any OOD data. We use binary cross-entropy as the loss function and train the model using ADAM Kingma and Ba (2015) optimization with default parameters ($\beta_{1}=0.9,\quad\beta_{2}=0.999$) and an initial learning rate of 0.0003. We divide the learning rate by a factor of ten if the validation loss does not improve over two consecutive epochs. We apply weight decay with a value of 0.0001.
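Putting the optimization choices above together, the fine-tuning setup might look roughly as follows. This is a minimal sketch using the hyperparameters stated in the text (it assumes a recent torchvision for the `weights=` argument); the released training code may differ in its details, and the remaining steps (epochs, input preprocessing) are described below:

```python
# Sketch of the fine-tuning configuration described in the text.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pre-trained DenseNet-121 with a new 14-output classification head.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 14)

# Sigmoid + binary cross-entropy per class, fused into one numerically stable loss.
criterion = nn.BCEWithLogitsLoss()

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4,
                             betas=(0.9, 0.999), weight_decay=1e-4)

# Divide the learning rate by 10 when the validation loss stops improving
# (patience=2 approximates the two-epoch rule stated in the text).
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                       factor=0.1, patience=2)
# Per epoch: train, evaluate, then call scheduler.step(val_loss) and keep the
# checkpoint with the lowest validation loss.
```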
We train the model for eight epochs and select the best model based on the validation loss. The input images are resized to 256 x 256 pixels and normalized according to the ImageNet mean and standard deviation. Then we apply 224 x 224 ten-crop, i.e., we take crops from each corner and the centre of the image and repeat the process for the horizontally flipped image, producing ten 224 x 224 pixel images per sample. The model predictions of the ten images are averaged before calculating the loss.
Training With Out-of-Distribution Images
When including OOD images in the training data, the model must predict the absence of any pathology for these images. Like the ID images, the OOD images are normalized according to the ImageNet mean and standard deviation and passed to the model in the same fashion as the ID images.
5.3 Out-of-Distribution Detection
The goal of OOD detection is to classify each image as either ID or OOD. As a baseline, we used the default CheXnet model. Furthermore, we used Max. Softmax Hendrycks and Gimpel (2017), Max. Energy Wang et al. (2021), Max. Prediction and Max. Logit Hendrycks et al. (2020), and Mahalanobis Lee et al. (2018).
Training with OOD Images
When training with OOD images, both the training and validation splits are extended to include the OOD training/validation splits. For every OOD image, the model must predict the absence of any pathology.
Overall Prediction
Classifying OOD images as negative samples due to the absence of any pathology requires a per-class classification threshold. For every pathology we select the ID classification threshold at 95 % ID sensitivity on the validation data set (containing ID and OOD data). If every class probability of a sample is below its respective threshold, the sample is classified as OOD.
Output-Based OOD Detection
We convert the model’s 14 class predictions into an ID probability using either the highest class prediction (Max. Softmax Hendrycks and Gimpel (2017) and Max. Prediction Hendrycks et al. (2020)) or the highest logit (Max. Logit Hendrycks et al. (2020), Max. Energy Wang et al. (2021)). We select the ID classification threshold at 95 % ID sensitivity on the validation data set (containing ID and OOD data).
Mahalanobis
For Mahalanobis-based OOD detection Lee et al. (2018) we use the output of the penultimate layer to determine the Mahalanobis scores. We select the ID classification threshold at 95 % ID sensitivity on the validation data set (containing ID and OOD data).
Declarations
6 Funding
The research for this article received funding from the German Federal Ministry of Health’s program for digital innovations for the improvement of patient-centered care in healthcare [grant agreement no. 2520DAT920].
7 Conflict of interest
The authors report no conflict of interest.
8 Availability of data and materials
All data sets are publicly available. The Chest X-ray 14 data set can be accessed at https://nihcc.app.box.com/v/ChestXray-NIHCC/, the IRMA data set at https://doi.org/10.18154/RWTH-2016-06143, the Bone Age data set at https://www.rsna.org/education/ai-resources-and-training/ai-image-challenge/rsna-pediatric-bone-age-challenge-2017, and the ImageNet data set at https://www.kaggle.com/c/imagenet-object-localization-challenge.
9 Code availability The code and the trained models are available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-disease 10 Authors’ contributions AW designed the methodology, implemented the experiments, and carried out the analysis, with contributions of TW, MI, BS, and TL; AW wrote the manuscript with contributions of TW, MI, BS and TL; TL supervised the study. Appendix A Comparison of Output-Based OOD Detectors We test several state-of-the art out-of-distribution detection methods: Max. Softmax Hendrycks and Gimpel (2017), Max. Prediction and Max. Logit Hendrycks et al. (2020), and Max. Energy Wang et al. (2021) on the three OOD test data sets IRMA, MURA and Bone Age. All methods failed to identify the OOD samples, as seen in Table 2. Appendix B Effect of Training With OOD Data on ID Classification To investigate the effect of training with OOD data on ID classification we measured the AUC on the test ID data set (CXR14). As shown in Table 3, training with OOD images did not affect ID classification performance, except when trained only with ImageNet. References Gulshan et al. [2016] Varun Gulshan, Lily Peng, Marc Coram, Martin C. Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, Ramasamy Kim, Rajiv Raman, Philip C. Nelson, Jessica L. Mega, and Dale R. Webster. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA, 316(22):2402–2410, December 2016. ISSN 0098-7484. doi:10.1001/jama.2016.17216. Esteva et al. [2017] Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115–118, February 2017. ISSN 0028-0836, 1476-4687. doi:10.1038/nature21056. Rajpurkar et al. [2017] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, and Katie Shpanskaya. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017. Rajpurkar et al. [2018a] Pranav Rajpurkar, Jeremy Irvin, Robyn L. Ball, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis P. Langlotz, Bhavik N. Patel, Kristen W. Yeom, Katie Shpanskaya, Francis G. Blankenberg, Jayne Seekins, Timothy J. Amrhein, David A. Mong, Safwan S. Halabi, Evan J. Zucker, Andrew Y. Ng, and Matthew P. Lungren. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLOS Medicine, 15(11):e1002686, November 2018a. ISSN 1549-1676. doi:10.1371/journal.pmed.1002686. Majkowska et al. [2019] Anna Majkowska, Sid Mittal, David F. Steiner, Joshua J. Reicher, Scott Mayer McKinney, Gavin E. Duggan, Krish Eswaran, Po-Hsuan Cameron Chen, Yun Liu, Sreenivasa Raju Kalidindi, Alexander Ding, Greg S. Corrado, Daniel Tse, and Shravya Shetty. Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation. Radiology, 294(2):421–431, December 2019. ISSN 0033-8419. doi:10.1148/radiol.2019191293. Ali et al. [2015] Farah S. Ali, Samantha G. Harrington, Stephen B. Kennedy, and Sarwat Hussain. Diagnostic radiology in Liberia: A country report. Journal of Global Radiology, 1(2):6, 2015. Idowu and Okedere [2020] Bukunmi Idowu and Tolulope Okedere. 
Diagnostic Radiology in Nigeria: A Country Report. Journal of Global Radiology, 6(1), June 2020. ISSN 2372-8418. doi:10.7191/jgr.2020.1072. Rosman et al. [2015] David A. Rosman, Jean Jacques Nshizirungu, Emmanuel Rudakemwa, Crispin Moshi, Jean de Dieu Tuyisenge, Etienne Uwimana, and Louise Kalisa. Imaging in the land of 1000 hills: Rwanda radiology country report. Journal of Global Radiology, 1(1):5, 2015. Rosenkrantz et al. [2016] Andrew B. Rosenkrantz, Danny R. Hughes, and Richard Duszak Jr. The US radiologist workforce: An analysis of temporal and geographic variation by using large national datasets. Radiology, 279(1):175–184, 2016. Rimmer [2017] Abi Rimmer. Radiologist shortage leaves patient care at risk, warns royal college. BMJ: British Medical Journal (Online), 359, 2017. Nguyen et al. [2015] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015. Nalisnick et al. [2019] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don’t know? In International Conference on Learning Representations, 2019. Hendrycks et al. [2021] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262–15271, 2021. Robinette et al. [2017] Paul Robinette, Ayanna M. Howard, and Alan R. Wagner. Effect of robot performance on human–robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems, 47(4):425–436, 2017. doi:10.1109/THMS.2017.2648849. Vayena et al. [2018] Effy Vayena, Alessandro Blasimme, and I Glenn Cohen. Machine learning in medicine: addressing ethical challenges. PLoS medicine, 15(11):e1002689, 2018. Nov et al. [2021] Oded Nov, Yindalon Aphinyanaphongs, Yvonne W Lui, Devin Mann, Maurizio Porfiri, Mark Riedl, John-Ross Rizzo, and Batia Wiesenfeld. The transformation of patient-clinician relationships with ai-based medical advice. Communications of the ACM, 64(3):46–48, 2021. Hendrycks and Gimpel [2017] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017. Hendrycks et al. [2020] Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. Scaling Out-of-Distribution Detection for Real-World Settings. arXiv:1911.11132 [cs], December 2020. Wang et al. [2021] Haoran Wang, Weitang Liu, Alex Bocchieri, and Yixuan Li. Can multi-label classification networks know what they don’t know? Advances in Neural Information Processing Systems, 34, 2021. Hendrycks et al. [2019] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. ICLR, 2019. Lee et al. [2018] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in neural information processing systems, 31, 2018. Krizhevsky and Hinton [2009] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. Netzer et al. [2011] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. 
In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. Wang et al. [2017] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3462–3471, July 2017. doi:10.1109/CVPR.2017.369. Deserno and Ott [2009] Thomas Deserno and B. Ott. 15.363 IRMA Bilder in 193 Kategorien für ImageCLEFmed 2009. 2009. Rajpurkar et al. [2018b] Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L. Ball, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, and Andrew Y. Ng. MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs. In Medical Imaging with Deep Learning, Amsterdam, 2018b. Halabi et al. [2019] Safwan S. Halabi, Luciano M. Prevedello, Jayashree Kalpathy-Cramer, Artem B. Mamonov, Alexander Bilbily, Mark Cicero, Ian Pan, Lucas Araújo Pereira, Rafael Teixeira Sousa, Nitamar Abdala, Felipe Campos Kitamura, Hans H. Thodberg, Leon Chen, George Shih, Katherine Andriole, Marc D. Kohli, Bradley J. Erickson, and Adam E. Flanders. The RSNA Pediatric Bone Age Machine Learning Challenge. Radiology, 290(2):498–503, February 2019. ISSN 0033-8419. doi:10.1148/radiol.2018180736. Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3):211–252, 2015. Torralba and Efros [2011] Antonio Torralba and Alexei A. Efros. Unbiased look at dataset bias. In CVPR 2011, pages 1521–1528, June 2011. doi:10.1109/CVPR.2011.5995347. Liang et al. [2018] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In 6th International Conference on Learning Representations, ICLR 2018, 2018. Bevandić et al. [2019] Petra Bevandić, Ivan Krešo, Marin Oršić, and Siniša Šegvić. Simultaneous semantic segmentation and outlier detection in presence of domain shift. In German Conference on Pattern Recognition, pages 33–47. Springer, 2019. Huang et al. [2017] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017. Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR (Poster), 2015.
September 28, 1994 TAUP–2199–94 hep-th/9409175 Kähler spinning particles Neil Marcus*** Work supported in part by the US-Israel Binational Science Foundation, the German-Israeli Foundation for Scientific Research and Development and the Israel Academy of Science. E–Mail: NEIL@HALO.TAU.AC.IL School of Physics and Astronomy Raymond and Beverly Sackler Faculty of Exact Sciences Tel-Aviv University Ramat Aviv, Tel-Aviv 69978, ISRAEL. We construct the $U(N)$ spinning particle theories, which describe particles moving on Kähler spaces. These particles have the same relation to the $N=2$ string as usual spinning particles have to the NSR string. We find the restrictions on the target space of the theories coming from supersymmetry and from global anomalies. Finally, we show that the partition functions of the theories agree with what is expected from their spectra, unlike that of the $N=2$ string in which there is an anomalous dependence on the proper time. 1 Introduction The “spinning particle” [1] describes a free Dirac particle, moving in some $D$–dimensional space. Historically, the particle action led to that of the NSR string. Conversely, one can obtain the spinning particle by dimensionally reducing the NSR string [2] or the heterotic string [3] to one dimension. The particle can be generalized to an $N$–spinning particle with a gauged $O(N)$ symmetry, which in four dimensions describes a spin $N/2$ particle [4, 5]. The string can also be generalized, to the $N=2$ [6] and $N=4$ [7] strings. However, the dimensional reduction of these extended string theories does not give the $O(N)$ spinning particles. The (ungauged) $O(2)$ particle can be obtained by the dimensional reduction of the NSR string, but the $N>2$ theories can not be derived from string theories. In this paper we shall construct the $U(N)$ spinning particles, which have the same relation to the $N=2$ string as the $O(N)$ particles have to the NSR string. As with the $N=2$ string, these theories are not directly relevant to the real world, since they always have an even number of time coordinates. Our original motivation for studying them was that since particle theories are so much simpler than string theories, the $U(N)$ particles could provide us with an insight into some of the puzzles posed by $N=2$ strings. These include the Lorentz-invariance and supersymmetry of the string [8], and the conflict between loop calculations in the string and in the corresponding field-theories [9, 10]. (For a review see [11].) A separate motivation is that spinning particle theories are one-dimensional supergravity theories***This is not the most general definition of a particle theory. For example, one has the superparticle [12] which is invariant under target-space instead of world line supersymmetries, as well as hybrid theories [13] that are combinations of the two. We shall not consider such theories further., and since most supergravity theories turn out to be useful in one way or another, they are interesting to consider in their own right. Thus, truncations of the $U(1)$ and $U(2)$ theories of this paper have already been used to give a particle description of the open and closed B–twisted topological sigma models, respectively [14]. In this regard, it is useful to recall some results in the classification of supergravity theories: first, we should perhaps stress the difference between supergravity theories in three to eleven dimensions, and those in one and two dimensions. 
Supergravity theories in $D>2$ are related by dimensional reduction and truncation. The most beautiful—although possibly the least useful—ones are the “larger” supergravities ($N>2$ in four dimensions). They are essentially unique, with their scalar fields living in various homogeneous spaces [15]. (References on supergravity theories in various dimensions can be found in [16].) The smaller theories are more complicated, since matter supermultiplets can be coupled to them in various ways. Thus $N=1$ theories in three dimensions can be written with the scalar fields describing a sigma model on an arbitrary Riemann space [17]. The scalars of $N=1$ theories in four dimensions describe a Hodge manifold [18], while $N=2$ theories in four dimensions lead to a quaternionic sigma model [18]. The supergravity theories can exist on any spacetime, as long as it is a spin manifold. Two-dimensional supergravity theories are not the dimensional reduction of those in three dimensions. While it is not necessary to do so, their main interpretation is as string theories, with the scalar fields interpreted as coordinates on a target space which is spacetime. Thus one might expect the larger ($N>4$) supergravity theories to live on particular spacetimes. Even if these theories exist, they would be rather esoteric, and they have not been constructed. Classically $N=1$, 2 and 4 strings live on Riemann [19], Kähler [7] and either hyperkähler or quaternionic [7] spaces†††Of course demanding conformal invariance of the string theory restricts the spaces to be essentially Ricci flat, and phenomenological constraints may lead one to further restrict the theories, for example to Calabi–Yau spaces.. (Recall that global $N=1$, 2 and 4 sigma models live on Riemann [20], Kähler [21] and hyperkähler [22] spaces, respectively.) In addition, one also has the various heterotic strings. This certainly is not a general classification of string theories—for example one can introduce torsions into the string—but it does give an overview of the basic types of string theories. The two-dimensional theories can be reduced to one dimension, where they become particle theories. The reduction of the NSR string can be generalized to give the usual $O(N)$ spinning particles, which exist on Riemann spaces (sometimes on spin manifolds only). The $U(N)$ theories to be considered here can be derived from the $N=2$ string, and they can be defined only on Kähler target spaces. One can also compactify the $N=4$ string, which we expect to give $USp(N)$ spinning particles living on hyperkähler or quaternionic spaces; however we shall not consider these theories further in this work. The rest of this paper is organized as follows: In section two we give a brief summary of the known spinning particle actions, in order to compare them to the Kähler spinning particles. Most of this section is a restatement of results in [5] in our notation. In section three we construct the $U(N)$ theories. We discuss the restrictions on the target space of the theories coming from supersymmetry and, after introducing a Chern–Simons term, from anomalies. We then find the spectra of the theories, and discuss their space-time conformal invariance. In section four we calculate the one-loop partition function of the particle, and see that it gives the result expected from its spectrum, unlike the corresponding calculation in the string. We end with some conclusions. 
2 Summary of the $O(N)$ spinning particle
The simplest particle action is simply that of an (unspinning) scalar particle with mass $m$ moving in a $D$–dimensional Minkowski space. It can be described by the action [23]: $${\cal L}=\,\frac{1}{2e}\;\dot{X}^{M}\dot{X}^{N}+\frac{e}{2}\;m^{2}\mskip 6.0mu ,$$ (1) where $X^{M}$ is a map from the world line of the particle to the target space, and $e$ is an einbein on the world line. Canonical quantization shows that $X^{M}$ and $P^{N}=\dot{X}^{N}/e$ have the usual commutation relations of coordinates and momenta, and the equation of motion of the einbein gives the constraint $P^{2}=m^{2}$. The spinning particle has a one-dimensional supersymmetry, which is made local by the introduction of a gravitino $\psi$. The supersymmetric partners of the $X^{M}$’s are the spinors $\chi^{M}$’s, and the action in the massless case is given by [1]: $${\cal L}=\,\frac{1}{2e}\;\dot{X}^{M}\dot{X}^{N}+\frac{i}{e}\dot{X}^{M}\,\psi\,\chi^{M}+\frac{i}{2}\;\chi^{M}\dot{\chi}^{M}\mskip 6.0mu .$$ (2) Canonically quantizing (2), one sees that the $\chi^{M}$’s become gamma matrices, so one is describing a Dirac particle***In the massive case, one needs to introduce an extra spinor which becomes $\gamma^{5}$ upon quantization [1].. The importance of the local supersymmetry of the action is seen from the fact that the constraint coming from the equation of motion of the gravitino is the massless Dirac equation. This construction can be generalized to the $N=2$ case [24, 4, 5], where one has two gravitini $\psi_{I}$ and $2D$ spinors $\chi_{I}^{M}$. This theory has a gauged $SO(2)$ symmetry, and in four dimensions it describes the field equations of a Maxwell field. One can continue generalizing to the $N$–extended spinning particle [4, 25, 5], with gravitini $\psi_{I}$ and spinors $\chi_{I}^{M}$ in the $N$ of a local $O(N)$. The lagrangian of the massless “$O(N)$ spinning particle” in a $D$–dimensional Riemann space is [5]: $${\cal L}=\,\frac{1}{2e}\;G_{MN}\left(\,\dot{X}^{M}+i\,\psi\cdot\chi^{M}\,\right)\left(\,\dot{X}^{N}+i\,\psi\cdot\chi^{N}\,\right)+\frac{i}{2}\;G_{MN}\;\chi_{I}^{M}\,{\cal D}\,\chi_{I}^{N}-\frac{e}{8}\,R_{MNPQ}\;\chi^{M}\cdot\chi^{N}\;\chi^{P}\cdot\chi^{Q}\,,$$ (3) where the “dots” denote contractions over $O(N)$ indices. Here ${\cal D}$ is the covariantized time derivative, improved with a connection for the $O(N)$ group and the pullback of the Christoffel connection: $${\cal D}\,\chi_{I}^{M}\equiv\dot{\chi}_{I}^{M}-i\,A_{IJ}\,\chi_{J}^{M}+\Gamma_{PQ}^{M}\,\dot{X}^{P}\chi_{I}^{Q}\mskip 6.0mu .$$ (4) At this stage the metric $G_{MN}$ appears to be arbitrary, but one can see that for $N>2$ supersymmetry forces the theory to be in flat space.
Using the Noether procedure, one finds the local supersymmetry transformations: $$\begin{aligned}\delta\,e&=-2\,i\,\alpha\cdot\psi\,, &\qquad \delta\,A_{IJ}&=0\,,\\ \delta\,\psi_{I}&={\cal D}\,\alpha_{I}\,, &&\\ \delta\,X^{M}&=-i\,\alpha\cdot\chi^{M}\,, &\qquad \delta\,\chi_{I}^{M}&=\frac{1}{e}\left(\,\dot{X}^{M}+i\,\psi\cdot\chi^{M}\,\right)\alpha_{I}+i\,\Gamma_{PQ}^{M}\;\alpha\cdot\chi^{P}\chi_{I}^{Q}\,.\end{aligned}$$ (5) Under these, the lagrangian is invariant up to 3–fermi terms (and total derivatives), but one is left with the 5–fermion terms: $$\delta{\cal L}=\frac{i}{4}\,R_{MNPQ}\left(\alpha\cdot\psi\,\chi^{M}\cdot\chi^{N}-2\,\psi\cdot\chi^{M}\,\alpha\cdot\chi^{N}\right)\chi^{P}\cdot\chi^{Q}+\frac{i\,e}{8}\;R_{MNPQ\,;\,R}\;\chi^{M}\cdot\chi^{N}\,\chi^{P}\cdot\chi^{Q}\,\alpha\cdot\chi^{R}\,.$$ (6) For the Dirac particle ($N=1$) these extra terms vanish due to the symmetries of the Riemann tensor, so the action is supersymmetric. (In fact, in this case both of the 4-fermi terms in the lagrangian (3) vanish identically.) Similarly, for $N=2$ the 5–fermion terms vanish using the symmetries of the Riemann tensor and the Bianchi identity [5]. However, for $N>2$ the lagrangian is supersymmetric only for a flat target space, with $R_{MNPQ}=0$, so the $O(N>2)$ spinning particle is relatively uninteresting. Without going into further detail, the $O(N)$ spinning particle has the following properties:
$\bullet$ Spectrum
The “$O(0)$” theory describes a scalar moving in any Riemann space. The “$O(1)$” theory describes a Dirac spinor. This theory has a global anomaly unless the target space is a spin manifold [26, 27]. The $O(2)$ theory describes an antisymmetric tensor field with $D/2$ indices—a photon if $D=4$. The theory has a global anomaly in odd dimensions. The $O(N>2)$ theory can be written only in flat space. In four dimensions it describes a spin $N/2$ particle [4, 5]. In $D$ (even) dimensions, it describes a particle whose representation is described by the rectangular Young tableaux with $D/2$ rows and $N/2$ columns [28, 5], with half a column representing a spinor index.
$\bullet$ Chern–Simons term
In the case $N=2$, the gauge group of the theory is an $SO(2)\simeq U(1)$. Thus, one can add the term $\epsilon^{IJ}A_{IJ}={\rm Tr}A$ to the lagrangian. This is the simplest example of a Chern–Simons term. Note that this term breaks the $O(2)$ of the theory to an $SO(2)$. With the addition of the Chern–Simons term with coefficient $q-D/2$, the theory describes an antisymmetric $q$–tensor field in any $D$–dimensional Riemann space. ($D$ can now be arbitrary, but $q$ must be an integer to avoid the global anomaly [5].) Thus the $N=2$ theory can describe any antisymmetric tensor particle on any Riemann space.
$\bullet$ Conformal invariance
The massless $O(N)$ theory is invariant under target-space dilations.
In fact the theory is even invariant under conformal transformations of the target-space [28, 5], and all conformal representations in all dimensions can be obtained from the $O(N)$ theory [29]. In the $SO(2)$ theory conformal invariance is spoiled if the Chern–Simons term is added, and indeed the theory of a general antisymmetric tensor field in $D$ dimensions is not conformally invariant. $\bullet$ Supersymmetry algebra The supersymmetry algebra in the $O(N)$ theory closes into field-dependent diffeomorphisms, supersymmetry transformations and gauge transformations. If $N>1$, one has to use the fermion equations of motion, so the algebra closes only on shell. Finally, note that the gauge field $A_{IJ}$ does not transform under the supersymmetry transformations in (2). This means that if one’s interest is in writing the most general one-dimensional supergravity theory—rather than a particle theory—one is free to gauge any subgroup of the $O(N)$ symmetry, from the full $O(N)$ to the trivial identity group. This is unlike the case of the string or of supergravity in higher dimensions. If one does not gauge the full group, the theory will not describe a particle in an irreducible representation of the Lorentz (or conformal) group. For example, if one does not introduce the gauge field in the $O(2)$ theory, the theory describes all antisymmetric tensor fields simultaneously. 3 The $U(N)$ spinning particle 3.1 Lagrangian We have argued that the $O(N)$ particles of the previous section are related to the NSR string. To carry out the dimensional reduction, one first fixes the Weyl, super-Weyl and Lorentz transformations, as well as the spatial diffeomorphisms of the string by the gauge choices $e_{\alpha}^{a}={\rm diag}\left(e,1\right)$ and $\psi_{1}=0$. Then one gets the (ungauged) $O(2)$ particle, which can be truncated to the usual spinning particle***One might have expected to have obtained the $O(2)$ gauge field from the $e_{0}^{1}$ component of the zweibein, but this drops out of the action.. The $O(2)$ particle is not related to the $N=2$ string, as one might have expected from its $U(1)$ gauge symmetry. This follows simply by counting fields: In the reduction of the $N=2$ theory, there are twice as many spinors as scalars, as in the $O(2)$ particle, but there are 2 complex instead of 2 real gravitini. Thus reducing the $N=2$ string to one dimension leads to a new family of particle theories. The $N=2$ string lives on a $d$–complex dimensional Kähler space ($D=2d$), so the coordinates $X^{M}$ are split into $X^{\mu}$ and their complex conjugates $X^{*\mskip 1.0mu {\bar{\mu}}}$. In analogy to the $SO(N)$ particle, we introduce an index $i$ which will transform under a local $U(N)$. Then the spinors become $\chi_{i}^{\mu}$ and $\chi^{*\mskip 1.0mu i\mskip 1.0mu {\bar{\mu}}}$, and the gravitini become $\psi_{i}$ and $\psi^{*\mskip 1.0mu i}$. 
The lagrangian can be found by reducing the $N=2$ string (in the $U(2)$ case with only a $U(1)$ gauging), or simply by using the Noether procedure: $${\cal L}=\,\frac{1}{e}\;G_{\mu{\bar{\mu}}}\left(\,\dot{X}^{*\,{\bar{\mu}}}+i\,\psi\cdot\chi^{*\,{\bar{\mu}}}\,\right)\left(\,\dot{X}^{\mu}+i\,\psi^{*}\cdot\chi^{\mu}\,\right)+i\,G_{\mu{\bar{\mu}}}\,\chi^{*\,i\,{\bar{\mu}}}\,{\cal D}\,\chi_{i}^{\mu}-\frac{e}{2}\,R_{\mu{\bar{\mu}}\,\nu{\bar{\nu}}}\,\chi^{\mu}\cdot\chi^{*\,{\bar{\mu}}}\,\chi^{\nu}\cdot\chi^{*\,{\bar{\nu}}}\,.$$ (7) Here the derivative ${\cal D}$ is again covariantized with respect to diffeomorphisms and with respect to the local $U(N)$: $${\cal D}\,\chi_{i}^{\mu}\equiv\dot{\chi}_{i}^{\mu}-i\,A_{i}{}^{j}\,\chi_{j}^{\mu}+\Gamma_{\rho\sigma}^{\mu}\dot{X}^{\rho}\chi_{i}^{\sigma}\,.$$ (8) Recall that in a Kähler space, the only nonvanishing components of the Christoffel symbol are $\Gamma_{\mu\nu}^{\rho}$ and its complex conjugate.
3.2 Supersymmetry
The supersymmetry transformations are given by $$\begin{aligned}\delta\,e&=-i\,\alpha\cdot\psi^{*}-i\,\alpha^{*}\cdot\psi\,,\\ \delta\,\psi_{i}&={\cal D}\,\alpha_{i}\,,\\ \delta\,A_{i}{}^{j}&=0\,,\\ \delta\,X^{*\,{\bar{\mu}}}&=-i\,\alpha\cdot\chi^{*\,{\bar{\mu}}}\,,\\ \delta\,\chi_{i\,{\bar{\mu}}}&=\frac{1}{e}\,G_{\mu{\bar{\mu}}}\left(\,\dot{X}^{\mu}+i\,\psi^{*}\cdot\chi^{\mu}\,\right)\alpha_{i}-i\,\Gamma_{{\bar{\mu}}{\bar{\rho}},\sigma}\;\alpha\cdot\chi^{*\,{\bar{\rho}}}\,\chi_{i}^{\sigma}\,.\end{aligned}$$ (9) We have chosen to write the transformation of $\chi_{i\,{\bar{\mu}}}$ with its spacetime index lowered, since the transformation of $\chi_{i}^{\mu}$ involves both $\alpha$ and $\alpha^{*}$. As in the $O(N)$ case, one finds 5–fermi terms in the supersymmetry variation of the lagrangian: $$\delta{\cal L}=\frac{i}{2}\,R_{\mu{\bar{\mu}}\nu{\bar{\nu}}}\left(\alpha\cdot\psi^{*}\,\chi^{\mu}\cdot\chi^{*\,{\bar{\mu}}}-2\,\psi^{*}\cdot\chi^{\mu}\,\alpha\cdot\chi^{*\,{\bar{\mu}}}\right)\chi^{\nu}\cdot\chi^{*\,{\bar{\nu}}}+\frac{i\,e}{2}\;R_{\mu{\bar{\mu}}\nu{\bar{\nu}}\,;\,{\bar{\rho}}}\;\chi^{\mu}\cdot\chi^{*\,{\bar{\mu}}}\,\chi^{\nu}\cdot\chi^{*\,{\bar{\nu}}}\,\alpha\cdot\chi^{*\,{\bar{\rho}}}+\hbox{h.c.}$$ (10) In the $U(1)$ theory the curvature terms vanish both here and in the lagrangian (7), since in a Kähler space $R_{\mu{\bar{\mu}}\nu{\bar{\nu}}}$ is symmetric in each set of indices.
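As a quick illustrative check of this last statement (our own unpacking, using only standard Grassmann and Kähler-curvature algebra, not an addition to the original argument), consider the curvature term of (7) in the $U(1)$ case, where the index $i$ takes a single value: $$R_{\mu{\bar{\mu}}\nu{\bar{\nu}}}\,\chi^{\mu}\chi^{*\,{\bar{\mu}}}\chi^{\nu}\chi^{*\,{\bar{\nu}}}=-\,R_{\mu{\bar{\mu}}\nu{\bar{\nu}}}\,\chi^{\mu}\chi^{\nu}\,\chi^{*\,{\bar{\mu}}}\chi^{*\,{\bar{\nu}}}=0\,,$$ since $\chi^{\mu}\chi^{\nu}$ is antisymmetric in $(\mu,\nu)$ while the Kähler curvature obeys $R_{\mu{\bar{\mu}}\nu{\bar{\nu}}}=R_{\nu{\bar{\mu}}\mu{\bar{\nu}}}$, i.e. it is symmetric under the exchange of its unbarred indices. The same argument removes the curvature terms of (10), each of which also contains the antisymmetric combination $\chi^{\mu}\chi^{\nu}$.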
In the $U(2)$ theory the extra terms in (10) vanish because of the symmetries of $R_{\mu{\bar{\mu}}\nu{\bar{\nu}}}$ and because of the Bianchi identity, which in a Kähler space states that $R_{\mu{\bar{\mu}}\nu{\bar{\nu}}\mskip 1.0mu ;\mskip 1.0mu {\bar{\rho}}}$ is totally symmetric in the “barred” indices. Thus the $U(1)$ and $U(2)$ theories can be written in any Kähler space. As in the $O(N)$ case, the $U(N>2)$ theories are supersymmetric only in flat space, and are again of limited interest. Note that since $A_{\mskip 1.0mu i}{}^{j}$ does not transform under supersymmetry transformations, one can again restrict the gauging of the theory to any subgroup of the $U(N)$ symmetry. For example, the open and closed $B$–particles of ref. [14] are given by the $U(1)$ and $U(2)$ lagrangians of (7), with no gauging whatsoever and with the $\psi_{i}$’s set to zero. (Thus in these theories only the $\alpha^{*\mskip 1.0mu i}$ supersymmetry transformations are local; those of the $\alpha_{i}$’s are not.) Finally, upon commuting two supersymmetry transformations with parameters $\alpha_{i}$ and $\beta_{j}$, one finds a diffeomorphism by $\xi\equiv i/e\left(\alpha^{*}\cdot\beta+\alpha\cdot\beta^{*}\right)$, a supersymmetry transformation with parameter $-\xi\,\psi_{i}$ and a $U(N)$ transformation with parameter $-\xi\,A_{\mskip 1.0mu i}{}^{j}$. For $N>1$, the algebra closes only with the use of the equation of motion of the $\chi_{i}^{\mu}$’s. 3.3 Chern–Simons term, the spectrum, and anomalies. The spectrum of the particle can be changed by adding the Chern–Simons term $${\cal L}_{CS}=\left(\frac{d}{2}-q\right)\,A_{\mskip 1.0mu i}{}^{i}$$ (11) to the lagrangian of the particle, where for now $q$ is an arbitrary parameter. Note that the Chern–Simons term exists for all $N$, and that it is consistent with supersymmetry, since $A_{\mskip 1.0mu i}{}^{j}$ is totally invariant. In canonically quantizing the theory, the $X^{\mu}$’s and their conjugates $P_{\mu}$ become position and momentum operators, as do $X^{*\mskip 1.0mu {\bar{\mu}}}$ and $P_{\bar{\mu}}^{*}$. The $\chi^{*\mskip 1.0mu i\mskip 1.0mu {\bar{\mu}}}$’s can be taken to be creation operators and the $\chi_{i}^{\mu}$’s to be annihilation operators, so the general state with momentum $P_{\mu}$ is built from the vacuum state $\left|P_{\mu}\right\rangle$ by applying some number of $\chi^{*\mskip 1.0mu i\mskip 1.0mu {\bar{\mu}}}$’s. As usual, the equations of motion of the supergravity fields $e$, $\psi_{i}$ and $A_{\mskip 1.0mu i}{}^{j}$ constrain the Hilbert space: Thus, varying $A_{\mskip 1.0mu i}{}^{j}$ in the combined lagrangian of (7) and (11) gives: $$G_{\mu{\bar{\mu}}}\,\chi^{*\mskip 1.0mu i\mskip 1.0mu {\bar{\mu}}}\,\chi_{j}^{% \mu}=q\;\delta_{j}^{i}\mskip 6.0mu ,$$ (12) where we have used a normal ordering scheme that is symmetric between the $\chi$’s and $\chi^{*}$’s. Acting the “$ii$” element of this constraint on a state $\Psi$ tells us that there must be exactly $q$ $\chi^{*\mskip 1.0mu i}$’s in the state for each $i$, so $\Psi$ has the form: $$\Psi=F_{{\bar{\mu}}_{1}\cdots{\bar{\mu}}_{q}\;{\bar{\nu}}_{1}\cdots{\bar{\nu}}% _{q}\;\cdots}\;\chi^{*\mskip 1.0mu 1\mskip 1.0mu {\bar{\mu}}_{1}}\cdots\chi^{*% \mskip 1.0mu 1\mskip 1.0mu {\bar{\mu}}_{q}}\;\chi^{*\mskip 1.0mu 2\mskip 1.0mu% {\bar{\nu}}_{1}}\cdots\chi^{*\mskip 1.0mu 2\mskip 1.0mu {\bar{\nu}}_{q}}\;% \cdots\;\left|P_{\mu}\right\rangle\mskip 6.0mu .$$ (13) This means that the theory is empty unless $q$ is an integer between 0 and $d$. 
(Thus the Chern–Simons term is necessary in an odd number of complex dimensions). The lack of a spectrum when $q$ is not an integer is an indication of the global anomaly of the theory in that case [30]. The off-diagonal elements of the constraint (12) impose a symmetry between the $i$ and $j$ indices of the tensor $F$, implying that it is represented by the rectangular Young tableau with $q$ rows and $N$ columns.
0.279996pt height 6.299904pt\kern 6.2999% 04pt\vrule width 0.279996pt}\hrule height 0.279996pt width 100%}\mskip 1.5mu {% }}{\mskip 1.5mu \vbox{\hrule height 0.0pt depth 0.279996pt width 100%\hbox{% \vrule width 0.279996pt height 6.299904pt\kern 6.299904pt\vrule width 0.279996% pt}\hrule height 0.279996pt width 100%}\mskip 1.5mu {}}{\mskip 1.5mu \vbox{% \hrule height 0.0pt depth 0.279996pt width 100%\hbox{\vrule width 0.279996pt h% eight 6.299904pt\kern 6.299904pt\vrule width 0.279996pt}\hrule height 0.279996% pt width 100%}\mskip 1.5mu {}}}$}}\hbox to 0.0pt{\raise-12.9pt\hbox{${% \mathchoice{\mskip 1.5mu \vbox{\hrule height 0.0pt depth 0.279996pt width 100%% \hbox{\vrule width 0.279996pt height 6.299904pt\kern 6.299904pt\vrule width 0.% 279996pt}\hrule height 0.279996pt width 100%}\mskip 1.5mu {}}{\mskip 1.5mu % \vbox{\hrule height 0.0pt depth 0.279996pt width 100%\hbox{\vrule width 0.2799% 96pt height 6.299904pt\kern 6.299904pt\vrule width 0.279996pt}\hrule height 0.% 279996pt width 100%}\mskip 1.5mu {}}{\mskip 1.5mu \vbox{\hrule height 0.0pt de% pth 0.279996pt width 100%\hbox{\vrule width 0.279996pt height 6.299904pt\kern 6% .299904pt\vrule width 0.279996pt}\hrule height 0.279996pt width 100%}\mskip 1.% 5mu {}}{\mskip 1.5mu \vbox{\hrule height 0.0pt depth 0.279996pt width 100%% \hbox{\vrule width 0.279996pt height 6.299904pt\kern 6.299904pt\vrule width 0.% 279996pt}\hrule height 0.279996pt width 100%}\mskip 1.5mu {}}}$}}\hbox to 0.0% pt{\raise-19.35pt\hbox{${\mathchoice{\mskip 1.5mu \vbox{\hrule height 0.0pt de% pth 0.279996pt width 100%\hbox{\vrule width 0.279996pt height 6.299904pt\kern 6% .299904pt\vrule width 0.279996pt}\hrule height 0.279996pt width 100%}\mskip 1.% 5mu {}}{\mskip 1.5mu \vbox{\hrule height 0.0pt depth 0.279996pt width 100%% \hbox{\vrule width 0.279996pt height 6.299904pt\kern 6.299904pt\vrule width 0.% 279996pt}\hrule height 0.279996pt width 100%}\mskip 1.5mu {}}{\mskip 1.5mu % \vbox{\hrule height 0.0pt depth 0.279996pt width 100%\hbox{\vrule width 0.2799% 96pt height 6.299904pt\kern 6.299904pt\vrule width 0.279996pt}\hrule height 0.% 279996pt width 100%}\mskip 1.5mu {}}{\mskip 1.5mu \vbox{\hrule height 0.0pt de% pth 0.279996pt width 100%\hbox{\vrule width 0.279996pt height 6.299904pt\kern 6% .299904pt\vrule width 0.279996pt}\hrule height 0.279996pt width 100%}\mskip 1.% 5mu {}}}$}}\hbox to 0.0pt{\raise-25.8pt\hbox{${\mathchoice{\mskip 1.5mu \vbox{% \hrule height 0.0pt depth 0.279996pt width 100%\hbox{\vrule width 0.279996pt h% eight 6.299904pt\kern 6.299904pt\vrule width 0.279996pt}\hrule height 0.279996% pt width 100%}\mskip 1.5mu {}}{\mskip 1.5mu \vbox{\hrule height 0.0pt depth 0.% 279996pt width 100%\hbox{\vrule width 0.279996pt height 6.299904pt\kern 6.2999% 04pt\vrule width 0.279996pt}\hrule height 0.279996pt width 100%}\mskip 1.5mu {% }}{\mskip 1.5mu \vbox{\hrule height 0.0pt depth 0.279996pt width 100%\hbox{% \vrule width 0.279996pt height 6.299904pt\kern 6.299904pt\vrule width 0.279996% pt}\hrule height 0.279996pt width 100%}\mskip 1.5mu {}}{\mskip 1.5mu \vbox{% \hrule height 0.0pt depth 0.279996pt width 100%\hbox{\vrule width 0.279996pt h% eight 6.299904pt\kern 6.299904pt\vrule width 0.279996pt}\hrule height 0.279996% pt width 100%}\mskip 1.5mu {}}}$}}$}}}}_{N}$$ (14) of $SU(d)$. (This is similar to the case of the $O(N)$ particle, where the state has $D/2$ rows and $N/2$ columns.) In general, the holonomy group of the Kähler space will be a full $U(d)$. 
Again using a symmetric normal ordering, and normalizing the $U(1)$ charge of $\chi_{i}^{\mu}$ to be 1, one sees that the particle represented by $\Psi$ has charge $N\,(d/2-q)$. If $N\,d$ is odd the particle will have a half-integral $U(1)$ charge, indicating that it is spinor-like. (The simplest case with $N=1$ gives the various pieces of the $SO(D)$ spinors broken into $U(d)$ representations, as $q$ is varied.) In these cases the theory again has a global anomaly unless the space supports a spin structure. In a Calabi–Yau space this problem never arises, since the holonomy group of the spacetime is $SU(d)$. The equations of motion of the state $\Psi$ are given by varying the lagrangian with respect to the gravitini and the einbein. In order to avoid normal ordering problems we shall for simplicity restrict ourselves here to the case of flat space (such normal-ordering problems lead to an unusual choice of creation and annihilation operators in the B-particle [14]). The einbein constraint shows that the particle is massless: $G^{\mu\bar{\mu}}P_{\mu}P^{*}_{\bar{\mu}}=0$, while the constraints from the gravitini give the equations of motion
$$P^{\bar{\mu}_{1}}F_{\bar{\mu}_{1}\cdots\bar{\mu}_{q}\,\bar{\nu}_{1}\cdots\bar{\nu}_{q}\,\cdots}=0\,,\qquad P^{*}_{[\bar{\mu}}\,F_{\bar{\mu}_{1}\cdots\bar{\mu}_{q}]\,\bar{\nu}_{1}\cdots\bar{\nu}_{q}\,\cdots}=0\,.$$ (15)
Eqs. (15) are analogous to the equation of motion and Bianchi identity of the photon (which one can get from the $O(2)$ theory): $\partial_{M}F_{MN}=0$ and $\partial_{[L}F_{MN]}=0$.
By going into a light-cone frame, one can see that the field strength in (14) is reduced on shell to a connection in the
$$q-1\left\{\vphantom{\begin{array}{c}\square\\ \square\end{array}}\right.\!\underbrace{\begin{array}{ccc}\square&\cdots&\square\\ \vdots&&\vdots\\ \square&\cdots&\square\end{array}}_{N}$$ (16)
representation of the massless little group $U(d-2)$. Note that the theory has no propagating particles for $q=0$ and $q=d$. 3.4 Conformal invariance In addition to the worldline symmetries we have discussed, the $U(N)$ particle is covariant under holomorphic diffeomorphisms. It is therefore invariant under holomorphic isometries of the spacetime. For example, in flat space it is invariant under $U(d)$ “Lorentz” transformations. The bosonic ($N=0$) string is invariant under the more general class of conformal transformations, defined by $\delta\,X^{\mu}=\xi^{\mu}$, with $\xi^{\mu}$ restricted to be a conformal Killing vector:
$$\xi_{\bar{\mu},\mu}+\bar{\xi}_{\mu,\bar{\mu}}=2\,G_{\mu\bar{\mu}}\,\rho\,,\qquad\xi_{\bar{\mu};\bar{\nu}}+\xi_{\bar{\nu};\bar{\mu}}=0\,;$$ (17a)
here the “scale-factor” $\rho$ defined by (17a) vanishes if $\xi^{\mu}$ is a Killing vector, in which case one is back to an isometry transformation of the spacetime. If one attempts to generalize the conformal symmetry to the $U(N)$ lagrangian, one finds that a term
$$\delta{\cal L}=\frac{i}{e}\;\xi^{\mu}_{,\bar{\nu}}\,\dot{X}^{*\bar{\nu}}\,\psi\cdot\chi^{*}_{\mu}\,+\,\hbox{h.c.}$$
can not be canceled. This means that (17a) must be replaced by the stronger condition that $\xi^{\mu}$ be a holomorphic conformal Killing vector, with
$$\xi^{\mu}_{,\bar{\nu}}=0\,.$$ (17b)
In addition, one finds the unwanted terms
$$\delta{\cal L}=i\,\dot{X}^{*\bar{\mu}}\,\chi^{*\bar{\nu}}\cdot\chi^{\mu}\left(-\rho_{,\bar{\mu}}\,G_{\mu\bar{\nu}}+\rho_{,\bar{\nu}}\,G_{\mu\bar{\mu}}\right)\,+\,\hbox{h.c.}$$
These terms vanish identically when $d=1$, but otherwise one must impose the condition that $\rho$ be constant. Aside from holomorphic isometries of the spacetime, this leaves only constant dilations. By going to Riemann-normal coordinates one can see that such holomorphic dilations are only possible in flat space. In this case there is clearly a dilation symmetry, with the fields $X^{\mu}$, $e$, $\chi_{i}^{\mu}$, $\psi_{i}$ and $A_{\,i}{}^{j}$ having weights 1, 2, 0, 1 and 0, respectively. We thus see that there are only interesting conformal transformations when the space-time is a Riemann surface $(d=1)$. (Note, however, that Riemann surfaces can not support massless propagating particles, since they can not have both space and time coordinates, so the theory is basically topological in this case.)
Then one does have invariance under the full conformal group, with the transformations:
$$\begin{aligned}\delta\,X^{\mu}&=\xi^{\mu}\\ \delta\,\chi_{i}^{\mu}&=\xi^{\mu}{}_{,\nu}\,\chi_{i}^{\nu}-\rho\,\chi_{i}^{\mu}\\ \delta\,e&=2\,\rho\,e\\ \delta\,\psi_{i}&=\rho\,\psi_{i}+e\,\rho_{,\mu}\,\chi_{i}^{\mu}\\ \delta\,A_{\,i}{}^{j}&=\frac{1}{N+1}\,\left(\delta_{i}^{j}\delta_{l}^{k}-N\delta_{i}^{k}\delta_{l}^{j}\right)\left(\rho_{,\mu}\,\psi^{*\,l}\chi_{k}^{\mu}-\rho_{,\bar{\mu}}\,\psi_{k}\chi^{*\,l\,\bar{\mu}}-e\,\rho_{,\mu\bar{\mu}}\,\chi^{*\,l\,\bar{\mu}}\chi_{k}^{\mu}\right).\end{aligned}$$ (18)
The transformation of $A_{\,i}{}^{j}$ can actually be written as various linear combinations of the two sets of terms in (18); we have chosen the combination that leaves ${\rm Tr}A$ invariant, so the Chern–Simons lagrangian (11) does not break the conformal symmetry. 4 The partition function Thus far, we have found the spectrum of the particle using Hamiltonian quantization. It would be nice to also be able to calculate amplitudes of the theory. However, since the particle lagrangian describes a free particle, the only quantity one can calculate without introducing interactions is the one-loop partition function. This is nevertheless interesting, since the analogous partition function of the $N=2$ string disagrees with the result expected from its spectrum. Recall that the partition function of a particle with mass $m$ moving in a Euclidean $D$–dimensional space is [31]:
$${\cal F}=-\,\frac{1}{2}\;{\rm Tr}\,\log\left(p^{2}+m^{2}\right)=\frac{V}{2}\int_{0}^{\infty}\frac{{\rm d}T}{T^{\,1+D/2}}\;e^{-m^{2}T}\,,$$ (19)
where $T$ is the Schwinger proper-time parameter. The $N=2$ string describes only massless degrees of freedom in a flat 2–complex dimensional space ($d=2$, $D=4$). However, its partition function is [9]
$${\cal F}=\frac{1}{2}\int_{\cal M}\frac{{\rm d}^{2}\tau}{\tau_{2}^{2}}\,,$$ (20)
which is compatible with that of the particle only if $D=2$! There is a similar disagreement between the one-loop three-point function in the string [10] and that of the “Plebanski” field theory [32] which should be the effective field theory describing the string. One of the explanations advocated to solve this problem [32] is that the complex nature of the Kähler target space of the $N=2$ string means that one should use a complex Schwinger parameter. The problem could also come from the $U(1)$ gauge field in the theory, or have an intrinsically stringy nature. Since the $U(N)$ particle is intimately related to the $N=2$ string, and shares many of its features, calculating its partition function can distinguish between the various possibilities. We shall start with the $U(1)$ theory in flat space with complex dimension $d$, on a worldline which is a circle with proper length $T$. We need to evaluate the path integral over the lagrangian (7) with the Chern–Simons term (11).
The new feature of the $U(N)$ string is that the gauge field $A$ can not be completely gauged away, since it can have a nontrivial Wilson line around the circle. The allowed gauge choice is $A=\theta/T$, with $\theta$ ranging from 0 to $2\pi$. This means that the spinors $\chi^{\mu}$ and the gravitino $\psi$ pick up a phase of $\theta$, when going around the circle. Most of the path integral is standard: As usual [31, 23], the integral over the einbein (gauged to $e=T$) modulo diffeomorphisms gives
$$\frac{1}{2}\int_{0}^{\infty}\frac{{\rm d}T}{T}\,,$$ (21)
and the path integral over the $X^{\mu}$’s gives
$$VT^{d}\,{\det}^{\prime}\bigl(-\partial_{t}{}^{2}\bigr)^{-d}=\frac{V}{T^{d}}\,.$$ (22)
Similarly, the integral over the $\chi^{\mu}$’s gives a factor of
$$\bigl({\det}_{\theta}\,i\partial_{t}\bigr)^{d}\,,$$ (23)
where the index on the “det” is to remind us of the shifted boundary conditions of the fermions. For a generic $A$, the gravitino can be completely gauged away using the supersymmetry transformation, with the resulting Jacobian reducing the exponent in (23) to $d-2$. The interesting question, of course, is what comes from the integral over $A$, modulo the $U(1)$ transformations. The zero-mode integration gives $\int_{0}^{2\pi}{\rm d}\theta/\sqrt{T}$; the integration over constant gauge transformations gives a factor of $2\pi\sqrt{T}$ in the denominator, and the Jacobian is ${\det}^{\prime}\,i\partial_{t}=T$. All together, this means that the $A$ integration simply gives rise to an insertion of the projection operator $\int_{0}^{2\pi}\frac{{\rm d}\theta}{2\pi}$. Since the gauge-field integration contributes no factors of $T$, there will be no anomalous powers of $T$ in the particle. Putting together the above results, and adding the Chern–Simons term (11), we find
$${\cal F}=\frac{V}{2}\int_{0}^{\infty}\frac{{\rm d}T}{T^{\,1+d}}\int_{0}^{2\pi}\frac{{\rm d}\theta}{2\pi}\;e^{i\,(q-d/2)\,\theta}\;\bigl(2\sin\theta/2\bigr)^{d-2}\,,$$ (24)
where we have used the result that (up to an ill-defined sign) ${\det}_{\theta}\,i\partial_{t}=2\,\sin(\theta/2)$. Evaluating the $\theta$ integral, and comparing (24) to (19), one sees that the $U(1)$ particle describes $\binom{d-2}{q-1}$ massless particles in a $d$–complex dimensional space. This is exactly the number of degrees of freedom that one expects from the Hamiltonian calculation: the field strength of the particle is described by an antisymmetric tensor with $q$ indices (14); the equations of motion (15) reduce this to a light-cone connection with $q-1$ indices (16). To check whether or not subtleties occur when the gauge group is nonabelian, we shall also carry out the calculation of the partition function in the $U(2)$ theory. (The combinatorics of the $U(N)$ case will be left to our more intrepid readers.) The integrals over $e$ and $X$ are the same as before. In this case $A_{\,i}{}^{j}$ can be gauge fixed to $A={\rm diag}\left(\theta,\phi\right)$, where both $\theta$ and $\phi$ go from 0 to $2\pi$. The fermionic integrations are the same as in the $U(1)$ result (23), duplicated over $\theta$ and $\phi$. Similarly, the integrations over the diagonal zero modes of $A$ give projection operators for both $\theta$ and $\phi$.
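Before completing the $U(2)$ computation, the $\theta$ integral in Eq. (24) can be checked numerically. The following Python sketch is our own illustration, not part of the paper; it confirms that the modulus of the angular integral equals $\binom{d-2}{q-1}$, the overall phase being the ill-defined sign mentioned above. The double integral of the $U(2)$ case below can be checked in the same way.

\begin{verbatim}
# Numerical check (not the paper's code) of the angular integral in Eq. (24):
# |(1/2pi) int_0^{2pi} dtheta exp[i(q-d/2)theta] (2 sin(theta/2))^(d-2)| = C(d-2,q-1).
import numpy as np
from math import comb

def theta_integral(d, q, n=400000):
    # Uniform grid average over one period; accurate for these smooth integrands.
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    integrand = np.exp(1j * (q - d / 2) * theta) * (2.0 * np.sin(theta / 2.0)) ** (d - 2)
    return integrand.mean()

for d in (3, 4, 5, 6):
    for q in range(1, d):
        value = abs(theta_integral(d, q))
        assert abs(value - comb(d - 2, q - 1)) < 1e-3, (d, q, value)
print("theta integral reproduces C(d-2, q-1) for all cases tested")
\end{verbatim}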
The only new features are that there is a factor of $1/2$, since a $90^{\circ}$ rotation in the $y$–$z$ plane interchanges $\theta$ and $\phi$, and that the two off-diagonal gauge transformations each contribute the Jacobian ${\det}_{\theta-\phi}\,i\partial_{t}$. The integrations over $\theta$ and $\phi$ then give a factor of
$$-\frac{1}{2}\int_{0}^{2\pi}\frac{{\rm d}\theta}{2\pi}\int_{0}^{2\pi}\frac{{\rm d}\phi}{2\pi}\;e^{i\,(q-d/2)\,(\theta+\phi)}\;\bigl(2\sin(\theta/2-\phi/2)\bigr)^{2}\,\bigl(2\sin\theta/2\bigr)^{d-2}\,\bigl(2\sin\phi/2\bigr)^{d-2}=\frac{(d-1)!\,(d-2)!}{(d-q)!\,(d-q-1)!\,q!\,(q-1)!}\,.$$ (25)
This is the dimension of the $U(d-2)$ representation of the particle in (16), for $N=2$, so the path integral calculation of the partition function is again in perfect agreement with the result expected from the canonical quantization of the particle. There is no sign of the anomalies of the $N=2$ string. 5 Conclusions We have constructed the $U(N)$ spinning particles—one dimensional supergravity theories with $N$ complex local supersymmetries on the world line and a local $U(N)$ invariance. These theories describe massless particles moving in a Kähler spacetime with complex dimension $d$. The $U(1)$ and $U(2)$ theories can be defined on any Kähler space; consistency with supersymmetry forces the $U(N>2)$ theories to be in flat space. The theories have a spacetime conformal invariance only if the target space is a Riemann surface ($d=1$). The spectrum of the theories depends on the coefficient of the Chern–Simons term $(d/2-q)\;{\rm Tr}A$, which can be added to the lagrangian for any $N$. To avoid global anomalies $q$ must be an integer, and the manifold must support a spin structure if $N\,d$ is odd. The constraint coming from the gauge field implies that the “Lorentz group” $U(d)$ representation of the field strength of the particle is given by a rectangular Young tableau with $q$ rows and $N$ columns. The equations of motion (15) then show that the particle is in the representation of the little group $U(d-2)$ with $q-1$ rows and $N$ columns. The $U(N)$ particle can be regarded as a toy model for the $N=2$ string, to which it is intimately related. (The $U(1)$ and $U(2)$ theories can be obtained by dimensionally reducing the $N=2$ string, and then gauging the global symmetry of the theory.) One result that has implications for the string is that the partition function of the particle, calculated by performing the path integral on the circle with a flat target space, is in perfect agreement with what one would expect from the spectrum of the particle. This is in contradistinction to the string, in which modular invariance forces the string partition function to have a peculiar dependence on the proper time $\tau$. The fact that this does not occur in the particle rules out several of the explanations proposed for this phenomenon. Other unsettled issues in the $N=2$ string are whether or not it is the same as the $N=4$ string, and whether or not it has spacetime supersymmetry and a full $SO(D)$ Lorentz invariance [8]. It is less clear here what implications can be drawn from the particle. One would not expect to be able to see spacetime supersymmetry in any (non super) particle. The general $U(N)$ particle defined on some Kähler space certainly does not have an $SO(D)$ Lorentz invariance.
Indeed, the spectrum will not even fall into representations of $SO(D)$. It is also not equivalent to a $USp(N/2)$ particle. However, these properties may still hold in the special case of the $N=2$ string, for which spacetime must be hyperkähler in four real dimensions. Some issues which we have not explored, but which should not pose any great difficulty, are how to add masses to the $U(N)$ theories, and how to construct the $USp(N)$ theories, which are related to the $N=4$ strings. Masses have been included in the original spinning particle [1], and in the $O(N)$ particle [4], although only in flat space. One should be able to include them in the $U(N)$ strings in the same way. Alternatively, one can introduce the masses by dimensionally reducing the $N=2$ string using the Scherk–Schwarz mechanism [33]. Also, following the line of this paper, it should not be hard to find the proposed $USp(N)$ particles, for example by dimensionally reducing the $N=4$ string [7]. As with that string, these theories should be definable only on hyperkähler or quaternionic spacetimes. A more difficult and more interesting problem is to classify all spinning particle theories. References [1] F.A. Berezin and M.S. Marinov, Ann. Physics 104 (1977) 336; L. Brink, S. Deser, B. Zumino, P. Di Vecchia and P. Howe, Phys. Lett. 64B (1976) 435. [2] L. Brink, P. Di Vecchia and P. Howe, Phys. Lett. 65B (1976) 471; S. Deser and B. Zumino, ibid. 369. [3] D.J. Gross, J.A. Harvey, E. Martinec and R. Rohm, Nucl. Phys. B256 (1985) 253. [4] V.D. Gershun and V.I. Tkach, Pis’ma Zh. Exp. Theor. Fiz. 29 (1979) 320 [Sov. Phys. JETP 29 (1979) 288]. [5] P.S. Howe, S. Penati, M. Pernici and P. Townsend, Phys. Lett. 215B (1988) 555, Class. Quantum Grav. 6 (1989) 1125. [6] L. Brink and J.H. Schwarz, Nucl. Phys. B121 (1977) 285. [7] M. Pernici and P. van Nieuwenhuizen, Phys. Lett. 169B (1986) 381. [8] W. Siegel, Phys. Rev. Lett. 69 (1992) 1493, hep-th/9207043; Phys. Rev. D46 (1992) 3235, hep-th/9205075; Phys. Rev. D47 (1993) 2504, hep-th/9207043. [9] S.D. Mathur and S. Mukhi, Nucl. Phys. B302 (1988) 130. [10] M. Bonini, E. Gava and R. Iengo, Mod. Phys. Lett. A6 (1991) 795. [11] N. Marcus, “A tour through N=2 strings”, in “String theory, quantum gravity and the unification of the fundamental interactions”, eds. M. Bianchi, F. Fucito, E. Marinari and A. Sagnotti (World Scientific 1993), hep-th/9207024. [12] R. Casalbuoni, Phys. Lett. 62B (1976) 49; L. Brink and J.H. Schwarz, Phys. Lett. 100B (1981) 310. [13] E.A. Bergshoeff and J.W. van Holten, Phys. Lett. 226B (1989) 93. [14] N. Marcus and S. Yankielowicz, “The topological B model as a twisted spinning particle”, Tel-Aviv preprint TAUP–2192–94, hep-th/9408116, to appear in Nuclear Physics B. [15] J.H. Schwarz, Phys. Lett. 95B (1980) 219; E. Cremmer, “Supergravities in 5 dimensions”, in “Superspace and supergravity”, ed. S.W. Hawking and M. Roček (Cambridge Univ. Press, 1981); B. Julia, “Group disintegrations”, ibid. [16] “Supergravities in diverse dimensions”, ed. A. Salam and E. Sezgin (World Scientific, 1989), 2 volumes. [17] B. de Wit, A.K. Tollsten and H. Nicolai, hep-th/9208074, Nucl. Phys. B392 (1993) 3. [18] J. Bagger and E. Witten, Phys. Lett. 115B (1982) 202, Nucl. Phys. B222 (1983) 1. [19] E. Bergshoeff, S. Randjbar-Daemi, A. Salam, H. Sarmadi and E. Sezgin, Nucl. Phys. B269 (1986) 77. [20] D.Z. Freedman and P.K. Townsend, Nucl. Phys. B177 (1981) 282. [21] B. Zumino, Phys. Lett. 87B (1979) 203. [22] L. Álvarez-Gaumé and D.Z. Freedman, Comm. Math. Phys. 80 (1981) 443. 
[23] See, for example, A.M. Polyakov, “Gauge fields and strings” (Harwood Academic, 1987). [24] L. Brink, P. Di Vecchia and P. Howe, Nucl. Phys. B118 (1977) 76. [25] M. Henneaux and C. Teitelboim, “First and second quantized point particles of any spin”, in “Quantum mechanics of fundamental systems 2”, ed. C. Teitelboim and J. Zanelli (Plenum Press, 1989). [26] E. Witten, “Global anomalies in string theory”, in “Symposium on Anomalies, Geometry and Topology”, ed. W.A. Bardeen and A.R. White (World Scientific, 1985). [27] L. Álvarez-Gaumé, Comm. Math. Phys. 90 (1983) 161. [28] W. Siegel, Int. J. Mod. Phys. A3 (1988) 2713. [29] W. Siegel, Int. J. Mod. Phys. A4 (1989) 2015. [30] S. Elitzur, Y. Frishman, E. Rabinovici and A. Schwimmer, Nucl. Phys. B273 (1986) 93. [31] J. Polchinski, Comm. Math. Phys. 104 (1986) 37. [32] H. Ooguri and C. Vafa, Nucl. Phys. B361 (1991) 469. [33] J. Scherk and J.H. Schwarz, Nucl. Phys. B153 (1979) 61.
Theory of spin loss at metallic interfaces K. D. Belashchenko Department of Physics and Astronomy and Nebraska Center for Materials and Nanoscience, University of Nebraska-Lincoln, Lincoln, Nebraska 68588, USA    Alexey A. Kovalev Department of Physics and Astronomy and Nebraska Center for Materials and Nanoscience, University of Nebraska-Lincoln, Lincoln, Nebraska 68588, USA    M. van Schilfgaarde Department of Physics, King's College London, Strand, London WC2R 2LS, United Kingdom Abstract Interfacial spin-flip scattering plays an important role in magnetoelectronic devices. Spin loss at metallic interfaces is usually quantified by matching the magnetoresistance data for multilayers to the Valet-Fert model, while treating each interface as a fictitious bulk layer whose thickness is $\delta$ times the spin-diffusion length. By employing the properly generalized circuit theory and the scattering matrix approaches, we derive the relation of the parameter $\delta$ to the spin-flip transmission and reflection probabilities at an individual interface. It is found that $\delta$ is proportional to the square root of the probability of spin-flip scattering. We calculate the spin-flip transmission probability for flat and rough Cu/Pd interfaces using the Landauer-Büttiker method based on the first-principles electronic structure and find $\delta$ in reasonable agreement with experiment. Spin transport at metallic interfaces is an essential ingredient of various spintronic device concepts, such as giant magnetoresistance (GMR) GMR ; Bass ; Bass2015 , spin injection and accumulation Johnson , spin-transfer torque Ralph , and spin pumping pumping . Spin-orbit coupling (SOC) enables some device concepts, such as spin-orbit torques in ferromagnet/heavy-metal bilayers Miron ; Liu and spin current detection based on the inverse spin-Hall effect ISHE in spin-caloritronic devices caloritronics . Interfacial spin-orbit scattering affects spin transport in GMR multilayers Bass ; Bass2015 , spin pumping Jaffres ; Chen2015 , spin injection Rashba , and Gilbert damping Kelly . It contributes to the spin relaxation in metallic films Long1 ; Long2 ; Long3 and to the magnetoanisotropies in the resistance of magnetic multilayers Kobs , tunnelling conductance Gould ; Chantis ; Moser ; Park , and Andreev reflection Zutic-AR ; Hogl , which are especially large when the magnetic electrodes are half-metallic Burton ; Hogl . Interfacial spin-flip scattering can also appear due to spin fluctuations Zhang . In the absence of interfacial spin-flip scattering, spin transport in magnetoelectronic circuits can usually be described using the circuit theory Brataas.Nazarov.ea:PRL2000 ; Brataas.Nazarov.ea:EPJB2001 ; Brataas.PhysRep . In the presence of SOC, the spin current is not conserved at the interfaces. Absent a complete theory, interfacial spin-flip scattering has been described by introducing a fictitious bulk layer of thickness $t_{I}$, resistivity $\rho_{I}$, and spin-diffusion length $l^{I}_{sf}$, and using the parameter $\delta=t_{I}/l^{I}_{sf}$ to characterize “spin memory loss” at the interface Bass ; Bass2015 ; Baxter ; Manchon ; Kelly . The parameter $\delta$ was measured Bass ; Bass2015 for multiple interfaces by mapping the experimental current-perpendicular-to-the-plane magnetoresistance data, for spin valves with multilayer insertions, to the phenomenological Valet-Fert model VF1993 . However, the relation of the parameter $\delta$ to the scattering properties of an individual interface is not known.
Moreover, this description of an interface is generally incomplete, because the spin-flip transmittance and the reflectances on two sides are all independent parameters. For example, the spin-flip reflectance is relevant for spin injection BGW and for the interface-induced spin relaxation in a spin reservoir Long1 ; Long2 ; Long3 . The existing formulations Fert-Lee ; Rashba ; Barnas including only one interfacial spin-relaxation parameter are, therefore, also incomplete. In this Letter we apply the scattering matrix and the generalized circuit theory approaches to establish the correspondence between the phenomenological parameter $\delta$ for a nonmagnetic interface, as extracted from GMR-like measurements, and the calculable spin-resolved transmittance and reflectance properties of an individual interface. The latter are calculated from first principles for the Cu/Pd interface. The theory provides a complete framework for including interfacial spin-flip scattering in magnetoelectronic devices. Valet-Fert theory. The layer thicknesses in the typical measurements Bass ; Bass2015 are about 3 nm; the resistance of each individual layer is at least a few times smaller than the resistance of each interface, as long as nominally pure materials are used. For example, the area-resistance products of a 3-nm layer of nominally pure Pd and of the Cu/Pd interface are about 0.14 and 0.45 f$\Omega\cdot$m${}^{2}$, respectively Bass . Therefore, in the following we treat the problem under the assumption that the bulk resistances are negligibly small compared to the interface resistances. This simplifies the expressions and does not affect the result to first order in spin-flip scattering rates supplement . To facilitate comparison with scattering theory, it is convenient to consider a periodic multilayer in which the FN${}_{1}$(N${}_{2}$N${}_{1}$)${}_{\mathcal{N}}$ block repeats itself. Here F is a ferromagnetic layer, N${}_{1}$ and N${}_{2}$ are two different non-magnetic layers, and we are interested in the properties of the N${}_{1}$/N${}_{2}$ interface. Describing an interface as a bulk interlayer, we solve the Valet-Fert equations VF1993 in the multilayer for parallel and alternating antiparallel configurations using the transfer-matrix approach. Taking the limit in which the resistance is dominated by, and spin-flip scattering is present only at, N${}_{1}$/N${}_{2}$ interfaces, we find a simple expression for the magnetoresistance:
$$\Delta R=R_{AP}-R_{P}=\frac{(\beta r^{*}_{F})^{2}}{r_{I}}\frac{\delta}{\sinh m\delta},$$ (1)
where $m=2\mathcal{N}$ is the number of interfaces, $\beta=(\rho_{\downarrow}-\rho_{\uparrow})/(\rho_{\uparrow}+\rho_{\downarrow})$ the spin asymmetry, $r^{*}_{F}=\rho^{*}_{F}t_{F}$ the effective resistance, $t_{F}$ the thickness, and $\rho^{*}_{F}=(\rho_{\uparrow}+\rho_{\downarrow})/4$ the effective resistivity of the ferromagnet, and $r_{I}=\rho_{I}t_{I}$ is the resistance of the interface. Scattering theory. Since we are dealing with low-resistance metallic interfaces, the relevant resistances are those measured in the two-terminal setup, rather than the four-terminal resistances measured in a constriction or calculated within the Landauer-Büttiker approach. For spin-conserving interfaces the relation between the two is well-known Bauer : the interface resistance appearing in series-resistor expressions is obtained from the Landauer-Büttiker resistance by subtracting the spurious contribution of the Sharvin resistance. The approach of Ref.
Bauer, which takes into account the deviations of the distribution functions from equilibrium, can be readily applied to the periodic multilayer introduced above. We use the result of Ref. Bauer for the two-terminal conductance $G^{S}$:
$$G^{S}=2G_{0}\sum_{ij\sigma\sigma^{\prime}}[(I-T+R)^{-1}T]_{i\sigma,j\sigma^{\prime}}$$ (2)
where $i$, $j$ denote conduction channels, $G_{0}=e^{2}/h$, and the transmission and reflection matrices $T$ and $R$ are now $2\times 2$ in spin space. The transmission and reflection matrices are calculated using the semiclassical concatenation rules Datta . The irrelevant spin-flip scattering in the ferromagnetic layers is neglected, and the spin-diagonal transmission and reflection matrices across half of the ferromagnetic layer are written as
$$T^{F}_{i\sigma,j\sigma^{\prime}}=\frac{1}{M_{1}}\frac{\delta_{\sigma\sigma^{\prime}}}{1+s_{\sigma}},\quad R^{F}_{i\sigma,j\sigma^{\prime}}=\frac{1}{M_{1}}\frac{s_{\sigma}\delta_{\sigma\sigma^{\prime}}}{1+s_{\sigma}}$$ (3)
where $M_{1}$ is the number of conducting channels per spin in the adjacent normal metal, and $s_{\sigma}=r_{\sigma}M_{1}/2$, where $r_{\sigma}$ is the resistance of one spin channel (which includes the F/N interface resistance). The factor $\frac{1}{2}$ comes from the fact that the supercell period contains half of the F layer at each edge. Concatenation of two such “half-thick” F layers leads to the correct scattering matrices for the whole F layer. The results of this calculation are identical to those of the circuit theory, Eqs. (6)-(7). Circuit theory. A more general approach, not limited to periodic structures, is to employ the magnetoelectronic circuit theory Brataas.Nazarov.ea:PRL2000 ; Brataas.Nazarov.ea:EPJB2001 ; Brataas.PhysRep extended to include spin-flip scattering supplement . For an adjacent pair of layers L${}_{1}$, L${}_{2}$ in a magnetic multilayer, the charge $I^{0}$ and spin $\bar{I}^{s}$ currents in, say, layer L${}_{2}$ are:
$$I^{0}_{2}=G\Delta f^{0}+\bar{G}^{s}\Delta\bar{f}^{s}-\bar{G}^{t}\cdot\bar{f}_{1}^{s}-\bar{G}^{r}\cdot\bar{f}_{2}^{s},$$ (4)
$$\bar{I}^{s}_{2}=\bar{G}^{s}\Delta f^{0}+G\Delta\bar{f}^{s}-\hat{{\cal G}}^{t}\cdot\bar{f}_{1}^{s}-\hat{{\cal G}}^{r}\cdot\bar{f}_{2}^{s}.$$ (5)
Here $\Delta f^{0}=f^{0}_{1}-f^{0}_{2}$ and $\Delta\bar{f}^{s}=\bar{f}^{s}_{1}-\bar{f}^{s}_{2}$ are interfacial drops of charge and spin components of the distribution function. We introduced $28$ parameters, including one scalar charge conductance $G$, three vector conductances $\bar{G}^{s}$, $\bar{G}^{t}$ and $\bar{G}^{r}$, and two tensor conductances $\hat{{\cal G}}^{t}$ and $\hat{{\cal G}}^{r}$ (see Supplemental Material supplement for their definitions and relation to the notation used in Ref. Waintal.Myers.ea:Phys.Rev.B2000 ). Equations (4)-(5) represent the most general form of the boundary conditions; in particular, they include the effects of the mixing conductances, which are important in noncollinear magnetic multilayers Zwierzycki.Tserkovnyak.ea:PRB2005 ; Tserkovnyak.Brataas.ea:PRL2002 ; Kovalev.Bauer.ea:PRB2006 . They also reproduce the generalization of Valet-Fert theory to noncollinear systems Kovalev.Bauer.ea:PRB2002 ; Barnas:PRB2005 . The expressions simplify for a non-magnetic, axially symmetric interface, for which $\bar{G}^{s}=\bar{G}^{t}=\bar{G}^{r}=0$, and the tensors $\hat{{\cal G}}^{t}$ and $\hat{{\cal G}}^{r}$ are diagonal in the axial reference frame.
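As an illustration of the semiclassical concatenation rules invoked above, the short Python sketch below is our own (the names are hypothetical and it is not the paper's code); it uses the standard composition rule for probability scattering matrices to join two identical, nearly spin-conserving interfaces described by channel-summed $2\times 2$ matrices in spin space. For vanishing spin flip it reproduces the familiar series result $T_{12}=t/(2-t)$, and for a small spin-flip probability the total spin-flip output roughly doubles, in line with the additivity discussed further below.

\begin{verbatim}
# Sketch (not the paper's code): incoherent concatenation of two scatterers,
# each described by channel-summed probability matrices in 2x2 spin space.
# Standard composition rule for probability matrices:
#   T12 = T2 (I - R1' R2)^(-1) T1,   R12 = R1 + T1' R2 (I - R1' R2)^(-1) T1,
# where primed matrices describe incidence from the right.
import numpy as np

def concat(T1, R1, T1p, R1p, T2, R2, T2p, R2p):
    I = np.eye(2)
    M = np.linalg.inv(I - R1p @ R2)     # multiple bounces between the scatterers
    Mp = np.linalg.inv(I - R2 @ R1p)
    T12 = T2 @ M @ T1                   # left-to-right transmission
    R12 = R1 + T1p @ R2 @ M @ T1        # reflection for incidence from the left
    T12p = T1p @ Mp @ T2p               # right-to-left transmission
    R12p = R2p + T2 @ R1p @ Mp @ T2p    # reflection for incidence from the right
    return T12, R12, T12p, R12p

# A symmetric interface: transmission t per spin, with a small spin-flip part eps.
t, eps = 0.6, 0.01
T = np.array([[t - eps, eps], [eps, t - eps]])
R = np.array([[1.0 - t - eps, eps], [eps, 1.0 - t - eps]])

T12, R12, _, _ = concat(T, R, T, R, T, R, T, R)
print(T12[:, 0].sum(), t / (2 - t))                    # close to series value t/(2-t)
print(T12[1, 0] + R12[1, 0], 2 * (T[1, 0] + R[1, 0]))  # spin-flip output ~ doubles
\end{verbatim}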
For highly transparent interfaces all conductances should be properly renormalized Schep.vanHoof.ea:PRB1997 ; Bauer.Tserkovnyak.ea:PRB2003 ; the expressions are given in the Supplemental Material supplement . We apply the circuit theory to the FN${}_{1}$(N${}_{2}$N${}_{1}$)${}_{\mathcal{N}}$F spin valve, using Kirchhoff’s rules for charge and spin conservation in each node. For simplicity, we assume that the spin accumulation is aligned parallel or perpendicular to the interface; the general case can be treated as a superposition of these alignments. Retaining only first-order terms in spin-flip scattering at each concatenation step, we find the magnetoresistance $$\displaystyle\Delta R=\frac{(\beta r^{*}_{F})^{2}}{\tilde{r}_{I}m}\left[1-\frac{{\cal\tilde{G}}^{t}}{\tilde{G}}-(m^{2}-1)\frac{2{\cal\tilde{G}}^{t}+{\cal\tilde{G}}^{r}_{1}+{\cal\tilde{G}}^{r}_{2}}{6\tilde{G}}\right],$$ (6) where the tilde accentuates the renormalized conductances supplement for the given spin accumulation axis (for example, $2G_{0}/\tilde{G}=2G_{0}/G-1/2M_{1}-1/2M_{2}$ Bauer ). Before renormalization, $G=G_{0}(T_{\uparrow\uparrow}+T_{\downarrow\downarrow}+T_{\uparrow\downarrow}+T_{\downarrow\uparrow})$, ${\cal G}^{t}=2G_{0}(T_{\uparrow\downarrow}+T_{\downarrow\uparrow})$, and ${\cal G}^{r}_{i}=2G_{0}(R_{\uparrow\downarrow}^{i}+R_{\downarrow\uparrow}^{i})$ corresponds to reflectance with incidence from metal N${}_{i}$. When the number of layers is large, we can neglect $m$-independent terms and rewrite (6) as $$\Delta R_{\parallel(\perp)}=\frac{(\beta r^{*}_{F})^{2}}{\tilde{r}_{I}m}\left[1-\frac{1}{3}m^{2}\frac{{\cal G}^{sl}_{\parallel(\perp)}}{\tilde{G}}\right]$$ (7) where $\tilde{r}_{I}=\tilde{G}^{-1}$ is the renormalized interface resistance, and we also introduced the spin-loss conductance ${\cal G}^{sl}={\cal G}^{t}+({\cal G}^{r}_{1}+{\cal G}^{r}_{2})/2$. Note that ${\cal G}^{sl}$ does not need to be renormalized by the Sharvin resistance when calculated up to the first order in the spin-flip processes. To establish correspondence with the Valet-Fert model, we note that, to second order in $x$, we have $x/\sinh x\approx(1-x^{2}/6)$. Relating Eq. (7) and (1), we find $$\delta^{2}=2\frac{{\cal G}^{sl}}{\tilde{G}}$$ (8) The assumption of small $m\delta$ is, however, not essential. Applying Eqs. (4)-(5) to three contiguous non-magnetic layers supplement , we find the following finite-difference equation for the spin accumulation: $${\cal D}^{2}f^{s}_{i}=f^{s}_{i-1}-2f^{s}_{i}+f^{s}_{i+1},$$ (9) where ${\cal D}^{2}=2{\cal\tilde{G}}^{sl}/(\tilde{G}-{\cal\tilde{G}}^{t})$. The most general solution of Eq. (9) has the form: $$f^{s}_{i}=C_{1}e^{\delta i}+C_{2}e^{-\delta i},$$ (10) where $\delta=\ln\left\{1+({\cal D}^{2}/2)[1+(1+4/{\cal D}^{2})^{1/2}]\right\}$. This is identical to the solution of the Valet-Fert equations VF1993 and generalizes the definition of $\delta$ (8) to the strong spin-flip scattering case. If the spin-flip scattering is weak, we recover Eq. (8), since in this limit $\delta\approx{\cal D}$. Equation (8) shows that $\delta$ is proportional not to the spin-flip scattering probability at the interface (as it has been usually assumed Bass ), but to its square root. Thus, for example, a seemingly large value $\delta\approx 0.24$ deduced experimentally for the Cu/Pd interface corresponds to a spin-flip scattering probability of less than 2%. 
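A minimal numerical sketch (ours, not part of the paper) makes these relations concrete: it checks that the quoted closed form for $\delta$ is equivalent to $\delta=\mathrm{arccosh}(1+{\cal D}^{2}/2)$, that $e^{-\delta i}$ indeed solves the finite-difference equation (9), that $\delta\approx{\cal D}$ for weak spin-flip scattering, and what Eq. (8) implies numerically for the Cu/Pd value $\delta\approx 0.24$.

\begin{verbatim}
# Quick numerical illustration (not from the paper) of Eqs. (8)-(10).
import numpy as np

def delta_of(D2):
    """delta = ln{1 + (D^2/2)[1 + (1 + 4/D^2)^(1/2)]}, with D2 = D^2."""
    return np.log(1.0 + 0.5 * D2 * (1.0 + np.sqrt(1.0 + 4.0 / D2)))

for D2 in (1e-4, 1e-2, 0.1, 1.0):
    delta = delta_of(D2)
    # The recurrence (9) with f_i = exp(-delta*i) requires
    # 2*(cosh(delta) - 1) = D^2, i.e. delta = arccosh(1 + D^2/2):
    assert np.isclose(delta, np.arccosh(1.0 + 0.5 * D2))
    # Direct check that exp(-delta*i) solves D^2 f_i = f_{i-1} - 2 f_i + f_{i+1}:
    i = np.arange(0, 7)
    f = np.exp(-delta * i)
    assert np.allclose(D2 * f[1:-1], f[:-2] - 2.0 * f[1:-1] + f[2:])
    print(D2, delta, np.sqrt(D2))   # weak scattering: delta approaches sqrt(D^2)

# Eq. (8): delta^2 = 2 G^sl / G~, so the experimental delta ~ 0.24 for Cu/Pd gives
# G^sl/G~ = delta^2/2 ~ 0.029, i.e. spin-loss conductance of only a few percent of
# the interface conductance (percent-level spin-flip probabilities).
print(0.24**2 / 2)
\end{verbatim}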
For weak spin-flip scattering, the parameter $\delta$ measured in multilayer ($m\gg 1$) magnetoresistance experiments depends only on the sum of spin-flip transmission ($T_{\uparrow\downarrow}$) and reflection ($R^{i}_{\uparrow\downarrow}$) probabilities. These parameters are not related through unitarity, and there is no reason to assume any specific relation between them for a thin interface. In fact, spin transport in circuits containing spin-non-conserving interfaces generally depends separately on these probabilities. Therefore, the parameter $\delta$ and the area-resistance product of the interface do not provide complete information needed for the description of arbitrary magnetoelectronic circuits. We also note that the $T^{(m)}_{\uparrow\downarrow}$ and $R^{(m)}_{\uparrow\downarrow}$ components of the matrices, which are obtained by concatenating $m$ identical spin-non-conserving scattering matrices, converge with each other when $m$ becomes large: $T^{(m)}_{\uparrow\downarrow}\approx R^{(m)}_{\uparrow\downarrow}\approx m(T_{\uparrow\downarrow}+R_{\uparrow\downarrow})$. (The latter equality holds as long as $T^{(m)}_{\uparrow\downarrow}\ll T^{(m)}_{\uparrow\uparrow}$.) For this reason, the resistance and parameter $\delta=t/l_{sf}$ completely describe the behavior of a sufficiently thick non-magnetic bulk layer in an arbitrary circuit, as assumed in the Valet-Fert theory. First-principles calculations. The spin-resolved transmittances and reflectances were calculated using the Landauer-Büttiker approach Datta implemented within the tight-binding linear muffin-tin orbital (TB-LMTO) method Turek . The discretized representation was used for the coordinate operator in transport calculations Kudr , and SOC was included as a perturbation to the LMTO potential parameters Belashchenko-SOC ; Turek-SOC . The generalized gradient approximation is used for exchange and correlation PBE . We focus on the Cu/Pd interface, for which the experimental measurements yield a fairly large parameter $\delta\approx 0.24$, with relatively narrow error bars Kurt . We consider (111) and (001) interface orientations, with the spin quantization axis, corresponding to the polarization of the spin current in a device, aligned either parallel or perpendicular to the interface. We assume that the atomic positions lie on the ideal face-centered cubic lattice with a lattice constant $a=3.818$ Å. In addition to the ideal interfaces, several simple intermixing models are considered for the (111) orientation. Some care needs to be taken to define the spin-flip scattering probabilities, bearing in mind that, owing to the presence of SOC in the bulk, the electronic states in each spin reservoir are already not pure spin-up and spin-down spinors. This bulk spin mixing should be separated from the spin-flip scattering at the interface. To define the spin-resolved interfacial transmittance $T_{\sigma\sigma^{\prime}}$ and reflectance $R^{i}_{\sigma\sigma^{\prime}}$ (where $i=\mathrm{Cu}$ or Pd), we turn off SOC in the leads and introduce “ramp-up” regions where SOC is gradually increased as one moves away from the embedding planes toward the Cu/Pd interface. For generic $\mathbf{k}$-points this “adiabatic embedding” allows pure spin states in the leads to evolve without scattering into the bulk eigenstates, and the spin-dependent scattering probabilities are thus properly defined note-Pd . 
An exception occurs near the boundaries of the projections of the Fermi sheets, where the group velocity is nearly parallel to the interface. Here the deformation of the Fermi surface by SOC can lead to strong reflection. To examine the effect of adiabatic embedding on the Pd side, we consider a Pd slab of thickness $D$, located at $|x|<D/2$ and attached to Pd leads without SOC at $|x|>D/2$, with the SOC parameters scaled by a function $f(|x|)$ such that $f(0)=1$ and $f(D/2)=0$. We used a simple trapezoidal form of $f(x)$, which is constant over a few atomic layers near the interface and then declines linearly to zero; the results are insensitive to the shape of $f(x)$. As long as $D$ is at least a few dozen monolayers in this test system, $T_{\uparrow\downarrow}$ is negligible, while $R_{\uparrow\downarrow}$ is 2–4 times smaller compared to $R^{\mathrm{Pd}}_{\uparrow\downarrow}$ in the Cu/Pd system with a similar ramp-up region on the Pd side. Fig. 1 shows that the $\mathbf{k}$-resolved $R_{\uparrow\downarrow}$ in the test system is indeed significant only near the edges of the Fermi surface projections. As expected, $R_{\uparrow\downarrow}$ in the test Pd system quickly saturates as the width $D$ is increased. Qualitatively, the situation is analogous to the ballistic scattering from a ferromagnetic domain wall Brataas1999 . Strong reflection near the edges of the Fermi surface projection persists in the Cu/Pd system with adiabatic embedding. Since these edges are in no way special for the scattering from the abrupt Cu/Pd interface, it should be attributed to the reflection from the ramp-up region. Therefore, we subtract $R_{\uparrow\downarrow}$ for the test Pd system from $R^{\mathrm{Pd}}_{\uparrow\downarrow}$ for the Cu/Pd interface. Since the former is a few times smaller than the latter, the uncertainties inherent in this procedure lead to relatively small errors in $\delta$ compared to the experimental uncertainty note-filtering . In addition to ideal (111) and (001) interfaces, we considered several simple models of roughness with intermixing in one monolayer for the (111) interface, with the following structures of this monolayer: (A) 1:1 superlattice (50/50 model), (B) $2\times 2$ ordering of Pd atoms within the Cu monolayer (75/25 model), (C) $2\times 2$ ordering of Cu atoms within the Pd monolayer (25/75 model). The results are listed in Table 1. Here $\bar{R}^{\mathrm{Cu}}_{\uparrow\downarrow}/A$ and $\bar{R}^{\mathrm{Pd}}_{\uparrow\downarrow}/A$ are the specific spin-flip reflectances for Cu with SOC embedded in Cu without SOC, and for adiabatically embedded Pd with SOC, respectively. The integration is performed using a mesh of $256\times 256$ points in the full two-dimensional Brillouin zone; a coarser $64\times 64$ mesh yields very similar results. For each interface we consider two orientations of the spin quantization axis, parallel ($\parallel$) and perpendicular ($\perp$) to the interface, which reflects the orientation of the spin accumulation in the device. In the parallel case we average $T_{\uparrow\downarrow}$ and $R^{s}_{\uparrow\downarrow}$ over two orthogonal in-plane orientations of the spin quantization axis; we also average over the reversed spin indices, e.g., $T_{\uparrow\downarrow}$ and $T_{\downarrow\uparrow}$, as well $T_{\uparrow\uparrow}$ and $T_{\downarrow\downarrow}$. The deviations from axial symmetry are appreciable only for the 50/50 model of the (111) interface, where they reach 35% for $R^{\mathrm{Cu}}_{\uparrow\downarrow}$. 
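For reference, the trapezoidal SOC scaling profile $f(|x|)$ described above can be sketched as follows. This is a minimal illustration; the function name, the layer counts, and the width of the flat region are ours and purely illustrative.

```python
import numpy as np

def soc_ramp(x, D, flat_width):
    """Trapezoidal profile f(|x|): equal to 1 for |x| < flat_width (full SOC near the
    interface), declining linearly to 0 at |x| = D/2, and 0 in the SOC-free leads."""
    ax = np.abs(np.asarray(x, dtype=float))
    f = np.ones_like(ax)
    ramp = (ax >= flat_width) & (ax <= D / 2)
    f[ramp] = (D / 2 - ax[ramp]) / (D / 2 - flat_width)
    f[ax > D / 2] = 0.0
    return f

# Example: a 40-monolayer slab with SOC kept at full strength over the 4 monolayers
# closest to the interface plane at x = 0 (illustrative numbers only).
x = np.arange(-20, 21)
print(soc_ramp(x, D=40, flat_width=4))
```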
In all cases listed in Table 1 the spin-loss conductance ${\cal G}^{sl}$ is dominated by spin-flip reflection. Thus, the parameter $\delta$ is not directly related to the probability of a spin flip in transmission, as it has been previously assumed Bass . Fig. 2 shows $\mathbf{k}$-resolved transmittances and reflectances for the (111) interface with magnetization parallel to the interface. Note the mirror symmetry in the plane perpendicular to the spin quantization axis. Fig. 2(d) shows strong reflection at the Fermi edges, similar to Fig. 1, which is due to the adiabatic embedding on the Pd side. However, contrary to Fig. 1, significant spin-flip reflection is also seen at generic $\mathbf{k}$-points in Fig. 2(d), which originates at the Cu/Pd interface. The values of the parameter $\delta$ for devices with in-plane ($\parallel$) spin accumulation (Table 1) can be directly compared with the experimental value $\delta=0.24^{+0.06}_{-0.03}$ Kurt . The results for (001) and (111) interface orientations are quite similar and in reasonable agreement with experiment. In agreement with Ref. Galinon, , the calculated interface area-resistance product $AR$ is overestimated by 65-100% and is not strongly affected by intermixing. Intermixing also has a relatively small effect on $\delta$, increasing it by a small amount. Due to the fairly large size mismatch, the structure of the Cu/Pd multilayer can exhibit significant disorder and strain relaxation, which may lead to the discrepancy in the area-resistance product. The overestimation of $\delta$ may be due to the same reason. Table 1 shows that $\delta$ becomes notably larger when the spin accumulation is oriented perpendicular to the interface. This angular dependence can be tested in experiments on multilayers Bass ; Bass2015 by utilizing ferromagnetic layers with perpendicular magnetization. Anisotropy of a similar kind was found for the spin relaxation rate in thin films Long1 ; Long2 ; Long3 . This spin relaxation is due to spin-flip reflection at the film surface, and it can also be described using the generalized circuit theory. In conclusion, we have formulated a theory of spin loss at metallic interfaces, linking the calculable spin-dependent scattering properties of an interface with the phenomenological parameter $\delta$ measured in experiments on magnetoresistance in multilayers. This relation [Eq. (8)] shows that spin-flip scattering on the order of a few percent yields $\delta$ that is comparable to unity. First-principles calculations for the Cu/Pd interface give $\delta$ in reasonable agreement with experiment, but somewhat overestimated. Understanding of spin loss at metallic interfaces is an important ingredient for the analysis of spin transport in magnetic heterostructures with strong spin-orbit coupling. Acknowledgements. AK is much indebted to Gerrit Bauer for stimulating discussions on the circuit theory with spin-flip scattering. This work was supported by the National Science Foundation through Grant No. DMR-1308751 and the Nebraska MRSEC, Grant No. DMR-1420645, as well as by the DOE Early Career Award DE-SC0014189 (AK) and the EPSRC CCP9 Flagship project, EP/M011631/1 (MvS). The computations were performed utilizing the Holland Computing Center of the University of Nebraska. References (1) E. Y. Tsymbal and D. G. Pettifor, Perspectives of giant magnetoresistance, in: Solid State Physics, ed. by H. Ehrenreich and F. Spaepen, Vol. 56 (Academic Press, 2001), p. 113. (2) J. Bass and W. P. Pratt, J. Phys.: Condens. 
Matter 19, 183201 (2007). (3) J. Bass, J. Magn. Magn. Mater. 408, 244 (2016). (4) M. Johnson and R. H. Silsbee, Phys. Rev. Lett. 55, 1790 (1985). (5) D. C. Ralph and M. D. Stiles, J. Magn. Magn. Mater. 320, 1190 (2008). (6) Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, Phys. Rev. Lett. 88, 117601 (2002). (7) I. M. Miron, G. Gaudin, S. Auffret, B. Rodmacq, A. Schuhl, S. Pizzini, J. Vogel, and P. Gambardella, Nature Mater. 9, 230 (2010). (8) L. Liu, T. Moriyama, D. C. Ralph, and R. A. Buhrman, Phys. Rev. Lett. 106, 036601 (2011). (9) E. Saitoh, M. Ueda, H. Miyajima, and G. Tatara, Appl. Phys. Lett. 88, 182509 (2006). (10) G. E. W. Bauer, E. Saitoh, and B. J. van Wees, Nature Mater. 11, 391 (2012). (11) J.-C. Rojas-Sánchez, N. Reyren, P. Laczkowski, W. Savero, J.-P. Attané, C. Deranlot, M. Jamet, J.-M. George, L. Vila, and H. Jaffrès, Phys. Rev. Lett. 112, 106602 (2014). (12) K. Chen and S. Zhang, Phys. Rev. Lett. 114, 126602 (2015). (13) E. I. Rashba, Eur. Phys. J. B 29, 513 (2002). (14) Y. Liu, Z. Yuan, R. J. H. Wesselink, A. A. Starikov, and P. J. Kelly, Phys. Rev. Lett. 113, 207202 (2014). (15) N. H. Long, P. Mavropoulos, B. Zimmermann, S. Heers, D. S. G. Bauer, S. Blügel, and Y. Mokrousov, Phys. Rev. B 87, 224420 (2013). (16) N. H. Long, P. Mavropoulos, S. Heers, B. Zimmermann, Y. Mokrousov, and S. Blügel, Phys. Rev. B 88, 144408 (2013). (17) N. H. Long, P. Mavropoulos, B. Zimmermann, D. S. G. Bauer, S. Blügel, and Y. Mokrousov, Phys. Rev. B 90, 064406 (2014). (18) A. Kobs, S. Heße, W. Kreuzpaintner, G. Winkler, D. Lott, P. Weinberger, A. Schreyer and H. P. Oepen, Phys. Rev. Lett. 106, 217207 (2011). (19) C. Gould, C. Rüster, T. Jungwirth, E. Girgis, G. M. Schott, R. Giraud, K. Brunner, G. Schmidt, and L. W. Molenkamp, Phys. Rev. Lett. 93, 117203 (2004). (20) A. N. Chantis, K. D. Belashchenko, E. Y. Tsymbal, and M. van Schilfgaarde, Phys. Rev. Lett. 98, 046601 (2007). (21) J. Moser, A. Matos-Abiague, D. Schuh, W. Wegscheider, J. Fabian, and D. Weiss, Phys. Rev. Lett. 99, 056601 (2007). (22) B. G. Park, J. Wunderlich, D. A. Williams, S. J. Joo, K. Y. Jung, K. H. Shin, K. Olejník, A. B. Shick, and T. Jungwirth, Phys. Rev. Lett. 100, 087204 (2008). (23) I. Žutić and S. Das Sarma, Phys. Rev. B 60, R16322 (1999). (24) P. Högl, A. Matos-Abiague, I. Žutić, and J. Fabian, Phys. Rev. Lett. 115, 116601 (2015). (25) J. D. Burton and E. Y. Tsymbal, Phys. Rev. B 93, 024419 (2016). (26) S. Zhang, P. M. Levy, A. C. Marley, and S. S. P. Parkin, Phys. Rev. Lett. 79, 3744 (1997). (27) A. Brataas, Y. V. Nazarov, and G. E. W. Bauer, Phys. Rev. Lett. 84, 2481 (2000). (28) A. Brataas, Y. V. Nazarov, and G. E. W. Bauer, Eur. Phys. J. B 22, 99 (2001). (29) A. Brataas, G. E. W. Bauer, and P. J. Kelly, Phys. Rep. 427, 157 (2006). (30) D. V. Baxter, S. D. Steenwyk, J. Bass, and W. P. Pratt, J. Appl. Phys. 85, 4545 (1999). (31) A. Manchon, N. Strelkov, A. Deac, A. Vedyayev, and B. Dieny, Phys. Rev. B 73, 184418 (2006). (32) T. Valet and A. Fert, Phys. Rev. B 48, 7099 (1993). (33) K. D. Belashchenko, J. K. Glasbrenner, and A. L. Wysocki, Phys. Rev. B 86, 224402 (2012). (34) A. Fert and S.-F. Lee, Phys. Rev. B 53, 6554 (1996). (35) M. Wawrzyniak, M. Gmitra, and J. Barnaś, J. Appl. Phys. 99, 023905 (2006). (36) See Supplemental Material for the details of the circuit theory with spin-flip scattering, renormalization of the conductances for Ohmic contacts, and transport in a multilayer. (37) G. E. W. Bauer, K. M. Schep, Ke Xia, and P. J. Kelly, J. Phys. D: Appl. Phys. 35, 2410 (2002). (38) S. 
Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, New York, 1995). (39) X. Waintal, E. B. Myers, P. W. Brouwer, and D. C. Ralph, Phys. Rev. B 62, 12317 (2000). (40) M. Zwierzycki, Y. Tserkovnyak, P. J. Kelly, A. Brataas, and G. E. W. Bauer, Phys. Rev. B 71, 064420 (2005). (41) Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, Phys. Rev. Lett. 88, 117601 (2002). (42) A. A. Kovalev, G. E. W. Bauer, and A. Brataas, Phys. Rev. B 73, 054407 (2006). (43) K. M. Schep, J. B. A. N. van Hoof, P. J. Kelly, G. E. W. Bauer, J. E. Inglesfield, Phys. Rev. B 56, 10805 (1997). (44) G. E. W. Bauer, Y. Tserkovnyak, D. Huertas-Hernando, and A. Brataas, Phys. Rev. B 67, 094421 (2003). (45) I. Turek, V. Drchal, J. Kudrnovský, M. Šob, and P. Weinberger, Electronic Structure of Disordered Alloys, Surfaces and Interfaces (Kluwer, Boston, 1997). (46) J. Kudrnovský, V. Drchal, C. Blaas, P. Weinberger, I. Turek, and P. Bruno, Phys. Rev. B 62, 15084 (2000). (47) K. D. Belashchenko, L. Ke, M. Däne, L. X. Benedict, T. N. Lamichhane, V. Taufour, A. Jesche, S. L. Bud’ko, P. C. Canfield, and V. P. Antropov, Appl. Phys. Lett. 106, 062408 (2015). (48) I. Turek, V. Drchal, and J. Kudrnovský, Philos. Mag. 88, 2787 (2008). (49) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). (50) H. Kurt, R. Loloee, K. Eid, W. P. Pratt Jr., and J. Bass, Appl. Phys. Lett. 81, 4787 (2002). (51) The ramp-up region is needed only on the Pd side of the Cu/Pd interface, where strong spin-flip scattering otherwise occurs at the sharp boundary between Pd without SOC (lead) and Pd with SOC (scattering region). (52) A. Brataas, G. Tatara, and G. E. W. Bauer, Phys. Rev. B 60, 3406 (1999). (53) As will be reported elsewhere, a more accurate procedure based on filtering by $\mathbf{k}$-points tends to yield $\delta$ values that are slightly larger, by 10-15%. (54) C. Galinon, K. Tewolde, R. Loloee, W.-C. Chiang, S. Olson, H. Kurt, W. P. Pratt, J. Bass, P. X. Xu, K. Xia, and M. Talanana, Appl. Phys. Lett. 86, 182502 (2005). (55) A. A. Kovalev, A. Brataas, and G. E. W. Bauer, Phys. Rev. B 66, 224424 (2002). (56) J. Barnaś, A. Fert, M. Gmitra, I. Weymann, V. K. Dugaev, Phys. Rev. B 72, 024426 (2005) \close@column@grid Supplemental Material I Circuit theory in the presence of spin-flip scattering Consider two metallic nodes separated by a scattering region. The current in each node depends on the potential drop and on the spin accumulation drop between the nodes. The current evaluated for node $2$ is S_Brataas.Nazarov.ea:EPJB2001 $$\hat{I}_{2}=G_{0}\sum_{nm}\left[\hat{t}^{\prime}_{mn}\hat{f}_{1}(\hat{t}^{\prime}_{nm})^{\dagger}-\left(M_{2}\hat{f}_{2}-\hat{r}_{mn}\hat{f}_{2}(\hat{r}_{nm})^{\dagger}\right)\right],$$ (S1) where $G_{0}=e^{2}/h$, $\hat{r}_{mn}$ is the spin-dependent reflection amplitude for electrons reflected from channel $n$ into channel $m$ in node 2, and $\hat{t}^{\prime}_{mn}$ is the spin-dependent transmission amplitude for electrons transmitted from channel $n$ in node 1 into channel $m$ in node 2. Note that the ensuing results can be easily rewritten for the current $\hat{I}_{1}$ in node $1$. Spin-flip scattering at the interface makes the matrices $\hat{r}_{mn}$ and $\hat{t}^{\prime}_{mn}$ non-diagonal in spin space. Let us introduce a matrix: $$\check{S}_{mn}=\left(\begin{array}[]{cc}\hat{r}_{mn}&\hat{t}^{\prime}_{mn}\\ \hat{t}_{mn}&\hat{r}^{\prime}_{mn}\end{array}\right),$$ (S2) where $\hat{r}^{\prime}$ and $\hat{t}$ are the amplitudes of reflection and transmission into node 1. 
Charge conservation requires $\check{S}\check{S}^{\dagger}=\check{1}$, and, therefore, $$\sum_{mn}\check{S}_{mn}\check{S}_{mn}^{\dagger}=\check{M}=\hat{\sigma}^{0}\otimes\hat{M},$$ (S3) where $\hat{\sigma}^{0}$ is a unit matrix in spin space, the symbol $\otimes$ denotes the Kronecker product, and $\hat{M}$ is a diagonal matrix with elements $M_{ii}=M_{i}$ representing the number of channels in electrode $i$. We extract only part of Eq. (S3) that contains $\hat{r}_{mn}$ and $\hat{t}^{\prime}_{mn}$ coefficients: $$\sum_{mn}\hat{r}_{mn}\hat{r}_{mn}^{\dagger}+\hat{t}^{\prime}_{mn}(\hat{t}^{\prime}_{mn})^{\dagger}=M_{2}\hat{\sigma}^{0}$$ (S4) leading to three independent constraints on the elements of the $S$ matrix. If the system has time reversal symmetry, the total $S$ matrix also satisfies $S=S^{T}$. The spin-dependent distribution functions in nodes $1$ and $2$, as well as the current matrix, can be expressed via the Pauli matrices $\hat{\sigma}^{1}$, $\hat{\sigma}^{2}$, $\hat{\sigma}^{3}$ and the unit matrix $\hat{\sigma}^{0}$: $\hat{f}_{1}=\hat{\sigma}^{0}f_{1}^{0}+\hat{\sigma}\bar{f}_{1}^{s}$, $\hat{f}_{2}=\hat{\sigma}^{0}f_{2}^{0}+\hat{\sigma}\bar{f}_{2}^{s}$, $\hat{I}=(\hat{\sigma}^{0}I^{0}+\hat{\sigma}\bar{I}^{s})/2$. We express the scattering amplitudes with the help of notations proposed in Ref. S_Waintal.Myers.ea:Phys.Rev.B2000 . Denoting the unit matrix as $\hat{\sigma}^{0}$, we define $\mathcal{R}_{mn}^{\mu\nu}=\mbox{Tr}[(\hat{r}_{mn}\otimes\hat{r}_{mn}^{*})\cdot(\hat{\sigma}^{\mu}\otimes\hat{\sigma}^{\nu})]/4$ and $\mathcal{T}_{mn}^{\mu\nu}=\mbox{Tr}[(\hat{t}^{\prime}_{mn}\otimes\hat{t}^{\prime*}_{mn})\cdot(\hat{\sigma}^{\mu}\otimes\hat{\sigma}^{\nu})]/4$. The circuit theory expression (S1) can now be rewritten in the form of Eqs. (4)-(5) of the main text, with the following definitions of the conductances: $$\displaystyle G=2G_{0}\sum_{mn}\mathcal{T}_{mn}^{\nu\nu},\quad G_{i}^{s}=2G_{0}\sum_{mn}(\mathcal{T}_{mn}^{i0}+\mathcal{T}_{mn}^{0i}+i\varepsilon_{ijk}\mathcal{T}_{mn}^{jk}),$$ (S5) $$\displaystyle G_{i}^{t}=4G_{0}\sum_{mn}i\varepsilon_{ijk}\mathcal{T}_{mn}^{jk},\quad G_{i}^{r}=4G_{0}\sum_{mn}i\varepsilon_{ijk}\mathcal{R}_{mn}^{jk},$$ (S6) $$\displaystyle\mathcal{G}_{ij}^{t}=2G_{0}\delta_{ij}^{kl}\sum_{mn}(\mathcal{T}_{mn}^{kl}+\mathcal{T}_{mn}^{lk}+i\varepsilon_{klv}[\mathcal{T}_{mn}^{0v}-\mathcal{T}_{mn}^{v0}]),$$ (S7) $$\displaystyle\mathcal{G}_{ij}^{r}=2G_{0}\delta_{ij}^{kl}\sum_{mn}(\mathcal{R}_{mn}^{kl}+\mathcal{R}_{mn}^{lk}+i\varepsilon_{klv}[\mathcal{R}_{mn}^{0v}-\mathcal{R}_{mn}^{v0}]),$$ (S8) where $\delta_{ij}^{kl}=\delta_{ik}\delta_{jl}-\delta_{ij}\delta_{kl}$, and summation over the repeated indices is assumed everywhere. In the case of a non-magnetic (disordered) interface with axial symmetry, $\bar{G}^{s}=\bar{G}^{t}=\bar{G}^{r}=0$, while the tensors $\hat{{\cal G}}^{t}$ and $\hat{{\cal G}}^{r}$ are diagonal in the reference frame aligned with the symmetry axis. These simplifications lead to the following expressions for the currents in the nodes: $$\displaystyle I_{0}$$ $$\displaystyle=G\Delta f_{0},$$ (S9) $$\displaystyle\bar{I}_{2}^{s}$$ $$\displaystyle=(G-\mathcal{G}^{t})\Delta\bar{f}_{s}-\mathcal{G}_{2}^{sl}\bar{f}^{s}_{2},$$ (S10) $$\displaystyle\bar{I}_{1}^{s}$$ $$\displaystyle=(G-\mathcal{G}^{t})\Delta\bar{f}_{s}+\mathcal{G}_{1}^{sl}\bar{f}^{s}_{2},$$ (S11) where we introduced the spin-loss conductance $\mathcal{G}_{1(2)}^{sl}=\mathcal{G}_{1(2)}^{r}+\mathcal{G}^{t}$ calculated along one of the symmetry axes in the nodes. 
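As an illustration of these definitions, the single-channel identities quoted in the main text, $G=G_{0}\sum_{\sigma\sigma^{\prime}}T_{\sigma\sigma^{\prime}}$ and the spin-flip combination entering ${\cal G}^{t}$, can be checked numerically. In the sketch below we adopt the convention $\mathcal{T}^{\mu\nu}=\mbox{Tr}[\hat{t}^{\prime}\hat{\sigma}^{\mu}]\,\mbox{Tr}[\hat{t}^{\prime\dagger}\hat{\sigma}^{\nu}]/4$, which is our reading of the trace in the doubled spin space; the toy amplitudes and the single-channel Sharvin renormalization at the end are purely illustrative.

```python
import numpy as np

# Pauli matrices sigma^0..sigma^3
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
G0 = 1.0                                   # conductance quantum e^2/h

# Toy single-channel transmission amplitude with a weak spin-flip part
t = np.array([[0.90, 0.05j],
              [0.04, 0.85]], dtype=complex)

def T_munu(mu, nu):
    # our index convention: pair t with sigma^mu and t^dagger with sigma^nu
    return (np.trace(t @ s[mu]) * np.trace(t.conj().T @ s[nu])).real / 4

# G = 2 G0 sum_nu T^{nu nu} reproduces G = G0 (T_uu + T_dd + T_ud + T_du):
G = 2 * G0 * sum(T_munu(nu, nu) for nu in range(4))
assert np.isclose(G, G0 * np.sum(np.abs(t) ** 2))

# 2 (T^{11} + T^{22}) = T_ud + T_du, the combination entering G^t = 2 G0 (T_ud + T_du):
flip = 2 * (T_munu(1, 1) + T_munu(2, 2))
assert np.isclose(flip, abs(t[0, 1]) ** 2 + abs(t[1, 0]) ** 2)

# Sharvin renormalization of the main text, toy case of one channel per spin on each side:
M1 = M2 = 1
G_tilde = 2 * G0 / (2 * G0 / G - 1 / (2 * M1) - 1 / (2 * M2))
print(G, flip, G_tilde)   # G_tilde > G: the Sharvin contributions have been removed
```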
II Renormalizations for Ohmic contacts It is well known that interface resistances in transparent Ohmic contacts are renormalized by the Sharvin resistance S_Schep.vanHoof.ea:PRB1997 ; S_Bauer.Tserkovnyak.ea:PRB2003 . The circuit theory can be generalized to account for the drift contributions in the nodes by renormalizing the conductances $G$, $\mathcal{G}^{t}$, and $\mathcal{G}_{1(2)}^{sl}$. This can be done by connecting nodes $1$ and $2$ to proper reservoirs with spin-dependent distribution functions $\hat{f}_{L}$ and $\hat{f}_{R}$ via transparent contacts. The currents in the nodes then become $\hat{I}_{1}=2G_{0}M_{1}(\hat{f}_{L}-\hat{f}_{1})$ and $\hat{I}_{2}=2G_{0}M_{2}(\hat{f}_{2}-\hat{f}_{R})$, where $M_{1(2)}$ describe the number of channels in the nodes. Substituting these currents in Eqs. (S9), (S10), and (S11), we arrive at the amended circuit theory: $$\displaystyle I_{0}$$ $$\displaystyle=G(\Delta f_{0}+\dfrac{I_{0}}{4G_{0}M_{1}}+\dfrac{I_{0}}{4G_{0}M_{2}}),$$ (S12) $$\displaystyle I^{s}_{1}$$ $$\displaystyle=(G-\mathcal{G}^{t})(\Delta f_{s}+\dfrac{I^{s}_{1}}{4G_{0}M_{1}}+\dfrac{I^{s}_{2}}{4G_{0}M_{2}})+\mathcal{G}_{1}^{sl}(f_{s}^{1}+\dfrac{I^{s}_{1}}{4G_{0}M_{1}}),$$ (S13) $$\displaystyle I^{s}_{2}$$ $$\displaystyle=(G-\mathcal{G}^{t})(\Delta f_{s}+\dfrac{I^{s}_{1}}{4G_{0}M_{1}}+\dfrac{I^{s}_{2}}{4G_{0}M_{2}})-\mathcal{G}_{2}^{sl}(f_{s}^{2}-\dfrac{I^{s}_{2}}{4G_{0}M_{2}}).$$ (S14) These equations are equivalent to Eqs. (S9)-(S11) after the substitution $G\rightarrow\tilde{G}$, $\mathcal{G}^{t}\rightarrow\tilde{\mathcal{G}}^{t}$, and $\mathcal{G}_{1(2)}^{sl}\rightarrow\tilde{\mathcal{G}}_{1(2)}^{sl}$, where $$\displaystyle\dfrac{2}{\tilde{G}}=\dfrac{2}{G}-\dfrac{1}{2G_{0}M_{1}}-\dfrac{1}{2G_{0}M_{2}},$$ (S15) $$\displaystyle\dfrac{2}{\tilde{G}-\tilde{\mathcal{G}}^{t}+\frac{\tilde{\mathcal{G}}_{1}^{sl}\tilde{\mathcal{G}}_{2}^{sl}}{\tilde{\mathcal{G}}_{1}^{sl}+\tilde{\mathcal{G}}_{2}^{sl}}}=\dfrac{2}{G-\mathcal{G}^{t}+\frac{\mathcal{G}_{1}^{sl}\mathcal{G}_{2}^{sl}}{\mathcal{G}_{1}^{sl}+\mathcal{G}_{2}^{sl}}}-\dfrac{1}{2G_{0}M_{1}}-\dfrac{1}{2G_{0}M_{2}},$$ (S16) $$\displaystyle\dfrac{1}{\tilde{\mathcal{G}}_{1}^{sl}}=\dfrac{1}{\mathcal{G}_{1}^{sl}}-\dfrac{1}{2G_{0}M_{1}}-\dfrac{\mathcal{G}_{2}^{sl}/\mathcal{G}_{1}^{sl}-M_{2}/M_{1}}{\mathcal{G}_{1}^{sl}+\mathcal{G}_{2}^{sl}+2\mathcal{G}_{1}^{sl}\dfrac{\mathcal{G}_{2}^{sl}-2G_{0}M_{2}}{G-\mathcal{G}^{t}}},$$ (S17) $$\displaystyle\dfrac{1}{\tilde{\mathcal{G}}_{2}^{sl}}=\dfrac{1}{\mathcal{G}_{2}^{sl}}-\dfrac{1}{2G_{0}M_{2}}-\dfrac{\mathcal{G}_{1}^{sl}/\mathcal{G}_{2}^{sl}-M_{1}/M_{2}}{\mathcal{G}_{1}^{sl}+\mathcal{G}_{2}^{sl}+2\mathcal{G}_{2}^{sl}\dfrac{\mathcal{G}_{1}^{sl}-2G_{0}M_{1}}{G-\mathcal{G}^{t}}}.$$ (S18) Note that these equations can be further simplified in the symmetric case, $\mathcal{G}_{1}^{sl}=\mathcal{G}_{2}^{sl}$ and $M_{1}=M_{2}$. III Transport in N${}_{1}|$N${}_{2}$ superlattice We now assume that we have a superlattice constructed out of repeated interfaces between two normal metals N${}_{1}$ and N${}_{2}$. We take nodes in both N${}_{1}$ and N${}_{2}$ layers, and the conductances $\tilde{G}$, $\tilde{\mathcal{G}}^{t}$, $\tilde{\mathcal{G}}_{1}^{sl}$, and $\tilde{\mathcal{G}}_{2}^{sl}$ describe the two nodes. 
We arrive at the following equations for the spin current in node $i$: $$\displaystyle I^{s}_{i}$$ $$\displaystyle=(\tilde{G}-\tilde{\mathcal{G}}^{t})(f^{s}_{i-1}-f^{s}_{i})-\tilde{\mathcal{G}}_{1}^{sl}f^{s}_{i},$$ (S19) $$\displaystyle I^{s}_{i}$$ $$\displaystyle=(\tilde{G}-\tilde{\mathcal{G}}^{t})(f^{s}_{i}-f^{s}_{i+1})+\tilde{\mathcal{G}}_{2}^{sl}f^{s}_{i},$$ (S20) which leads to the recursive formula: $$\dfrac{2\tilde{\mathcal{G}}^{sl}}{\tilde{G}-\tilde{\mathcal{G}}^{t}}f^{s}_{i}=f^{s}_{i-1}-2f^{s}_{i}+f^{s}_{i+1},$$ (S21) where $\tilde{\mathcal{G}}^{sl}=(\tilde{\mathcal{G}}_{1}^{sl}+\tilde{\mathcal{G}}_{2}^{sl})/2$. This recursive equation has the following solution: $$f_{s}^{i}=C_{1}e^{\delta i}+C_{2}e^{-\delta i},$$ (S22) where $$\delta=\ln\left[1+\dfrac{\tilde{\mathcal{G}}^{sl}}{\tilde{G}-\tilde{\mathcal{G}}^{t}}\left(1+\sqrt{1+\dfrac{2(\tilde{G}-\tilde{\mathcal{G}}^{t})}{\tilde{\mathcal{G}}^{sl}}}\right)\right],$$ (S23) and the constants $C_{1}$ and $C_{2}$ depend on the boundary conditions. IV Accounting for the bulk contribution Within the circuit theory, spin transport across a non-magnetic interface that is axially symmetric (either microscopically or after averaging over crystallite orientations) is fully characterized by four conductances: $\tilde{G}$, $\tilde{\mathcal{G}}^{t}$, $\tilde{\mathcal{G}}_{1}^{sl}$, and $\tilde{\mathcal{G}}_{2}^{sl}$. We will also refer to the quantities $\tilde{\mathcal{G}}^{s}=\tilde{G}-\tilde{\mathcal{G}}^{t}$, which appear in Eqs. (S19)-(S20), as spin conductances. In the main text of the paper we have neglected the resistivities of the bulk metallic layers and assumed that spin relaxation occurs only at the interfaces, in order to simplify the resulting expressions. These features can be restored by placing the circuit nodes in the middle of the bulk layers. A contact between two nodes is then defined to include both the physical interface and the adjacent bulk regions extending up to these nodes, as shown in Fig. S1. Spin-transport in a bulk diffusive region $i$ is assumed to obey the Valet-Fert model, which yields $\tilde{\mathcal{G}}_{bi}^{s}=\tilde{G}_{bi}\delta_{i}/\sinh\delta_{i}$ and $\tilde{\mathcal{G}}_{bi}^{sl}=\tilde{G}_{bi}\delta_{i}\tanh(\delta_{i}/2)$, where $\delta_{i}=t_{i}/l^{i}_{sf}$ is defined similar to the spin-memory loss parameter for an interface. We have added a subscript $b$ to distinguish bulk and interface conductances in the following. There is only one $\tilde{\mathcal{G}}_{bi}^{sl}$ parameter, because a bulk region is left-right symmetric. Thus, two parameters $\tilde{G}_{bi}$ and $\delta_{i}$ completely describe a diffusive bulk layer. (A general interface can not be fully described in this way, because four independent conductances can not be reduced to two parameters $\tilde{G}$ and $\delta$.) Introducing the conductances $\tilde{G}_{a}$, $\tilde{\mathcal{G}}_{a}^{s}$, $\tilde{\mathcal{G}}_{a1}^{sl}$, and $\tilde{\mathcal{G}}_{a2}^{sl}$ for the composite three-layer “contact,” we can apply Eq. (9) from the main text to obtain $${\cal D}^{2}=\frac{\tilde{\mathcal{G}}_{a1}^{sl}+\tilde{\mathcal{G}}_{a2}^{sl}}{\tilde{\mathcal{G}}^{s}_{a}},$$ (S24) which now fully takes into account the bulk contributions. 
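Before adding the bulk contributions, the recursion (S21) and the closed-form exponent (S23) can be checked numerically. This is a minimal sketch; the conductance values and the helper name are ours and purely illustrative.

```python
import numpy as np

def delta_S23(G, Gt, Gsl):
    """Decay exponent of Eq. (S23) for the spin accumulation in the superlattice."""
    g = Gsl / (G - Gt)
    return np.log(1.0 + g * (1.0 + np.sqrt(1.0 + 2.0 / g)))

# Illustrative renormalized conductances (arbitrary units)
G, Gt, Gsl = 1.0, 0.02, 0.01
D2 = 2 * Gsl / (G - Gt)                  # the D^2 of Eq. (S21) / Eq. (9)
delta = delta_S23(G, Gt, Gsl)

# f_i = exp(-delta * i) indeed satisfies the finite-difference equation (S21):
i = np.arange(0, 20)
f = np.exp(-delta * i)
assert np.allclose(D2 * f[1:-1], f[:-2] - 2 * f[1:-1] + f[2:])

# Weak spin-flip limit: delta ~ D, cf. Eq. (8) of the main text
print(delta, np.sqrt(D2))
```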
The composite conductances can be obtained by concatenating the interface with the adjacent bulk regions using the circuit theory: $$\displaystyle\tilde{\mathcal{G}}_{a}^{s}$$ $$\displaystyle=\frac{\tilde{\mathcal{G}}_{b1}^{s}\tilde{\mathcal{G}}_{b2}^{s}\tilde{\mathcal{G}}^{s}}{(\tilde{\mathcal{G}}_{b1}^{s}+\tilde{\mathcal{G}}_{c1}^{sl})(\tilde{\mathcal{G}}_{b2}^{s}+\tilde{\mathcal{G}}_{c2}^{sl})+(\tilde{\mathcal{G}}_{b1}^{s}+\tilde{\mathcal{G}}_{b2}^{s}+\tilde{\mathcal{G}}_{c1}^{sl}+\tilde{\mathcal{G}}_{c2}^{sl})\tilde{\mathcal{G}}^{s}},$$ (S25) $$\displaystyle\tilde{\mathcal{G}}_{a1}^{sl}$$ $$\displaystyle=\frac{(\tilde{\mathcal{G}}_{b1}^{s}+\tilde{\mathcal{G}}_{c1}^{sl})[\tilde{\mathcal{G}}_{b2}^{s}(\tilde{\mathcal{G}}_{c2}^{sl}+\tilde{\mathcal{G}}_{b2}^{sl})+\tilde{\mathcal{G}}_{b2}^{sl}\tilde{\mathcal{G}}_{c2}^{sl}]+[(\tilde{\mathcal{G}}_{b1}^{s}+\tilde{\mathcal{G}}_{c1}^{sl}+\tilde{\mathcal{G}}_{c2}^{sl})\tilde{\mathcal{G}}_{b2}^{sl}+\tilde{\mathcal{G}}_{b2}^{s}(\tilde{\mathcal{G}}_{c1}^{sl}+\tilde{\mathcal{G}}_{c2}^{sl}+\tilde{\mathcal{G}}_{b2}^{sl})]\tilde{\mathcal{G}}^{s}}{(\tilde{\mathcal{G}}_{b1}^{s}+\tilde{\mathcal{G}}_{c1}^{sl})(\tilde{\mathcal{G}}_{b2}^{s}+\tilde{\mathcal{G}}_{c2}^{sl})+(\tilde{\mathcal{G}}_{b1}^{s}+\tilde{\mathcal{G}}_{b2}^{s}+\tilde{\mathcal{G}}_{c1}^{sl}+\tilde{\mathcal{G}}_{c2}^{sl})\tilde{\mathcal{G}}^{s}},$$ (S26) where $\tilde{\mathcal{G}}^{sl}_{ci}=\tilde{\mathcal{G}}^{sl}_{bi}+\tilde{\mathcal{G}}^{sl}_{i}$. The expression for $\tilde{\mathcal{G}}_{a2}^{sl}$ is obtained from $\tilde{\mathcal{G}}_{a1}^{sl}$ by interchanging the indices 1 and 2. We also have $\tilde{G}^{-1}_{a}=\tilde{G}^{-1}+\tilde{G}_{b1}^{-1}+\tilde{G}_{b2}^{-1}$. Expanding of Eq. (S24) to first order in spin-flip scattering results in $${\cal D}^{2}\approx\frac{\mathcal{G}_{1}^{sl}+\mathcal{G}_{2}^{sl}+2\mathcal{G}_{b1}^{sl}+2\mathcal{G}_{b2}^{sl}}{\tilde{G}_{a}},$$ (S27) Equation (S27) shows that to lowest order in spin-flip scattering there are only two relevant parameters for the interface in a periodic N${}_{1}$/N${}_{2}$ multilayer with diffusive layers: its renormalized conductance $\tilde{G}$ and the symmetric spin-loss conductance $\mathcal{G}^{sl}=(\mathcal{G}_{1}^{sl}+\mathcal{G}_{2}^{sl})/2$. Under these conditions, the treatment based on the Valet-Fert model, with $\delta$ given by Eq. (8) of the main text, gives the same result as the full circuit theory. This justifies our treatment in the main text, where the correspondence with the Valet-Fert model was established for a multilayer with vanishing bulk resistance and spin relaxation. Higher-order correction to ${\cal D}$ is always positive, which means that we have slightly overestimated $\delta$. However, this correction is very small for the Cu/Pd interface; for $\delta=0.4$ and typical parameters for bulk Pd S_Bass2015 the correction to $\delta^{2}$ is less than $0.01$. The correction may, however, be significant for interface with strong spin-flip scattering, such as Cu/Pt with $\delta\sim 1$ S_Bass2015 . References (1) A. Brataas, Y. V. Nazarov, and G. E. W. Bauer, Eur. Phys. J. B 22, 99 (2001). (2) X. Waintal, E. B. Myers, P. W. Brouwer, and D. C. Ralph, Phys. Rev. B 62, 12317 (2000). (3) K. M. Schep, J. B. A. N. van Hoof, P. J. Kelly, G. E. W. Bauer, J. E. Inglesfield, Phys. Rev. B 56, 10805 (1997). (4) G. E. W. Bauer, Y. Tserkovnyak, D. Huertas-Hernando, and A. Brataas, Phys. Rev. B 67, 094421 (2003). (5) J. Bass, J. Magn. Magn. Mater. 408, 244 (2016).
Local solvability and stability of an inverse spectral problem for higher-order differential operators Natalia P. Bondarenko Abstract. In this paper, we for the first time prove local solvability and stability of an inverse spectral problem for higher-order ($n>3$) differential operators with distribution coefficients. The inverse problem consists in the recovery of differential equation coefficients from $(n-1)$ spectra and the corresponding weight numbers. The proof method is constructive. It is based on the reduction of the nonlinear inverse problem to a linear equation in the Banach space of infinite sequences. We prove that, under a small perturbation of the spectral data, the main equation remains uniquely solvable and estimate the differences of the coefficients in the corresponding functional spaces. Keywords: inverse spectral problem; higher-order differential operators; distribution coefficients; local solvability; stability AMS Mathematics Subject Classification (2020): 34A55 34B09 34B40 34L05 46F10 1 Introduction This paper deals with the differential equation $$y^{(n)}+p_{n-2}(x)y^{(n-2)}+p_{n-3}(x)y^{(n-3)}+\dots+p_{1}(x)y^{\prime}+p_{0}(x)y=\lambda y,\quad x\in(0,1),$$ (1.1) where $n\geq 2$, $p_{k}$ are complex-valued functions, $p_{k}\in W_{2}^{k-1}[0,1]$, $k=\overline{0,n-2}$, and $\lambda$ is the spectral parameter. Recall that: • For $s\geq 1$, $W_{2}^{s}[0,1]$ is the space of functions $f(x)$ whose derivatives $f^{(j)}(x)$ are absolutely continuous on $[0,1]$ for $j=\overline{0,s-1}$ and $f^{(s)}\in L_{2}[0,1]$. • $W_{2}^{0}[0,1]=L_{2}[0,1]$. • $W_{2}^{-1}[0,1]$ is the space of generalized functions (distributions) $f(x)$ whose antiderivatives $f^{(-1)}(x)$ belong to $L_{2}[0,1]$. We study the inverse spectral problem that consists in the recovery of the coefficients $(p_{k})_{k=0}^{n-2}$ from the eigenvalues $\{\lambda_{l,k}\}_{l\geq 1}$ and the weight numbers $\{\beta_{l,k}\}_{l\geq 1}$ of the boundary value problems $\mathcal{L}_{k}$, $k=\overline{1,n-1}$, for equation (1.1) with the corresponding boundary conditions $$y^{(j)}(0)=0,\quad j=\overline{0,k-1},\qquad y^{(s)}(1)=0,\quad s=\overline{0,n-k-1}.$$ (1.2) The main goal of this paper is to prove the local solvability and stability of the inverse problem. This is the first result of such kind for arbitrary-order differential operators with distribution coefficients. 1.1 Historical background Spectral theory of linear ordinary differential operators has a fundamental significance for mathematics and applications. For $n=2$, equation (1.1) turns into the Sturm-Liouville (also called the one-dimensional Schrödinger) equation $y^{\prime\prime}+p_{0}y=\lambda y$, which models various processes in quantum and classical mechanics, material science, astrophysics, acoustics, electronics. The third-order differential equations model thin membrane flow of viscous liquid [1] and elastic beam vibrations [2], and are arise in the integration of the nonlinear Boussinesq equation by the inverse scattering transform [3]. The fourth-order and the six-order linear differential operators arise in geophysics [4] and in vibration theory [5, 6]. Therefore, the development of general mathematical methods for investigation of spectral problems for arbitrary order linear differential operators is fundamentally important. Inverse problems of spectral analysis consist in the reconstruction of differential operators by their spectral characteristics. 
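For orientation, in the illustrative case $n=4$ the boundary conditions (1.2) generate three problems $\mathcal{L}_{1}$, $\mathcal{L}_{2}$, $\mathcal{L}_{3}$: $$\mathcal{L}_{1}\colon\; y(0)=0,\quad y(1)=y^{\prime}(1)=y^{\prime\prime}(1)=0;\qquad\mathcal{L}_{2}\colon\; y(0)=y^{\prime}(0)=0,\quad y(1)=y^{\prime}(1)=0;\qquad\mathcal{L}_{3}\colon\; y(0)=y^{\prime}(0)=y^{\prime\prime}(0)=0,\quad y(1)=0,$$ so each problem $\mathcal{L}_{k}$ imposes $k$ conditions at $x=0$ and $n-k$ conditions at $x=1$.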
Such problems have been studied fairly completely for Sturm-Liouville operators $-y^{\prime\prime}+q(x)y$ with regular (integrable) potentials $q(x)$ (see the monographs [7, 8, 9, 10, 11] and references therein) as well as with distribution potentials of class $W_{2}^{-1}$ (see, e.g., the papers [12, 13, 14, 15, 16, 17] and [18] for a more extensive bibliography). The basic results for the inverse Sturm-Liouville problem were obtained by the method of Gelfand and Levitan [19], which is based on transformation operators. However, inverse problems for differential operators of higher orders $n\geq 2$ are significantly more difficult for investigation, since the Gelfand-Levitan method does not work for them. Therefore, Yurko [20, 21, 22] has developed the method of spectral mappings, which is based on the theory of analytic functions. The central place in this method is taken by the contour integration in the complex plane of the spectral parameter $\lambda$ of some meromorphic functions (spectral mappings), which were introduced by Leibenson [23, 24]. Applying the method of spectral mappings, Yurko has created the theory of inverse spectral problems for arbitrary order differential operators with regular coefficients and also with the Bessel-type singularities on a finite interval and on the half-line (see [20, 21, 22, 25, 26]). Another approach was developed by Beals et al [27, 28] for inverse scattering problems on the line, which are essentially different from the problems on a finite interval. In [29], Mirzoev and Shkalikov have proposed a regularization approach for higher-order differential equations with distribution coefficients (generalized functions). This has motivated a number of researchers to study solutions and spectral theory for such equations (see the bibliography in the recent papers [30, 31]). Inverse spectral problems for higher-order differential operators with distribution coefficients have been investigated in [32, 31, 33, 34, 35]. In particular, the uniqueness theorems have been proved in [32, 31, 33]. The paper [34] is concerned with a reconstruction approach, based on developing the ideas of the method of spectral mappings. In [35], the necessary and sufficient conditions for the inverse problem solvability have been obtained for a third-order differential equation. In this paper, we focus on local solvability and stability of the inverse problem. These aspects for the Sturm-Liouville operators were studied in [8, 9, 10, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 15, 12, 16] and many other papers. Local solvability has a fundamental significance in the inverse problem theory, especially for such problems, for which global solvability theorems are absent or contain hard-to-verify conditions. Stability is important for justification of numerical methods. For the higher-order differential equation (1.1) on a finite interval, stability of the inverse problem in the uniform norm was proved by Yurko for regular coefficients $p_{k}\in W_{1}^{k}[0,1]$ (see Theorem 2.3.2 in [22]). Furthermore, local solvability and stability in the $L_{2}$-norm were formulated without proofs as Theorems 2.3.4 and 2.5.3 in [22]. For distribution coefficients, local solvability and stability theorems have been proved in [12] for $n=2$ and in [35] for $n=3$. However, to the best of the author’s knowledge, there were no results in this direction for $n\geq 4$. 
It is worth mentioning that, for the Sturm-Liouville operators with distribution potentials of the classes $W_{2}^{\alpha}[0,1]$, $\alpha>-1$, Savchuk and Shkalikov [15] have obtained the uniform stability of the inverse spectral problem. This result was extended by Hryniv [16] to $\alpha=-1$ by another method. Recently, the uniform stability of inverse problems was also proved for some nonlocal operators (see [46, 47, 48]). However, the approaches of the mentioned studies do not work for higher-order differential operators. Thus, the uniform stability for them is an open problem, which is not considered in this paper. 1.2 Main results Let us formulate the inverse problem for equation (1.1) and the corresponding local solvability and stability theorem. Consider the problems $\mathcal{L}_{k}$ given by (1.1) and (1.2). For each $k\in\{1,2,\dots,n-1\}$, the spectrum of the problem $\mathcal{L}_{k}$ is a countable set of eigenvalues $\{\lambda_{l,k}\}_{l\geq 1}$. They are supposed to be numbered counting with multiplicities according to the asymptotics obtained in [49]: $$\lambda_{l,k}=(-1)^{n-k}\left(\frac{\pi}{\sin\frac{\pi k}{n}}(l+\theta_{k}+\varkappa_{l,k})\right)^{n},\quad l\geq 1,\,k=\overline{1,n-1},$$ (1.3) where $\theta_{k}$ are some constants independent of $(p_{s})_{s=0}^{n-2}$ and $\{\varkappa_{l,k}\}\in l_{2}$. The weight numbers $\{\beta_{l,k}\}_{l\geq 1\,k=\overline{1,n-1}}$ will be defined in Section 3. Thus, the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ are generated by the coefficients $p=(p_{k})_{k=0}^{n-2}$ of equation (1.1). In this paper, we confine ourselves by $p$ of some class $W$ with certain restrictions on the spectra. Definition 1.1. We say that $p=(p_{k})_{k=0}^{n-2}$ belongs to the class $W$ if (W-1) $p_{k}\in W_{2}^{k-1}[0,1]$, $k=\overline{0,n-2}$. (W-2) For each $k\in\{1,2,\dots,n-1\}$, the eigenvalues $\{\lambda_{l,k}\}_{l\geq 1}$ are simple. (W-3) $\{\lambda_{l,k}\}_{l\geq 1}\cap\{\lambda_{l,k+1}\}_{l\geq 1}=\varnothing$ for $k=\overline{1,n-2}$. Thus, we study the following inverse problem. Problem 1.2. Given the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$, find the coefficients $p=(p_{k})_{k=0}^{n-2}\in W$. Problem 1.2 generalizes the classical inverse Sturm-Liouville problem studied by Marchenko [10] and Gelfand and Levitan [19] (see Example 3.3). The uniqueness for solution of Problem 1.2 follows from the results of [34, 33] (see Section 3 for details). Note that, if the conditions (W-2) and (W-3) are violated, then the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ do not uniquely specify the coefficients $p$ and additional spectral characteristics are needed. In particular, for multiple eigenvalues in the case $n=2$ the generalized weight numbers were defined in [50, 51]. For $n=3$ and $\lambda_{l,1}=\lambda_{l,2}$, the additional weight numbers $\gamma_{l}$ were used in [35]. For higher orders $n$, the situation becomes much more complicated, so in this paper we confine ourselves to the class $W$. Anyway, in view of the eigenvalues asymptotics (1.3), the assumptions (W-2) and (W-3) hold for all sufficiently large indices $l$. Along with $p$, we consider an analogous vector $\tilde{p}=(\tilde{p}_{k})_{k=0}^{n-2}\in W$. We agree that, if a symbol $\alpha$ denotes an object related to $p$, then the symbol $\tilde{\alpha}$ with tilde will denote the analogous object related to $\tilde{p}$. 
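To give a sense of the growth dictated by (1.3), the leading term can be evaluated numerically. This is a minimal sketch; the choice $n=4$, the setting $\theta_{k}=0$, and the neglect of the $l_{2}$-remainders are ours, purely for illustration.

```python
import numpy as np

def leading_eigenvalue(l, k, n, theta_k=0.0):
    """Leading term of the asymptotics (1.3), with the l_2-remainder dropped."""
    return (-1) ** (n - k) * (np.pi / np.sin(np.pi * k / n) * (l + theta_k)) ** n

n = 4
for l in (1, 5, 10):
    print(l, [f"{leading_eigenvalue(l, k, n):+.3e}" for k in range(1, n)])

# The eigenvalues grow like l^n, and the spectra of adjacent problems L_k and
# L_{k+1} have opposite signs at leading order, which is consistent with the
# remark that (W-2) and (W-3) hold for all sufficiently large l.
```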
The main result of this paper is the following theorem on local solvability and stability of Problem 1.2. Theorem 1.3. Let $\tilde{p}=(\tilde{p}_{k})_{k=0}^{n-2}\in W$ be fixed. Then, there exists $\delta>0$ (which depends on $\tilde{p}$) such that, for any complex numbers $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ satisfying the inequality $$\Omega:=\left(\sum_{l=1}^{\infty}\Biggl{(}\sum_{k=1}^{n-1}\bigl{(}l^{-1}|\lambda_{l,k}-\tilde{\lambda}_{l,k}|+l^{-2}|\beta_{l,k}-\tilde{\beta}_{l,k}|\bigr{)}\Biggr{)}^{2}\right)^{1/2}\leq\delta,$$ (1.4) there exists a unique $p=(p_{k})_{k=0}^{n-2}$ with the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$. Moreover, $$\|p_{k}-\tilde{p}_{k}\|_{W_{2}^{k-1}[0,1]}\leq C\Omega,\quad k=\overline{0,n-2},$$ (1.5) where the constant $C$ depends only on $\tilde{p}$ and $\delta$. Theorem 1.3 generalizes the previous results of [12] for $n=2$ and of [35] for $n=3$. However, for $n\geq 4$, to the best of the author’s knowledge, Theorem 1.3 is the first existence result for the inverse problem solution in the case of distribution coefficients. The proof of Theorem 1.3 is based on the constructive approach of [34]. Namely, we reduce the nonlinear inverse problem to the so-called main equation, which is a linear equation in the Banach space of bounded infinite sequences. The unique solvability of the main equation follows from the smallness of $\delta$. Furthermore, we derive reconstruction formulas for the coefficients $(p_{k})_{k=0}^{n-2}$ in the form of infinite series. The crucial step in the proof is establishing the convergence of those series in the corresponding spaces $W_{2}^{k-1}[0,1]$ (including the space of generalized functions $W_{2}^{-1}[0,1]$). In order to prove the convergence, we rigorously analyze the solution of the main equation and obtain the precise estimates for the Weyl solutions. Along with Theorem 1.3, we also prove Theorem 6.1 on global solvability of the inverse problem under several requirements on an auxiliary model problem. The paper is organized as follows. In Section 2, we discuss the regularization of equation (1.1) and provide other preliminaries. In Section 3, the weight numbers are defined and the properties of the spectral data are described. In Section 4, we derive the main equation basing on the results of [34]. In Section 5, the reconstruction formulas for the coefficients $(p_{k})_{k=0}^{n-2}$ are obtained. In Section 6, we prove solvability and stability of the inverse problem. 2 Preliminaries In this section, we explain in which sense we understand equation (1.1). For this purpose, an associated matrix and quasi-derivatives are introduced. In addition, we provide other preliminaries. We begin with some notations, which are used throughout the paper: • $\delta_{j,k}$ is the Kronecker delta. • $C_{k}^{j}=\frac{k!}{j!(k-j)!}$ are the binomial coefficients. • In estimates, the same symbol $C$ is used for various positive constants that do not depend on $x$, $l$, $\lambda$, etc. • The spaces $W_{2}^{k}[0,1]$ are equipped with the following norms: $$\displaystyle\|y\|_{W_{2}^{k}[0,1]}$$ $$\displaystyle=\left(\sum_{j=0}^{k}\|y^{(j)}\|_{L_{2}[0,1]}^{2}\right)^{1/2},\quad k\geq 0,$$ $$\displaystyle\|y\|_{W_{2}^{-1}[0,1]}$$ $$\displaystyle=\sup_{c\in\mathbb{C}}\|(y^{(-1)}+c)\|_{L_{2}[0,1]}.$$ For $n\geq 3$, consider the differential expression $$\ell_{n}(y):=y^{(n)}+\sum_{k=0}^{n-2}p_{k}(x)y,\quad x\in(0,1).$$ Fix any function $\sigma\in L_{2}[0,1]$ such that $p_{0}=\sigma^{\prime}$. 
Define the associated matrix $F(x)=[f_{k,j}(x)]_{k,j=1}^{n}$ for the differential expression $\ell_{n}(y)$ by the formulas $$f_{n-1,1}:=-\sigma,\quad f_{n,2}:=\sigma-p_{1},\quad f_{n,k}:=-p_{k-1},\>\>k=\overline{3,n-1}.$$ (2.1) All the other entries $f_{k,j}$ are assumed to be zero. Clearly, $f_{k,j}\in L_{2}[0,1]$. Using the matrix function $F(x)$, introduce the quasi-derivatives $$y^{[0]}:=y,\quad y^{[k]}:=(y^{[k-1]})^{\prime}-\sum_{j=1}^{k}f_{k,j}y^{[j-1]},\quad k=\overline{1,n},$$ (2.2) and the domain $$\mathcal{D}_{F}:=\{y\colon y^{[k]}\in AC[0,1],\,k=\overline{0,n-1}\}.$$ Due the the special structure of the associated matrix $F(x)$, we have $$\displaystyle y^{[k]}=y^{(k)},\quad k=\overline{0,n-2},\qquad y^{[n-1]}=y^{(n-1)}+\sigma y,$$ (2.3) $$\displaystyle y^{[n]}=(y^{[n-1]})^{\prime}+\sum_{k=1}^{n-2}p_{k}y^{(k)}+\sigma y^{\prime},$$ (2.4) and so $\mathcal{D}_{F}\subset W_{1}^{n-1}[0,1]$. Note that the differential expression is correctly defined in the sense of generalized functions for any $y\in W_{1}^{n-1}[0,1]$. However, if $y\in\mathcal{D}_{F}$, then $y^{[n]}\in L_{1}[0,1]$ and relations (2.3) and (2.4) directly imply the following lemma. Lemma 2.1. For $y\in\mathcal{D}_{F}$, $\ell_{n}(y)$ is a regular generalized function and $\ell_{n}(y)=y^{[n]}$. Thus, for $y\in\mathcal{D}_{F}$, $\ell_{n}(y)$ is a function of $L_{1}[0,1]$ and the relation $\ell_{n}(y)=y^{[n]}$ gives the regularization of this differential expression. We call a matrix function $F(x)$ an associated matrix of the differential expression $\ell_{n}(y)$ if $F(x)$ defines the quasi-derivatives $y^{[k]}$ and the domain $\mathcal{D}_{F}$ so that the assertion of Lemma 2.1 holds. A function $y$ is called a solution of equation (1.1) if $y\in\mathcal{D}_{F}$ and $\ell_{n}(y)=\lambda y$ a.e. on $(0,1)$. Along with $F(x)$, we consider the matrix function $F^{\star}(x)=[f_{k,j}^{\star}(x)]_{k,j=1}^{n}$ such that $$f_{k,j}^{\star}(x)=(-1)^{k+j+1}f_{n-j+1,n-k+1}(x),\quad k,j=\overline{1,n}.$$ Using (2.1), we obtain $$f_{k,1}^{\star}=(-1)^{k+1}p_{n-k},\>\>k=\overline{2,n-2},\quad f_{n-1,1}^{\star}=(-1)^{n}(p_{1}-\sigma),\quad f_{n,2}^{\star}=(-1)^{n}\sigma,$$ (2.5) and all the other entries $f_{k,j}^{\star}$ equal zero. For example, for $n=6$, we have $$F(x)=\begin{bmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ -\sigma&0&0&0&0&0\\ 0&\sigma-p_{1}&-p_{2}&-p_{3}&-p_{4}&0\end{bmatrix},\qquad F^{\star}(x)=\begin{bmatrix}0&0&0&0&0&0\\ -p_{4}&0&0&0&0&0\\ p_{3}&0&0&0&0&0\\ -p_{2}&0&0&0&0&0\\ p_{1}-\sigma&0&0&0&0&0\\ 0&\sigma&0&0&0&0\end{bmatrix}.$$ Using the matrix function $F^{\star}(x)$, define the quasi-derivatives $$z^{[0]}:=z,\quad z^{[k]}:=(z^{[k-1]})^{\prime}-\sum_{j=1}^{k}f^{\star}_{k,j}z^{[j-1]},\quad k=\overline{1,n},$$ (2.6) the domain $$\mathcal{D}_{F^{\star}}:=\{z\colon z^{[k]}\in AC[0,1],\,k=\overline{0,n-1}\},$$ (2.7) and the differential expression $$\ell_{n}^{\star}(z):=(-1)^{n}z^{[n]}.$$ (2.8) Note that, in (2.7) and (2.8), we use the quasi-derivatives defined by (2.6). Below we call a function $z$ a solution of the differential equation $$\ell_{n}^{\star}(z)=\lambda z,\quad x\in(0,1),$$ (2.9) if $z\in\mathcal{D}_{F^{\star}}$ and the equality (2.9) holds a.e. on $(0,1)$. Throughout this paper, we always use the quasi-derivatives (2.2) for functions of $\mathcal{D}_{F}$ and the quasi-derivatives (2.6) for functions of $\mathcal{D}_{F^{\star}}$. 
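The relations (2.3)-(2.4) and Lemma 2.1 are straightforward to verify symbolically. The following minimal sketch (our own check, for the illustrative case $n=4$) builds the quasi-derivatives (2.2) from the matrix (2.1) and confirms that $y^{[3]}=y^{\prime\prime\prime}+\sigma y$ and $y^{[4]}=\ell_{4}(y)$.

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')(x)
p1, p2 = sp.Function('p1')(x), sp.Function('p2')(x)
sigma = sp.Function('sigma')(x)            # antiderivative of p0, i.e. p0 = sigma'
p0 = sp.diff(sigma, x)

n = 4
# Associated matrix (2.1) for n = 4: f_{3,1} = -sigma, f_{4,2} = sigma - p1, f_{4,3} = -p2
F = sp.zeros(n, n)
F[2, 0] = -sigma
F[3, 1] = sigma - p1
F[3, 2] = -p2

# Quasi-derivatives (2.2): y^[k] = (y^[k-1])' - sum_{j=1}^{k} f_{k,j} y^[j-1]
q = [y]
for k in range(1, n + 1):
    q.append(sp.diff(q[k - 1], x) - sum(F[k - 1, j] * q[j] for j in range(k)))

# (2.3): y^[n-1] = y^(n-1) + sigma*y, and Lemma 2.1: y^[n] = l_n(y)
assert sp.simplify(q[3] - (sp.diff(y, x, 3) + sigma * y)) == 0
l4y = sp.diff(y, x, 4) + p2 * sp.diff(y, x, 2) + p1 * sp.diff(y, x) + p0 * y
assert sp.simplify(q[4] - l4y) == 0
print("quasi-derivative identities verified for n = 4")
```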
For $z\in\mathcal{D}_{F^{\star}}$, relations (2.5) and (2.6) imply $$z^{[k]}=(z^{[k-1]})^{\prime}+(-1)^{k}p_{n-k}z,\quad k=\overline{2,n-2}.$$ Therefore, one can show by induction that $$\displaystyle z^{[k]}$$ $$\displaystyle=z^{(k)}+\sum_{j=0}^{k-2}\left(\sum_{s=j}^{k-2}(-1)^{s+k}C_{s}^{j}p_{n-k+s}^{(s-j)}\right)z^{(j)},\quad k=\overline{0,n-2},$$ (2.10) $$\displaystyle z^{[n-1]}$$ $$\displaystyle=z^{(n-1)}+\sum_{j=0}^{n-3}\left(\sum_{s=j}^{n-3}(-1)^{s+n-1}C_{s}^{j}p_{s+1}^{(s-j)}\right)z^{(j)}+(-1)^{n}\sigma z,$$ (2.11) $$\displaystyle z^{[n]}$$ $$\displaystyle=(z^{[n-1]})^{\prime}+(-1)^{n+1}\sigma z^{\prime}.$$ Consequently, induction implies $z^{(k)}\in AC[0,1]$ for $k=\overline{0,n-2}$. Hence $\mathcal{D}_{F^{\star}}\subset W_{1}^{n-1}[0,1]$. For $z\in\mathcal{D}_{F^{\star}}$ and $y\in\mathcal{D}_{F}$, define the Lagrange bracket: $$\langle z,y\rangle=\sum_{k=0}^{n-1}(-1)^{k}z^{[k]}y^{[n-k-1]}.$$ (2.12) Then, the Lagrange identity holds: $$\frac{d}{dx}\langle z,y\rangle=z\ell_{n}(y)-y\ell_{n}^{\star}(z).$$ In particular, if $z$ and $y$ solve the equations $\ell^{\star}(z)=\mu z$ and $\ell(y)=\lambda y$, respectively, then we get $$\frac{d}{dx}\langle z,y\rangle=(\lambda-\mu)yz.$$ (2.13) Substituting (2.10) and (2.11) into (2.12), we derive the relation $$\langle z,y\rangle=\sum_{k=0}^{n-1}(-1)^{n-k-1}z^{(n-k-1)}y^{(k)}+\sum_{k=0}^{n-3}y^{(k)}\sum_{j=0}^{n-k-3}\left(\sum_{s=j}^{n-k-3}(-1)^{s}C_{s}^{j}p_{s+k+1}^{(s-j)}\right)z^{(j)},$$ (2.14) where all the derivatives are regular, since $z,y\in W_{1}^{n-1}[0,1]$ and $p_{k}\in W_{2}^{k-1}[0,1]$, $k=\overline{1,n-2}$. Remark 2.2. The associated matrix $F(x)$ given by (2.1) regularizes the differential expression $\ell_{n}(y)$ only for $n\geq 3$. For $n=2$, the associated matrix is constructed in a different way (see [52]): $$F=\begin{bmatrix}-\sigma&0\\ -\sigma^{2}&\sigma\end{bmatrix}.$$ Nevertheless, the main result of this paper (Theorem 1.3) holds for $n=2$ and, moreover, has been already proved in [12] for real-valued potentials. Therefore, below in the proofs, we confine ourselves to the case $n\geq 3$ and use the associated matrix (2.1). For $n=2$, the proofs are valid with minor modifications. Remark 2.3. For regularization of the differential expression $\ell_{n}(y)$, different associated matrices can be used (see [29, 31, 33]). However, it has been proved in [33] that the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ do not depend on the choice of the associated matrix. 3 Spectral data In this section, we discuss the properties of the spectral characteristics for the boundary value problems $\mathcal{L}_{k}$, $k=\overline{1,n-1}$. In particular, the weight numbers $\{\beta_{l,k}\}$ are defined as the residues of some entries of the Weyl-Yurko matrix. For $k=\overline{1,n}$, denote by $\mathcal{C}_{k}(x,\lambda)$ and $\Phi_{k}(x,\lambda)$ the solutions of equation (1.1) satisfying the initial conditions $$\mathcal{C}_{k}^{[j-1]}(0,\lambda)=\delta_{k,j},\quad j=\overline{1,n},$$ and the boundary conditions $$\Phi_{k}^{[j-1]}(0,\lambda)=\delta_{k,j},\quad j=\overline{1,k},\qquad\Phi_{k}^{[s-1]}(1,\lambda)=0,\quad s=\overline{1,n-k}.$$ respectively. The functions $\{\Phi_{k}(x,\lambda)\}_{k=1}^{n}$ are called the Weyl solutions of equation (1.1). Let us summarize the properties of the solutions $\mathcal{C}_{k}(x,\lambda)$ and $\Phi_{k}(x,\lambda)$. For details, see [32]. 
The functions $\mathcal{C}_{k}(x,\lambda)$, $k=\overline{1,n}$, are uniquely defined as solutions of the initial value problems, and they are entire in $\lambda$ for each fixed $x\in[0,1]$ together with their quasi-derivatives $\mathcal{C}_{k}^{[j]}(x,\lambda)$, $j=\overline{1,n-1}$. The Weyl solutions $\Phi_{k}(x,\lambda)$, $k=\overline{1,n}$, and their quasi-derivatives are meromorphic in $\lambda$. Furthermore, the fundamental matrices $$\mathcal{C}(x,\lambda):=[\mathcal{C}_{k}^{[j-1]}(x,\lambda)]_{j,k=1}^{n},\quad\Phi(x,\lambda):=[\Phi_{k}^{[j-1]}(x,\lambda)]_{j,k=1}^{n}$$ are related as follows: $$\Phi(x,\lambda)=\mathcal{C}(x,\lambda)M(\lambda),$$ where $M(\lambda)=[M_{j,k}(\lambda)]_{j,k=1}^{n}$ is called the Weyl-Yurko matrix. The entries $M_{j,k}(\lambda)$ satisfy the relations $$\displaystyle M_{j,k}(\lambda)$$ $$\displaystyle=\delta_{j,k},\quad j\leq k,$$ (3.1) $$\displaystyle M_{j,k}(\lambda)$$ $$\displaystyle=-\frac{\Delta_{j,k}(\lambda)}{\Delta_{k,k}(\lambda)},\quad k=\overline{1,n-1},\quad j=\overline{k+1,n},$$ (3.2) where $\Delta_{k,k}(\lambda):=\det([\mathcal{C}_{j}^{[n-s]}(1,\lambda)]_{s,j=k+1}^{n}$, $k=\overline{1,n-1}$, and $\Delta_{j,k}(\lambda)$ is obtained from $\Delta_{k,k}(\lambda)$ by replacing $\mathcal{C}_{j}$ by $\mathcal{C}_{k}$. Clearly, the functions $\Delta_{j,k}(\lambda)$ for $j\geq k$ are entire in $\lambda$. Thus, $M(\lambda)$ is a unit lower-triangular matrix, whose entries under the main diagonal are meromorphic in $\lambda$, and the poles of the $k$-th column coincide with the zeros of $\Delta_{k,k}(\lambda)$. On the other hand, the zeros of $\Delta_{k,k}(\lambda)$ coincide with the eigenvalues of the problem $\mathcal{L}_{k}$ for equation (1.1) with the boundary conditions (1.2) for each $k=\overline{1,n-1}$. Therefore, under the assumption (W-2) of Definition 1.1, all the poles of $M(\lambda)$ are simple. The Laurent series at $\lambda=\lambda_{l,k}$ has the form $$M(\lambda)=\frac{M_{\langle-1\rangle}(\lambda_{l,k})}{\lambda-\lambda_{l,k}}+M_{\langle 0\rangle}(\lambda_{l,k})+M_{\langle 1\rangle}(\lambda_{l,k})(\lambda-\lambda_{l,k})+\dots,$$ where $M_{\langle j\rangle}(\lambda_{l,k})$ are the corresponding $(n\times n)$-matrix coefficients. Define the weight matrices $$\mathcal{N}(\lambda_{l,k}):=(M_{\langle 0\rangle}(\lambda_{l,k}))^{-1}M_{\langle-1\rangle}(\lambda_{l,k}).$$ Theorem 4.4 of [33] implies the following uniqueness result. Proposition 3.1 ([33]). The spectral data $\{\lambda_{l,k},\mathcal{N}(\lambda_{l,k})\}_{l\geq 1,\,k=\overline{1,n-1}}$ uniquely specify the coefficients $p=(p_{k})_{k=0}^{n-2}$ satisfying the conditions (W-1) and (W-2) of Definition 1.1. The structural properties of the weight matrices $\mathcal{N}(\lambda_{l,k})=[\mathcal{N}_{s,j}(\lambda_{l,k})]_{s,j=1}^{n}$ are similar to the ones for the case of regular coefficients (see [22, 34]). In view of (3.1), $\mathcal{N}_{s,j}(\lambda_{l,k})=0$ for $s\leq j$. Moreover, under the condition (W-3), $\mathcal{N}_{s,j}(\lambda_{l,k})=0$ for $s>j+1$. Thus, the only non-zero entries of $\mathcal{N}(\lambda_{l,k})$ are $\mathcal{N}_{j+1,j}(\lambda_{l,k})$ for such $j$ that $\Delta_{j,j}(\lambda_{l,k})=0$. 
Therefore, instead of the weight matrices $\mathcal{N}(\lambda_{l,k})$, it is sufficient to use the weight numbers $$\beta_{l,k}:=\mathcal{N}_{k+1,k}(\lambda_{l,k})=-\frac{\Delta_{k+1,k}(\lambda_{l,k})}{\frac{d}{d\lambda}\Delta_{k,k}(\lambda_{l,k})}.$$ Indeed, if $\lambda_{l_{1},k_{1}}=\lambda_{l_{2},k_{2}}=\dots=\lambda_{l_{r},k_{r}}$ is a group of equal eigenvalues (of different problems $\mathcal{L}_{k_{j}}$), that is maximal by inclusion, we have $$\mathcal{N}(\lambda_{l_{1},k_{1}})=\mathcal{N}(\lambda_{l_{2},k_{2}})=\dots=\mathcal{N}(\lambda_{l_{r},k_{r}})=\sum_{s=1}^{r}\beta_{l_{s},k_{s}}E_{k_{s}+1,k_{s}},$$ where $E_{i,j}$ denotes the matrix with the unit entry at the position $(i,j)$ and all the other entries equal zero. Hence, Proposition 3.1 implies the following corollary. Corollary 3.2. The spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ uniquely specify the coefficients $p=(p_{k})_{k=0}^{n-2}\in W$. By Theorem 6.2 of [49], the weight numbers have the asymptotics $$\beta_{l,k}=l^{n}(\beta_{k}+\varkappa_{l,k}^{0}),\quad l\geq 1,\,k=\overline{1,n-1},$$ (3.3) where $\beta_{k}\neq 0$ and $\{\varkappa_{l,k}^{0}\}\in l_{2}$. Example 3.3. For $n=2$, $\{\lambda_{l,1}\}_{l\geq 1}$ are the Dirichlet eigenvalues of the Sturm-Liouville equation $y^{\prime\prime}+p_{0}y=\lambda y$ and $$\mathcal{N}(\lambda_{l,1})=\begin{bmatrix}0&0\\ \beta_{l,1}&0\end{bmatrix}.$$ One can easily check that $\beta_{l,1}=\alpha_{l}^{-1}$, where $\alpha_{l}:=\int_{0}^{1}y_{l}^{2}(x)\,dx$ and $y_{l}(x)$ is the eigenfunction of $\lambda_{l,1}$ such that $y_{l}^{[1]}(0)=1$ for a distributional potential $p_{0}$ or $y^{\prime}(0)=1$ for an integrable potential $p_{0}$. Therefore, $\{\lambda_{l,1},\beta_{l,1}\}_{l\geq 1}$ are equivalent to the spectral data $\{\lambda_{l,1},\alpha_{l}\}_{l\geq 1}$ of the classical inverse Sturm-Liouville problem studied by Marchenko [10], Gelfand and Levitan [19], etc. Example 3.4. For $n=3$, (W-3) implies $\{\lambda_{l,1}\}\cap\{\lambda_{l,2}\}=\varnothing$. Hence $$\mathcal{N}(\lambda_{l,1})=\begin{bmatrix}0&0&0\\ \beta_{l,1}&0&0\\ 0&0&0\end{bmatrix},\quad\mathcal{N}(\lambda_{l,2})=\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&\beta_{l,2}&0\end{bmatrix}.$$ The recovery of the coefficients $p=(p_{0},p_{1})\in W$ from the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=1,2}$ has been investigated in [35]. 4 Main equation In this section, we reduce Problem 1.2 to a linear equation in the Banach space $m$ of bounded infinite sequences. First, we deduce an infinite system of linear equations. Second, this system is transformed to achieve the absolute convergence of the series by the method of [22]. Although we rely on the general approach of [34], the construction of the main equation is simplified because of the separation condition (W-3). Consider the two coefficient vectors $p=(p_{k})_{k=0}^{n-2}$ and $\tilde{p}=(\tilde{p}_{k})_{k=0}^{n-2}$ of the class $W$. Note that the differential expression $\tilde{\ell}_{n}(y)$ with the coefficients $\tilde{p}$ has the associated matrix $\tilde{F}(x)$, which can be different from $F(x)$, so the corresponding quasi-derivatives differ. The matrix $\tilde{F}^{\star}(x)$ and the corresponding quasi-derivatives are defined analogously to $F^{\star}(x)$ and (2.6), respectively. 
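Before constructing the main equation, it is instructive to make Example 3.3 fully explicit in the simplest illustrative case $p_{0}\equiv 0$ (a routine check added here for orientation). Then $\mathcal{L}_{1}$ is the Dirichlet problem $y^{\prime\prime}=\lambda y$, $y(0)=y(1)=0$, so that $$\lambda_{l,1}=-(\pi l)^{2},\qquad y_{l}(x)=\frac{\sin\pi lx}{\pi l},\qquad\alpha_{l}=\int_{0}^{1}y_{l}^{2}(x)\,dx=\frac{1}{2\pi^{2}l^{2}},\qquad\beta_{l,1}=\alpha_{l}^{-1}=2\pi^{2}l^{2},$$ in agreement with the asymptotics (1.3) and (3.3) for $n=2$ (with $\beta_{1}=2\pi^{2}$).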
For $k=\overline{1,n}$, denote by $\tilde{\Phi}^{\star}_{k}(x,\lambda)$ the solution of the boundary value problem $$\displaystyle\tilde{\ell}_{n}^{\star}(\tilde{\Phi}_{k}^{\star})=\lambda\tilde{\Phi}_{k}^{\star},\quad x\in(0,1),$$ $$\displaystyle\tilde{\Phi}_{k}^{\star[j-1]}(0,\lambda)=\delta_{k,j},\quad j=\overline{1,k},\qquad\tilde{\Phi}_{k}^{\star[s-1]}(1,\lambda)=0,\quad s=\overline{1,n-k}.$$ Introduce the notations $$\displaystyle V:=\{(l,k,\varepsilon)\colon l\in\mathbb{N},\,k=\overline{1,n-1}.\,\varepsilon=0,1\},$$ $$\displaystyle\lambda_{l,k,0}:=\lambda_{l,k},\quad\lambda_{l,k,1}:=\tilde{\lambda}_{l,k},\quad\beta_{l,k,0}:=\beta_{l,k},\quad\beta_{l,k,1}:=\tilde{\beta}_{l,k},$$ $$\displaystyle\varphi_{l,k,\varepsilon}(x):=\Phi_{k+1}(x,\lambda_{l,k,\varepsilon}),\quad\tilde{\varphi}_{l,k,\varepsilon}(x):=\tilde{\Phi}_{k+1}(x,\lambda_{l,k,\varepsilon}),\quad(l,k,\varepsilon)\in V,$$ (4.1) $$\displaystyle\tilde{G}_{(l,k,\varepsilon),(l_{0},k_{0},\varepsilon_{0})}(x):=(-1)^{n-k+1}\beta_{l,k,\varepsilon}\frac{\langle\tilde{\Phi}^{\star}_{n-k+1}(x,\lambda_{l,k,\varepsilon}),\tilde{\Phi}_{k_{0}+1}(x,\lambda_{l_{0},k_{0},\varepsilon_{0}})\rangle}{\lambda_{l_{0},k_{0},\varepsilon_{0}}-\lambda_{l,k,\varepsilon}}.$$ (4.2) Note that, for each fixed $k\in\{1,2,\dots,n-2\}$, the Weyl solution $\Phi_{k+1}(x,\lambda)$ has the poles $\{\lambda_{l,k+1,0}\}_{l\geq 1}$, which do not coincide with $\{\lambda_{l,k,0}\}_{l\geq 1}$ because of (W-3). Furthermore, the solution $\Phi_{n}(x,\lambda)\equiv\mathcal{C}_{n}(x,\lambda)$ is entire in $\lambda$. For technical convenience, assume that $\{\lambda_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}\cap\{\tilde{\lambda}_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}=\varnothing$. The opposite case requires minor changes. Hence, the functions $\varphi_{l,k,\varepsilon}(x)$ are correctly defined by (4.1), and so do $\tilde{\varphi}_{l,k,\varepsilon}(x)$, $(l,k,\varepsilon)\in V$. It has been shown in [34] that $\tilde{\Phi}^{\star}_{j}(x,\lambda)$ has the poles $\tilde{\lambda}_{l,j}^{\star}=\tilde{\lambda}_{l,n-j-1}$, $l\geq 1$, for $j=\overline{1,n-1}$, and $\tilde{\Phi}^{\star}_{n}(x,\lambda)$ is entire in $\lambda$. Therefore, $\tilde{\Phi}_{n-k+1}^{\star}(x,\lambda_{l,k,\varepsilon})$ is correctly defined and so do $\tilde{G}_{(l,k,\varepsilon),(l_{0},k_{0},\varepsilon_{0})}(x)$ for $(l,k,\varepsilon),(l_{0},k_{0},\varepsilon_{0})\in V$. In [34], the following infinite linear system has been obtained: $$\varphi_{l_{0},k_{0},\varepsilon_{0}}(x)=\tilde{\varphi}_{l_{0},k_{0},\varepsilon_{0}}(x)+\sum_{(l,k,\varepsilon)\in V}(-1)^{\varepsilon}\varphi_{l,k,\varepsilon}(x)\tilde{G}_{(l,k,\varepsilon),(l_{0},k_{0},\varepsilon_{0})}(x),\quad(l_{0},k_{0},\varepsilon_{0})\in V.$$ (4.3) Our next goal is to combine the terms in (4.3) for achieving the absolute convergence of the series. Introduce the numbers $$\xi_{l}:=\sum_{k=1}^{n-1}\left(l^{-(n-1)}|\lambda_{l,k}-\tilde{\lambda}_{l,k}|+l^{-n}|\beta_{l,k}-\tilde{\beta}_{l,k}|\right),\quad l\geq 1,$$ (4.4) which characterize the “distance” between the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ and $\{\tilde{\lambda}_{l,k},\tilde{\beta}_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ of $p$ and $\tilde{p}$, respectively. The asymptotics (1.3) and (3.3) imply that $\{\xi_{l}\}\in l_{2}$. In addition, define the functions $$w_{l,k}(x):=l^{-k}\exp(-xl\cot(k\pi/n)),$$ which characterize the growth of $\varphi_{l,k,\varepsilon}(x)$: $|\varphi_{l,k,\varepsilon}(x)|\leq Cw_{l,k}(x)$ (see Lemma 7 in [34]). 
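For concreteness, the distance sequence (4.4) and the weight functions $w_{l,k}(x)$ can be evaluated as follows. This is a minimal sketch; the case $n=4$, the helper names, and all numerical values are ours, purely for illustration.

```python
import numpy as np

def xi(l, lam, lam_t, beta, beta_t, n):
    """The distance sequence xi_l of (4.4); lam, lam_t, beta, beta_t hold the data
    of the (n-1) problems at index l (arrays of length n-1)."""
    return sum(l ** (-(n - 1)) * abs(lam[k] - lam_t[k])
               + l ** (-n) * abs(beta[k] - beta_t[k]) for k in range(n - 1))

def w(l, k, x, n):
    """Weight function w_{l,k}(x) = l^{-k} exp(-x l cot(k pi / n))."""
    return l ** (-k) * np.exp(-x * l / np.tan(k * np.pi / n))

# Illustrative call for n = 4, l = 10, with made-up perturbed data:
n, l = 4, 10
lam_t  = np.array([1.0e4, -2.0e4, 3.0e4]);  lam  = lam_t + 1.0
beta_t = np.array([2.0e4,  3.0e4, 4.0e4]);  beta = beta_t + 5.0
print(xi(l, lam, lam_t, beta, beta_t, n), w(l, 1, 0.5, n))
```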
Let us pass to the new variables $$\begin{bmatrix}\psi_{l,k,0}(x)\\ \psi_{l,k,1}(x)\end{bmatrix}:=w_{l,k}^{-1}(x)\begin{bmatrix}\xi_{l}^{-1}&-\xi_{l}^{-1}\\ 0&1\end{bmatrix}\begin{bmatrix}\varphi_{l,k,0}(x)\\ \varphi_{l,k,1}(x)\end{bmatrix},$$ (4.5) $$\begin{bmatrix}\tilde{R}_{(l_{0},k_{0},0),(l,k,0)}(x)&\tilde{R}_{(l_{0},k_{0},0),(l,k,1)}(x)\\ \tilde{R}_{(l_{0},k_{0},1),(l,k,0)}(x)&\tilde{R}_{(l_{0},k_{0},1),(l,k,1)}(x)\end{bmatrix}:=\\ \frac{w_{l,k}(x)}{w_{l_{0},k_{0}}(x)}\begin{bmatrix}\xi_{l_{0}}^{-1}&-\xi_{l_{0}}^{-1}\\ 0&1\end{bmatrix}\begin{bmatrix}\tilde{G}_{(l,k,0),(l_{0},k_{0},0)}(x)&\tilde{G}_{(l,k,1),(l_{0},k_{0},0)}(x)\\ \tilde{G}_{(l,k,0),(l_{0},k_{0},1)}(x)&\tilde{G}_{(l,k,1),(l_{0},k_{0},1)}(x)\end{bmatrix}\begin{bmatrix}\xi_{l}&1\\ 0&-1\end{bmatrix}.$$ (4.6) Analogously to $\psi_{l,k,\varepsilon}(x)$, define $\tilde{\psi}_{l,k,\varepsilon}(x)$. For brevity, denote $v=(l,k,\varepsilon)$, $v_{0}=(l_{0},k_{0},\varepsilon_{0})$, $v,v_{0}\in V$. Then, relation (4.3) is transformed into $$\psi_{v_{0}}(x)=\tilde{\psi}_{v_{0}}(x)+\sum_{v\in V}\tilde{R}_{v_{0},v}(x)\psi_{v}(x),\quad v_{0}\in V,$$ (4.7) where $$|\psi_{v}(x)|,|\tilde{\psi}_{v}(x)|\leq C,\quad|\tilde{R}_{v_{0},v}(x)|\leq\frac{C\xi_{l}}{|l-l_{0}|+1},\quad v_{0},v\in V.$$ (4.8) Consider the Banach space $m$ of bounded infinite sequences $a=[a_{v}]_{v\in V}$ with the norm $\|a\|_{m}:=\sup_{v\in V}|a_{v}|$. For each fixed $x\in[0,1]$, define the linear operator $\tilde{R}(x)\colon m\to m$ as follows: $$(\tilde{R}(x)a)_{v_{0}}=\sum_{v\in V}\tilde{R}_{v_{0},v}(x)a_{v},\quad v_{0}\in V.$$ In view of the estimates (4.8), $\psi(x),\tilde{\psi}(x)\in m$ and the operator $\tilde{R}(x)$ is bounded in $m$ for each fixed $x\in[0,1]$. Denote by $I$ the identity operator in $m$. Then, the system (4.7) can be represented as a linear equation in the Banach space $m$: $$(I-\tilde{R}(x))\psi(x)=\tilde{\psi}(x),\quad x\in[0,1].$$ (4.9) Equation (4.9) is called the main equation of Problem 1.2. We have derived (4.9) under the assumption that $\{\lambda_{l,k},\beta_{l,k}\}$ and $\{\tilde{\lambda}_{l,k},\tilde{\beta}_{l,k}\}$ are the spectral data of the two problems with the coefficients $p=(p_{k})_{k=0}^{n-2}$ and $\tilde{p}=(\tilde{p}_{k})_{k=0}^{n-2}$, respectively. Nevertheless, the main equation (4.9) can be used for the reconstruction of $p$ from $\{\lambda_{l,k},\beta_{l,k}\}$. Indeed, one can choose an arbitrary $\tilde{p}\in W$, find $\tilde{\psi}(x)$ and $\tilde{R}(x)$ by using $\tilde{p}$, $\{\tilde{\lambda}_{l,k},\tilde{\beta}_{l,k}\}$, and $\{\lambda_{l,k},\beta_{l,k}\}$, then find $\psi(x)$ by solving the main equation. In order to find $p$ from $\psi(x)$, we need reconstruction formulas, which are obtained in the next section. 5 Reconstruction formulas In this section, we derive formulas for recovering the coefficients $(p_{k})_{k=0}^{n-2}$ from the solution $\psi(x)$ of the main equation (4.9). For the derivation, we use the special structure of the associated matrices $F(x)$ and $F^{\star}(x)$. The arguments of this section are based on formal calculations with infinite series. The convergence of those series will be rigorously studied in the next section.
Find $\{\varphi_{l,k,\varepsilon}(x)\}_{(l,k,\varepsilon)\in V}$ from (4.5): $$\begin{bmatrix}\varphi_{l,k,0}(x)\\ \varphi_{l,k,1}(x)\end{bmatrix}=w_{l,k}(x)\begin{bmatrix}\xi_{l}&1\\ 0&1\end{bmatrix}\begin{bmatrix}\psi_{l,k,0}(x)\\ \psi_{l,k,1}(x)\end{bmatrix}$$ (5.1) Then, we can recover the Weyl solutions for $k_{0}=\overline{1,n}$: $$\Phi_{k_{0}}(x,\lambda):=\tilde{\Phi}_{k_{0}}(x,\lambda)+\sum_{(l,k,\varepsilon)\in V}(-1)^{\varepsilon+n-k+1}\beta_{l,k,\varepsilon}\varphi_{l,k,\varepsilon}(x)\frac{\langle\tilde{\Phi}_{n-k+1}^{\star}(x,\lambda_{l,k,\varepsilon}),\tilde{\Phi}_{k_{0}+1}(x,\lambda)\rangle}{\lambda-\lambda_{l,k,\varepsilon}}.$$ (5.2) Formally applying the differential expression $\ell_{n}(y)$ to the left- and the right-hand sides of (5.2), after some transforms (see [34, Section 4.1]), we arrive at the relation $$\displaystyle\sum_{(l,k,\varepsilon)\in V}(-1)^{\varepsilon}\varphi_{l,k,\varepsilon}(x)\langle\tilde{\eta}_{l,k,\varepsilon}(x),\tilde{\Phi}_{k_{0}}(x,\lambda)\rangle=$$ $$\displaystyle\sum_{s=0}^{n-2}\hat{p}_{s}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda)+\sum_{s=0}^{n-1}t_{n,s}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda)$$ $$\displaystyle+\sum_{s=0}^{n-3}\sum_{r=s+1}^{n-2}p_{r}(x)t_{r,s}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda),$$ (5.3) where $\hat{p}_{s}:=p_{s}-\tilde{p}_{s}$, $$\displaystyle\tilde{\eta}_{l,k,\varepsilon}(x)$$ $$\displaystyle:=(-1)^{n-k+1}\beta_{l,k,\varepsilon}\tilde{\Phi}^{\star}_{n-k+1}(x,\lambda_{l,k,\varepsilon}),$$ (5.4) $$\displaystyle t_{r,s}(x)$$ $$\displaystyle:=\sum_{u=s}^{r-1}C_{r}^{u+1}C_{u}^{s}T_{r-u-1,u-s}(x),$$ (5.5) $$\displaystyle T_{j_{1},j_{2}}(x)$$ $$\displaystyle:=\sum_{(l,k,\varepsilon)\in V}(-1)^{\varepsilon}\varphi_{l,k,\varepsilon}^{(j_{1})}(x)\tilde{\eta}_{l,k,\varepsilon}^{(j_{2})}(x).$$ (5.6) Note that, for $(l,k,\varepsilon)\in V$, the functions $\tilde{\eta}_{l,k,\varepsilon}(x)$ are solutions of the equation $\tilde{\ell}^{\star}(z)=\lambda_{l,k,\varepsilon}z$, so the quasi-derivatives for them are generated by the matrix function $\tilde{F}^{\star}(x)$. 
Thus, relation (2.14) for the Lagrange bracket in (5.3) implies $$\displaystyle\langle\tilde{\eta}_{l,k,\varepsilon}(x),\tilde{\Phi}_{k_{0}}(x,\lambda)\rangle=$$ $$\displaystyle\sum_{s=0}^{n-1}(-1)^{n-s-1}\tilde{\eta}_{l,k,\varepsilon}^{(n-s-1)}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda)$$ $$\displaystyle+\sum_{s=0}^{n-3}\left(\sum_{j=0}^{n-s-3}\sum_{r=j}^{n-s-3}(-1)^{r}C_{r}^{j}\tilde{p}_{s+r+1}^{(r-j)}(x)\tilde{\eta}_{l,k,\varepsilon}^{(j)}(x)\right)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda).$$ (5.7) Substituting (5.7) into (5.3) and using (5.6), we obtain $$\displaystyle\sum_{s=0}^{n-1}(-1)^{n-s-1}T_{0,n-s-1}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda)+\sum_{s=0}^{n-3}\left(\sum_{j=0}^{n-s-3}\sum_{r=j}^{n-s-3}(-1)^{r}C_{r}^{j}\tilde{p}_{s+r+1}^{(r-j)}(x)T_{0,j}(x)\right)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda)$$ $$\displaystyle=\sum_{s=0}^{n-2}\hat{p}_{s}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda)+\sum_{s=0}^{n-1}t_{n,s}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda)+\sum_{s=0}^{n-3}\sum_{r=s+1}^{n-2}p_{r}(x)t_{r,s}(x)\tilde{\Phi}_{k_{0}}^{(s)}(x,\lambda).$$ (5.8) Let us group the terms at $\Phi_{k_{0}}^{(s)}(x,\lambda)$ and assume that the corresponding left- and right-hand sides are equal to each other: $$\displaystyle\Phi_{k_{0}}^{(n-1)}(x,\lambda)\colon\quad$$ $$\displaystyle T_{0,0}(x)=T_{0,0}(x),$$ $$\displaystyle\Phi_{k_{0}}^{(n-2)}(x,\lambda)\colon\quad$$ $$\displaystyle-T_{0,1}(x)=\hat{p}_{n-2}(x)+t_{n,n-2}(x),$$ $$\displaystyle\Phi_{k_{0}}^{(s)}(x,\lambda)\colon\quad$$ $$\displaystyle(-1)^{n-s-1}T_{0,n-s-1}(x)+\sum_{j=0}^{n-s-3}\sum_{r=j}^{n-s-3}(-1)^{r}C_{r}^{j}\tilde{p}_{s+r+1}^{(r-j)}(x)T_{0,j}(x)$$ $$\displaystyle=\,\hat{p}_{s}(x)+t_{n,s}(x)+\sum_{r=s+1}^{n-2}p_{r}(x)t_{r,s}(x),\quad s=\overline{0,n-3}.$$ From here, we obtain the reconstruction formulas $$\displaystyle p_{s}(x)=$$ $$\displaystyle\,\tilde{p}_{s}(x)-\left(t_{n,s}(x)+(-1)^{n-s}T_{0,n-s-1}(x)\right)$$ $$\displaystyle+\sum_{j=0}^{n-s-3}\sum_{r=j}^{n-s-3}(-1)^{r}C_{r}^{j}\tilde{p}_{r+s+1}^{(r-j)}(x)T_{0,j}(x)-\sum_{r=s+1}^{n-2}p_{r}(x)t_{r,s}(x),$$ (5.9) for $s=n-2,n-3,\dots,1,0$. Thus, we arrive at the following constructive procedure for solving Problem 1.2. Procedure 5.1. Suppose that the spectral data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ are given. We have to find the coefficients $p=(p_{k})_{k=0}^{n-2}$. 1. Choose $\tilde{p}=(\tilde{p}_{k})_{k=0}^{n-2}\in W$ and find the spectral data $\{\tilde{\lambda}_{l,k},\tilde{\beta}_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$. 2. Find the Weyl solutions $\tilde{\varphi}_{l,k,\varepsilon}(x)=\tilde{\Phi}_{k+1}(x,\lambda_{l,k,\varepsilon})$ and $\tilde{\Phi}_{n-k+1}^{\star}(x,\lambda_{l,k,\varepsilon})$ for $(l,k,\varepsilon)\in V$, and construct $\tilde{G}_{(l,k,\varepsilon),(l_{0},k_{0},\varepsilon_{0})}(x)$ by (4.2). 3. Construct $\tilde{\psi}_{v}(x)$ for $v\in V$ by (4.5) and $\tilde{R}_{v_{0},v}(x)$ for $v_{0},v\in V$ by (4.6). 4. Find $\psi_{v}(x)$, $v\in V$, by solving the main equation (4.9). 5. For $(l,k,\varepsilon)\in V$, determine $\varphi_{l,k,\varepsilon}(x)$ by (5.1) and $\tilde{\eta}_{l,k,\varepsilon}(x)$ by (5.4). 6. For $s=n-2,n-3,\dots,1,0$, find $p_{s}(x)$ by formula (5.9), in which $t_{r,s}(x)$ and $T_{j_{1},j_{2}}(x)$ are defined by (5.5) and (5.6), respectively. Procedure 5.1 will be used in the next section for proving Theorem 1.3. In general, there is a challenge to choose a model problem $\tilde{p}$ so that the series for $p_{s}$ converge in the corresponding spaces. 
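Although Procedure 5.1 operates with the full infinite index set $V$, a numerical realization would truncate $V$ to the triples with $l\leq N$ and solve the resulting finite system in place of (4.9); this is also the idea behind the approximation argument used in Section 6. The following sketch is schematic only: it assumes that the values $\tilde{\psi}_{v}(x)$ and $\tilde{R}_{v_{0},v}(x)$ at a fixed $x$ have already been computed and stored in arrays, and the function name is ours, not from the paper.

import numpy as np

def solve_truncated_main_equation(psi_model, R_model):
    # Step 4 of Procedure 5.1 after truncation: solve (I - R~(x)) psi(x) = psi~(x).
    # psi_model : array of shape (M,)   -- values of psi~_v(x) at a fixed x
    # R_model   : array of shape (M, M) -- values of R~_{v0,v}(x) at the same x
    M = len(psi_model)
    return np.linalg.solve(np.eye(M) - R_model, psi_model)

Steps 5 and 6 would then be carried out pointwise on a grid in $x$, with the series (5.5), (5.6), and (5.9) replaced by the corresponding finite sums.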
Note that steps 1–5 works for any $\tilde{p}\in W$, since for them the estimate $\{\xi_{l}\}\in l_{2}$ is sufficient. But the situation differs for step 6. In Section 6, we prove the validity of step 6 in the case $\{l^{n-2}\xi_{l}\}\in l_{2}$. 6 Solvability and stability In this section, we prove the following theorem on the solvability of Problem 1.2. Theorem 6.1. Let complex numbers $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ and coefficients $\tilde{p}=(\tilde{p}_{k})_{k=0}^{n-2}\in W$ be such that: (S-1) For each $k=\overline{1,n-1}$, the numbers $\{\lambda_{l,k}\}_{l\geq 1}$ are distinct. (S-2) $\{\lambda_{l,k}\}_{l\geq 1}\cap\{\lambda_{l,k+1}\}_{l\geq 1}=\varnothing$ for $k=\overline{1,n-2}$. (S-3) $\beta_{l,k}\neq 0$ for all $l\geq 1$, $k=\overline{1,n-1}$. (S-4) $\{l^{n-2}\xi_{l}\}_{l\geq 1}\in l_{2}$, where the numbers $\xi_{l}$ are defined in (4.4). (S-5) The operator $(I-\tilde{R}(x))$, which is constructed by using $\{\lambda_{l,k},\beta_{l,k}\}$ and $\tilde{p}$ according to Section 4, has a bounded inverse operator for each fixed $x\in[0,1]$. Then, $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ are the spectral data of some (unique) $p=(p_{k})_{k=0}^{n-2}\in W$. Theorem 6.1 provides sufficient conditions for global solvability of the inverse problem. Theorem 1.3 on local solvability and stability will be obtained as a corollary of Theorem 6.1. Thus, Theorem 6.1 plays an auxiliary role in this paper but also has a separate significance. The proof of Theorem 6.1 is based on Procedure 5.1. We investigate the properties of the solution $\psi(x)$ of the main equation and prove the convergence of the series in (5.6) and (5.9) in the corresponding spaces of regular and generalized functions. This part of the proofs is the most difficult one, since the series converge in different spaces and precise estimates for the Weyl solutions are needed. Finally, we show that the numbers $\{\lambda_{l,k},\beta_{l,k}\}$ satisfying the conditions of Theorem 6.1 are the spectral data of the coefficients $p=(p_{k})_{k=0}^{n-2}$ reconstructed by formulas (5.9). In the end of this section, we prove Theorem 1.3. Proceed to the proof of Theorem 6.1. Let $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ and $\tilde{p}$ satisfy the hypotheses (S-1)–(S-5). We emphasize that $\{\lambda_{l,k},\beta_{l,k}\}$ are not necessarily the spectral data corresponding to some $p$. We have to prove this. By virtue of (S-5), the operator $(I-\tilde{R}(x))$ has a bounded inverse. Therefore, the main equation (4.9) is uniquely solvable in $m$ for each fixed $x\in[0,1]$. Consider its solution $\psi(x)=[\psi_{v}(x)]_{v\in V}$. Recover the functions $\varphi_{l,k,\varepsilon}(x)$ for $(l,k,\varepsilon)\in V$ by (5.1). Let us study their properties. For this purpose, we need the auxiliary estimates for $\tilde{\Phi}_{k}(x,\lambda)$ and $\tilde{\Phi}^{\star}_{k}(x,\lambda)$, which were deduced from the results of [32] and used in Section 4.2 of [34]. Proposition 6.2 ([32, 34]). 
For $(l,k,\varepsilon)\in V$, $x\in[0,1]$, and $\nu=\overline{0,n-1}$, the following estimates hold: $$\displaystyle|\tilde{\Phi}_{k+1}^{[\nu]}(x,\lambda_{l,k,\varepsilon})|\leq Cl^{\nu}w_{l,k}(x),\quad|\tilde{\Phi}_{k+1}^{[\nu]}(x,\lambda_{l,k,0})-\tilde{\Phi}_{k+1}^{[\nu]}(x,\lambda_{l,k,1})|\leq Cl^{\nu}w_{l,k}(x)\xi_{l},$$ $$\displaystyle|\tilde{\Phi}_{n-k+1}^{\star[\nu]}(x,\lambda_{l,k,\varepsilon})|\leq Cl^{\nu-n}w_{l,k}^{-1}(x),\quad|\tilde{\Phi}_{n-k+1}^{\star[\nu]}(x,\lambda_{l,k,0})-\tilde{\Phi}_{n-k+1}^{\star[\nu]}(x,\lambda_{l,k,1})|\leq Cl^{\nu-n}w_{l,k}^{-1}(x)\xi_{l},$$ where $w_{l,k}(x):=l^{-k}\exp(-xl\cot(k\pi/n))$. Comparing (1.4) to (4.4), we conclude that $$\Omega=\left(\sum_{l=1}^{\infty}(l^{n-2}\xi_{l})^{2}\right)^{1/2}.$$ (6.1) Lemma 6.3. For $(l,k,\varepsilon)\in V$, we have $\varphi_{l,k,\varepsilon}\in C^{n-2}[0,1]$ and $$\displaystyle|\varphi_{l,k,\varepsilon}^{(\nu)}(x)|\leq Cl^{\nu}w_{l,k}(x),\quad|\varphi_{l,k,0}^{(\nu)}(x)-\varphi_{l,k,1}^{(\nu)}(x)|\leq Cl^{\nu}w_{l,k}(x)\xi_{l},\quad\nu=\overline{0,n-2},$$ $$\displaystyle|\varphi_{l,k,\varepsilon}(x)-\tilde{\varphi}_{l,k,\varepsilon}(x)|\leq C\Omega w_{l,k}(x)\chi_{l},$$ $$\displaystyle|\varphi_{l,k,0}(x)-\varphi_{l,k,1}(x)-\tilde{\varphi}_{l,k,0}(x)+\tilde{\varphi}_{l,k,1}(x)|\leq C\Omega w_{l,k}(x)\chi_{l}\xi_{l},$$ $$\displaystyle\left.\begin{array}[]{c}|\varphi_{l,k,\varepsilon}^{(\nu)}(x)-\tilde{\varphi}_{l,k,\varepsilon}^{(\nu)}(x)|\leq C\Omega l^{\nu-1}w_{l,k}(x),\\ |\varphi_{l,k,0}^{(\nu)}(x)-\varphi_{l,k,1}^{(\nu)}-\tilde{\varphi}_{l,k,0}^{(\nu)}(x)+\tilde{\varphi}_{l,k,1}^{(\nu)}(x)|\leq C\Omega l^{\nu-1}w_{l,k}(x)\xi_{l},\end{array}\right\}\quad\nu=\overline{1,n-2},$$ where $x\in[0,1]$ and $$\chi_{l}:=\left(\sum_{k=1}^{\infty}\frac{1}{k^{2}(|l-k|+1)^{2}}\right)^{1/2},\quad\{\chi_{l}\}_{l\geq 1}\in l_{2}.$$ Proof. First, let us investigate the smoothness of the functions $\tilde{\psi}_{v}(x)$ and $\tilde{R}_{v_{0},v}(x)$ in the main equation. Recall that $\{\tilde{\Phi}_{k}(x,\lambda)\}_{k=1}^{n}$ and $\{\tilde{\Phi}_{k}^{\star}(x,\lambda)\}_{k=1}^{n}$ are solutions of equations $\tilde{\ell}(y)=\lambda y$ and $\tilde{\ell}^{\star}(z)=\lambda z$, respectively. Hence $$\tilde{\Phi}_{k}(.,\lambda)\in\mathcal{D}_{\tilde{F}}\subset W_{1}^{n-1}[0,1],\quad\tilde{\Phi}_{k}^{\star}(.,\lambda)\in\mathcal{D}_{\tilde{F}^{\star}}\subset W_{1}^{n-1}[0,1].$$ (6.2) Then, due to (4.1) and (4.5), we get $\tilde{\psi}_{v}\in W_{1}^{n-1}[0,1]\subset C^{n-2}[0,1]$ for $v\in V$. Next, applying (2.13) to (4.2), we obtain $$\tilde{G}^{\prime}_{(l,k,\varepsilon),(l_{0},k_{0},\varepsilon_{0})}(x)=(-1)^{n-k+1}\beta_{l,k,\varepsilon}\tilde{\Phi}^{\star}_{n-k+1}(x,\lambda_{l,k,\varepsilon})\tilde{\Phi}_{k_{0}+1}(x,\lambda_{l_{0},k_{0},\varepsilon_{0}})\in W_{1}^{n-1}[0,1].$$ (6.3) Using (6.3) together with (4.6), we conclude that $\tilde{R}_{v_{0},v}\in W_{1}^{n}[0,1]\subset C^{n-1}[0,1]$. Moreover, using the estimates of Proposition 6.2, we obtain $$|\tilde{\varphi}^{[\nu]}_{l,k,\varepsilon}(x)|\leq Cl^{\nu}w_{l,k}(x),\quad|\tilde{\varphi}^{[\nu]}_{l,k,0}(x)-\tilde{\varphi}^{[\nu]}_{l,k,1}(x)|\leq Cl^{\nu}w_{l,k}(x)\xi_{l},$$ (6.4) for $\nu=\overline{0,n-1}$, $(l,k,\varepsilon)\in V$. Recall that $\tilde{\varphi}^{[\nu]}_{l,k,\varepsilon}(x)=\tilde{\varphi}^{(\nu)}_{l,k,\varepsilon}(x)$ for $\nu=\overline{0,n-2}$. 
Then, using (4.2), (4.5), (4.6), (6.3), and (6.4), we arrive at the estimates $$|\tilde{\psi}_{v}^{(\nu)}(x)|\leq Cl^{\nu},\quad|\tilde{R}_{v_{0},v}^{(\nu)}(x)|\leq C(l+l_{0})^{\nu-1}\xi_{l},\quad v_{0},v\in V,\quad\nu=\overline{1,n-2},$$ (6.5) and (4.8) for $\nu=0$. These estimates coincide with the ones for the case of regular coefficients (see formulas (2.3.40) in [22]). Therefore, the remaining part of the proof almost repeats the proof of Lemma 1.6.7 in [22], so we omit the technical details. By differentiating the relation $\psi(x)=(I-\tilde{R}(x))^{-1}\tilde{\psi}(x)$ and analyzing the convergence of the obtained series, we prove the following properties of $\psi(x)=[\psi_{v}(x)]_{v\in V}$: $$\displaystyle\psi_{v}\in C^{n-2}[0,1],\quad|\psi_{v}^{(\nu)}(x)|\leq Cl^{\nu},\quad\nu=\overline{0,n-2},$$ $$\displaystyle|\psi_{v}(x)-\tilde{\psi}_{v}(x)|\leq C\Omega\chi_{l},\quad|\psi_{v}^{(\nu)}(x)-\tilde{\psi}_{v}^{(\nu)}(x)|\leq C\Omega l^{\nu-1},\quad\nu=\overline{1,n-2}.$$ Using the latter estimates together with (5.1), we readily arrive at the claimed estimates for $\varphi_{l,k,\varepsilon}(x)$. ∎ Analogous estimates can be obtained for $\tilde{\eta}_{l,k,\varepsilon}(x)$ defined by (5.4). Lemma 6.4. For $(l,k,\varepsilon)\in V$, we have $\eta_{l,k,\varepsilon}\in C^{n-2}[0,1]$ and $$|\tilde{\eta}_{l,k,\varepsilon}^{(\nu)}(x)|\leq Cl^{\nu}w_{l,k}^{-1}(x),\quad|\tilde{\eta}_{l,k,0}^{(\nu)}(x)-\tilde{\eta}_{l,k,1}^{(\nu)}(x)|\leq Cl^{\nu}w_{l,k}^{-1}(x)\xi_{l},\quad\nu=\overline{0,n-2}.$$ Proof. The assertion $\tilde{\eta}_{l,k,\varepsilon}\in C^{n-2}[0,1]$ follows from (5.4) and (6.2). The estimates of Proposition 6.2 together with (5.4) imply $$|\tilde{\eta}_{l,k,\varepsilon}^{[\nu]}(x)|\leq Cl^{\nu}w_{l,k}^{-1}(x),\quad|\tilde{\eta}_{l,k,0}^{[\nu]}(x)-\tilde{\eta}_{l,k,1}^{[\nu]}(x)|\leq Cl^{\nu}w_{l,k}^{-1}(x)\xi_{l},\quad\nu=\overline{0,n-2}.$$ (6.6) Note that the quasi-derivatives for $\tilde{\eta}_{l,k,\varepsilon}(x)$ are generated by the matrix $\tilde{F}^{\star}(x)$. Hence, the relation similar to (2.10) is valid for them: $$\tilde{\eta}_{l,k,\varepsilon}^{[\nu]}(x)=\tilde{\eta}_{l,k,\varepsilon}^{(\nu)}(x)+\sum_{j=0}^{\nu-2}\left(\sum_{s=j}^{\nu-2}(-1)^{s+\nu}C_{s}^{j}\tilde{p}_{n-\nu+s}^{(s-j)}(x)\right)\tilde{\eta}_{l,k,\varepsilon}^{(j)}(x),\quad\nu=\overline{0,n-2}.$$ (6.7) Since $\tilde{p}_{k}\in W_{2}^{k-1}[0,1]$ for $k=\overline{0,n-2}$, we get that all the derivatives $\tilde{p}_{n-\nu+s}^{(s-j)}$ in (6.7) belong to $W_{1}^{1}[0,1]$, so they are bounded. Therefore, relation (6.7) implies the estimates similar to (6.6) for the classical derivatives $\tilde{\eta}_{l,k,\varepsilon}^{(\nu)}(x)$. ∎ Proceed to the investigation of the convergence for the series $t_{r,s}(x)$ and $T_{j_{1},j_{2}}(x)$ in the reconstruction formulas (5.9). We rely on Proposition 6.5, which (due to our notations) readily follows from Lemma 8 in [34] and its proof. Proposition 6.5. 1. 
If $j_{1}+j_{2}=n-2$, then there exist regularization constants $a_{j_{1},j_{2},l,k}$ such that the series $$\mathscr{T}_{j_{1},j_{2}}^{reg}(x):=\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}\bigl{(}\tilde{\varphi}_{l,k,0}^{[j_{1}]}(x)\tilde{\eta}_{l,k,0}^{[j_{2}]}(x)-\tilde{\varphi}_{l,k,1}^{[j_{1}]}(x)\tilde{\eta}^{[j_{2}]}_{l,k,1}(x)-a_{j_{1},j_{2},l,k}\bigr{)}$$ converges in $L_{2}[0,1]$, $\|\mathscr{T}_{j_{1},j_{2}}^{reg}\|_{L_{2}[0,1]}\leq C\Omega$, and $a_{j_{1},j_{2},l,k}+a_{j_{1}+1,j_{2}-1,l,k}=0$, so the series $$\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}\bigl{(}\tilde{\varphi}_{l,k,0}^{[j_{1}]}(x)\tilde{\eta}_{l,k,0}^{[j_{2}]}(x)-\tilde{\varphi}_{l,k,1}^{[j_{1}]}(x)\tilde{\eta}^{[j_{2}]}_{l,k,1}(x)+\tilde{\varphi}_{l,k,0}^{[j_{1}+1]}(x)\tilde{\eta}_{l,k,0}^{[j_{2}-1]}(x)-\tilde{\varphi}_{l,k,1}^{[j_{1}+1]}(x)\tilde{\eta}^{[j_{2}-1]}_{l,k,1}(x)\bigr{)}$$ converges in $L_{2}[0,1]$ without regularization. 2. If $j_{1}+j_{2}<n-2$, then the series $$\mathscr{T}_{j_{1},j_{2}}(x):=\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}\bigl{(}\tilde{\varphi}_{l,k,0}^{[j_{1}]}(x)\tilde{\eta}_{l,k,0}^{[j_{2}]}(x)-\tilde{\varphi}_{l,k,1}^{[j_{1}]}(x)\tilde{\eta}^{[j_{2}]}_{l,k,1}(x)\bigr{)}$$ converges absolutely and uniformly on $[0,1]$. Moreover, $\max_{x\in[0,1]}|\mathscr{T}_{j_{1},j_{2}}(x)|\leq C\Omega$. The constants $a_{j_{1},j_{2},l,k}$ are explicitly found in the proof of Lemma 8 of [34]. However, we do not provide them here in order not to introduce many additional notations. Moreover, explicit formulas for $a_{j_{1},j_{2},l,k}$ are not needed in the proofs. Below, similarly to the series in Proposition 6.5, we consider the series $T_{j_{1},j_{2}}(x)$ with the brackets: $$T_{j_{1},j_{2}}(x)=\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}\bigl{(}\varphi_{l,k,0}^{(j_{1})}(x)\tilde{\eta}_{l,k,0}^{(j_{2})}(x)-\varphi_{l,k,1}^{(j_{1})}(x)\tilde{\eta}^{(j_{2})}_{l,k,1}(x)\bigr{)}.$$ Moreover, we agree that we understand the summation of several series $T_{j_{1},j_{2}}(x)$ (in particular, in (5.5) and in (5.9)) in the sense $$\sum_{(l,k,\varepsilon)\in V}b_{l,k,\varepsilon}+\sum_{(l,k,\varepsilon)\in V}c_{l,k,\varepsilon}=\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}(b_{l,k,0}+b_{l,k,1}+c_{l,k,0}+c_{l,k,1}).$$ (6.8) Using Lemmas 6.3, 6.4 and Proposition 6.5, we get the lemma on the convergence of $T_{j_{1},j_{2}}(x)$. Lemma 6.6. 1. If $j_{1}+j_{2}=n-2$, then the series $T_{j_{1},j_{2}}(x)$ converges in $L_{2}[0,1]$ with the regularization constants $a_{j_{1},j_{2},l,k}$ from Proposition 6.5, that is, the series $$T_{j_{1},j_{2}}^{reg}(x):=\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}\bigl{(}\varphi_{l,k,0}^{(j_{1})}(x)\tilde{\eta}_{l,k,0}^{(j_{2})}(x)-\varphi_{l,k,1}^{(j_{1})}(x)\tilde{\eta}^{(j_{2})}_{l,k,1}(x)-a_{j_{1},j_{2},l,k}\bigr{)}$$ converges in $L_{2}[0,1]$. Moreover, $$\|T_{j_{1},j_{2}}^{reg}(x)\|_{L_{2}[0,1]}\leq C\Omega.$$ (6.9) 2. If $j_{1}+j_{2}=n-2-s$ and $s\in\{1,2,\dots,n-2\}$, then $T_{j_{1},j_{2}}(x)$ converges in $W_{2}^{s}[0,1]$ and $\|T_{j_{1},j_{2}}(x)\|_{W_{2}^{s}[0,1]}\leq C\Omega$. Proof. Observe that, for $j_{1}+j_{2}\leq n-2$, we have $\tilde{\varphi}_{l,k,\varepsilon}^{[j_{1}]}(x)=\tilde{\varphi}_{l,k,\varepsilon}^{(j_{1})}(x)$ and $\tilde{\eta}_{l,k,\varepsilon}^{[j_{2}]}(x)$ satisfies (6.7). 
Therefore, using the induction by $j_{1}+j_{2}=0,1,\dots,n-2$, we get from Proposition 6.5 that, for $j_{1}+j_{2}<n-2$, the series $$\tilde{T}_{j_{1},j_{2}}(x):=\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}(\tilde{\varphi}_{l,k,0}^{(j_{1})}(x)\tilde{\eta}_{l,k,0}^{(j_{2})}(x)-\tilde{\varphi}_{l,k,1}^{(j_{1})}\tilde{\eta}_{l,k,1}^{(j_{2})}(x))$$ converges absolutely and uniformly on $[0,1]$ and, for $j_{1}+j_{2}=n-2$, it converges in $L_{2}[0,1]$ with regularization, in other words, the series $$\tilde{T}_{j_{1},j_{2}}^{reg}(x):=\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}(\tilde{\varphi}_{l,k,0}^{(j_{1})}(x)\tilde{\eta}_{l,k,0}^{(j_{2})}(x)-\tilde{\varphi}_{l,k,1}^{(j_{1})}\tilde{\eta}_{l,k,1}^{(j_{2})}(x)-a_{j_{1},j_{2},l,k})$$ converges in $L_{2}[0,1]$. The regularization constants $a_{j_{1},j_{2},l,k}$ are the same as in Proposition 6.5, because the difference $\mathscr{T}_{j_{1},j_{2}}^{reg}(x)-\tilde{T}_{j_{1},j_{2}}^{reg}(x)$ for $j_{1}+j_{2}=n-2$ can be represented as a linear combination of several series $T_{i_{1},i_{2}}(x)$ with lower powers ($i_{1}+i_{2}<n-2$), which converge absolutely and uniformly on $[0,1]$. The addition and subtraction of series throughout this proof are understood in the sense (6.8). Moreover, the corresponding estimates hold $$\displaystyle\max_{x\in[0,1]}|\tilde{T}_{j_{1},j_{2}}(x)|\leq C\Omega,$$ $$\displaystyle\quad j_{1}+j_{2}<n-2,$$ (6.10) $$\displaystyle\|\tilde{T}_{j_{1},j_{2}}^{reg}(x)\|_{L_{2}[0,1]}\leq C\Omega,$$ $$\displaystyle\quad j_{1}+j_{2}=n-2.$$ (6.11) Consider the series $$\displaystyle T_{j_{1},j_{2}}(x)-\tilde{T}_{j_{1},j_{2}}(x)=$$ $$\displaystyle\sum_{(l,k,\varepsilon)\in V}(\varphi_{l,k,\varepsilon}^{(j_{1})}(x)-\tilde{\varphi}_{l,k,\varepsilon}^{(j_{1})}(x))\tilde{\eta}_{l,k,\varepsilon}^{(j_{2})}(x)$$ $$\displaystyle=$$ $$\displaystyle\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}(\varphi_{l,k,0}^{(j_{1})}(x)-\varphi_{l,k,1}^{(j_{1})}(x)-\tilde{\varphi}_{l,k,0}^{(j_{1})}(x)+\tilde{\varphi}_{l,k,1}^{(j_{1})}(x))\tilde{\eta}_{l,k,0}^{(j_{2})}(x)$$ $$\displaystyle+\sum_{l=1}^{\infty}\sum_{k=1}^{n-1}(\varphi_{l,k,1}^{(j_{1})}(x)-\tilde{\varphi}_{l,k,1}^{(j_{1})}(x))(\tilde{\eta}_{l,k,0}^{(j_{2})}(x)-\tilde{\eta}_{l,k,1}^{(j_{2})}(x)).$$ Let us apply the estimates of Lemmas 6.3 and 6.4 for $\varphi_{l,k,\varepsilon}^{(j_{1})}(x)$ and $\tilde{\eta}_{l,k,\varepsilon}^{(j_{2})}(x)$, respectively. We have the two cases: $$\displaystyle j_{1}=0\colon\quad$$ $$\displaystyle|T_{j_{1},j_{2}}(x)-\tilde{T}_{j_{1},j_{2}}(x)|\leq C\Omega\sum_{l=1}^{\infty}l^{j_{2}}\xi_{l}\chi_{l},$$ $$\displaystyle j_{1}>0\colon\quad$$ $$\displaystyle|T_{j_{1},j_{2}}(x)-\tilde{T}_{j_{1},j_{2}}(x)|\leq C\Omega\sum_{l=1}^{\infty}l^{j_{1}+j_{2}-1}\xi_{l}$$ Since $\{l^{n-2}\xi_{l}\}\in l_{2}$ and $\{\chi_{l}\}\in l_{2}$, we conclude that the series $T_{j_{1},j_{2}}(x)-\tilde{T}_{j_{1},j_{2}}(x)$ in the both cases converges absolutely and uniformly on $[0,1]$ for $j_{1}+j_{2}\leq n-2$. Moreover, for the difference $T_{j_{1},j_{2}}(x)-\tilde{T}_{j_{1},j_{2}}(x)$, the estimates similar to (6.10) and (6.11) hold. For $j_{1}+j_{2}=n-2$, this readily implies the convergences of the series $$T_{j_{1},j_{2}}^{reg}(x)=\tilde{T}_{j_{1},j_{2}}^{reg}(x)+(T_{j_{1},j_{2}}(x)-\tilde{T}_{j_{1},j_{2}}(x))$$ in $L_{2}[0,1]$ and the estimate (6.9). Now, let $j_{1}+j_{2}=n-3$. 
Formal differentiation together with the summation rule (6.8) implies $$T^{\prime}_{j_{1},j_{2}}(x)=T_{j_{1}+1,j_{2}}(x)+T_{j_{1},j_{2}+1}(x).$$ As we have already shown, the series $T_{j_{1}+1,j_{2}}(x)$ and $T_{j_{1},j_{2}+1}(x)$ converge in $L_{2}[0,1]$ with regularization and satisfy the estimate (6.9). Moreover, for their sum, the regularization constants vanish: $a_{j_{1}+1,j_{2},l,k}+a_{j_{1},j_{2}+1,l,k}=0$, so the series $T^{\prime}_{j_{1},j_{2}}(x)$ converges in $L_{2}[0,1]$ without regularization. Consequently, $T_{j_{1},j_{2}}\in W_{2}^{1}[0,1]$ and $$\displaystyle\|T_{j_{1},j_{2}}(x)\|_{W_{2}^{1}[0,1]}$$ $$\displaystyle=\|T_{j_{1},j_{2}}(x)\|_{L_{2}[0,1]}+\|T_{j_{1},j_{2}}^{\prime}(x)\|_{L_{2}[0,1]}$$ $$\displaystyle\leq\max_{x\in[0,1]}|T_{j_{1},j_{2}}(x)|+\|T_{j_{1}+1,j_{2}}^{reg}(x)\|_{L_{2}[0,1]}+\|T_{j_{1},j_{2}+1}^{reg}(x)\|_{L_{2}[0,1]}\leq C\Omega.$$ Thus, the lemma is already proved for $j_{1}+j_{2}=n-2$ and $j_{1}+j_{2}=n-3$. By induction, we complete the proof for $j_{1}+j_{2}=n-4,\dots,1,0$. ∎ We are now ready to investigate the convergence of the series in the reconstruction formulas (5.9). Lemma 6.7. The reconstruction formulas (5.9) define the functions $p_{s}\in W_{2}^{s-1}[0,1]$ for $s=\overline{0,n-2}$ and $\|p_{s}(x)-\tilde{p}_{s}(x)\|_{W_{2}^{s-1}[0,1]}\leq C\Omega$. Proof. We prove the lemma by induction. Fix $s\in\{0,1,\dots,n-2\}$. Suppose that the assertion of the lemma is already proved for all $s_{1}>s$. Due to (5.5), we have $$\displaystyle\mathscr{S}_{1}(x)$$ $$\displaystyle:=t_{n,s}(x)+(-1)^{n-s}T_{0,n-s-1}(x)=\sum_{j=0}^{n-s-1}b_{j}T_{n-s-1-j,j}(x)$$ $$\displaystyle=\frac{d}{dx}\left(\sum_{j=0}^{n-s-2}d_{j}T_{n-s-2-j,j}(x)\right),$$ where $$\displaystyle b_{j}:=C_{n}^{j+s+1}C_{j+s}^{s}+(-1)^{n-s}\delta_{j,n-s-1},\quad\sum\limits_{j=0}^{n-s-1}b_{j}=0,$$ $$\displaystyle d_{j}:=\sum_{i=0}^{j}(-1)^{j-i}b_{i},\quad j=\overline{0,n-s-2}.$$ By virtue of Lemma 6.6, the series $T_{n-s-2-j,j}$ for $j=\overline{0,n-s-2}$ belong to $W_{2}^{s}[0,1]$ for $s\geq 1$ and converge with regularization in $L_{2}[0,1]$ for $s=0$. In both cases, we conclude that $\mathscr{S}_{1}\in W_{2}^{s-1}[0,1]$. Furthermore, $\|\mathscr{S}_{1}(x)\|_{W_{2}^{s-1}[0,1]}\leq C\Omega$. Consider the next term in (5.9): $$\mathscr{S}_{2}(x):=\sum_{j=0}^{n-s-3}\sum_{r=j}^{n-s-3}(-1)^{r}C_{r}^{j}\tilde{p}_{r+s+1}^{(r-j)}(x)T_{0,j}(x).$$ In view of $\tilde{p}_{k}\in W_{2}^{k-1}[0,1]$ and Lemma 6.6, we have $$\displaystyle\tilde{p}_{k+s+1}^{(k-j)}\in W_{2}^{s+j}[0,1]\subseteq W_{2}^{s}[0,1],$$ $$\displaystyle T_{0,j}\in W_{2}^{n-j-2}[0,1]\subseteq W_{2}^{s+1}[0,1].$$ Hence $\mathscr{S}_{2}\in W_{2}^{s}[0,1]$. For the last term $$\mathscr{S}_{3}(x):=-\sum_{r=s+1}^{n-2}p_{r}(x)t_{r,s}(x),$$ we have $p_{r}\in W_{2}^{r-1}[0,1]$ by the induction hypothesis, so $p_{r}\in W_{2}^{s}[0,1]$, and $t_{r,s}\in W_{2}^{s+1}[0,1]$ according to (5.5) and Lemma 6.6. Hence $\mathscr{S}_{3}\in W_{2}^{s}[0,1]$. Lemma 6.6 also implies $\|\mathscr{S}_{j}(x)\|_{W_{2}^{s}[0,1]}\leq C\Omega$ for $j=2,3$. Since $p_{s}(x)=\tilde{p}_{s}(x)+\mathscr{S}_{1}(x)+\mathscr{S}_{2}(x)+\mathscr{S}_{3}(x)$, we arrive at the assertion of the lemma. ∎ Thus, we have obtained the vector $p=(p_{k})_{k=0}^{n-2}$ by the reconstruction formulas. It remains to prove the following lemma. Lemma 6.8. The spectral data of $p$ coincide with $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$. Proof. The proof is based on the approximation approach which has been considered in [35] for $n=3$ in detail.
The proof for higher orders $n$ is similar, so we omit the technical details and just outline the main idea. Along with $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$, consider the “truncated” data $$\lambda_{l,k}^{N}:=\begin{cases}\lambda_{l,k},&l\leq N,\\ \tilde{\lambda}_{l,k},&l>N,\end{cases}\qquad\beta_{l,k}^{N}:=\begin{cases}\beta_{l,k},&l\leq N,\\ \tilde{\beta}_{l,k},&l>N.\end{cases}$$ Then, by using $\{\lambda_{l,k}^{N},\beta_{l,k}^{N}\}$ instead of $\{\lambda_{l,k},\beta_{l,k}\}$, one can construct the main equation $$(I-\tilde{R}^{N}(x))\psi^{N}(x)=\tilde{\psi}^{N}(x),\quad x\in[0,1],$$ (6.12) analogously to (4.9). It can be shown that equation (6.12) is uniquely solvable for sufficiently large values of $N$. Therefore, one can find the functions $\varphi_{l,k,\varepsilon}^{N}(x)$, $(l,k,\varepsilon)\in V$, by using the solution $\psi^{N}(x)$ similarly to (5.1). Then, construct the functions $\{\Phi_{k}^{N}(x,\lambda)\}_{k=1}^{n}$ and $p^{N}=(p_{k}^{N})_{k=0}^{n-2}$ analogously to (5.2) and (5.9), respectively. The advantage of the “truncated” data $\{\lambda_{l,k}^{N},\beta_{l,k}^{N}\}$ over $\{\lambda_{l,k},\beta_{l,k}\}$ is that the series in (5.2) and (5.9) are finite, so one can show by direct calculations that $\{\Phi_{k}^{N}(x,\lambda)\}_{k=1}^{n}$ are the Weyl solutions of equation (1.1) with the coefficients $p^{N}$ and deduce that $\{\lambda_{l,k}^{N},\beta_{l,k}^{N}\}$ are the spectral data of $p^{N}$. At this stage, the assumptions (S-1)–(S-3) of Theorem 6.1 are crucial. If these assumptions do not hold, then one needs additional data to recover $p^{N}$. Then, using (5.9), we show that $$\lim_{N\to\infty}\|p_{k}^{N}-p_{k}\|_{W_{2}^{k-1}[0,1]}=0,\quad k=\overline{0,n-2}.$$ Furthermore, it can be shown that the spectral data depends continuously on the coefficients $p=(p_{k})_{k=0}^{n-2}$, which concludes the proof. ∎ Let us summarize the arguments of this section in the proof of Theorem 6.1. Proof of Theorem 6.1. Let $\{\lambda_{l,k},\beta_{l,k}\}$ and $\tilde{p}$ satisfy the hypothesis of the theorem. Then, the main equation (4.9) is uniquely solvable. By using its solution $\psi(x)$, we find the functions $\{\varphi_{l,k,\varepsilon}(x)\}_{(l,k,\varepsilon)\in V}$ by (5.1) and reconstruct the coefficients $(p_{k})_{k=0}^{n-2}$ by (5.9). By virtue of Lemma 6.7, $p_{k}\in W_{2}^{k-1}[0,1]$ for $k=\overline{0,n-2}$. Lemma 6.8 implies that $\{\lambda_{l,k},\beta_{l,k}\}$ are the spectral data of $p=(p_{k})_{k=0}^{n-2}$. Taking (S-1) and (S-2) into account, we conclude that $p\in W$. The uniqueness of $p$ is given by Corollary 3.2. ∎ Proof of Theorem 1.3. Let us show that, if data $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n}}$ and $\tilde{p}$ satisfy the conditions of Theorem 1.3 for sufficiently small $\delta>0$, then they also satisfy the hypothesis of Theorem 6.1. It follows from (1.4) that $$|\lambda_{l,k}-\tilde{\lambda}_{l,k}|\leq\delta,\quad|\beta_{l,k}-\tilde{\beta}_{l,k}|\leq\delta,\quad l\geq 1,\,k=\overline{1,n-1}.$$ On the other hand, Definition 1.1 and the asymptotics (1.3) and (3.3) imply that the eigenvalues $\{\tilde{\lambda}_{l,k}\}_{l\geq 1}$ are separated for each fixed $k\in\{1,2,\dots,n-1\}$ as well as for neighboring values of $k$ and the weight numbers $\{\tilde{\beta}_{l,k}\}$ are separated from zero. 
Rigorously speaking, we have $$\displaystyle|\tilde{\lambda}_{l,k}-\tilde{\lambda}_{l_{0},k}|\geq\delta_{0},\quad l_{0}\neq l,\quad k=\overline{1,n-1},$$ $$\displaystyle|\tilde{\lambda}_{l_{0},k}-\tilde{\lambda}_{l,k+1}|\geq\delta_{0},\quad l,l_{0}\geq 1,\quad k=\overline{1,n-2},$$ $$\displaystyle|\tilde{\beta}_{l,k}|\geq\delta_{0},\quad l\geq 1,\quad k=\overline{1,n-1}.$$ By choosing $\delta<\delta_{0}/2$, we achieve the conditions (S-1)–(S-3) of Theorem 6.1 for $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$. Furthermore, (S-4) directly follows from (4.4) and (1.4). Using (4.8) and (6.1), we estimate $$\|\tilde{R}(x)\|_{m\to m}=\sup_{v_{0}\in V}\sum_{v\in V}|\tilde{R}_{v_{0},v}(x)|\leq C\sup_{l_{0}\geq 1}\sum_{l=1}^{\infty}\frac{\xi_{l}}{|l-l_{0}|+1}\leq C\Omega,$$ where the constant $C$ depends only on $\tilde{p}$ and $\delta$ if $\Omega\leq\delta$. Therefore, choosing a sufficiently small $\delta$ for the fixed $\tilde{p}$, we achieve $\|\tilde{R}(x)\|\leq\frac{1}{2}$. Then, the operator $(I-\tilde{R}(x))$ has a bounded inverse, that is, the condition (S-5) of Theorem 6.1 is fulfilled. Thus, the numbers $\{\lambda_{l,k},\beta_{l,k}\}_{l\geq 1,\,k=\overline{1,n-1}}$ satisfy the conditions (S-1)–(S-5) of Theorem 6.1, which implies the existence of $p=(p_{k})_{k=0}^{n-2}\in W$ with the spectral data $\{\lambda_{l,k},\beta_{l,k}\}$. It is worth noting that, if $\Omega\leq\delta$, then the constant $C$ in Lemmas 6.3–6.7 depends only on $\tilde{p}$ and $\delta$. Hence, the estimate (1.5) follows from Lemma 6.7, which concludes the proof. ∎ Funding. This work was supported by Grant 21-71-10001 of the Russian Science Foundation, https://rscf.ru/en/project/21-71-10001/ References [1] Bernis, F.; Peletier, L.A. Two problems from draining flows involving third-order ordinary differential equations, SIAM J. Math. Anal. 27 (1996), no. 2, 515–527. [2] Greguš, M. Third Order Linear Differential Equations, Springer, Dordrecht (1987). [3] McKean, H. Boussinesq’s equation on the circle, Comm. Pure Appl. Math. 34 (1981), no. 5, 599–691. [4] Barcilon, V. On the uniqueness of inverse eigenvalue problems, Geophys. J. Inter. 38 (1974), no. 2, 287–298. [5] Gladwell, G.M.L. Inverse Problems in Vibration, Second Edition, Solid Mechanics and Its Applications, Vol. 119, Springer, Dordrecht (2005). [6] Möller, M.; Zinsou, B. Sixth order differential operators with eigenvalue dependent boundary conditions, Appl. Anal. Disc. Math. 7 (2013), no. 2, 378–389. [7] Levitan, B.M. Inverse Sturm-Liouville Problems, VNU Sci. Press, Utrecht (1987). [8] Pöschel, J.; Trubowitz, E. Inverse Spectral Theory, Academic Press, New York (1987). [9] Freiling, G.; Yurko, V. Inverse Sturm-Liouville Problems and Their Applications, Nova Science Publishers, Huntington, NY (2001). [10] Marchenko, V.A. Sturm-Liouville Operators and Their Applications, Revised Edition, AMS Chelsea Publishing, Providence, RI (2011). [11] Kravchenko, V.V. Direct and Inverse Sturm-Liouville Problems, Birkhäuser, Cham (2020). [12] Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials, Inverse Problems 19 (2003), no. 3, 665–684. [13] Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials. II. Reconstruction by two spectra, North-Holland Mathematics Studies 197 (2004), 97–114. [14] Freiling, G.; Ignatiev, M. Y.; Yurko, V. A. An inverse spectral problem for Sturm-Liouville operators with singular potentials on star-type graph, Proc. Symp. 
Pure Math. 77 (2008), 397–408. [15] Savchuk, A.M.; Shkalikov, A.A. Inverse problems for Sturm-Liouville operators with potentials in Sobolev spaces: uniform stability, Funct. Anal. Appl. 44 (2010), no. 4, 270–285. [16] Hryniv, R.O. Analyticity and uniform stability in the inverse singular Sturm-Liouville spectral problem, Inverse Problems 27 (2011), no. 6, 065011. [17] Eckhardt, J.; Gesztesy, F.; Nichols, R.; Teschl, G. Supersymmetry and Schrödinger-type operators with distributional matrix-valued potentials, J. Spectral Theory 4 (2014), no. 4, 715–768. [18] Bondarenko, N.P. Solving an inverse problem for the Sturm-Liouville operator with singular potential by Yurko’s method, Tamkang J. Math. 52 (2021), no. 1, 125-154. [19] Gel’fand, I. M.; Levitan, B. M. On the determination of a differential equation from its spectral function, Izv. Akad. Nauk SSSR, Ser. Mat. 15 (1951) 309–360 [in Russian]. [20] Yurko, V.A. Recovery of nonselfadjoint differential operators on the half-line from the Weyl matrix, Math. USSR-Sb. 72 (1992), no. 2, 413–438. [21] Yurko, V.A. Inverse problems of spectral analysis for differential operators and their applications, J. Math. Sci. 98 (2000), no. 3, 319–426. [22] Yurko, V. A. Method of Spectral Mappings in the Inverse Problem Theory, Inverse and Ill-Posed Problems Series, Utrecht, VNU Science (2002). [23] Leibenson, Z.L. The inverse problem of spectral analysis for higher-order ordinary differential operators, Trudy Moskov. Mat. Obshch. 15 (1966), 70–144; English transl. in Trans. Moscow Math. Soc. 15 (1966). [24] Leibenson, Z.L. Spectral expansions of transformations of systems of boundary value problems, Trudy Moskov. Mat. Obshch. 25 (1971), 15–58; English transl. in Trans. Moscow Math. Soc. 25 (1971). [25] Yurko, V.A. On higher-order differential operators with a singular point, Inverse Problems 9 (1993), no. 4, 495–502. [26] Yurko, V.A. On higher-order differential operators with a regular singularity, Sb. Math. 186 (1995), no. 6, 901–928. [27] Beals, R. The inverse problem for ordinary differential operators on the line, American J. Math. 107 (1985), no. 2, 281–366. [28] Beals, R.; Deift, P.; Tomei, C. Direct and Inverse Scattering on the Line, Mathematical Surveys and Monographs, Vol. 28, Providence, AMS (1988). [29] Mirzoev, K.A.; Shkalikov, A.A. Differential operators of even order with distribution coefficients, Math. Notes 99 (2016), no. 5, 779–784. [30] Konechnaja, N.N.; Mirzoev, K.A.; Shkalikov, A.A. Asymptotics of solutions of two-term differential equations, Math. Notes 113 (2023), no. 2, 228–242. [31] Bondarenko N.P. Linear differential operators with distribution coefficients of various singularity orders, Math. Meth. Appl. Sci. 46 (2023), no. 6, 6639–6659. [32] Bondarenko, N.P. Inverse spectral problems for arbitrary-order differential operators with distribution coefficients, Mathematics 9 (2021), no. 22, Article ID 2989. [33] Bondarenko, N.P. Regularization and inverse spectral problems for differential operators with distribution coefficients, Mathematics 11 (2023), no. 16, Article ID 3455. [34] Bondarenko, N.P. Reconstruction of higher-order differential operators by their spectral data, Mathematics 10 (2022), no. 20, Article ID 3882. [35] Bondarenko, N.P. Inverse spectral problem for the third-order differential equation, Results Math. 78 (2023), Article number: 179. [36] Borg, G. Eine Umkehrung der Sturm-Liouvilleschen Eigenwertaufgabe: Bestimmung der Differentialgleichung durch die Eigenwerte, Acta Math. 78 (1946), 1–96 [in German]. 
[37] Buterin, S.; Kuznetsova, M. On Borg’s method for non-selfadjoint Sturm-Liouville operators, Anal. Math. Phys. 9 (2019), 2133–2150. [38] Hochstadt, H. On the well-posedness of the inverse Sturm-Liouville problem, J. Diff. Equ. 23 (1977), 402–413. [39] McLaughlin, J.R. Stability theorems for two inverse problems, Inverse Probl. 4 (1988), 529–540. [40] Korotyaev, E. Stability for inverse resonance problem, International Math. Research Notices 2004 (2004), no. 73, 3927–3936. [41] Marletta, M.; Weikard, R. Weak stability for an inverse Sturm-Liouville problem with finite spectral data and complex potential, Inverse Problems 21 (2005), 1275-1290. [42] Horvath, M.; Kiss, M. Stability of direct and inverse eigenvalue problems: the case of complex potentials, Inverse Problems 27 (2011), 095007 (20pp). [43] Bondarenko, N.; Buterin, S. On a local solvability and stability of the inverse transmission eigenvalue problem, Inverse Problems 33 (2017), 115010. [44] Xu, X.-C.; Ma, L.-J.; Yang, C.-F. On the stability of the inverse transmission eigenvalu problem from the data of McLaughlin and Polyakov, J. Diff. Eqns. 316 (2022), 222–248. [45] Guo, Y.; Ma, L.-J.; Xu, X.-C.; An, Q. Weak and strong stability of the inverse Sturm-Liouville problem, Math. Meth. Appl. Sci. (2023), 1–22, https://doi.org/10.1002/mma.9421 [46] Buterin, S. Uniform full stability of recovering convolutional perturbation of the Sturm-Liouville operator from the spectrum, J. Diff. Eqns. 282 (2021), 67–103. [47] Buterin, S.; Djurić, N. Inverse problems for Dirac operators with constant delay: uniqueness, characterization, uniform stability, Lobachevskii J. Math. 43 (2022), no. 6, 1492–1501. [48] Kuznetsova, M. Uniform stability of recovering Sturm-Liouville-type operators with frozen argument, Results Math. 78 (2023), no. 5, 169. [49] Bondarenko N.P. Spectral data asymptotics for the higher-order differential operators with distribution coefficients, J. Math. Sci. 266 (2022), no. 5, 794–815. [50] Buterin, S.A. On inverse spectral problem for non-selfadjoint Sturm-Liouville operator on a finite interval, J. Math. Anal. Appl. 335 (2007), no. 1, 739–749. [51] Buterin, S.A.; Shieh, C.-T.; Yurko, V.A. Inverse spectral problems for non-selfadjoint second-order differential operators with Dirichlet boundary conditions, Boundary Value Problems (2013), 2013:180. [52] Savchuk, A.M.; Shkalikov, A.A. Sturm-Liouville operators with distribution potentials, Transl. Moscow Math. Soc. 64 (2003), 143–192. Natalia Pavlovna Bondarenko 1. Department of Mechanics and Mathematics, Saratov State University, Astrakhanskaya 83, Saratov 410012, Russia, 2. Department of Applied Mathematics and Physics, Samara National Research University, Moskovskoye Shosse 34, Samara 443086, Russia, 3. Peoples’ Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya Street, Moscow, 117198, Russia, e-mail: bondarenkonp@info.sgu.ru
Effects of neutrino temperatures and mass hierarchies on the detection of supernova neutrinos Shao-Hsuan Chiu and T. K. Kuo chiu@physics.purdue.edu, tkkuo@physics.purdue.edu Department of Physics, Purdue University, West Lafayette, IN 47907 Abstract Possible outcomes of neutrino events at both Super-Kamiokande and SNO for a type-II supernova are analyzed considering the uncertainties in the SN neutrino spectra (temperature) at emission, which may complicate the interpretation of the observed events. With the input of parameters deduced from the current solar and atmospheric experiments, consequences of the direct-mass hierarchy $m_{\nu_{\tau}}\gg m_{\nu_{\mu}}>m_{\nu_{e}}$ and the inverted-mass hierarchy $m_{\nu_{e}}>m_{\nu_{\mu}}\gg m_{\nu_{\tau}}$ are investigated. Even if the $\nu$ temperatures are not precisely known, we find that future experiments are likely to be able to separate the currently accepted solutions to the solar neutrino problem (SNP): large angle MSW, small angle MSW, and the vacuum oscillation, as well as to distinguish between the direct and inverted mass hierarchies of the neutrinos. pacs: 14.60.Pq, 13.15.+g, 97.60.Bw I Introduction During the past few decades, elaborate solar neutrino [1] and atmospheric neutrino [2] experiments have provided a wealth of convincing evidence for the existence of massive neutrinos and neutrino mixing, which could have an essential impact on particle physics, astrophysics and cosmology. Attention has been focused on solving the puzzles of unexpected discrepancies between calculated and observed neutrino fluxes. Instead of the more difficult and unlikely solution from an improved solar model [3], the solar $\nu_{e}$ deficit could be reconciled with the prediction if neutrino oscillations occur either in vacuum or in the presence of solar matter. The flavor oscillation can be parameterized by the mass-squared differences of the neutrino mass eigenstates $\Delta m^{2}\equiv m_{i}^{2}-m_{j}^{2}\,(i,j=1,2,3)$ and $\theta_{ij}$, the mixing angles between weak eigenstates and mass eigenstates of the neutrinos ($\theta_{ij}\leq\frac{\pi}{4}$ is assumed). In terms of these parameters, the just-so vacuum oscillation [4] requires $6\times 10^{-11}\leq\Delta m^{2}\leq 60\times 10^{-11}$ eV${}^{2}$ and $\sin^{2}2\theta\simeq 1$, while the MSW resonant effect [5] in the Sun becomes important if $4\times 10^{-6}$ eV${}^{2}$ $\leq\Delta m^{2}\leq 7\times 10^{-5}$ eV${}^{2}$, $\sin^{2}2\theta\simeq 0.6-0.9$ (large angle solution), or $3\times 10^{-6}$ eV${}^{2}$ $\leq\Delta m^{2}\leq 12\times 10^{-6}$ eV${}^{2}$, $0.003\leq\sin^{2}2\theta\leq 0.01$ (small angle solution) [6]. Recent atmospheric neutrino data from Super-Kamiokande [7] further provide strong evidence in support of neutrino oscillation as the cause of the deficit of muon neutrinos, provided $\Delta m^{2}\sim 10^{-2}-10^{-3}$ eV${}^{2}$ and $\sin^{2}2\theta>0.82$. It is clear that this solution to the neutrino anomaly in the atmosphere represents quite a distinct area in the parameter space as compared to that of the solar neutrino deficit.
Based on the conclusive LEP experiment at CERN [8] that there are three flavors of light, active neutrinos participating in the weak interaction, a direct-mass hierarchy $m_{\nu_{\tau}}\gg m_{\nu_{\mu}}>m_{\nu_{e}}$, with $\Delta^{2}_{32}\simeq\Delta^{2}_{31}\gg\delta^{2}_{21}$ ($\Delta^{2}_{32}\equiv m^{2}_{3}-m^{2}_{2}$ and $\delta^{2}_{21}\equiv m^{2}_{2}-m^{2}_{1}$) naturally accommodates the scales of both mass-squared differences and provides solutions to both puzzles: the conversion $\nu_{e}\rightarrow\nu_{\mu}$ causes the observed deficit in the solar $\nu_{e}$ flux and the vacuum oscillation $\nu_{\mu}\rightarrow\nu_{\tau}$ suppresses the $\nu_{\mu}$ flux in the atmosphere. In addition to the Sun and the atmosphere, type-II supernovae are also natural sources that emit neutrinos. Despite the first-ever observation of SN neutrino signals from SN 1987A [9], detailed neutrino spectral shapes have not yet been determined with certainty due to low statistics and physical processes that are not well understood. This difficulty is accompanied by, for instance, the uncertainties in the characteristic temperatures $T_{\nu}$ at which neutrinos were emitted from the neutrino-spheres. Consequently, the interpretation of future measurements of SN neutrinos would contain ambiguity in that the observed spectrum, which may have been deformed through conversion processes, could be simulated by different sets of parameters at different temperatures. It is therefore worthwhile to investigate how the uncertainty in $T_{\nu}$ could impact the interpretation of events at terrestrial detectors. In this paper, the parameters that solve the solar and atmospheric neutrino problems are taken as inputs, a natural choice also adopted by some earlier works [10, 11]. In addition, with the uncertainty in $T_{\nu}$ considered, we study whether a particular set of parameters could be singled out by future observations of SN neutrinos. Unlike solar neutrinos, the initial neutrino flux from a supernova contains all flavors of neutrino: $\nu_{e},\nu_{\mu},\nu_{\tau}$ and their anti-particles. Under the direct-mass hierarchy of neutrinos, the original $\nu$ spectra will be modified by the MSW effect as neutrinos propagate through the resonance. The $\overline{\nu}$ spectra, on the contrary, are subject only to vacuum oscillation, which yields a large averaged survival probability for $\overline{\nu}_{e}$: $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\geq\frac{1}{2}$ (a bound recalled below). The high-energy $\overline{\nu}_{\mu}(\overline{\nu}_{\tau})$ would not be converted to the easily detectable $\overline{\nu}_{e}$ (for instance, at Super-Kamiokande) through the MSW effect unless neutrino masses are inverted, in which case the heavier mass eigenstate has a larger component in $\nu_{e}$ than in $\nu_{\mu}$ or $\nu_{\tau}$. Since the mixing angles are defined in the first octant, the weak eigenstates $\nu_{\tau},\nu_{\mu}$, and $\nu_{e}$ are predominant in the mass eigenstates $\nu_{3},\nu_{2}$, and $\nu_{1}$, respectively. Under the direct-mass hierarchy where $m_{\nu_{\tau}}>m_{\nu_{\mu}}>m_{\nu_{e}}$, the mass eigenstates follow the hierarchy $m_{3}>m_{2}>m_{1}$, while in the inverted-mass hierarchy, for instance, $m_{\nu_{e}}>m_{\nu_{\mu}}>m_{\nu_{\tau}}$, the pattern $m_{1}>m_{2}>m_{3}$ follows. Some models and phenomenological consequences involving inverted neutrino masses have been discussed [12].
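The bound quoted above can be recalled in the effective two-flavor limit (small $\phi$), where averaging the standard vacuum oscillation probability over the rapidly varying phase gives $$\langle P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\rangle=1-\sin^{2}2\theta\,\Big\langle\sin^{2}\Big(\frac{\Delta m^{2}L}{4E_{\nu}}\Big)\Big\rangle=1-\frac{1}{2}\sin^{2}2\theta\geq\frac{1}{2},$$ since $\sin^{2}2\theta\leq 1$. This is a standard estimate, stated here only for convenience with the two-flavor notation rather than the full three-flavor parameterization introduced below.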
Although current MSW solutions to the solar neutrino problem (SNP) have excluded $\nu_{e}$ as the heavier eigenstate, the inverted hierarchy could remain viable if the just-so vacuum oscillation is the solution for the SNP. If the inverted masses do apply and the resonance conditions for the anti-neutrinos are met, this could lead to an effective conversion between $\overline{\nu}_{e}$ and the higher-energy $\overline{\nu}_{\mu}(\overline{\nu}_{\tau})$ to yield copious $\overline{\nu}_{e}$-type events in the earth-bound detector. With the uncertainty in $T_{\nu}$ considered, it is our second goal to investigate the influences of both the direct and inverted mass hierarchies on future observations of SN neutrinos and how future measurements can play a role in this unsettled issue of direct versus inverted neutrino masses. This paper is organized as follows. In Section II we summarize the general features of stellar collapse and the properties of the emitted neutrinos, and show how the uncertainty in neutrino temperature could affect the outcomes in the detector. Section III and Section IV contain more general results expected from the future observations at both Super-Kamiokande and SNO, for direct and inverted masses, respectively. Based on the measurements, possible schemes which could provide discrimination among input parameters and between the two mass hierarchies are proposed. Section V contains discussions and our concluding remarks. II Detection of supernova neutrinos II.1 SN neutrinos and neutrino parameters A massive star ($M\geq 8M_{\odot}$) becomes unstable at the last stage of its evolution. When the mass of the iron core reaches the Chandrasekhar limit ($\sim 1.4M_{\odot}$), it begins to collapse into a compact object of extremely high density, and the gravitational binding energy is released in the form of neutrinos. Mayle et al. [13] have pointed out that the total emitted energy, the averaged neutrino luminosity, and the mean neutrino energy are independent of the explosion mechanism but depend only on the mass of the initial iron core. Regardless of the details of collapse and bounce, it is well established that, to form a typical neutron star after the collapse, an amount of $\sim 3\times 10^{53}$ erg, about 99% of the binding energy, would be released in the form of neutrinos. Each (anti)neutrino species will carry away about the same amount of energy. Neutrinos are emitted from a collapsed star through two different processes: the neutronization burst during the pre-bounce phase and thermal emission in the post-bounce phase. The neutronization burst of a $\nu_{e}$ flux is produced by the electron capture on protons: $e^{-}+p\rightarrow n+\nu_{e}$. The thermal emission creates $\nu\overline{\nu}$ pairs of all three flavors via the annihilation of $e^{+}e^{-}$ pairs: $e^{+}e^{-}\rightarrow\nu_{\ell}+\overline{\nu}_{\ell}\,(\ell=e,\mu,\tau)$. The neutronization burst lasts about a few milliseconds and takes away 1%-10% of the total binding energy. The thermal emission phase has a much wider spread of time structure, on the order of 10 seconds. The initial neutrino spectrum is usually approximated by a Fermi-Dirac or a Boltzmann distribution with a constant temperature and zero chemical potential. To reduce the high-energy tail of the Fermi-Dirac distribution, some elaborate models introduce a nonzero chemical potential [14]. It is clear that the event numbers in a detector depend crucially on the $\nu$ temperature.
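One way to see this sensitivity is through the mean energy of a Fermi-Dirac spectrum with zero chemical potential, which scales linearly with the temperature. The short numerical check below (shown only for orientation, not taken from the paper) is consistent with the relation $E_{\nu}\simeq 3.1T_{\nu}$ quoted later in Section II.2, so that, with a detection cross section growing like $E_{\nu}^{2}$, a modest shift in $T_{\nu}$ translates directly into a sizable shift in the expected event rate.

import numpy as np

# Mean energy of a Fermi-Dirac spectrum with zero chemical potential:
# <E> = int E^3/(1+exp(E/T)) dE / int E^2/(1+exp(E/T)) dE
T = 1.0                                  # temperature in arbitrary units
E = np.linspace(1e-4, 60.0 * T, 200000)  # energy grid
fd = E**2 / (1.0 + np.exp(E / T))        # unnormalized spectrum
mean_E = np.sum(E * fd) / np.sum(fd)     # simple Riemann estimate
print(mean_E / T)                        # ~3.15, i.e. <E_nu> ~ 3.15 T_nu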
However, the numerical calculations based upon various models and physical arguments give rise to relatively wide ranges of temperature for each neutrino species [15] and this uncertainty in $T_{\nu}$ could complicate the signatures concerning the oscillation of neutrinos from a supernova. One may refer to Ref. [16] for a review of neutrino oscillations. Although all the three flavors are emitted from a supernova, the phenomenon of SN neutrino oscillation can be well described through $P(\nu_{e}\rightarrow\nu_{e})$ and $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})$ [17]. Under the direct-mass hierarchy, the probability $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})$ is nearly independent of energy and is approximated by the vacuum oscillation expression. The probability $P(\nu_{e}\rightarrow\nu_{e})$ is energy-dependent and contains four parameters under a proper parameterization of the mixing matrix in the full 3-$\nu$ formalism: $\delta_{21}^{2},\Delta_{32}^{2}\simeq\Delta_{31}^{2},\theta_{21},\phi_{31}$. In what follows, these four parameters will simply be denoted as $\delta^{2},\Delta^{2},\theta$, and $\phi$, respectively. Among the above four parameters, the angle $\phi$ is special in certain aspect. In addition to the limit $\phi<12^{\circ}$ at 90% C.L. set by the CHOOZ [19] long baseline reactor in the disappearance mode $\overline{\nu}_{e}\rightarrow\overline{\nu}_{x}$, an analysis in Ref. [6] also has given allowed ranges of $\delta^{2}$ and $\sin^{2}2\theta$ for $\phi<20^{\circ}$. Note that to zeroth order of $\delta^{2}/\Delta^{2}$, the probability becomes [18] $$P(\nu_{e}\rightarrow\nu_{e})\simeq\cos^{4}\phi P_{2\nu}+\sin^{4}\phi.$$ (1) One may examine $\phi$ in more details through the iso-probability contours for SN neutrinos at several distinct scales of $\delta^{2}/E_{\nu}$, as shown in Figure 1. Within the interested range of $\theta$, ( $\sin^{2}2\theta\geq 0.003$, or $\log_{10}\tan^{2}2\theta\geq-2.5$ ), the $P(\nu_{e}\rightarrow\nu_{e})$ contours are almost independent of $\phi$ for $\phi<12^{\circ}$ ( $\log_{10}\tan^{2}2\phi<-0.7$ ). The parameter $\phi$ begins to show slight influence on $P(\nu_{e}\rightarrow\nu_{e})$ contours only at very small $\theta$ and very large $\phi$. Hence, for our purpose the choice of $\phi$ within $\phi<20^{\circ}$ would only affect the results slightly. The value $\phi=10^{\circ}$ will be adopted for definiteness. As for other input parameters, the following are taken: Large angle MSW solution(LA): $\delta^{2}=10^{-5}$ eV${}^{2},\sin^{2}2\theta=0.75.$ Small angle MSW solution(SA): $\delta^{2}=6\times 10^{-6}$ eV${}^{2},\sin^{2}2\theta=0.0075.$ Just-so vacuum solution(JS): $\delta^{2}=3\times 10^{-10}$ eV${}^{2},\sin^{2}2\theta\simeq 1$. II.2 The Complication In Observed Events In what follows, an initial flux described by a Fermi-Dirac spectrum with zero chemical potential will be assumed. The detailed time evolution during the cooling phase has been ignored, while the averaged magnitudes and effective temperatures of $\nu(\overline{\nu})$ flux are used instead  [20]. 
The event numbers at the detectors for the neutrino of type $\ell$ are estimated by $$N_{\ell}=\frac{Z\,L_{\ell}}{4\pi\,D^{2}}\int dE_{\nu}\,n_{\ell}(E_{\nu},T_{\ell})\,\sigma_{\ell}(E_{\nu})\,P_{\ell}(E_{\nu}).$$ (2) Here $Z$ is the number of targets in the detector, $L_{\ell}$ is the initial number of $\nu_{\ell}$, $D$ is the distance between the supernova and the earth, $\sigma_{\ell}(E_{\nu})$ is the cross section for the corresponding reaction, $P_{\ell}(E_{\nu})$ is the survival probability for $\nu_{\ell}$, $T_{\ell}$ is the temperature for $\nu_{\ell}$, and $$n_{\ell}(E_{\nu},T_{\ell})\simeq 0.5546\frac{E_{\nu}^{2}}{T_{\ell}^{3}\,[1+\exp(\frac{E_{\nu}}{T_{\ell}})]}.$$ (3) In evaluating the survival probability, the electron number per nucleon is assumed to remain constant ($Y_{e}\approx 0.42$) and the density profile outside the neutrino-sphere ($r\geq 10^{7}$ cm) is described by the power-law $\rho\sim r^{-3}$. To grasp the picture of how uncertainties in neutrino temperatures could affect the interpretation of observed events, we may tentatively assume $T_{\nu_{e}}=3$ MeV, $T_{\nu_{x}}=T_{\overline{\nu}_{x}}=6$ MeV ($x=\mu,\tau$), and compare outcomes from $T_{\overline{\nu}_{e}}=3$ MeV and $T_{\overline{\nu}_{e}}=4.5$ MeV. At Super-Kamiokande [21], contributions from the inverse beta decay $\overline{\nu}_{e}+p\rightarrow e^{+}+n$ predominate due to the high cross section. Events from other interactions, $\nu_{\ell}(\overline{\nu}_{\ell})+e^{-}\,(\ell=e,\mu,\tau)$ [22] and $\nu_{e}(\overline{\nu}_{e})+^{16}O$ [23], will also be included in our calculations, although these events amount to less than 5% of the $\overline{\nu}_{e}+p$ events. The threshold energy is taken to be 5 MeV and the detector efficiency is assumed to be 100%. For 32 kton of water, one expects roughly $\sim 10^{4}$ neutrino events for a type II supernova at the center of our galaxy ($\sim$ 10 kpc away). Since the cross section for $\overline{\nu}_{e}+p$ is proportional to $E_{\nu}^{2}$, and $E_{\nu}\simeq 3.1T_{\nu}$ for the Fermi-Dirac distribution, a larger temperature gap between $\overline{\nu}_{e}$ and $\overline{\nu}_{x}$ would cause a spectrum more severely distorted from the original one. Hence the difference between $T_{\overline{\nu}_{e}}$ and $T_{\overline{\nu}_{x}}$ determines to what extent the events are enhanced by oscillation. For the direct masses where $m_{\nu_{\tau}}\gg m_{\nu_{\mu}}>m_{\nu_{e}}$, possible results of the ratio $OSC/NO$ ($OSC$ indicates oscillation, and $NO$ indicates the case of no oscillation) using specific input parameters LA, SA, and JS are shown in Figure 2. The curves representing LA and SA are due to MSW effects of the $\nu$-type events and the vacuum oscillation of the $\overline{\nu}$-type events, while the JS curve is due to vacuum oscillations of both $\nu$- and $\overline{\nu}$-type events. One observes that the JS parameters could raise event numbers most effectively: an increase of $\sim$ 55% is possible at $T_{\overline{\nu}_{e}}\simeq 3$ MeV ($\frac{T_{\overline{\nu}_{e}}}{T_{\overline{\nu}_{x}}}\simeq 0.5$). The enhancement decreases as $T_{\overline{\nu}_{e}}$ approaches $T_{\overline{\nu}_{x}}$. Near a particular point where $T_{\overline{\nu}_{e}}\simeq T_{\overline{\nu}_{x}}$, the conversion of $\overline{\nu}_{e}$ to $\overline{\nu}_{x}$ would not alter the original $\overline{\nu}_{e}$ spectrum; all the scenarios yield $OSC/NO\simeq 1$ and are indistinguishable from one another.
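A schematic implementation of Eqs. (2)-(3) for the dominant $\overline{\nu}_{e}+p$ channel illustrates how strongly the expected rate depends on the assumed temperature. In the toy sketch below (not from the paper) the only detector inputs kept are the quadratic energy dependence of the cross section and a 5 MeV threshold; the normalization, the detailed cross sections, and the survival probability $P_{\ell}(E_{\nu})$ are all omitted, so the numbers are indicative only.

import numpy as np

def fd_spectrum(E, T):
    # Fermi-Dirac spectrum of Eq. (3), normalized to unit area on the grid
    n = E**2 / (1.0 + np.exp(E / T))
    return n / np.sum(n)

def relative_rate(T_spec, E_th=5.0):
    # Relative event number of Eq. (2) with sigma ~ E^2 above threshold,
    # a fixed luminosity, and P = 1 (no oscillation); arbitrary units
    E = np.linspace(0.1, 100.0, 5000)
    n = fd_spectrum(E, T_spec)
    sigma = np.where(E > E_th, E**2, 0.0)
    return np.sum(n * sigma)

# Fully swapping the anti-neutrino spectrum from T = 3 MeV to T = 6 MeV
# roughly quadruples the rate in this toy model, which is why OSC/NO is so
# sensitive to the gap between the nu_e-bar and nu_x-bar temperatures.
print(relative_rate(6.0) / relative_rate(3.0))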
The complication arises from the fact that if, for instance, $OSC/NO=1.3$ is observed, this observation is then either due to the LA parameters at $T_{\overline{\nu}_{e}}\simeq 3.6$ MeV or the JS parameters at $T_{\overline{\nu}_{e}}\simeq 4$ MeV. The uncertainties in neutrino temperatures therefore lead to a wide range of predictions at the detectors. Information such as clues to oscillation and to the neutrino parameters would be hard to extract, or even lost, due to this complication. III General Consequences From The Direct Masses III.1 Super-Kamiokande For the observation of SN neutrinos, the extremely distinct time structures of the neutronization burst and the beginning of thermal emission would allow a clear separation at the $H_{2}O$ Cherenkov detector. These two groups of events are discussed separately. The spectral shape and the total energy of $\nu_{e}$ from the early pre-bounce burst are still poorly known. For the purpose of qualitative discussion, the spectrum is arbitrarily chosen to be the same as that of thermal $\nu_{e}$ (Fermi-Dirac) with the same mean energy, carrying a total of 5% of the binding energy of a typical neutron star ($E_{b}$). During this early phase, one expects to observe forward, directional events due to elastic scattering $\nu_{e}+e^{-}$ and backward events from $\nu_{e}+^{16}O$. These neutronization events are summarized in Table I. We note that the forward events are relatively insensitive to the uncertainty in the $\nu_{e}$ temperature. The oscillation signature manifests itself through drastically reduced forward events compared to the no-oscillation case, although in practice the separation among LA, SA, and JS using events observed during this early phase is difficult. The backward events, on the other hand, are more sensitive to $T_{\nu_{e}}$. The difficulty associated with the backward events comes from their extremely small numbers. If $T_{\nu_{e}}=3$ MeV and the total neutrino energy at this stage is down to $\sim 1\%$ of $E_{b}$, the backward events are practically unobservable. Because the cross section for $\nu_{e}+^{16}O$ rises rapidly with energy, $\sigma\sim(E-E_{th})^{2}$, the situation could be improved if $T_{\nu_{e}}$ is higher or if the neutrinos emitted during this phase carry a larger share of the energy, which is quite model-dependent. Unlike the backward events, the forward event numbers are roughly of order 10 even if $T_{\nu_{e}}=$ 3 MeV and the $\nu_{e}$ flux carries away as little as $\sim 1\%$ of $E_{b}$. By using the numerous $e^{+}$ emitted from the inverse beta decay, the distorted $\overline{\nu}_{e}$ spectrum can be determined with better statistics. To account for the uncertainties in $T_{\nu_{e}}$ and $T_{\nu_{x}}$, we may let $T_{\nu_{x}}=\alpha$ $T_{\overline{\nu}_{e}}$, $T_{\nu_{e}}=\beta$ $T_{\overline{\nu}_{e}}$ and compare the outcomes for $T_{\overline{\nu}_{e}}$= 4 MeV, 5 MeV, and 6 MeV. The parameters $\alpha$ and $\beta$ are allowed to vary within $1.4\leq\alpha\leq 1.8$ and $0.6\leq\beta\leq 1$ to roughly cover the temperature ranges given by current models. Expected ranges for the ratio $R\equiv I/F$ are summarized in Table II, where $I$ includes events from the inverse beta decay and the neutrino interactions with oxygen, and $F$ represents the forward-scattering events. For $T_{\overline{\nu}_{e}}=4$ MeV, there are two overlapping regions in $R$: between NO and SA, and between JS and LA. The same overlapping structure remains for $T_{\overline{\nu}_{e}}=5$ MeV and $T_{\overline{\nu}_{e}}=6$ MeV. 
Despite the wealth of information conveyed by the $e^{+}$ spectrum at Super-Kamiokande, from Table II it seems unlikely that a clear separation among input parameters could be achieved using the otherwise model-independent quantity $R$ unless the uncertainties in neutrino temperatures are reduced significantly. III.2 SNO The neutral-current $(NC)$ breakup reactions of deuterium in SNO [24] are flavor-blind for neutrinos: $$\nu_{\ell}+d\rightarrow n+p+\nu_{\ell},\quad(E_{th}=2.22\ {\rm MeV})$$ (4) $$\overline{\nu}_{\ell}+d\rightarrow n+p+\overline{\nu}_{\ell},\quad(E_{th}=2.22\ {\rm MeV})$$ (5) where $\ell=e,\mu,\tau$. The charged-current reactions consist of two channels, $$CC_{1}:\nu_{e}+d\rightarrow p+p+e^{-},\quad(E_{th}=1.44\ {\rm MeV})$$ (6) $$CC_{2}:\overline{\nu}_{e}+d\rightarrow n+n+e^{+},\quad(E_{th}=4.03\ {\rm MeV}).$$ (7) With 1 kton of $D_{2}O$ and a threshold energy of $\sim$ 5 MeV (100% detection efficiency assumed), both $CC_{1}$ and $CC_{2}$ should roughly yield event numbers on the order of $10^{2}$. The ratio $r_{1}=\frac{NC}{CC_{1}}$ at SNO appears to provide a way to single out a particular set of parameters, as will be shown below. One may arbitrarily fix $T_{\nu_{e}}$ and parameterize the other temperatures in a similar way: let $T_{\nu_{x}}=\lambda T_{\nu_{e}}$ and allow an uncertainty in $T_{\overline{\nu}_{e}}$ as well, $T_{\overline{\nu}_{e}}=\eta T_{\nu_{e}}$, with $1.8\leq\lambda\leq 2.6$ and $1.1\leq\eta\leq 1.7$. The ratio $r_{1}=\frac{NC}{CC_{1}}$ for $T_{\nu_{e}}=3,4$, and 5 MeV is listed in Table III. We find that even though the uncertainties in $T_{\nu_{x}}$ and $T_{\overline{\nu}_{e}}$ may complicate the interpretation of observed events, each of the candidates gives rise to a distinct region in $\frac{NC}{CC_{1}}$. In practice, if the uncertainties in $\lambda$ and $\eta$ can be reduced in the future, this would allow a smaller spread in each $r_{1}$ and a better separation. IV Consequences From The Inverted Masses IV.1 Vacuum oscillation versus MSW effect In light of the MSW effect, distinctions between direct and inverted masses would most likely appear in the observed neutrino spectra. If the just-so vacuum oscillation is favored over the MSW oscillations as the solution to the SNP, both the direct ($m_{\nu_{\tau}}\gg m_{\nu_{\mu}}>m_{\nu_{e}}$) and inverted-mass schemes ($m_{\nu_{e}}>m_{\nu_{\mu}}\gg m_{\nu_{\tau}}$) are allowed, since the $\nu_{e}$ flux can also be converted to $\nu_{x}$ through vacuum oscillations if the neutrino masses are inverted. The $\overline{\nu}_{e}$ flux, on the contrary, would go through the MSW resonance if the mass hierarchy is inverted. This conversion would presumably enhance the $\overline{\nu}_{e}$-type event rates significantly at Super-Kamiokande. Without conflicting with current solar and atmospheric neutrino data, we focus on the JS parameters, $-\delta^{2}\sim 10^{-10}$ eV${}^{2}$ and large $\theta$, for further investigation under the inverted-mass scheme $m_{\nu_{e}}>m_{\nu_{\mu}}\gg m_{\nu_{\tau}}$. The possible SN $\overline{\nu}_{e}$ spectra are shown in Figure 3. Curve $A$ is the original $\overline{\nu}_{e}$ spectrum, curve $B$ represents the spectrum distorted by the just-so vacuum oscillation under the direct-mass scheme, and curve $C$ is obtained from the MSW conversion under the inverted-mass scheme. 
Curves $B$ and $C$ nearly overlap, implying that the matter effect is not as prominent as expected, and that the MSW effect under the inverted-mass hierarchy is almost identical to the vacuum oscillation under the direct-mass hierarchy for $\overline{\nu}_{e}$ in this particular region of parameter space. Furthermore, the extremely small observable difference at the detectors would make the identification between the two mass patterns very difficult. The reason becomes clear if the conditions required for an MSW resonance and an adiabatic transition to occur are both considered [25]: a density profile $\rho\sim r^{-3}$ would yield $|\delta^{2}|\sim 10^{-8}$–$10^{5}$ eV${}^{2}$ as the range relevant to MSW oscillation in the supernova. This mass scale is much larger than that of the JS parameters ($|\delta^{2}|\sim 10^{-10}$ eV${}^{2}$). Therefore a very effective conversion of $\overline{\nu}_{e}$ to $\overline{\nu}_{x}$ in a supernova is unlikely for either direct or inverted masses if the JS parameters are applied. A strong conversion of $\overline{\nu}_{e}$ to $\overline{\nu}_{x}$ is in fact disfavored by some analyses based on the SN 1987A data [26]. If either the LA or SA MSW conversion is favored over the just-so vacuum oscillation, the case for the inverted hierarchy $m_{\nu_{e}}>m_{\nu_{\mu}}\gg m_{\nu_{\tau}}$ would become shaky or could even be ruled out. IV.2 Super-Kamiokande and SNO An alternative approach might shed some light on the inverted-mass scheme and its consequences. We may characterize the consequences of the oscillation $\overline{\nu}_{e}\leftrightarrow\overline{\nu}_{x}$ by the survival probability of $\overline{\nu}_{e}$ in three limiting cases: $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\sim 1$, $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\sim\frac{1}{2}$, and $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\ll 1$. The case $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\sim 1$ indicates that no conversion occurs between $\overline{\nu}_{e}$ and $\overline{\nu}_{x}$, which is equivalent to the outcome of having massless neutrinos, and the mass pattern would then be unlikely to be the main issue. The JS parameters, as already discussed, yield $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\sim\frac{1}{2}$ for the MSW conversion (or, equivalently, the vacuum oscillation under the direct-mass scheme). One is therefore motivated to further study the consequences of a complete conversion of the $\overline{\nu}_{e}$ flux to $\overline{\nu}_{x}$, where $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\ll 1$. As pointed out by Totani et al. [25], due to statistical uncertainties in experiments and the inconsistency among current analyses, one cannot completely exclude the possibility of a full conversion. We may tentatively neglect the details of the physical conditions and parameters required for a complete conversion to occur, and assume that the probability $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})$ remains approximately constant within the neutrino energy range of interest. To reasonably account for the contributions from the $\nu_{e}$ and $\nu_{x}$ fluxes to the total events when a complete conversion occurs in the antineutrino sector, one may first consider the contours of $OSC/NO$ for events from the inverse beta decay only. Under the inverted-mass scheme, these contours over a wide range of $-\delta^{2}$ and $\theta$ are shown in Figure 4. 
Given $T_{\overline{\nu}_{e}}=$ 4.5 MeV, $T_{\nu_{e}}$=3 MeV and $T_{\nu_{x}}=T_{\overline{\nu}_{x}}$=6 MeV, the JS parameters ($|\delta^{2}|\sim 10^{-10}$ eV${}^{2}$ and large $\theta$) roughly result in a 20% increase in the event number. After a full conversion of the antineutrinos, in which $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\ll 1$, one would expect to observe a sizable increase in the ratio $OSC/NO$. Therefore a larger $|\delta^{2}|$ and a smaller $\tan^{2}2\theta$, by at least several orders of magnitude, are required for a full conversion to occur. The smallness of $\theta$, along with the small $\phi$, fixes the survival probability of $\nu_{e}$ very close to unity. Hence, a full conversion of $\overline{\nu}_{e}$ to $\overline{\nu}_{x}$ would be accompanied by nearly unchanged $\nu_{e}$ and $\nu_{x}$ fluxes: $P(\nu_{e}\rightarrow\nu_{e})\sim 1$. The expected ratios $R\equiv I/F$ at Super-Kamiokande are listed in Table IV. Despite the better statistics provided by the $\overline{\nu}_{e}$-type events at Super-Kamiokande, the detection is not unique to the $\overline{\nu}_{e}$ flux; the uncertainties in $T_{{\nu}_{e}}$ and $T_{{\nu}_{x}}$ therefore make the separation between $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\ll 1$ (complete conversion) and $P(\overline{\nu}_{e}\rightarrow\overline{\nu}_{e})\sim\frac{1}{2}$ difficult. At the SNO detector, the charged-current channel $\overline{\nu}_{e}+d\rightarrow n+n+e^{+}$ is unique to $\overline{\nu}_{e}$. This channel can be distinguished from the neutral-current events and from the other charged-current channel induced by $\nu_{e}$. Therefore the measurement of $\overline{\nu}_{e}+d$ events at SNO should be sensitive to a full conversion of $\overline{\nu}_{e}$ to $\overline{\nu}_{x}$. Ratios of the neutral-current events to the charged-current events $\overline{\nu}_{e}+d$, denoted as $\frac{NC}{CC_{2}}$, are shown in Table V. We also present values of the same ratio under the direct-mass scheme in Table VI for comparison. We observe that for a particular $T_{\nu_{e}}$, both the direct and inverted schemes yield a nearly identical range of $\frac{NC}{CC_{2}}$ if the JS parameters are applied; this verifies a previous argument. The valuable message from this ratio is that, for a particular $T_{\nu_{e}}$, a complete conversion of the $\overline{\nu}_{e}$ flux through the MSW resonance occupies a unique range of $\frac{NC}{CC_{2}}$ compared to the other scenarios, including that of the direct masses. Typical $\frac{NC}{CC_{2}}$ contours for inverted masses are shown in Figure 5, where $T_{\nu_{e}}$= 3 MeV, $T_{\overline{\nu}_{e}}$= 4.5 MeV, and $T_{\nu_{x}}=T_{\overline{\nu}_{x}}$= 6 MeV are assumed. Since the neutral-current reaction is blind to the oscillation, a complete swap of the $\overline{\nu}_{e}$ and $\overline{\nu}_{x}$ fluxes would necessarily yield a smaller $\frac{NC}{CC_{2}}$. Calculations show that $\frac{NC}{CC_{2}}\sim 2.95$ for a complete conversion, and indicate that $-\delta^{2}>10^{-2}$ eV${}^{2}$ and $\tan^{2}2\theta<10^{-2}$ are required for a nearly complete conversion to occur. Tables V and VI suggest that, for the detection of SN neutrinos, the direct and inverted masses could be distinguished if a nearly complete conversion of $\overline{\nu}_{e}$ to $\overline{\nu}_{x}$ occurs, which yields a low $\frac{NC}{CC_{2}}$ and signals the existence of the inverted-mass pattern. 
Since the MSW effects become important for supernova neutrinos at $10^{-8}<\Delta m^{2}<10^{5}$ eV${}^{2}$, a future supernova would provide a testing ground for $-\delta^{2}>10^{-2}$ eV${}^{2}$. If a nearly full conversion $\overline{\nu}_{e}\leftrightarrow\overline{\nu}_{x}$ is observed in the SN neutrino flux, the consequences would be significant, in that the parameter space required for a full conversion is clearly disfavored by current solar neutrino data, while future solar and atmospheric observations are unlikely to change severely the mass scales required to explain the solar and atmospheric neutrino deficits. V Discussions and Conclusions Under the mass-scale constraints from solar and atmospheric neutrinos, the $3$-$\nu$ scenario naturally leads to four possible hierarchies (one direct and three inverted): (1) $m_{\nu_{\tau}}\gg m_{\nu_{\mu}}>m_{\nu_{e}}$, (2) $m_{\nu_{\mu}}>m_{\nu_{e}}\gg m_{\nu_{\tau}}$, (3) $m_{\nu_{e}}>m_{\nu_{\mu}}\gg m_{\nu_{\tau}}$, (4) $m_{\nu_{\tau}}\gg m_{\nu_{e}}>m_{\nu_{\mu}}$. Case 1 is the normal, direct-mass scheme. For Case 2, $|\delta^{2}|$ at the mass scale of the MSW solution has been discussed [11]. In our analysis we have applied Case 3, in which the mass scale of $|\delta^{2}|$ is suitable for the vacuum solution of the SNP ($10^{-10}$ eV${}^{2}$). Case 3 and Case 2 become equivalent if $|\delta^{2}|\sim 10^{-10}$ eV${}^{2}$, since the MSW and the vacuum oscillations for the $\overline{\nu}_{e}$ flux would be nearly identical at this mass scale, as shown in Section IV. For Case 4 to survive, $|\delta^{2}|$ needs to be on the order of $10^{-10}$ eV${}^{2}$ for the vacuum solution to apply. Therefore, the consequences for Case 4 and Case 1 become equivalent in the detection of supernova neutrinos if $|\delta^{2}|\sim 10^{-10}$ eV${}^{2}$. In this work, the responses of both the Super-Kamiokande and SNO detectors to neutrino fluxes coming from a supernova are studied, taking into account the uncertainties in neutrino temperatures. In particular, some phenomenological consequences of the direct-mass and inverted-mass patterns of neutrinos are compared. We may summarize our results as follows. (a) Uncertainties in neutrino temperatures can allow various interpretations of the neutrino parameters. We have shown this through the expected outcomes at Super-Kamiokande for SN neutrinos. (b) The three candidates LA, SA, and JS manifest themselves differently in the ratio $\frac{NC}{CC_{1}}$ at SNO even when the uncertainties in neutrino temperatures are allowed for. Future detection of SN neutrinos at SNO would be able to single out the favored mass and mixing parameters from the three candidates. (c) In addition to the direct-mass pattern, the inverted-mass scenario $m_{\nu_{e}}>m_{\nu_{\mu}}\gg m_{\nu_{\tau}}$ is investigated since it can allow the vacuum solution to the solar neutrino problem. By using the event ratio $\frac{NC}{CC_{2}}$ at SNO, the direct-mass ($m_{\nu_{\tau}}\gg m_{\nu_{\mu}}>m_{\nu_{e}}$) and the inverted-mass ($m_{\nu_{e}}>m_{\nu_{\mu}}\gg m_{\nu_{\tau}}$) schemes could be distinguished if a nearly complete $\overline{\nu}_{e}\leftrightarrow\overline{\nu}_{x}$ conversion occurs in the antineutrino sector. Acknowledgements. S.C. would like to thank Nien-Po Chen and Sadek Mansour for suggestions in preparing the manuscript. T. K. is supported in part by the DOE, Grant no. DE-FG02-91ER40681. References [1] R. Davis Jr., Prog. in Nucl. and Part. Phys. 32 (1994); SAGE Collaboration, J. N. Abdurashitov et al., Phys. Lett. B 328, 234 (1994); GALLEX Collaboration, P. 
Anselmann et al., Phys. Lett. B 357, 237 (1995); KamioKande Collaboration, K. Hirata et al., Phys. Rev. D 44, 2241 (1991). [2] KamioKande Collaboration, Phys. Lett. B 335, 237 (1994); IMB Collaboration, Phys. Rev. Lett. 66, 2561 (1989); Soudan 2 Collaboration, Nucl. Phys. B 38, 337 (1995) (Proc. Suppl.). [3] J. N. Bahcall and M. H. Pinsonneault, Rev. Mod. Phys. 64, 885 (1992). [4] S. L. Glashow, P. J. Kernan, and L. M. Krauss, Phys. Lett. B 445, 412 (1999). [5] L. Wolfenstein, Phys. Rev. D 17, 2369 (1978); S. P. Mikheyev and A. Yu. Smirnov, Yad. Fiz. 42, 1441 (1985) [ Sov. J. Nucl. Phys. 42, 913 (1986)]. [6] T. Sakai, O. Inagaki, and T. Teshima, Int. J. Mod. Phys. A 14, 1953 (1999). [7] Super-Kamiokande Collaboration, Phys. Rev. Lett. 81, 1562 (1998). [8] Particle Data Group, Phys. Rev. D 54, 1 (1996). [9] K. Hirata et al., Phys. Rev. Lett. 58, 1490 (1987); R. M. Bionta et al., Phys. Rev. Lett. 58, 1494 (1987). [10] Gautam Dutta, D. Indumathi, M. V. N. Murthy and G. Rajasekaran, hep-ph/9907372 (1999). [11] A. S. Dighe and A. Yu. Smirnov, hep-ph/9907423 (1999). [12] G. G. Raffelt and J. Silk, Phys. Lett. B 366, 429 (1996); D. O. Caldwell and R. N. Mohapatra, Phys. Lett. B 354, 371 (1995); G. M. Fuller, J. R. Primack and Y. -Z. Qian, Phys. Rev. D 52, 1288 (1995). [13] R. Mayle, J. R. Wilson, and D. Schramm, Astrophys. J. 318, 288 (1987). [14] B. Jegerlehner, F. Neubig, and G. Raffelt, Phys. Rev. D 54, 2784 (1996); T. Totani, K. Sato, H. Dalhed, and J. R. Wilson, Astrophys. J. 496, 216 (1998). [15] D. Schramm, Comments Nucl. Part. Phys. (1987); A. Burrows, D. Klein, and R. Gandhi, Phys. Rev. D 45, 3361 (1992); J. F. Beacom and P. Vogel, Phys. Rev. D 58, 053010 (1998), and references therein. [16] T. K. Kuo and J. Pantaleone, Rev. Mod. Phys. 61, 937 (1989). [17] T. K. Kuo and J. Pantaleone Phys. Rev. D 37, 298 (1988). [18] G. L. Fogli, E. Lisi, and D. Montanino, Phys. Rev. D 54, 2048 (1996). [19] M. Apollonio et al., CHOOZ Collaboration, Phys. Lett. B 420, 397 (1998). [20] E. Kh. Akhmedov and Z. G. Berezhiani, Nucl. Phys. B 373, 479 (1992). [21] Super-Kamiokande Collaboration, Phys. Lett. B 433, 9 (1998). [22] J. Arafune and M. Fukugita, Phys. Rev. Lett. 59, 367 (1987). [23] W. Haxton, Phys. Rev. D 36, 2283 (1987). [24] M. E. Moorhead, in Neutrino Astrophysics, ed. M. Altmann et al. (Ringberg, Germany, 1997). [25] T. Totani and K. Sato, Int. J. Mod. Phys. D 5, 519 (1996). [26] A. Yu. Smirnov, D. N. Spergel, and J. N. Bahcall, Phys. Rev. D 49, 1389 (1994); B. Jegerlehner, F. Neubig, and G. Raffelt, Phys. Rev. D 54, 1194 (1996).
Measurements of the Absolute Branching Fractions of $B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$ B. Aubert    R. Barate    D. Boutigny    F. Couderc    Y. Karyotakis    J. P. Lees    V. Poireau    V. Tisserand    A. Zghiche Laboratoire de Physique des Particules, F-74941 Annecy-le-Vieux, France    E. Grauges IFAE, Universitat Autonoma de Barcelona, E-08193 Bellaterra, Barcelona, Spain    A. Palano    M. Pappagallo    A. Pompili Università di Bari, Dipartimento di Fisica and INFN, I-70126 Bari, Italy    J. C. Chen    N. D. Qi    G. Rong    P. Wang    Y. S. Zhu Institute of High Energy Physics, Beijing 100039, China    G. Eigen    I. Ofte    B. Stugu University of Bergen, Inst. of Physics, N-5007 Bergen, Norway    G. S. Abrams    M. Battaglia    A. B. Breon    D. N. Brown    J. Button-Shafer    R. N. Cahn    E. Charles    C. T. Day    M. S. Gill    A. V. Gritsan    Y. Groysman    R. G. Jacobsen    R. W. Kadel    J. Kadyk    L. T. Kerth    Yu. G. Kolomensky    G. Kukartsev    G. Lynch    L. M. Mir    P. J. Oddone    T. J. Orimoto    M. Pripstein    N. A. Roe    M. T. Ronan    W. A. Wenzel Lawrence Berkeley National Laboratory and University of California, Berkeley, California 94720, USA    M. Barrett    K. E. Ford    T. J. Harrison    A. J. Hart    C. M. Hawkes    S. E. Morgan    A. T. Watson University of Birmingham, Birmingham, B15 2TT, United Kingdom    M. Fritsch    K. Goetzen    T. Held    H. Koch    B. Lewandowski    M. Pelizaeus    K. Peters    T. Schroeder    M. Steinke Ruhr Universität Bochum, Institut für Experimentalphysik 1, D-44780 Bochum, Germany    J. T. Boyd    J. P. Burke    N. Chevalier    W. N. Cottingham University of Bristol, Bristol BS8 1TL, United Kingdom    T. Cuhadar-Donszelmann    B. G. Fulsom    C. Hearty    N. S. Knecht    T. S. Mattison    J. A. McKenna University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z1    A. Khan    P. Kyberd    M. Saleem    L. Teodorescu Brunel University, Uxbridge, Middlesex UB8 3PH, United Kingdom    A. E. Blinov    V. E. Blinov    A. D. Bukin    V. P. Druzhinin    V. B. Golubev    E. A. Kravchenko    A. P. Onuchin    S. I. Serednyakov    Yu. I. Skovpen    E. P. Solodov    A. N. Yushkov Budker Institute of Nuclear Physics, Novosibirsk 630090, Russia    D. Best    M. Bondioli    M. Bruinsma    M. Chao    S. Curry    I. Eschrich    D. Kirkby    A. J. Lankford    P. Lund    M. Mandelkern    R. K. Mommsen    W. Roethel    D. P. Stoker University of California at Irvine, Irvine, California 92697, USA    C. Buchanan    B. L. Hartfiel    A. J. R. Weinstein University of California at Los Angeles, Los Angeles, California 90024, USA    S. D. Foulkes    J. W. Gary    O. Long    B. C. Shen    K. Wang    L. Zhang University of California at Riverside, Riverside, California 92521, USA    D. del Re    H. K. Hadavand    E. J. Hill    D. B. MacFarlane    H. P. Paar    S. Rahatlou    V. Sharma University of California at San Diego, La Jolla, California 92093, USA    J. W. Berryhill    C. Campagnari    A. Cunha    B. Dahmes    T. M. Hong    M. A. Mazur    J. D. Richman    W. Verkerke University of California at Santa Barbara, Santa Barbara, California 93106, USA    T. W. Beck    A. M. Eisner    C. J. Flacco    C. A. Heusch    J. Kroseberg    W. S. Lockman    G. Nesom    T. Schalk    B. A. Schumm    A. Seiden    P. Spradlin    D. C. Williams    M. G. Wilson University of California at Santa Cruz, Institute for Particle Physics, Santa Cruz, California 95064, USA    J. Albert    E. Chen    G. P. Dubois-Felsmann    A. Dvoretskii    D. G. 
Hitlin    J. S. Minamora    I. Narsky    T. Piatenko    F. C. Porter    A. Ryd    A. Samuel California Institute of Technology, Pasadena, California 91125, USA    R. Andreassen    G. Mancinelli    B. T. Meadows    M. D. Sokoloff University of Cincinnati, Cincinnati, Ohio 45221, USA    F. Blanc    P. Bloom    S. Chen    W. T. Ford    J. F. Hirschauer    A. Kreisel    U. Nauenberg    A. Olivas    W. O. Ruddick    J. G. Smith    K. A. Ulmer    S. R. Wagner    J. Zhang University of Colorado, Boulder, Colorado 80309, USA    A. Chen    E. A. Eckhart    A. Soffer    W. H. Toki    R. J. Wilson    Q. Zeng Colorado State University, Fort Collins, Colorado 80523, USA    D. Altenburg    E. Feltresi    A. Hauke    B. Spaan Universität Dortmund, Institut für Physik, D-44221 Dortmund, Germany    T. Brandt    J. Brose    M. Dickopp    V. Klose    H. M. Lacker    R. Nogowski    S. Otto    A. Petzold    J. Schubert    K. R. Schubert    R. Schwierz    J. E. Sundermann Technische Universität Dresden, Institut für Kern- und Teilchenphysik, D-01062 Dresden, Germany    D. Bernard    G. R. Bonneaud    P. Grenier    S. Schrenk    Ch. Thiebaux    G. Vasileiadis    M. Verderi Ecole Polytechnique, LLR, F-91128 Palaiseau, France    D. J. Bard    P. J. Clark    W. Gradl    F. Muheim    S. Playfer    Y. Xie University of Edinburgh, Edinburgh EH9 3JZ, United Kingdom    M. Andreotti    V. Azzolini    D. Bettoni    C. Bozzi    R. Calabrese    G. Cibinetto    E. Luppi    M. Negrini    L. Piemontese Università di Ferrara, Dipartimento di Fisica and INFN, I-44100 Ferrara, Italy    F. Anulli    R. Baldini-Ferroli    A. Calcaterra    R. de Sangro    G. Finocchiaro    P. Patteri    I. M. Peruzzi Also with Università di Perugia, Dipartimento di Fisica, Perugia, Italy    M. Piccolo    A. Zallo Laboratori Nazionali di Frascati dell’INFN, I-00044 Frascati, Italy    A. Buzzo    R. Capra    R. Contri    M. Lo Vetere    M. Macri    M. R. Monge    S. Passaggio    C. Patrignani    E. Robutti    A. Santroni    S. Tosi Università di Genova, Dipartimento di Fisica and INFN, I-16146 Genova, Italy    G. Brandenburg    K. S. Chaisanguanthum    M. Morii    E. Won    J. Wu Harvard University, Cambridge, Massachusetts 02138, USA    R. S. Dubitzky    U. Langenegger    J. Marks    S. Schenk    U. Uwer Universität Heidelberg, Physikalisches Institut, Philosophenweg 12, D-69120 Heidelberg, Germany    G. Schott Universität Karlsruhe, Institut für Experimentelle Kernphysik, D-76021 Karlsruhe, Germany    W. Bhimji    D. A. Bowerman    P. D. Dauncey    U. Egede    R. L. Flack    J. R. Gaillard    J. A. Nash    M. B. Nikolich    W. Panduro Vazquez Imperial College London, London, SW7 2AZ, United Kingdom    X. Chai    M. J. Charles    W. F. Mader    U. Mallik    A. K. Mohapatra    V. Ziegler University of Iowa, Iowa City, Iowa 52242, USA    J. Cochran    H. B. Crawley    V. Eyges    W. T. Meyer    S. Prell    E. I. Rosenberg    A. E. Rubin    J. Yi Iowa State University, Ames, Iowa 50011-3160, USA    N. Arnaud    M. Davier    X. Giroux    G. Grosdidier    A. Höcker    F. Le Diberder    V. Lepeltier    A. M. Lutz    A. Oyanguren    T. C. Petersen    S. Plaszczynski    S. Rodier    P. Roudeau    M. H. Schune    A. Stocchi    G. Wormser Laboratoire de l’Accélérateur Linéaire, F-91898 Orsay, France    C. H. Cheng    D. J. Lange    M. C. Simani    D. M. Wright Lawrence Livermore National Laboratory, Livermore, California 94550, USA    A. J. Bevan    C. A. Chavez    I. J. Forster    J. R. Fry    E. Gabathuler    R. Gamet    K. A. George    D. E. 
Hutchcroft    R. J. Parry    D. J. Payne    K. C. Schofield    C. Touramanis University of Liverpool, Liverpool L69 72E, United Kingdom    C. M. Cormack    F. Di Lodovico    W. Menges    R. Sacco Queen Mary, University of London, E1 4NS, United Kingdom    C. L. Brown    G. Cowan    H. U. Flaecher    M. G. Green    D. A. Hopkins    P. S. Jackson    T. R. McMahon    S. Ricciardi    F. Salvatore University of London, Royal Holloway and Bedford New College, Egham, Surrey TW20 0EX, United Kingdom    D. Brown    C. L. Davis University of Louisville, Louisville, Kentucky 40292, USA    J. Allison    N. R. Barlow    R. J. Barlow    C. L. Edgar    M. C. Hodgkinson    M. P. Kelly    G. D. Lafferty    M. T. Naisbit    J. C. Williams University of Manchester, Manchester M13 9PL, United Kingdom    C. Chen    W. D. Hulsbergen    A. Jawahery    D. Kovalskyi    C. K. Lae    D. A. Roberts    G. Simi University of Maryland, College Park, Maryland 20742, USA    G. Blaylock    C. Dallapiccola    S. S. Hertzbach    R. Kofler    V. B. Koptchev    X. Li    T. B. Moore    S. Saremi    H. Staengle    S. Willocq University of Massachusetts, Amherst, Massachusetts 01003, USA    R. Cowan    K. Koeneke    G. Sciolla    S. J. Sekula    M. Spitznagel    F. Taylor    R. K. Yamamoto Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, Massachusetts 02139, USA    H. Kim    P. M. Patel    S. H. Robertson McGill University, Montréal, Quebec, Canada H3A 2T8    A. Lazzaro    V. Lombardo    F. Palombo Università di Milano, Dipartimento di Fisica and INFN, I-20133 Milano, Italy    J. M. Bauer    L. Cremaldi    V. Eschenburg    R. Godang    R. Kroeger    J. Reidy    D. A. Sanders    D. J. Summers    H. W. Zhao University of Mississippi, University, Mississippi 38677, USA    S. Brunet    D. Côté    P. Taras    B. Viaud Université de Montréal, Laboratoire René J. A. Lévesque, Montréal, Quebec, Canada H3C 3J7    H. Nicholson Mount Holyoke College, South Hadley, Massachusetts 01075, USA    N. Cavallo Also with Università della Basilicata, Potenza, Italy    G. De Nardo    F. Fabozzi Also with Università della Basilicata, Potenza, Italy    C. Gatto    L. Lista    D. Monorchio    P. Paolucci    D. Piccolo    C. Sciacca Università di Napoli Federico II, Dipartimento di Scienze Fisiche and INFN, I-80126, Napoli, Italy    M. Baak    H. Bulten    G. Raven    H. L. Snoek    L. Wilden NIKHEF, National Institute for Nuclear Physics and High Energy Physics, NL-1009 DB Amsterdam, The Netherlands    C. P. Jessop    J. M. LoSecco University of Notre Dame, Notre Dame, Indiana 46556, USA    T. Allmendinger    G. Benelli    K. K. Gan    K. Honscheid    D. Hufnagel    P. D. Jackson    H. Kagan    R. Kass    T. Pulliam    A. M. Rahimi    R. Ter-Antonyan    Q. K. Wong Ohio State University, Columbus, Ohio 43210, USA    J. Brau    R. Frey    O. Igonkina    M. Lu    C. T. Potter    N. B. Sinev    D. Strom    J. Strube    E. Torrence University of Oregon, Eugene, Oregon 97403, USA    F. Galeazzi    M. Margoni    M. Morandin    M. Posocco    M. Rotondo    F. Simonetto    R. Stroili    C. Voci Università di Padova, Dipartimento di Fisica and INFN, I-35131 Padova, Italy    M. Benayoun    H. Briand    J. Chauveau    P. David    L. Del Buono    Ch. de la Vaissière    O. Hamon    M. J. J. John    Ph. Leruste    J. Malclès    J. Ocariz    L. Roos    G. Therin Universités Paris VI et VII, Laboratoire de Physique Nucléaire et de Hautes Energies, F-75252 Paris, France    P. K. Behera    L. Gladney    Q. H. Guo    J. 
Panetta University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA    M. Biasini    R. Covarelli    S. Pacetti    M. Pioppi Università di Perugia, Dipartimento di Fisica and INFN, I-06100 Perugia, Italy    C. Angelini    G. Batignani    S. Bettarini    F. Bucci    G. Calderini    M. Carpinelli    R. Cenci    F. Forti    M. A. Giorgi    A. Lusiani    G. Marchiori    M. Morganti    N. Neri    E. Paoloni    M. Rama    G. Rizzo    J. Walsh Università di Pisa, Dipartimento di Fisica, Scuola Normale Superiore and INFN, I-56127 Pisa, Italy    M. Haire    D. Judd    D. E. Wagoner Prairie View A&M University, Prairie View, Texas 77446, USA    J. Biesiada    N. Danielson    P. Elmer    Y. P. Lau    C. Lu    J. Olsen    A. J. S. Smith    A. V. Telnov Princeton University, Princeton, New Jersey 08544, USA    F. Bellini    G. Cavoto    A. D’Orazio    E. Di Marco    R. Faccini    F. Ferrarotto    F. Ferroni    M. Gaspero    L. Li Gioi    M. A. Mazzoni    S. Morganti    G. Piredda    F. Polci    F. Safai Tehrani    C. Voena Università di Roma La Sapienza, Dipartimento di Fisica and INFN, I-00185 Roma, Italy    H. Schröder    G. Wagner    R. Waldi Universität Rostock, D-18051 Rostock, Germany    T. Adye    N. De Groot    B. Franek    G. P. Gopal    E. O. Olaiya    F. F. Wilson Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX, United Kingdom    R. Aleksan    S. Emery    A. Gaidot    S. F. Ganzhur    G. Graziani    G. Hamel de Monchenault    W. Kozanecki    M. Legendre    G. W. London    B. Mayer    G. Vasseur    Ch. Yèche    M. Zito DSM/Dapnia, CEA/Saclay, F-91191 Gif-sur-Yvette, France    M. V. Purohit    A. W. Weidemann    J. R. Wilson    F. X. Yumiceva University of South Carolina, Columbia, South Carolina 29208, USA    T. Abe    M. T. Allen    D. Aston    R. Bartoldus    N. Berger    A. M. Boyarski    O. L. Buchmueller    R. Claus    J. P. Coleman    M. R. Convery    M. Cristinziani    J. C. Dingfelder    D. Dong    J. Dorfan    D. Dujmic    W. Dunwoodie    S. Fan    R. C. Field    T. Glanzman    S. J. Gowdy    T. Hadig    V. Halyo    C. Hast    T. Hryn’ova    W. R. Innes    M. H. Kelsey    P. Kim    M. L. Kocian    D. W. G. S. Leith    J. Libby    S. Luitz    V. Luth    H. L. Lynch    H. Marsiske    R. Messner    D. R. Muller    C. P. O’Grady    V. E. Ozcan    A. Perazzo    M. Perl    B. N. Ratcliff    A. Roodman    A. A. Salnikov    R. H. Schindler    J. Schwiening    A. Snyder    J. Stelzer    D. Su    M. K. Sullivan    K. Suzuki    S. K. Swain    J. M. Thompson    J. Va’vra    N. van Bakel    M. Weaver    W. J. Wisniewski    M. Wittgen    D. H. Wright    A. K. Yarritu    K. Yi    C. C. Young Stanford Linear Accelerator Center, Stanford, California 94309, USA    P. R. Burchat    A. J. Edwards    S. A. Majewski    B. A. Petersen    C. Roat Stanford University, Stanford, California 94305-4060, USA    M. Ahmed    S. Ahmed    M. S. Alam    R. Bula    J. A. Ernst    M. A. Saeed    F. R. Wappler    S. B. Zain State University of New York, Albany, New York 12222, USA    W. Bugg    M. Krishnamurthy    S. M. Spanier University of Tennessee, Knoxville, Tennessee 37996, USA    R. Eckmann    J. L. Ritchie    A. Satpathy    R. F. Schwitters University of Texas at Austin, Austin, Texas 78712, USA    J. M. Izen    I. Kitayama    X. C. Lou    S. Ye University of Texas at Dallas, Richardson, Texas 75083, USA    F. Bianchi    M. Bona    F. Gallo    D. Gamba Università di Torino, Dipartimento di Fisica Sperimentale and INFN, I-10125 Torino, Italy    M. Bomben    L. Bosisio    C. Cartaro    F. 
Cossutti    G. Della Ricca    S. Dittongo    S. Grancagnolo    L. Lanceri    L. Vitale Università di Trieste, Dipartimento di Fisica and INFN, I-34127 Trieste, Italy    F. Martinez-Vidal IFIC, Universitat de Valencia-CSIC, E-46071 Valencia, Spain    R. S. Panvini Vanderbilt University, Nashville, Tennessee 37235, USA    Sw. Banerjee    B. Bhuyan    C. M. Brown    D. Fortin    K. Hamano    R. Kowalewski    J. M. Roney    R. J. Sobie University of Victoria, Victoria, British Columbia, Canada V8W 3P6    J. J. Back    P. F. Harrison    T. E. Latham    G. B. Mohanty Department of Physics, University of Warwick, Coventry CV4 7AL, United Kingdom    H. R. Band    X. Chen    B. Cheng    S. Dasu    M. Datta    A. M. Eichenbaum    K. T. Flood    M. Graham    J. J. Hollar    J. R. Johnson    P. E. Kutter    H. Li    R. Liu    B. Mellado    A. Mihalyi    Y. Pan    M. Pierini    R. Prepost    P. Tan    S. L. Wu    Z. Yu University of Wisconsin, Madison, Wisconsin 53706, USA    H. Neal Yale University, New Haven, Connecticut 06511, USA (December 5, 2020) Abstract We study the two-body decays of $B^{\pm}$ mesons to $K^{\pm}$ and a charmonium state, $X_{c\bar{c}}$, in a sample of 210.5 fb${}^{-1}$ of data from the BABAR  experiment. We perform measurements of absolute branching fractions $\cal B$($B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$) using a missing mass technique, and report several new or improved results. In particular, the upper limit $\cal B$$(B^{\pm}\rightarrow K^{\pm}X(3872))<3.2\times 10^{-4}$ at 90% CL and the inferred lower limit $\cal B$$(X(3872)\rightarrow J/\psi\pi^{+}\pi^{-})>4.2\%$ will help in understanding the nature of the recently discovered $X(3872)$. pacs: 13.25.Hw, 14.40.Gx ††preprint: BABAR Analysis Document #1205, Version 06r2 PRL draft for Final Notice BABAR-PUB-05/041 SLAC-PUB-11545 hep-ex/0510070 ††thanks: Deceased The BABAR Collaboration Several exclusive decays of $B$ mesons of the form $B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$ (where $X_{c\bar{c}}$ is one of the charmonium states $\eta_{c}$ , $J/\psi$ , $\chi_{c0}$ , $\chi_{c1}$ , $\eta_{c}^{\prime}$ , $\psi^{\prime}$ , $\psi^{\prime\prime}$ ), have been observed by reconstructing the charmonium state from its decay to some known final state, $f$ Aubert:2002hc ; Choi:2002na . In principle, such $B$ decays provide a direct probe of charmonium properties since the phase space is large for all known states and all should be produced roughly equally, in the absence of a strong selection rule Quigg:2004nv . However with this technique only the product of the two branching fractions $\cal B$$(B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}})\times$$\cal B$$(X_{c\bar{c}}\rightarrow f)$ is measured, thereby reducing the precision of $\cal B$($B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$) when the daughter branching fraction is poorly known. We describe here a complementary approach, based on the measurement of the kaon momentum spectrum in the $B$ center-of-mass frame, where two-body decays can be identified by their characteristic monochromatic line, allowing an absolute determination of $\cal B$$(B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}})$. Knowledge of the $B$ center-of-mass system is obtained by exclusive reconstruction of the other $B$ meson from a $\mathchar 28935\relax(4S)$ decay. 
In addition to obtaining new information on known charmonium states, this method is used to search for the $X(3872)$ state, recently observed in $B^{\pm}\rightarrow K^{\pm}X(3872)$ decays by Belle Choi:2003ue and BABAR Aubert:2004ns , in the subsequent decay $X(3872)\rightarrow J/\psi\pi^{+}\pi^{-}$. The same method allows a search for charged partners of the $X(3872)$ in $B^{0}$ decays, independent of the $X(3872)^{\pm}$ decay mode. The nature of the $X(3872)$ resonance is still unclear; different interpretations have been proposed, but more experimental data will be needed to discriminate between them. For this analysis we use a data sample of 210.5 fb${}^{-1}$ integrated luminosity, corresponding to $231.8\times 10^{6}$ $B\bar{B}$ pairs. The data have been collected with the BABAR detector at the SLAC PEP-II asymmetric-energy collider, where 9 GeV electrons and 3.1 GeV positrons collide at a center-of-mass energy of 10.58 GeV, corresponding to the mass of the $\Upsilon(4S)$ resonance. A detailed description of the BABAR detector can be found in Aubert:2001tu . Charged tracks are reconstructed with a 5-layer silicon vertex tracker (SVT) and a 40-layer drift chamber (DCH), located in a 1.5 T magnetic field generated by a superconducting solenoid. The energy of photons and electrons is measured with an electromagnetic calorimeter made up of CsI(Tl) crystals. Charged hadron identification is performed using ionization measurements in the SVT and DCH and an internally reflecting ring-imaging Cherenkov detector. The instrumented flux return of the solenoid is used to identify muons. The analysis is performed on a sample of events where a $B$ meson is fully reconstructed ($B_{recon}$). For these events, the momentum of the other $B$ ($B_{signal}$) can be calculated from the momentum of $B_{recon}$ and the beam parameters. We select events with a $K^{\pm}$ not used for the reconstruction of $B_{recon}$ and calculate its momentum ($p_{K}$) in the $B_{signal}$ center-of-mass system. $B_{recon}$ mesons are reconstructed in their decays to exclusive $D^{(*)}H$ final states, where $H$ is one of several combinations of $\pi^{\pm}$, $K^{\pm}$, $\pi^{0}$ and $K_{S}^{0}$ hadrons; a detailed description of the method can be found in Aubert:2003zw . The number of $B^{\pm}$ events in the data is determined with a fit to the distribution of the beam-energy-substituted mass $m_{ES}=\sqrt{E_{CM}^{2}/4-p_{B}^{2}}$, where $E_{CM}$ is the total center-of-mass energy, determined from the beam parameters, and $p_{B}$ is the measured momentum of $B_{recon}$ in the center-of-mass frame. The fit function is the sum of a Crystal Ball function crystalball describing the signal and an ARGUS function Albrecht:1993fr for each background component ($e^{+}e^{-}\rightarrow q\bar{q}$, where $q$ is $u$, $d$, $s$ or $c$, or misreconstructed $B$s), the relative weights of which are obtained from a Monte Carlo simulation (MC), while the total normalization factor is determined from the data. A total of $378580\pm 1110$ events with a fully reconstructed $B^{\pm}$ is obtained. Fifteen variables related to the $B_{recon}$ decay characteristics, its production kinematics, the topology of the full event, and the angular correlation between $B_{recon}$ and the rest of the event are used in a neural network (NN1) to reduce the large background, mainly due to non-$B$ events. The network has 80% signal efficiency while rejecting 90% of the background. The $m_{\rm ES}$ distribution after this selection is shown in Fig. 1. 
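As an illustration of the $m_{ES}$ fit ingredients described above, the sketch below (ours, not the analysis code) implements the standard Crystal Ball and ARGUS line shapes and the beam-energy-substituted mass; the numerical parameters are placeholders chosen only to produce a plausible-looking shape.

```python
import numpy as np

def m_es(E_cm, p_B):
    """Beam-energy-substituted mass m_ES = sqrt(E_CM^2/4 - p_B^2), all in GeV."""
    return np.sqrt(E_cm**2 / 4.0 - p_B**2)

def crystal_ball(x, mean, sigma, alpha, n):
    """Standard (unnormalized) Crystal Ball shape: Gaussian core with a power-law low-side tail."""
    t = (x - mean) / sigma
    A = (n / abs(alpha))**n * np.exp(-0.5 * alpha**2)
    B = n / abs(alpha) - abs(alpha)
    core = np.exp(-0.5 * t**2)
    tail = A * np.power(np.maximum(B - t, 1e-12), -n)   # guard against invalid powers
    return np.where(t > -abs(alpha), core, tail)

def argus(m, m0, c):
    """Standard (unnormalized) ARGUS shape for the combinatorial background below the endpoint m0."""
    z = 1.0 - (m / m0)**2
    return np.where((m < m0) & (z > 0.0), m * np.sqrt(np.clip(z, 0.0, None)) * np.exp(c * z), 0.0)

print(m_es(10.58, 0.3))   # ~5.28 GeV/c^2 for a typical B momentum of ~0.3 GeV/c

# Placeholder parameters, chosen only for illustration; in the analysis the relative
# background weights come from MC and the overall normalization from the data.
m = np.linspace(5.20, 5.29, 500)                       # GeV/c^2
signal = crystal_ball(m, mean=5.279, sigma=0.003, alpha=1.5, n=4.0)
background = argus(m, m0=5.289, c=-20.0)
model = 0.6 * signal / signal.max() + 0.4 * background / background.max()
```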
Only events with $5.275<m_{ES}<5.285$ GeV/$c^{2}$ are used in the analysis. We now consider only tracks not associated with $B_{recon}$. Most $K^{\pm}$ produced in $B^{\pm}$ decays originate from $D$ mesons, and their spectrum, although broad, peaks at low $p_{K}$. In the $B^{\pm}$ rest frame, these $K^{\pm}$ are embedded in a “minijet” of $D$ decay products, while signal $K^{\pm}$ recoil against a massive (3–4 GeV/$c^{2}$) state and therefore tend to be more isolated. A second neural network (NN2) rejects background from secondary $K^{\pm}$ by using fifteen input variables describing the energy and track multiplicities measured in the $K^{\pm}$ hemisphere, the sphericity of the recoil system, and the angular correlations between the $K^{\pm}$ and the recoil system. These variables have been chosen to be independent of the particular decay topology of the recoil system. Since the topology of the event changes with the recoil mass, we have considered separately two recoil mass regions in the training of this neural network: the “high-mass” region, corresponding to 1.0$<p_{K}<$1.5 GeV/$c$, and the “low-mass” region, corresponding to 1.5$<p_{K}<$2.0 GeV/$c$. The signal training sample is a $B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$ MC simulation, while the background sample consists of simulated $K^{\pm}$ from $D$ meson decays in the same momentum range. The chosen cuts on the NN2 outputs correspond to 85% signal efficiency; the background rejection factor varies between 2.5 in the $X(3872)$ and $\psi^{\prime}$ region and 1.5 in the $J/\psi$ region. The selection criteria are optimized for MC signal significance with the high-mass region blinded. The kaon momentum distribution shows a series of peaks due to the two-body decays $B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$ corresponding to the different $X_{c\bar{c}}$ masses, superimposed on a smooth spectrum due to $K^{\pm}$ coming from multi-body $B^{\pm}$ decays or from non-$B^{\pm}$ background. The mass of the $X_{c\bar{c}}$ state ($m_{X}$) can be calculated directly from $p_{K}$ using $m_{X}=\sqrt{m_{B}^{2}+m_{K}^{2}-2E_{K}m_{B}}$, where $m_{B}$ and $m_{K}$ are the $B^{\pm}$ and $K^{\pm}$ masses and $E_{K}$ is the $K^{\pm}$ energy. The resonance width $\Gamma_{X}$ can be obtained from the Breit-Wigner width $\Gamma_{K}$ of the peak in the $p_{K}$ spectrum, obtained after deconvolution with the momentum resolution function, using $\Gamma_{X}=\Gamma_{K}\beta_{K}m_{B}/m_{X}$, where $\beta_{K}=p_{K}/E_{K}$. We determine the number of $B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$ events ($N_{X}$) from a fit to the $p_{K}$ distribution. The branching fraction for the decay channel is calculated as $${\cal B}(B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}})=\frac{N_{X}}{\epsilon_{X}\cdot N_{B}},$$ where $\epsilon_{X}$ is the efficiency determined from the MC and $N_{B}$ is the number of $B^{\pm}$ mesons in the sample. An alternative method, which we use to improve the branching fraction measurement in the case of the $\eta_{c}$ , is to normalize to the channel $B^{\pm}\rightarrow K^{\pm}J/\psi$, which is well measured in the literature Eidelman:2004wy , according to $${\cal B}(B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}})=\frac{N_{X}}{N_{J/\psi}}\cdot\frac{\epsilon_{J/\psi}}{\epsilon_{X}}\cdot{\cal B}(B^{\pm}\rightarrow K^{\pm}J/\psi).$$ In this relative measurement, the systematic errors that are common to both resonances cancel in the ratio. The two methods are combined to extract ${\cal B}(B^{\pm}\rightarrow K^{\pm}\eta_{c})$, taking into account the correlations between them. 
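For concreteness, the kinematic relations and the absolute branching-fraction formula above can be evaluated as in the following sketch (ours; the mass values are rounded and used only for illustration).

```python
import math

M_B, M_K = 5.279, 0.494   # B+- and K+- masses in GeV/c^2 (rounded values, for illustration only)

def missing_mass(p_K):
    """m_X = sqrt(m_B^2 + m_K^2 - 2 E_K m_B) from the kaon momentum p_K (GeV/c) in the B rest frame."""
    E_K = math.sqrt(p_K**2 + M_K**2)
    return math.sqrt(M_B**2 + M_K**2 - 2.0 * E_K * M_B)

def natural_width(gamma_K, p_K):
    """Gamma_X = Gamma_K * beta_K * m_B / m_X, with Gamma_K the deconvolved peak width in p_K."""
    E_K = math.sqrt(p_K**2 + M_K**2)
    return gamma_K * (p_K / E_K) * M_B / missing_mass(p_K)

def branching_fraction(N_X, eff_X, N_B):
    """Absolute method: B(B -> K X) = N_X / (eff_X * N_B)."""
    return N_X / (eff_X * N_B)

print(missing_mass(1.683))   # ~3.10 GeV/c^2, i.e. p_K = 1.683 GeV/c maps back to the J/psi mass
```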
We fit the $p_{K}$ spectrum using an unbinned maximum likelihood method. The background is well modeled by a third-degree polynomial and each signal is a Breit-Wigner function folded with a resolution function. The masses and widths of the $\eta_{c}$ and $\eta_{c}^{\prime}$ mesons are left free; all others are fixed to the values of Ref. Eidelman:2004wy . The resolution function has two parts: a Gaussian with $\sigma$ varying from 6 MeV/$c$ at $p_{K}\simeq 1.1$ GeV/$c$ to 12 MeV/$c$ at $p_{K}\simeq 1.7$ GeV/$c$ describes the 72.5% of the signal where $B_{recon}$ is correctly reconstructed; if $B_{recon}$ is incorrect, but has $m_{\rm ES}$ within our range, the $p_{K}$ resolution is a bifurcated Gaussian with $\sigma=$ 78 and 52 MeV/$c$ on the left- and right-hand sides of the peak, respectively. The spectrum in the low-mass region is expected to exhibit two peaks, at $p_{K}=1.683$ GeV/$c$ corresponding to the $J/\psi$ , and at $p_{K}=1.754$ GeV/$c$ for the $\eta_{c}$ meson. These two peaks are clearly seen in Fig. 2(a); both have a significance of $\sim 7\sigma$. The number of events under each peak obtained from the fit is $N(J/\psi)=259\pm 41$ and $N(\eta_{c})=273\pm 43$. The spectrum in the high-mass region is fitted with a background and seven signal functions, corresponding to the following states: $\psi^{\prime}$, $\chi_{c0}$, $\chi_{c1}$, $\chi_{c2}$, $\psi^{\prime\prime}$, $\eta_{c}^{\prime}$ and $X(3872)$. The resulting fit is shown in Fig. 2(b), with the yields given in Table 1. The $h_{c}$ charmonium state lies near the $\chi_{c1}$, and it is difficult to distinguish the peaks from these two decays. A fit including the $h_{c}$ yields a number of $h_{c}$ events consistent with zero, and a fit performed with free $\chi_{c1}$ mass and width gives values consistent with a narrow $\chi_{c1}$; we therefore have no evidence for $h_{c}$ production. Several sources of systematic error affecting these measurements have been evaluated. The relative errors on absolute measurements are the same for all states; many of these cancel partially in relative measurements, and all are summarized in Table 2. “$B$ counting” refers to uncertainties in the fit parametrization used to determine the number of fully reconstructed $B^{\pm}_{recon}$. It is one of the largest errors in absolute measurements, and cancels in ratios. The mass scale is verified to a precision of 1.5 MeV/$c$ in $p_{K}$ by floating the masses of the well-measured $J/\psi$ , $\chi_{c1}$ and $\psi^{\prime}$ peaks; we assign a systematic error corresponding to this shift. We also consider variations in the background and signal model parametrizations, which partially cancel in the case of ratios. Errors in the $K^{\pm}$ track reconstruction and identification efficiency are evaluated by comparing data and MC control samples. The systematic error in the NN1 and NN2 selections is evaluated by comparing efficiencies and distributions in data and MC, and by studying the efficiency variation with $p_{K}$. We verified that the NN2 selection does not depend on the visible energy or multiplicity of the recoil part of the $B$ meson decay. Adding the contributions in quadrature, the total relative error on an absolute measurement is 9.0%. The total is reduced to 3.3% for the relative measurement of $J/\psi$ and $\eta_{c}$ , and to 5.9% for states in the high-mass region relative to the $J/\psi$ . 
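The signal line shape used in such a fit can be sketched numerically as follows (an illustration only, with a hand-picked grid and widths): a Breit-Wigner in $p_{K}$ is convolved on a uniform grid with the two-component resolution model quoted above (72.5% core Gaussian plus a bifurcated Gaussian with $\sigma=78$ and 52 MeV/$c$).

```python
import numpy as np

def breit_wigner(p, p0, gamma):
    """Breit-Wigner in the kaon momentum, centered at p0 with width gamma (GeV/c)."""
    return 1.0 / ((p - p0)**2 + 0.25 * gamma**2)

def resolution(dp, sigma_core, f_core=0.725, sigma_l=0.078, sigma_r=0.052):
    """Two-component resolution: core Gaussian plus bifurcated Gaussian (all widths in GeV/c)."""
    core = np.exp(-0.5 * (dp / sigma_core)**2) / (sigma_core * np.sqrt(2.0 * np.pi))
    sig = np.where(dp < 0.0, sigma_l, sigma_r)
    bifur = 2.0 * np.exp(-0.5 * (dp / sig)**2) / ((sigma_l + sigma_r) * np.sqrt(2.0 * np.pi))
    return f_core * core + (1.0 - f_core) * bifur

# Smear a narrow peak at p_K = 1.683 GeV/c (J/psi) on a uniform grid (step chosen for illustration).
p = np.arange(1.3, 2.0, 0.001)
raw = breit_wigner(p, 1.683, 0.001)
res = resolution(p - p.mean(), sigma_core=0.012)
smeared = np.convolve(raw, res, mode="same")
smeared /= smeared.sum() * (p[1] - p[0])   # normalize to unit area on the grid
```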
For the extraction of relative branching fractions, an additional 4% error, labeled (ext) in the following, comes from the present uncertainty of $\cal B$$(B^{\pm}\rightarrow K^{\pm}J/\psi)=(10.0\pm 0.4)\times 10^{-4}$ Eidelman:2004wy . In the high-mass region, clear signals are found for $\chi_{c1}$ and $\psi^{\prime}$ (with significance 6.0 and 3.2$\sigma$ respectively), an excess of events is present for $\eta_{c}^{\prime}$ and $\psi^{\prime\prime}$  Abe:2003zv , while no signal is found for $\chi_{c0}$ , $\chi_{c2}$ and $X(3872)$. The branching fractions and upper limits are summarized in Table 1. In the low-mass region, our $J/\psi$ measurement is consistent with the world average. From the $\eta_{c}$ and $J/\psi$ yields and the reference branching fraction we can derive the result with the relative measurement method ${\cal B}(B^{\pm}\rightarrow K^{\pm}\eta_{c})_{rel}=(10.6\pm 2.3({\rm stat})\pm 0% .4({\rm sys})\pm 0.4({\rm ext}))\times 10^{-4}$. We combine this result with the absolute measurement of Table 1, taking the correlated errors into account, to obtain ${\cal B}(B^{\pm}\rightarrow K^{\pm}\eta_{c})=(8.7\pm 1.5)\times 10^{-4}$. We obtain from our fits the $\eta_{c}$ and $\eta_{c}^{\prime}$ masses and widths and find $m_{\eta_{c}}=2982\pm 5$ MeV/$c^{2}$, $\Gamma_{\eta_{c}}<43$ MeV and $m_{\eta_{c}^{\prime}}=3639\pm 7$ MeV/$c^{2}$, $\Gamma_{\eta_{c}^{\prime}}<23$ MeV, where the width limits are both at 90% CL. Taking $\cal B$$(B^{\pm}\rightarrow K^{\pm}X(3872))<3.2\times 10^{-4}$, and using an average of the Belle Choi:2003ue and BABAR  Aubert:2004ns measurements of $\cal B$($B^{\pm}\rightarrow K^{\pm}X(3872)$)$\times$$\cal B$($X(3872)\rightarrow J/\psi\pi^{+}\pi^{-}$) we set a lower limit $\cal B$($X(3872)\rightarrow J/\psi\pi^{+}\pi^{-}$)$>4.2\%$ at 90% CL. This branching fraction, for which there are not yet any predictions, is sensitive to the distribution of charm quarks inside the $X(3872)$. A search for charged partners of the $X(3872)$ is performed by examining $K^{\pm}$ recoiling from a sample of 245.6k reconstructed $B^{0}$ decays. No signal is seen and we find $\cal B$$(B^{0}\rightarrow K^{\pm}\ X(3872)^{\mp})<5\times 10^{-4}$ at 90% CL. We combine our $\cal B$($B^{\pm}\rightarrow K^{\pm}\eta_{c}$) with a previous BABAR  measurement of $\cal B$($B^{\pm}\rightarrow K^{\pm}\eta_{c}$)$\times$$\cal B$($\eta_{c}\rightarrow K\bar{K}\pi$) Aubert:2004gc to obtain $\cal B$($\eta_{c}\rightarrow K\bar{K}\pi$)=$(8.5\pm 1.8)\%$, significantly improving the precision of the world average Eidelman:2004wy . Since this branching fraction is used as a reference for all $\eta_{c}$ yield measurements, our result will lead to more precise $\eta_{c}$ partial widths and more stringent comparisons with theoretical models. For example, from an average of $\cal B$($J/\psi\rightarrow\gamma\eta_{c}$)$\times$$\cal B$($\eta_{c}\rightarrow K\bar{K}\pi$) measured by Mark-III Baltrusaitis:1985mr , DM2 Bisello:1990re and BES Bai:2003tr , we obtain $\cal B$($J/\psi\rightarrow\gamma\eta_{c}$)=(0.79$\pm$0.20)%, and using the value $\Gamma$($\eta_{c}\rightarrow\gamma\gamma$)$\times$$\cal B$($\eta_{c}\rightarrow K\bar{K}\pi$)=$0.48\pm 0.06$ keV Eidelman:2004wy we calculate $\Gamma$($\eta_{c}\rightarrow\gamma\gamma$)=(5.6$\pm$1.4) keV. Both results are more precise than the world average Eidelman:2004wy . Similarly, we obtain $\cal B$($\eta_{c}^{\prime}$ $\rightarrow K\bar{K}\pi)$=(8$\pm$5)% and $\Gamma$($\eta_{c}^{\prime}$ $\rightarrow\gamma\gamma$)=(0.9$\pm$0.5)keV. 
In conclusion, a novel technique is used to measure directly the absolute branching fractions of the various charmonium states $X_{c\bar{c}}$ in two-body decays $B^{\pm}\rightarrow K^{\pm}X_{c\bar{c}}$ (Table 1). The results for $X_{c\bar{c}}=\eta_{c},J/\psi,\psi^{\prime}$ are in agreement with previous measurements, and the $\eta_{c}$ result significantly improves the present world average. Upper limits are set for $\chi_{c0}$ and $\chi_{c2}$ , confirming factorization suppression Meng:2005 . Measurements of $B^{\pm}\rightarrow K^{\pm}\eta_{c}^{\prime}$ and $B^{\pm}\rightarrow K^{\pm}\psi^{\prime\prime}$ branching fractions are reported, although with poor significance. Upper limits are given for $X(3872)$ and for production of a possible charged partner in $B^{0}$ decays. We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and kind hospitality. This work is supported by DOE and NSF (USA), NSERC (Canada), IHEP (China), CEA and CNRS-IN2P3 (France), BMBF and DFG (Germany), INFN (Italy), FOM (The Netherlands), NFR (Norway), MIST (Russia), and PPARC (United Kingdom). Individuals have received support from CONACyT (Mexico), A. P. Sloan Foundation, Research Corporation, and Alexander von Humboldt Foundation. References (1) B. Aubert et al. [BABAR Collaboration], Phys. Rev. D 67, 032002 (2003) [arXiv:hep-ex/0207097]. (2) S. K. Choi et al. [Belle collaboration], Phys. Rev. Lett.  89, 102001 (2002) [Erratum-ibid.  89, 129901 (2002)] [arXiv:hep-ex/0206002]. (3) C. Quigg, arXiv:hep-ph/0403187. and references therein. (4) S. K. Choi et al. [Belle Collaboration], Phys. Rev. Lett.  91, 262001 (2003) [arXiv:hep-ex/0309032]. (5) B. Aubert et al. [BABAR Collaboration], arXiv:hep-ex/0406022. (6) T. Barnes and S. Godfrey, Phys. Rev. D 69 (2004) 054008 [arXiv:hep-ph/0311162]. E. J. Eichten, K. Lane and C. Quigg, Phys. Rev. D 69, 094019 (2004) [arXiv:hep-ph/0401210]. E. S. Swanson, Phys. Lett. B 588, 189 (2004) [arXiv:hep-ph/0311229]. N. A. Tornqvist, Phys. Lett. B 590, 209 (2004) [arXiv:hep-ph/0402237]. L. Maiani, F. Piccinini, A. D. Polosa and V. Riquer, Phys. Rev. D 71, 014028 (2005) [arXiv:hep-ph/0412098]. (7) B. Aubert et al. [BABAR Collaboration], Nucl. Instrum. Meth. A 479, 1 (2002) [arXiv:hep-ex/0105044]. (8) B. Aubert et al. [BABAR Collaboration], Phys. Rev. Lett.  92, 071802 (2004) [arXiv:hep-ex/0307062]. (9) The Crystal Ball function is a gaussian with a small power-law term added to the left, used to describe the mass spectrum in exclusive $B$ decays. T. Skwarnicki et al. [Crystal Ball Collaboration], DESY F31-86-02, 1986 (unpublished). (10) The Argus function is commonly used to describe continuum background in $B$ mass spectra. H. Albrecht et al. [ARGUS Collaboration], Phys. Lett. B 316, 608 (1993). (11) S. Eidelman et al. [Particle Data Group], Phys. Lett. B 592, 1 (2004). (12) K. Abe et al. [Belle Collaboration], Phys. Rev. Lett.  93, 051803 (2004) [arXiv:hep-ex/0307061]. (13) B. Aubert et al. [BABAR Collaboration], Phys. Rev. D 70, 011101 (2004). [arXiv:hep-ex/0403007]. (14) R. M. Baltrusaitis et al. [Mark-III Collaboration], Phys. Rev. D 33, 629 (1986). (15) D. Bisello et al. [DM2 collaboration], Nucl. Phys. B 350, 1 (1991). (16) J. Z. Bai et al. [BES Collaboration], Phys. Lett. B 578, 16 (2004) [arXiv:hep-ex/0308073]. (17) C. Meng, Y. J. Gao and K. T. 
Chao, arXiv:hep-ph/0502240, arXiv:hep-ph/0506222.
Efficient non-parametric n-body force fields from machine learning Aldo Glielmo aldo.glielmo@kcl.ac.uk Department of Physics, King’s College London, Strand, London WC2R 2LS, United Kingdom    Claudio Zeni claudio.zeni@kcl.ac.uk Department of Physics, King’s College London, Strand, London WC2R 2LS, United Kingdom    Alessandro De Vita Department of Physics, King’s College London, Strand, London WC2R 2LS, United Kingdom Dipartimento di Ingegneria e Architettura, Università di Trieste, via A. Valerio 2, I-34127 Trieste, Italy Abstract We provide a definition and explicit expressions for $n$-body Gaussian Process (GP) kernels which can learn any interatomic interaction occurring in a physical system, up to $n$-body contributions, for any value of $n$. The series is complete, as it can be shown that the “universal approximator" squared exponential kernel can be written as a sum of $n$-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the $n$-body kernels can be “mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first non-trivial (3-body) kernel of the series, and show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results open the way to using novel force models (“M-FFs") that are computationally as fast as their corresponding standard parametrised $n$-body force fields, while retaining the non-parametric character, the ease of training and validation, and the accuracy of the best recently proposed machine learning potentials. I Introduction Since their conception, first-principles molecular dynamics (MD) simulations Car and Parrinello (1985) based on density functional theory (DFT) Hohenberg and Kohn (1964); Kohn and Sham (1965) have proven extremely useful to investigate complex physical processes that require quantum accuracy. These simulations are computationally expensive, and thus still typically limited to hundreds of atoms and the picosecond timescale. For larger systems that are non-uniform and thus intractable using periodic boundary conditions, multiscale embedding (“QM/MM”) approaches can sometimes be used successfully. This is possible if full quantum accuracy is only needed in a limited (QM) zone of the system, while a simpler molecular mechanics (MM) description suffices everywhere else. Very often, however, target problems require minimal model system sizes and simulation times so large that the calculations must be exclusively based on classical force fields i.e., force models that use the position of atoms as the only explicitly represented degrees of freedom. In the remainder of this introductory section we briefly review the relative strengths and weaknesses of standard parametrized (P-FFs) and machine learning force fields (ML-FFs). We then consider how accurate P-FFs are hard to develop but eventually fully exploit useful knowledge on the systems, while GP-based ML-FFs offer a general mathematical framework for handling training and validation, but are significantly slower (Section I A). 
These shortcomings motivate an analysis of how prior knowledge such as symmetry has so far been incorporated in GP kernels (Section I B); this analysis points to features still missing from ML kernels that are commonplace in the more standard, highly efficient P-FFs based on truncated $n$-body expansions (Section I C). This suggests the possibility of defining a series of $n$-body GP kernels (Section II B), providing a scheme to construct them (Section II C and D) and, after the best value of $n$ for the given target system has been identified with appropriate testing (Section II E), exploiting their dimensionally-reduced feature spaces to massively boost the execution speed of force prediction (Section III). I.1 Parametrized and machine learning force fields Producing accurate and fully transferable force fields is a remarkably difficult task. The traditional way to do this involves adjusting the parameters of carefully chosen analytic functions in the hope of matching extended reference data sets obtained from experiments or quantum calculations Stillinger and Weber (1985); Tersoff (1988). The descriptive restrictiveness of the parametric functions used is both a drawback and a strength of this methodology. The main difficulty is that developing a good parametric function requires a great deal of chemical intuition and patient effort, guided by trial and error steps with no guarantee of success Brenner (2000). However, for systems and processes where the approach is fruitful, the development effort is amply rewarded by the opportunity to provide extremely fast and accurate force models Mishin (2004); van Duin et al. (2001). The identified functional forms will in these cases contain valuable knowledge of the target system, encoded in a compact formulation that still accurately captures the relevant physics. Such knowledge is furthermore often transferable to novel (though similar) systems as a “prior” piece of information, i.e., it constitutes a good working hypothesis on how these systems will behave. When QM data on the novel system become available, they can simply be used to fine-tune the parameters of the functional form to a new set of best-fit values that maximise prediction accuracy. Following a different approach, “non-parametric” ML force fields can be constructed, whose dependence on the atomic positions is not constrained to a particular analytic form. An implementation and tests exploring the feasibility of ML to describe atomic interactions can be found, e.g., in the pioneering work of Skinner and Broughton Skinner and Broughton (1995), who proposed to use ML models to reproduce first-principles potential energy surfaces. More recent works implementing this general idea have been based on Neural Networks Behler and Parrinello (2007) and Gaussian Process (GP) regression Bartók et al. (2010). Current work aims at making these learning algorithms both faster and more accurate Li et al. (2015); Glielmo et al. (2017); Botu and Ramprasad (2015); Ferré et al. (2016). As processing power and data communication bandwidth increase, and the cost of data storage decreases, modeling based on ML and direct inference promises to become an increasingly attractive option, compared with more traditional classical force field approaches. However, although ML schemes are general and have been shown to be remarkably accurate interpolators for specific systems, they have so far not become as widespread as might have been expected.
This is mainly because “standard” classical potentials are still orders of magnitude faster than their ML counterpart Boes et al. (2016). Moreover, ML-FFs also involve a more complex mathematical and algorithmic machinery than the traditional compact functional forms of P-FFs, whose arguments are physically descriptive features that remain easier to visualize and interpret. I.2 Prior knowledge and GP kernels These shortcomings provide motivation for the present work. The high computational cost of many ML models is a direct consequence of the general inverse relation between the sought flexibility and the measured speed of any algorithm capable of learning. Highly flexible ML algorithms by definition assume very little or no prior knowledge of the target systems. In a Bayesian context, this involves using a general prior kernel, typically aspiring to preserve the full universal approximator properties of e.g., the square exponential kernel Williams and Rasmussen (2006); Bishop (2006). The price of such a kernel choice is that the ML algorithm will need large training databases Kearns and Vazirani (1994), slowing down computations as the prediction time grows linearly with the database size. Large database sizes are not, however, unavoidable, and any data-intensive and fully flexible scheme to potential energy fitting is suboptimal by definition, as it exploits no prior knowledge of the system. This completely “agnostic” approach is at odds with the general lesson from classical potential development, indicating that it is essential for efficiency to incorporate in the force prediction model as much prior knowledge of the target system as can be made available. In this respect, GP kernels can be tailored to bring some form of prior knowledge to the algorithm. It is for example possible to include any symmetry information of the system. This can be done by using descriptors that are independent of rotations, translations and permutations Li et al. (2015); Rupp et al. (2015). Alternatively, one can construct scalar-valued GP kernels that are made invariant under rotation (see e.g., Bartók et al. (2013); Glielmo et al. (2017)) or matrix-valued GP vectors made covariant under rotation (Glielmo et al. (2017), an idea that can be extended to higher-order tensors Bereau et al. (2017); Grisafi et al. (2017)). Invariance or covariance are in these cases obtained starting from a non-invariant representation by appropriate integration over the $SO(3)$ rotation group Glielmo et al. (2017); Bartók et al. (2013). Symmetry aside, progress can be made by attempting to use kernels based on simple, descriptive features corresponding to low-dimensional feature spaces. Taking inspiration from parametrized force fields, these descriptors could e.g., be chosen to be interatomic distances taken singularly or in triplets, yielding kernels based on 2- or 3-body interactions Glielmo et al. (2017); Szlachta et al. (2014); Huo and Rupp (2017). Since low-dimensional feature spaces allow efficient learning (convergence is reached using small databases), to the extent that simple descriptors capture the correct physics, the GP process will be a relatively fast, while still very accurate, interpolator. I.3 Scope of the present work There are, however, two important aspects that have not as yet been fully explored while trying to develop efficient kernels based on dimensionally reduced feature spaces. Both aspects will be addressed in the present work. 
First, a systematic classification of rotationally invariant (or covariant, if matrix-valued) kernels, representative of the feature spaces corresponding to $n$-body interactions, is to date still missing. Namely, no definition or general recipe has been proposed for constructing $n$-body kernels, or for identifying the actual value (or effective interval of values) of $n$ associated with already available kernels. This would be clearly useful, however, as the discussion above strongly suggests that the kernel corresponding to the lowest value of $n$ compatible with the physics of the target system will be the most informationally efficient one for carrying out predictions: striking the right balance between speed and accuracy. Second, for any ML approach based on a GP kernel and a fixed database, the GP predictions for any target configuration are also fixed once and for all. For an $n$-body kernel, these predictions do not need, however, to be explicitly carried out as sums over the training dataset, as they could be approximated with arbitrary precision by “mapping” the GP prediction onto a new representation based on the underlying $n$-body feature space. We note that this approximation step would make the final prediction algorithm independent of the database size, and thus in principle as fast as any classical $n$-body potential based on functional forms, while still parameter free. The remainder of this work explores these two issues, and it is structured as follows. In the next Section II, after introducing the terminology and the notation (II A), we provide a definition of an $n$-body kernel (II B) and propose a systematic way of constructing $n$-body kernels of any order $n$, showing how previously proposed approaches can be reinterpreted within this scheme (II C and D). We furthermore show, by extensive testing on a range of realistic materials, how the optimal interaction order can be chosen as the lowest $n$ compatible with the required accuracy and the available computational power (II E). In the following Section III we describe how the predictions of $n$-body GP kernels can be recast (mapped) with arbitrary accuracy into very fast non-parametric force fields based on machine learning (M-FFs) which fully retain the $n$-body character of the GP process they were derived from. The procedure is carried out explicitly for a 3-body kernel, and we find that evaluating atomic forces is orders of magnitude faster than the corresponding GP calculation. II n-body expansions with n-body kernels II.1 Notation and terminology GP-based potentials are usually constructed by assigning an energy $\varepsilon$ to a given atomic configuration $\rho$, typically including a central atom and all its neighbors up to a suitable cutoff radius. The existence of a corresponding local energy function $\varepsilon(\rho)$ is generally assumed, in order to provide a total energy expression and guarantee a linear scaling of the predictions with the total number of atoms in the system. Within GP regression this function is calculated from a database $\mathcal{D}=\{\rho_{d},\varepsilon_{d},\mathbf{f}_{d}\}_{d=1}^{N}$ of reference data, typically obtained by quantum mechanical simulations, and usually consisting of a set of $N$ atomic configurations $\{\rho_{d}\}$ together with their corresponding energies $\{\varepsilon_{d}\}$ and/or forces $\{\mathbf{f}_{d}\}$.
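As a concrete, purely illustrative picture of this setup, the short Python sketch below assembles local environments and a reference database and performs the regression step described here, with the prediction written as the linear combination of kernel values formalised in Eq. (1) just below. The helper names, the placeholder Gaussian-overlap kernel, and the noise level are assumptions introduced only for this example; this is not the implementation used in this work.

```python
# Illustrative sketch of GP-based local-energy regression (not the authors' code).
import numpy as np

def local_environment(positions, center_idx, cutoff):
    """Relative positions of all atoms lying within `cutoff` of atom `center_idx`."""
    rel = positions - positions[center_idx]
    dist = np.linalg.norm(rel, axis=1)
    mask = (dist > 1e-12) & (dist < cutoff)   # exclude the central atom itself
    return rel[mask]

def kernel(rho_a, rho_b, sigma=0.5):
    """Placeholder scalar kernel: Gaussian overlap of the two environments
    (a 2-body base kernel in the sense of Eq. (4))."""
    diff = rho_a[:, None, :] - rho_b[None, :, :]
    return np.exp(-np.sum(diff**2, axis=-1) / (4.0 * sigma**2)).sum()

def gp_fit(train_configs, train_energies, noise=1e-8):
    """Regression coefficients alpha_d obtained from (K + noise*I) alpha = epsilon."""
    N = len(train_configs)
    K = np.array([[kernel(a, b) for b in train_configs] for a in train_configs])
    return np.linalg.solve(K + noise * np.eye(N), np.asarray(train_energies))

def gp_predict(rho, train_configs, alpha):
    """Local energy of a target environment as a linear combination of kernel
    values over the database (the form given in Eq. (1) below)."""
    return sum(kernel(rho, rho_d) * a_d for rho_d, a_d in zip(train_configs, alpha))
```

Note that each prediction in this direct form requires a sum over all $N$ database entries; removing this dependence on the database size is precisely the goal of the mapping procedure discussed in Section III.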
It is worth noting here that although there is no well defined local atomic energy in a reference quantum simulation, one can always use gradient information (atomic forces, which are well defined local physical quantities) to machine-learn a potential energy function. This can be done straightforwardly using derivative kernels (cf. e.g., Ref. Williams and Rasmussen (2006), Section 9.4) to learn and predict forces. Alternatively, one can learn forces directly without an intermediate energy expression, as done in Refs. Li et al. (2015); Botu and Ramprasad (2014) or more recently in Ref. Glielmo et al. (2017). A necessary condition for any of these approaches to produce energy-conserving force fields (i.e., fields that make zero work on any closed trajectory loop) is that the database is constructed once and for all, and never successively updated. After training on the given fixed database, the GP prediction on a target configuration $\rho$ consists of a linear combination of the kernel function values measuring the similarity of the target configuration with each database entry: $$\varepsilon(\rho)=\sum_{d=1}^{N}k(\rho,\rho_{d})\alpha_{d},$$ (1) where the coefficients $\alpha_{d}$ are obtained by means of inversion of the covariance matrix Williams and Rasmussen (2006) and can be shown to minimise the regularised quadratic error between GP predictions and reference calculations. II.2 Definition of an n-body kernel Classical interatomic potentials are often characterized by the number of atoms (“bodies”) they let interact simultaneously. To translate this concept into the realm of GP regression, we assume that the target configuration $\rho(\{\mathbf{r}_{i}\})$ represents the local atomic environment of an atom fixed at the origin of a Cartesian reference frame, expressed in terms of the relative positions $\mathbf{r}_{i}$ of the surrounding atoms. We define the order of a kernel $k_{n}(\rho,\rho^{\prime})$ as the smallest integer $n$ for which the following property holds true: $$\frac{\partial^{n}k_{n}(\rho,\rho^{\prime})}{\partial\mathbf{r}_{i_{1}}\cdots% \partial\mathbf{r}_{i_{n}}}=0\hskip 10.0pt\hskip 10.0pt\forall\mathbf{r}_{i_{1% }}\neq\mathbf{r}_{i_{2}}\neq\dots\neq\mathbf{r}_{i_{n}},$$ (2) where $\mathbf{r}_{i_{1}},\dots,\mathbf{r}_{i_{n}}$ are the positions of any choice of a set of $n$ different surrounding atoms. By virtue of linearity, the local energy in Eq. (1) will also satisfy the same property if $k_{n}$ does. Thus, Eq. (2) implies that the central atom in a local configuration interacts with up to $n-1$ other atoms simultaneously, making the interaction energy term $n$-body. For instance, using a 2-body kernel, the force on the central atom due to atom $\mathbf{r}_{j}$ will not depend on the position of any other atom $\mathbf{r}_{l\neq j}$ belonging to the target configuration $\rho(\{\mathbf{r}_{i}\})$. Eq. (2) can be used directly to check through either numeric or symbolic differentiation if a given kernel is of order $n$, a fact which might be far from obvious from its analytic form, depending on how the kernel is built. II.3 Building n-body kernels I: SO(3) integration Following a standard route Ferré et al. (2015); Bartók et al. (2013); Glielmo et al. 
(2017), we begin by representing each local atomic configuration as a sum of Gaussian functions $\mathcal{N}$ with a given variance $\sigma^{2}$, centered on the $M$ atoms of the configuration: $$\rho(\mathbf{r},\{\mathbf{r}_{i}\})=\sum_{i=1}^{M}\mathcal{N}(\mathbf{r}\mid% \mathbf{r}_{i},\sigma^{2}),$$ (3) where $\mathbf{r}$ and $\{\mathbf{r}_{i}\}_{i=1}^{M}$ are position vectors relative to the central atom of the configuration. This representation guarantees by construction invariance with respect to translations and permutations of atoms (here assumed to be of a single chemical species). As described in Glielmo et al. (2017), a covariant 2-body force kernel can be constructed from the non-invariant scalar (“base”) kernel obtained as a dot product overlap integral of the two configurations: $$\displaystyle k_{2}(\rho,\rho^{\prime})$$ $$\displaystyle=\int d\mathbf{r}\,\rho(\mathbf{r})\rho^{\prime}(\mathbf{r})$$ $$\displaystyle=L\sum_{\begin{subarray}{c}i\in\rho,j\in\rho^{\prime}\end{% subarray}}\mathrm{e}^{-(\mathbf{r}_{i}-\mathbf{r}_{j}^{\prime})^{2}/4\sigma^{2% }},$$ (4) where $L$ is an unessential constant factor, omitted for convenience from now on. That (4) is a 2-body kernel consistent with the definition of Eq. (2) can be checked straightforwardly by explicit differentiation (see Appendix A). Its $2$-body structure is also readable from the fact that $k_{2}$ is a sum of contributions comparing pairs of atoms in the two configurations, the first pair located at the two ends of vector $\mathbf{r}_{i}$ in the target configuration $\rho$, and consisting of the central atom and atom $i$, and the second pair similarly represented by the vector $\mathbf{r}_{j}^{\prime}$ in the database configuration $\rho^{\prime}$. A rotation-covariant matrix-valued force kernel can at this point be constructed by Haar integration Nachbin (1965); Schulz-Mirbach (1994) as an integral over the $SO(3)$ manifold Glielmo et al. (2017): $$\displaystyle\mathbf{K}_{2}^{s}(\rho,\rho^{\prime})$$ $$\displaystyle=\int_{SO(3)}d\mathcal{R}\,\mathbf{R}\,k_{2}(\rho,\mathcal{R}\rho% ^{\prime}).$$ (5) This kernel can be used to infer forces on atoms using a GP regression vector formula analogous to Eq. (1) (see Ref. Glielmo et al. (2017)). These forces belong to a 2-body force field purely as a consequence of the base kernel property in Eq. (2). It is interesting to notice that there is no use or need for an intermediate energy expression to construct this 2-body force field, which is automatically energy-conserving. Higher order $n$-body base kernels can be constructed as finite powers of the 2-body base kernel (4): $$k_{n}(\rho,\rho^{\prime})=k_{2}(\rho,\rho^{\prime})^{n-1}$$ (6) where the $n$-body property (Eq. (2)) can once more be checked by explicit differentiation (see Appendix A). Furthermore, taking the exponential of the kernel in Eq. 
(4) gives rise to a fully many-body base kernel, as all powers of $k_{2}$ are contained in the exponential formal series expansion: $$\displaystyle k_{MB}(\rho,\rho^{\prime})$$ $$\displaystyle=\mathrm{e}^{k_{2}(\rho,\rho^{\prime})/\theta^{2}}$$ $$\displaystyle=1+\frac{1}{\theta^{2}}k_{2}+\frac{1}{2!\theta^{4}}k_{3}+\dots.$$ (7) One can furthermore check that the simple exponential many-body kernel $k_{MB}$ defined above is, up to normalisation, equivalent to the squared exponential kernel Williams and Rasmussen (2006) on the natural distance induced by the dot product kernel $k_{2}(\rho,\rho^{\prime})$: $$\displaystyle\mathrm{e}^{-d^{2}(\rho,\rho^{\prime})/2\theta^{2}}=$$ $$\displaystyle N(\rho)N(\rho^{\prime})k_{MB}(\rho,\rho^{\prime})$$ (8) $$\displaystyle d^{2}(\rho,\rho^{\prime})=$$ $$\displaystyle k_{2}(\rho,\rho)+k_{2}(\rho^{\prime},\rho^{\prime})-2k_{2}(\rho,\rho^{\prime}).$$ (9) To test these ideas, we next test the accuracy of these kernels in learning the interactions occurring in a simple 1D model consisting of $n^{\prime}$ particles interacting via an ad-hoc $n^{\prime}$-body potential (see Appendix B). We first let the particles interact to generate a configuration database, and then attempt to machine-learn these interactions using the kernels just described. Figure 1 illustrates the average prediction errors on the local energies of this system incurred by the GP regression based on four different kernels as a function of the interaction order $n^{\prime}$. It is clear from the graph that a force field in which all $n^{\prime}$ particles interact simultaneously can only be learned accurately with an ($n\geq n^{\prime}$)-body kernel (6), or with the many-body exponential kernel (7) which contains all interaction orders. To construct $n$-body kernels useful for applications to real 3D systems we need to include rotational symmetry by averaging over the rotation group. For our present purposes, it is sufficient to discuss the case of rotation-invariant $n$-body scalar energy kernels, for which the integral (formally a transformation integration Haasdonk and Burkhardt (2007)) is readily obtained from Eq. (5) by simply dropping the $\mathbf{R}$ matrix in the integrand: $$\displaystyle k_{n}^{s}(\rho,\rho^{\prime})$$ $$\displaystyle=\int_{SO(3)}d\mathcal{R}\>k_{n}(\rho,\mathcal{R}\rho^{\prime}).$$ (10) The integration could be carried out approximately, for instance using appropriate functional expansions as proposed in Ref. Bartók et al. (2013). Alternatively, one can exploit the Gaussian nature of the configuration expansion (3) and use an analytically exact formula, as done further below. The resulting symmetrized $n$-body kernel $k_{n}^{s}$ will learn faster than its non-symmetrized counterpart $k_{n}$, as the rotational degrees of freedom have been integrated out. This is because a non-symmetrized $n$-body kernel ($k_{n}$) must learn functions of $3n-3$ variables (translations are taken into account by the local representation based on relative positions in Eq. (3)). After integration, the new kernel $k_{n}^{s}$ defines a smaller and more physically-based space of functions of $3n-6$ variables, which is the rotation-invariant functional domain of $n$ interacting particles. The symmetrization integral (10) can be written down for the many-body base kernel $k_{MB}$ (Eq.
(7)), to define a new many-body kernel $k^{s}_{MB}$ invariant under all physical symmetries: $$\displaystyle k_{MB}^{s}(\rho,\rho^{\prime})$$ $$\displaystyle=\int_{SO(3)}d\mathcal{R}\>k_{MB}(\rho,\mathcal{R}\rho^{\prime}).$$ (11) By virtue of the universal approximation theorem Hornik (1993); Williams and Rasmussen (2006) this kernel would be able to learn arbitrary physical interactions with arbitrary accuracy, if provided with sufficient data. Unfortunately, the exponential kernel (7) has to date resisted all attempts to carry out the analytic integration over rotations (11), leaving as the only open options numerical integration, or discrete summation over a relevant point group of the system Glielmo et al. (2017). On the other hand, the analytic integration of 2- or 3-body kernels has been successfully carried out in different ways. This may use integration over rotations evaluated by an exact analytic expression Glielmo et al. (2017) or approximated using a suitably truncated exact expansion Bartók et al. (2013). In particular, it is readily seen that the widely used SOAP integral over rotations Bartók et al. (2013); Szlachta et al. (2014); Thompson et al. (2015); De et al. (2016); Rowe et al. (2017) is, in fact, a symmetrized 3-body kernel which becomes a higher order $n$-body kernel if raised to integer powers $\zeta\geq 2$ (see next subsection). Integrating $k_{n}$ over $SO(3)$ becomes significantly challenging for $n>3$. However, an analytic expression for the general case can be derived. To see this, write the $n$-body base kernel Eq. (6) as an explicit product of $(n-1)$ 2-body kernels. The Haar integral (10) can then be written as $$\displaystyle k_{n}^{s}(\rho,\rho^{\prime})=\!\!\!\!\!\!\sum_{\begin{subarray}% {c}\mathbf{i}=(i_{1},\dots,i_{n-1})\in\rho\\ \mathbf{j}=(j_{1},\dots,j_{n-1})\in\rho^{\prime}\end{subarray}}\!\!\!\!\!\!% \tilde{k}_{\mathbf{i},\mathbf{j}}$$ (12) $$\displaystyle\tilde{k}_{\mathbf{i},\mathbf{j}}=$$ $$\displaystyle\int d\mathcal{R}\,\mathrm{e}^{-\frac{\|\mathbf{r}_{i_{1}}-% \mathbf{R}\mathbf{r}_{j_{1}}^{\prime}\|^{2}}{4\sigma^{2}}}\dots\mathrm{e}^{-% \frac{\|\mathbf{r}_{i_{n-1}}-\mathbf{R}\mathbf{r}_{j_{n-1}}^{\prime}\|^{2}}{4% \sigma^{2}}}$$ (13) where now for each of the two configurations $\rho,\rho^{\prime}$, the sum runs over all $n$-plets of atoms that include the central atom (whose indices $i_{0}$ and $j_{0}$ are thus omitted). 
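Before turning to the analytic evaluation of the per-$n$-plet integral given next, a brute-force numerical estimate of the rotation average in Eq. (10) can serve as a reference implementation against which any closed-form result can be checked. The sketch below is illustrative only (not the authors' code): it implements the base kernels of Eqs. (4) and (6) and averages them over random rotations, with the kernel width and sample count chosen arbitrarily for this example.

```python
# Monte Carlo reference for the rotation-averaged n-body kernel of Eq. (10).
import numpy as np

def k2_base(rho_a, rho_b, sigma=0.5):
    """2-body base kernel of Eq. (4): sum of Gaussian overlaps over atom pairs."""
    diff = rho_a[:, None, :] - rho_b[None, :, :]
    return np.exp(-np.sum(diff**2, axis=-1) / (4.0 * sigma**2)).sum()

def kn_base(rho_a, rho_b, n, sigma=0.5):
    """n-body base kernel of Eq. (6): the (n-1)-th power of the 2-body kernel."""
    return k2_base(rho_a, rho_b, sigma) ** (n - 1)

def random_rotation(rng):
    """Haar-uniform rotation matrix: QR of a Gaussian matrix with the usual
    sign fix, then the determinant forced to +1."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q = -q
    return q

def kn_symmetrized_mc(rho_a, rho_b, n, sigma=0.5, n_samples=2000, seed=0):
    """Monte Carlo estimate of the rotation-averaged kernel of Eq. (10)."""
    rng = np.random.default_rng(seed)
    return np.mean([kn_base(rho_a, rho_b @ random_rotation(rng).T, n, sigma)
                    for _ in range(n_samples)])
```

Such a Monte Carlo average converges slowly and is far too expensive for production use; it is useful here only as an independent check of symmetrized kernel values such as those produced by the analytic expressions that follow.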
Expanding the exponents as $(\mathbf{r}_{i}-\mathbf{R}\mathbf{r}_{j}^{\prime})^{2}=r_{i}^{2}+r_{j}^{\prime 2}-2\text{\rm{Tr}}(\mathbf{R}\mathbf{r}_{j}^{\prime}\mathbf{r}_{i}^{\rm{T}})$ allows one to extract from the integral (13) a rotation-independent constant $\mathcal{C}_{\mathbf{i},\mathbf{j}}$, and to express the sum of the rotation-dependent scalar products as the trace of a matrix product: $$\displaystyle\tilde{k}_{\mathbf{i},\mathbf{j}}$$ $$\displaystyle=\mathcal{C}_{\mathbf{i},\mathbf{j}}\mathcal{I}_{\mathbf{i},\mathbf{j}}$$ (14) $$\displaystyle\mathcal{C}_{\mathbf{i},\mathbf{j}}$$ $$\displaystyle=\mathrm{e}^{-(r_{i_{1}}^{2}+r_{j_{1}}^{\prime 2}+\dots+r_{i_{n-1}}^{2}+r_{j_{n-1}}^{\prime 2})/4\sigma^{2}}$$ (15) $$\displaystyle\mathcal{I}_{\mathbf{i},\mathbf{j}}$$ $$\displaystyle=\int d\mathcal{R}\,\mathrm{e}^{\rm{Tr}(\mathbf{R}\mathbf{M}_{\mathbf{i},\mathbf{j}})}$$ (16) where the matrix $\mathbf{M}_{\mathbf{i},\mathbf{j}}$ is the sum of the outer products of the ordered vector couples in the two configurations: $\mathbf{M}_{\mathbf{i},\mathbf{j}}=(\mathbf{r}_{j_{1}}^{\prime}\mathbf{r}_{i_{1}}^{\rm{T}}+\dots+\mathbf{r}_{j_{n-1}}^{\prime}\mathbf{r}_{i_{n-1}}^{\rm{T}})/2\sigma^{2}$. The integral (16) occurs in the context of multivariate statistics as the generating function of the non-central Wishart distribution Anderson (1946). As shown in James (1955), it can be expressed as a power series in the symmetric polynomials ($\alpha_{1}=\sum_{i=1}^{3}\mu_{i},\alpha_{2}=\sum_{i<j}^{3}\mu_{i}\mu_{j},\alpha_{3}=\mu_{1}\mu_{2}\mu_{3}$) of the eigenvalues $\{\mu_{i}\}_{i=1}^{3}$ of the symmetric matrix $\mathbf{M}_{\mathbf{i},\mathbf{j}}^{\rm{T}}\mathbf{M}_{\mathbf{i},\mathbf{j}}$: $$\displaystyle\mathcal{I}_{\mathbf{i},\mathbf{j}}$$ $$\displaystyle=\sum_{p_{1},p_{2},p_{3}}A_{p_{1}p_{2}p_{3}}\alpha_{1}^{p_{1}}\alpha_{2}^{p_{2}}\alpha_{3}^{p_{3}}$$ (17) $$\displaystyle A_{p_{1}p_{2}p_{3}}$$ $$\displaystyle=\frac{\pi\,2^{-(1+2p_{1}+4p_{2}+6p_{3})}(p_{1}+2p_{2}+4p_{3})!}{p_{1}!p_{2}!p_{3}!\Gamma(\frac{3}{2}+p_{1}+2p_{2}+3p_{3})\Gamma(1+p_{2}+2p_{3})}$$ $$\displaystyle\quad\times\frac{1}{\Gamma(\frac{1}{2}+p_{3})(p_{1}+2p_{2}+3p_{3})!}.$$ (18) Remarkably, in this result (whose exactness is checked numerically in Figure 2) the integral over rotations does not depend on the order $n$ of the base kernel, once the matrix $\mathbf{M}_{\mathbf{i},\mathbf{j}}$ is computed. This is not the case for previous approaches to integrating over rotations Bartók et al. (2013); Glielmo et al. (2017), which need to be reformulated with increasing and eventually prohibitive difficulty each time the order $n$ needs to be increased. However, the reference expressions in Eqs. (14-18) are still relatively complex and computationally heavy functions of the atomic positions. Such complexity can be largely avoided if equally accurate kernels can be built from physical intuition, at least for low orders $n$, as discussed in the next section. II.4 Building n-body kernels II: n-body feature spaces and uniqueness issues The practical effect of the Haar integration (10) is the elimination of the three spurious rotational degrees of freedom. The same result can always be achieved by selecting a group of symmetry-invariant degrees of freedom for the system, typically including the distances and/or bond angles found in local atomic environments, or simple functions of these. Appropriate symmetrized kernels can then simply be obtained by defining a similarity measure directly on these invariant quantities Li et al.
(2015); Rupp et al. (2012); Szlachta et al. (2014). To construct symmetry invariant $n$-body kernels with $n=2$ and $n=3$ we can choose these degrees of freedom to be just interparticle distances: $$\displaystyle\ k_{2}^{s}(\rho,\rho^{\prime})$$ $$\displaystyle=\sum_{\begin{subarray}{c}i\in\rho\\ j\in\rho^{\prime}\end{subarray}}\tilde{k}_{2}(r_{i},r_{j})$$ (19) $$\displaystyle k_{3}^{s}(\rho,\rho^{\prime})$$ $$\displaystyle=\sum_{\begin{subarray}{c}i_{1},i_{2}\in\rho\\ j_{1},j_{2}\in\rho^{\prime}\end{subarray}}\tilde{k}_{3}((r_{i_{1}},r_{i_{2}},r% _{i_{1}i_{2}}),(r_{j_{1}}^{\prime},r_{j_{2}}^{\prime},r_{j_{1}j_{2}}^{\prime}))$$ (20) where the $\tilde{k}$ are kernel functions that directly specify the correlation of distances, or triplets of distances, found within the two configurations. Since these kernels learn functions of low-dimensional spaces, their exact analytic form is not essential for performance, as any fully non-linear function $\tilde{k}$ will give equivalent converged results in the rapidly reached large-database limit. This equivalence can be neatly observed in Figure 3, which reports the performance of 2- and 3-body kernels built either directly over the set of distances (Eqs. (19) and (20)) or via the exact Haar integral (Eqs. (12-18)). As the test system is crystalline Silicon, 3-body kernels are better performing. However, since convergence of the 2- and 3-body feature space is quickly achieved (at about $N=50$ and $N=100$ respectively), there is no significant performance difference between $SO(3)$-integrated $n$-body kernels and physically motivated ones. Consequently, for low interaction orders, simple and computationally fast kernels like the ones in Eqs. (19, 20) are always preferable to more complex (and heavier) alternatives obtained via integration over rotations (e.g., the one defined by Eqs. (12-18) or those found in Refs. Bartók et al. (2013); Glielmo et al. (2017). We note at this point that Eq. (19) can be generalized to construct a symmetric $n$-body kernel $$k_{n}^{s}(\rho,\rho^{\prime})=\sum_{\begin{subarray}{c}i_{1},\dots,i_{n-1}\in% \rho\\ j_{1},\dots,j_{n-1}\in\rho^{\prime}\end{subarray}}\,\tilde{k}_{n}(\mathbf{q}_{% i_{1},\dots,i_{n-1}},\mathbf{q}_{j_{1},\dots,j_{n-1}}^{\prime}),$$ (21) where the components of the feature vectors $\mathbf{q}$ are the chosen symmetry-invariant degrees of freedom describing the $n$-plet of atoms. The $\mathbf{q}$ feature vectors are required to be $(3n-6)$ dimensional for all $n$, except for $n=2$, where they become scalars. In practice, for $n>3$ selecting a suitable set of invariant degrees of freedom is not trivial. For instance, for $n=4$ the set of six unordered distances between four particles do not specify their relative positions unambiguously, while for $n>4$ the number of distances associated with $n$ atoms exceeds the target feature space dimension $3n-6$. Meanwhile, the computational cost of evaluating the full sum in Eq. (21) very quickly becomes prohibitively large as the number of elements in the sum grows exponentially with $n$. The order of an already symmetric $n$-body kernel can however be augmented with no computational overhead by generating a derived kernel through simple exponentiation to an integer power, at the cost of losing the uniqueness Bartók et al. (2013); Huang and von Lilienfeld (2016); von Lilienfeld et al. (2015) of the representation. This can be easily understood by means of an example (graphically illustrated in Figure 4). 
Let us consider the 2-body symmetric kernel $k_{2}^{s}$ (Eq. (19)) which learns a function of just a single distance, and therefore treats the $r_{i}$ distances between the central atom and its neighbors independently. Its square is the kernel $$k_{3}^{\neg u}(\rho,\rho^{\prime})=\sum_{\begin{subarray}{c}i_{1},i_{2}\in\rho% \\ j_{1},j_{2}\in\rho^{\prime}\end{subarray}}\tilde{k}_{2}(r_{i_{1}},r_{j_{1}}^{% \prime})\tilde{k}_{2}(r_{i_{2}},r_{j_{2}}^{\prime})$$ (22) which will be able to learn functions of two distances $r_{i_{1}},r_{i_{2}}$ from the central atom of the target configuration $\rho$ (see Figure 4) and thus will be a $3$-body kernel in the sense of Eq. (2). However, this kernel cannot resolve angular information, as rotating the atoms in $\rho$ around the origin by independent, arbitrary angles will yield identical predictions. Extending this line of reasoning, it is easy to show that squaring a symmetric $3$-body kernel yields a kernel that can capture interactions up to 5-body, although again non-uniquely. This has often been done in practice by squaring the SOAP integral Deringer and Csányi (2017); Rowe et al. (2017). In general, raising a 3-body “input” kernel to an arbitrary integer power $p$ yields an $n$-body output kernel of order $2p+1$, $k_{n=2p+1}^{\neg u}=k_{3}^{s}(\rho,\rho^{\prime}){}^{p}$, that is non-unique. Substituting 3 with any $n^{\prime}$ for the order of the symmetrized input kernel will similarly generate a $k_{n}^{\neg u}=k_{n^{\prime}}^{s}(\rho,\rho^{\prime}){}^{p}$ kernel of order $n=(n^{\prime}-1)p+1$. None of the kernels obtained as finite powers of some symmetric lower-order kernels is a many-body one (they will all satisfy Eq. (2) for some finite $n$). However, an attractive immediate generalization consists of substituting any squaring or cubing with full exponentiation. For instance, exponentiating a symmetrized 3-body kernel we obtain the non-unique many-body kernel $k_{MB}^{\neg u}=\exp[k_{3}^{s}(\rho,\rho^{\prime})]$. It is clear from the infinite expansion in Eq. (7) that this kernel is a true many-body one in the sense of Eq. (2), and is also fully symmetric. As is also the case for all finite-power kernels, its computational cost will be defined by the order $n^{\prime}$ of the input kernel (3 in the present example) as the sum in Eq. (21) only runs on the atomic $n^{\prime}$-plets (here, triplets) in $\rho$ and $\rho^{\prime}$. While still non-unique, this new kernel is not a priori known to neglect any order of interaction that might occur in a physical system and thus be encoded in a reference QM training database. To summarise, we provided a definition for a $n$-body kernel, and proposed a general formalism for building $n$-body kernels by exact Haar integration over rotations. We then defined a class of simpler kernels based on rotation invariant features that are also $n$-body according to the previous definition. As both approaches become computationally expensive for high values of $n$, we proposed a method to build $n$-body kernels as powers of lower-order input $n^{\prime}$-body kernels, with no additional computational overhead. While this comes at the cost of sacrificing the unicity property of the descriptor, the procedure suggests how to build, by full exponentiation, a many-body symmetric kernel. For many applications, however, using a finite-order kernel will provide the best option. II.5 Optimal n-kernel choice In general, choosing a higher order $n$-body kernel will improve accuracy at the expense of speed. 
The optimal kernel choice for a given application will correspond to the best tradeoff between computational cost and representation power, which will depend on the physical system investigated. The properties of some of the kernels discussed above are summarized in Table 1, while their performance is tested on a range of materials in Figure 5. The figure reveals some general trends. 2-body kernels can be trained very quickly, as good convergence can be attained already with $\sim$100 training configurations. The 2-body representation is a very good descriptor for a few materials under specific conditions, while its overall accuracy is ultimately limited. This will yield, e.g., excellent force accuracy for a close-packed bulk system like crystalline Nickel (inset (a)), and reasonable accuracy for a defected $\alpha$-Fe system whose bcc structure is however metastable if just pair potentials are used (inset (b)). Accuracy improves dramatically once angular information is acquired by training 3-body kernels. These can accurately describe forces acting on iron atoms in the bulk $\alpha$-Fe system containing a vacancy (inset (b)) and those acting on carbon atoms in both diamond and graphite (inset (c)). However, 3-body GPs need larger training databases. Also, atoms in the standard environments contained in the database participate in many more triplets than simple bonds, which will make 3-body kernels slower than 2-body ones for making predictions by GP regression. Both problems extend, and worsen, for higher values of $n$, as summing over all database configurations and all feature $n$-plets in each database configuration will make GP predictions progressively slower. However, complex materials where high-order interactions presumably play a significant role should be expected to be well described by an ML-FF based on a many-body kernel. This is verified here in the case of amorphous Silicon (inset (d)). Figure 5 (b) also shows the performance of some non-unique kernels. As discussed above, these are options to increase the order of an input kernel while avoiding the need to sum over the correspondingly higher-order $n$-plets. Our tests indicate that the ML-FFs generated by non-unique kernels sometimes improve appreciably on the input kernels’ performance: e.g., the error incurred by the 2-body kernel of Eq. (19) in the Fe-vacancy system is higher than that associated with its square, the non-unique 3-body kernel of Eq. (22). Unfortunately, but not surprisingly, the improvement can in other cases be modest or nearly absent, as exemplified by comparing the errors associated with the 3-body kernel and its square (the non-unique 5-body kernel) in the same system. Overall, the analysis of Figure 5 suggests that an optimal kernel can be chosen by comparing the learning curves of the various $n$-body kernels and the many-body kernel over the available QM database: the comparison will reveal the simplest (most informative, lowest $n$) description that is still compatible with the error level deemed acceptable in the simulation. Trading transferability for accuracy by training the kernels on a QM database appropriately tailored for the target system (e.g., restricted to just bulk or simply-defected system configurations sampled at the relevant temperatures, as done in the Ni and Fe systems of Figure 5) will enable surprisingly good accuracy even for low $n$ values.
This should be expected to systematically improve on the accuracy performance of classical potentials involving non-linear parameter fitting, as exemplified by comparing the errors associated with $n$-body kernel models and the average errors of state-of-the-art embedded atom model (EAM) P-FFs Mishin (2004); Mendelev et al. (2003) (insets (a) and (b)). The next section further explores the performance of GP-based force prediction, to address the final issue of what execution speed can be expected for ML-based force fields, once the optimally accurate choice of kernel has been made. III Mapped Force Fields (M-FFs) Once a GP kernel is recognized as being $n$-body, it automatically defines an $n$-body force field corresponding to it, for any given choice of training set. This will be an $n$-body function of atomic positions satisfying Eq. (2), whose values can be computed by GP regression sums over the training set as done by standard ML-FF implementations, but do not have to be computed this way. In particular, the execution speed of a machine learning-derived $n$-body force field might be expected to depend on its order $n$ (e.g., it will involve sums over all atomic triplets, like any 3-body P-FF, if $n$=3), but should otherwise be independent of the training set size. It should therefore be possible to construct a mapping procedure yielding a machine learning-derived, non-parametric force field (an efficient “M-FF”) that allows a very significant speed-up over calculating forces by direct GP regression. We note that non-unique kernels obtained as powers of $n^{\prime}$-body input kernels exceed their reference $n^{\prime}$-body feature space and thus could not be similarly sped up by mapping their predictions onto an M-FF of equal order $n^{\prime}$, while mapping onto an M-FF of the higher output order $n$ would still be feasible. For convenience, we will analyze a 3-body kernel case, show that a 3-body GP exactly corresponds to a classical 3-body M-FF, and show how the mapping yielding the M-FF can be carried out in this case, using a 3D-spline approximator. The generalization to any order $n$ is straightforward provided that a good approximator can be identified and implemented. We begin by inserting the general form of a 3-body kernel (Eq. (21)) into the GP prediction expression (Eq. (1)), to obtain $$\displaystyle\varepsilon(\rho)$$ $$\displaystyle=\sum_{d=1}^{N}\sum_{\begin{subarray}{c}i_{1},i_{2}\in\rho\\ j_{1},j_{2}\in\rho_{d}\end{subarray}}\tilde{k}_{3}(\mathbf{q}_{i_{1},i_{2}},% \mathbf{q}_{j_{1},j_{2}}^{d})\alpha_{d}.$$ (23) Inverting the order of the sums over the database and atoms in the target configurations yields a general expression for the 3-body potential: $$\displaystyle\varepsilon(\rho)$$ $$\displaystyle=\sum_{i_{1},i_{2}\in\rho}\tilde{\varepsilon}(\mathbf{q}_{i_{1},i% _{2}})$$ (24) $$\displaystyle\tilde{\varepsilon}(\mathbf{q}_{i_{1},i_{2}})$$ $$\displaystyle=\sum_{d=1}^{N}\sum_{j_{1},j_{2}\in\rho_{d}}\tilde{k}_{3}(\mathbf% {q}_{i_{1},i_{2}},\mathbf{q}_{j_{1},j_{2}}^{d})\alpha_{d}.$$ (25) Eq. (24) reveals that the GP implicitly defines the local energy of a configuration as a sum over all triplets containing the central atom, where the function $\tilde{\varepsilon}$ represents the energy associated with each triplet in the physical system. 
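For illustration, a direct (unmapped) evaluation of the triplet energy of Eq. (25) can be sketched as follows, using the simple triplet descriptor $\mathbf{q}=(r_{1},r_{2},r_{12})$ and a placeholder squared-exponential $\tilde{k}_{3}$. For brevity the inner sum is restricted to unordered pairs of distinct neighbours, and all names and parameter values are assumptions made only for this sketch.

```python
# Direct (unmapped) GP evaluation of the triplet energy of Eq. (25); illustrative only.
import numpy as np
from itertools import combinations

def q3(rho, i1, i2):
    """Triplet descriptor (r1, r2, r12) for neighbours i1, i2 of the central atom."""
    r1, r2 = np.linalg.norm(rho[i1]), np.linalg.norm(rho[i2])
    r12 = np.linalg.norm(rho[i1] - rho[i2])
    return np.array([r1, r2, r12])

def k3_tilde(qa, qb, sigma=0.3):
    """Placeholder squared-exponential triplet kernel."""
    return np.exp(-np.sum((qa - qb) ** 2) / (2.0 * sigma**2))

def triplet_energy(q_target, database, alphas, sigma=0.3):
    """Nested sums over the N database configurations and over the atom pairs
    (j1, j2) within each of them, weighted by the GP coefficients alpha_d."""
    e = 0.0
    for rho_d, alpha_d in zip(database, alphas):
        for j1, j2 in combinations(range(len(rho_d)), 2):
            e += k3_tilde(q_target, q3(rho_d, j1, j2), sigma) * alpha_d
    return e
```

The nested loops make explicit the cost of each call, which is quantified in the next paragraph.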
The triplet energy is calculated by three nested sums, one over the $N$ database entries and two running over the $M$ atoms of each database configuration ($M$ may vary slightly over configurations, but can be assumed to be constant for the present purpose). The computational cost of a single evaluation of the triplet energy (25) consequently scales as $\mathcal{O}(NM^{2})$. Clearly, improving the GP prediction accuracy by increasing $N$ and $M$ will make the prediction slower. However, such a computational burden can be avoided, bringing the complexity of the triplet energy calculation (25) to $\mathcal{O}(1)$. Since the triplet energy $\tilde{\varepsilon}$ is a function of just three variables (the effective symmetry-invariant degrees of freedom associated with three particles in three dimensions), we can calculate and store its values on an appropriately distributed grid of points within its domain. This procedure effectively maps the GP predictions onto the relevant 3-body feature space: once completed, the value of the triplet energy at any new target point can be calculated via a local interpolation, using just a subset of nearest tabulated grid points. If the number of grid points $N_{g}$ is made sufficiently high, the mapped function will be essentially identical to the original one but, by virtue of the locality of the interpolation, the cost of evaluating it will not depend on $N_{g}$. The 3-body configuration energy of Eq. (24) also includes 2-body contributions coming from the terms in the sum for which the indices $i_{1}$ and $i_{2}$ are equal. When $i_{1}=i_{2}=i$ the term $\tilde{\varepsilon}(\mathbf{q}_{i,i})$ can be interpreted as the pairwise energy associated with the central atom and atom $i$. The term can consequently be mapped onto a 1D 2-body feature space whose coordinate is the single independent component of the $\mathbf{q}_{i,i}$ feature vector, typically the distance between atom $i$ and the central atom. In the same way, an $n$-body kernel naturally defines a set of $n$-body energy terms of order between $2$ and $n$, depending on the number of repeated indices. Figure 6 shows the convergence of the mapped forces derived from the 3-body kernel in Eq. (20) for a database of DFTB atomic forces for the a-Si system. The interpolation is carried out using a 3D cubic spline for different 3D mesh sizes. Comparison with the reference forces produced by the GP allows us to generate, for each mesh size, the distribution of the absolute deviation of the force components from their GP-predicted values. The standard deviation of the interpolation error distribution is shown in the inset on a log-log scale, as a function of $N_{g}$. Depending on the specific reference implementation, the speed-up in calculating the local energy (Eq. (24)) provided by the mapping procedure can vary widely; however, it will always grow linearly with $N$ and quadratically with $M$ (see Figure 7), and it will always be substantial: in typical testing scenarios we found this to be of the order of $10^{3}-10^{4}$. An example of an M-FF, obtained for a-Si with $n=3$, is shown in Figure 8. As its profile is not prescribed by any particular functional form, the potential is free to optimally adapt to the information contained in the QM training set, to best reproduce the quantum interactions that produced it.
Figure 8 contains some expected features, e.g., a radial minimum at $r\simeq 2.4\text{\AA}$ in the 2-body section (upper panel), the corresponding angular minimum at $\theta_{0}\simeq 110^{\circ}$ (lower panel), which is approximately equal to the $sp^{3}$ hybridization angle of 109.47${}^{\circ}$, and rapid growth for small radii (upper panel) and angles (lower panel). Less intuitive features are also visible, like the shallow maximum in the 2-body section at $r\simeq 3.1\text{\AA}$. IV Concluding remarks The results presented in this work exemplify how physical priors built into the GP kernels restrict their descriptiveness, while improving their convergence speed as a function of training dataset size. This provides a framework to optimise efficiency. Comparing the performance of $n$-body kernels allows us to identify the lowest order $n$ that is compatible with the required accuracy as a consequence of the physical properties of the target system. As a result, accuracy can in each case be achieved using the most efficient kernel, e.g., a 2-body kernel for bulk Ni, or a 3-body kernel for carbon (diamond and graphite), for a $\sim$ 0.1 eV/Å target force accuracy, see Figure 5. As should be reasonably expected, relying on low-dimensional feature spaces will limit the maximum achievable accuracy if higher-order interactions, missing in the representation, occur in the target system. On the other hand, we find that once the optimal order $n$ has been identified and the $n$-kernel has been trained (whatever its form, e.g., whether defined as a function of invariant descriptors as in (21), or constructed as a power of such a function, or derived as an analytic integral over rotations using Eqs. (12-18)), it becomes possible to map its prediction onto the appropriate low-dimensional domain, and thus generate an $n$-body machine-learned force field (M-FF) that predicts atomic forces at essentially the same computational cost as a classical $n$-body parametrized force field (P-FF). The GP predictions come with a natural intrinsic measure of uncertainty (the GP predicted variance), and the same mapping procedure used for the predictions can also be applied to the variance. Thus, like their ML-FF counterparts, and unlike P-FFs, M-FFs offer a tool which could be used to monitor whether any extrapolation is taking place that might involve large prediction errors. In general, our results suggest a possible three-step procedure to build fast non-parametric M-FFs whenever a series of kernels $k_{n}$ can be defined with progressively higher complexity/descriptive power and well-defined feature spaces with $n$-dependent dimensionality. However the series is constructed (and whether or not it converges to a universal descriptor), this will involve (i) GP training of $n$-kernels for different values of $n$, in each case using as many database entries relevant to the target system as needed to achieve convergence of the $n$-dependent prediction error; (ii) identification of the minimal sufficient value of $n$, corresponding to the simplest description of the system’s interactions compatible with the target accuracy, to maximise the incorporation of prior knowledge; (iii) mapping of the ML-FF force field GP-predicted using $k_{n}$ onto an efficient M-FF, using a suitably fast approximator function defined over the relevant feature space.
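As a minimal sketch of step (iii) for the 3-body case, the triplet energy can be tabulated once on a regular grid in $(r_{1},r_{2},r_{12})$ and then replaced by a local interpolator. The snippet below reuses the hypothetical triplet_energy helper from the earlier sketch (after Eq. (25)) and uses scipy's linear RegularGridInterpolator purely for brevity; the results above employed a 3D cubic spline, and a production version would also mask grid points that violate the triangle inequality.

```python
# Illustrative mapping of the GP triplet energy onto a grid-based M-FF.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_mff_3body(database, alphas, r_min, r_max, n_grid=32, sigma=0.3):
    """Tabulate the triplet energy of Eq. (25) on an n_grid^3 mesh (expensive,
    but done only once) and return a fast, database-size-independent interpolator."""
    axis = np.linspace(r_min, r_max, n_grid)
    grid = np.zeros((n_grid, n_grid, n_grid))
    for a, r1 in enumerate(axis):
        for b, r2 in enumerate(axis):
            for c, r12 in enumerate(axis):
                grid[a, b, c] = triplet_energy(np.array([r1, r2, r12]),
                                               database, alphas, sigma)
    return RegularGridInterpolator((axis, axis, axis), grid)

# Usage sketch: the returned callable plays the role of the mapped triplet energy.
# mff = build_mff_3body(database, alphas, r_min=1.5, r_max=4.5)
# e_triplet = mff([[2.3, 2.4, 3.0]])[0]   # lookup cost independent of N and M
```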
A major limitation of the M-FFs obtained this way is that, similar to P-FFs, they can be used only in “interpolation mode”, that is when the target configurations are all well represented in the fixed database used. This is not the case in molecular dynamics simulations potentially revealing new chemical reaction paths, or whenever online learning or the use of dynamically-adjusted database subsets are necessary to avoid unvalidated extrapolations and maximise efficiency. In such cases, “learn on the fly” (LOTF) algorithms can be deployed, which have the ability to incorporate novel QM data into the database used for force prediction. In such schemes, the new data are either developed at runtime by new QM calculations, or are adaptably retrieved as the most relevant subset of a much larger available QM database Li et al. (2015). While the complication of continuously mapping the GP predictions to reflect a dynamically updated training database makes on the fly M-FF generation a less attractive option, the availability of an array of $n$-body kernels is very useful for this class of algorithms, which provides further motivation for their development. In particular, distributing the use of $n$-body kernels non-uniformly in both space and time along the system’s trajectory has the potential to provide an optimally efficient approach to accurate MD simulations using the LOTF scheme. Acknowledgements The authors acknowledge funding by the Engineering and Physical Sciences Research Council (EPSRC) through the Centre for Doctoral Training “Cross Disciplinary Approaches to Non-Equilibrium Systems” (CANES, Grant No. EP/L015854/1) and by the Office of Naval Research Global (ONRG Award No. N62909-15-1-N079). ADV acknowledges further support by the EPSRC HEmS Grant No. EP/L014742/1 and by the European Union’s Horizon 2020 research and innovation program (Grant No. 676580, The NOMAD Laboratory, a European Centre of Excellence). We are grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/P020194/1). We furthermore thank Gábor Csányi, from the Engineering Department, University of Cambridge, for the Carbon database and Samuel Huberman, from the MIT Nano-Engineering Group for the initial geometry used in the a-Si simulations. Finally, we want to thank Ádám Fekete for stimulating discussions and precious technical help. References Car and Parrinello (1985) R. Car and M. Parrinello, Physical Review Letters 55, 2471 (1985). Hohenberg and Kohn (1964) P. Hohenberg and W. Kohn, Physical review  (1964). Kohn and Sham (1965) W. Kohn and L. J. Sham, Physical review 140, A1133 (1965). Stillinger and Weber (1985) F. H. Stillinger and T. A. Weber, Physical review B31, 5262 (1985). Tersoff (1988) J. Tersoff, Physical Review B 37, 6991 (1988). Brenner (2000) D. W. Brenner, physica status solidi(b)  (2000). Mishin (2004) Y. Mishin, Acta Materialia 52, 1451 (2004). van Duin et al. (2001) A. C. T. van Duin, S. Dasgupta, F. Lorant,  and W. A. Goddard, The Journal of Physical Chemistry A 105, 9396 (2001). Skinner and Broughton (1995) A. J. Skinner and J. Q. Broughton, Modelling and Simulation in Materials Science and Engineering  (1995). Behler and Parrinello (2007) J. Behler and M. Parrinello, Physical Review Letters 98, 146401 (2007). Bartók et al. (2010) A. P. Bartók, M. C. Payne, R. Kondor,  and G. Csányi, Physical Review Letters 104, 136403 (2010). Li et al. (2015) Z. Li, J. R. Kermode,  and A. De Vita, Physical Review Letters 114, 096405 (2015). 
Glielmo et al. (2017) A. Glielmo, P. Sollich,  and A. De Vita, Physical Review B 95, 214302 (2017). Botu and Ramprasad (2015) V. Botu and R. Ramprasad, Physical Review B 92, 094306 (2015). Ferré et al. (2016) G. Ferré, T. Haut,  and K. Barros,   (2016), 1612.00193v1 . Boes et al. (2016) J. R. Boes, M. C. Groenenboom, J. A. Keith,  and J. R. Kitchin, International Journal of Quantum Chemistry 116, 979 (2016). Williams and Rasmussen (2006) C. K. I. Williams and C. E. Rasmussen, the MIT Press  (2006). Bishop (2006) C. M. Bishop, Pattern Recognition and Machine Learning, Information Science and Statistics (Springer, New York, NY, 2006). Kearns and Vazirani (1994) M. J. Kearns and U. V. Vazirani, An Introduction to Computational Learning Theory (MIT Press, 1994). Rupp et al. (2015) M. Rupp, R. Ramakrishnan,  and O. A. von Lilienfeld, The Journal of Physical Chemistry Letters 6, 3309 (2015). Bartók et al. (2013) A. P. Bartók, R. Kondor,  and G. Csányi, Physical Review B 87, 184115 (2013). Bereau et al. (2017) T. Bereau, R. A. DiStasio Jr, A. Tkatchenko,  and O. A. von Lilienfeld, arXiv.org  (2017), 1710.05871v1 . Grisafi et al. (2017) A. Grisafi, D. M. Wilkins, G. Csányi,  and M. Ceriotti, arXiv.org  (2017), 1709.06757v1 . Szlachta et al. (2014) W. J. Szlachta, A. P. Bartók,  and G. Csányi, Physical Review B 90, 104108 (2014). Huo and Rupp (2017) H. Huo and M. Rupp, arXiv.org  (2017), 1704.06439v1 . Botu and Ramprasad (2014) V. Botu and R. Ramprasad, International Journal of Quantum Chemistry  (2014). Ferré et al. (2015) G. Ferré, J. B. Maillet,  and G. Stoltz, The Journal of Chemical Physics 143, 104114 (2015). Nachbin (1965) L. Nachbin, The Haar integral (Van Nostrand, Princeton, 1965). Schulz-Mirbach (1994) H. Schulz-Mirbach, in Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 3 - Conference C: Signal Processing (Cat. No.94CH3440-5 (IEEE, 1994) pp. 387–390 vol.2. Haasdonk and Burkhardt (2007) B. Haasdonk and H. Burkhardt, Machine Learning 68, 35 (2007). Hornik (1993) K. Hornik, Neural networks 6, 1069 (1993). Thompson et al. (2015) A. P. Thompson, L. P. Swiler, C. R. Trott, S. M. Foiles,  and G. J. Tucker, Journal of Computational Physics 285, 316 (2015). De et al. (2016) S. De, A. P. Bartók, G. Csányi,  and M. Ceriotti, Physical Chemistry Chemical Physics 18, 13754 (2016). Rowe et al. (2017) P. Rowe, G. Csányi, D. Alfè,  and A. Michaelides, arXiv.org  (2017), 1710.04187v2 . Anderson (1946) T. W. Anderson, The Annals of Mathematical Statistics 17, 409 (1946). James (1955) A. T. James, Proceedings of the Royal Society of London Series a-Mathematical and Physical Sciences 229, 367 (1955). Rupp et al. (2012) M. Rupp, A. Tkatchenko, K. R. Müller,  and O. A. von Lilienfeld, Physical Review Letters 108, 058301 (2012). Huang and von Lilienfeld (2016) B. Huang and O. A. von Lilienfeld, The Journal of Chemical Physics 145, 161102 (2016). von Lilienfeld et al. (2015) O. A. von Lilienfeld, R. Ramakrishnan, M. Rupp,  and A. Knoll, International Journal of Quantum Chemistry 115, 1084 (2015). Deringer and Csányi (2017) V. L. Deringer and G. Csányi, Physical Review B 95, 094203 (2017). Mendelev et al. (2003) M. I. Mendelev, S. Han, D. J. Srolovitz, G. J. Ackland, D. Y. Sun,  and M. Asta, Philosophical Magazine 83, 3977 (2003). Perdew et al. (1996) J. P. Perdew, K. Burke,  and M. Ernzerhof, Physical Review Letters 77, 3865 (1996). APPENDIX IV.1 Kernel order by explicit differentiation We first prove that the kernel given in Eq. (4) is 2-body in the sense of Eq. (2). 
For this it is sufficient to show that its second derivative with respect to the relative positions of two different atoms of the target configuration $\rho$ always vanishes. The first derivative is $$\displaystyle\frac{\partial k_{2}(\rho,\rho^{\prime})}{\partial\mathbf{r}_{i_{1}}}$$ $$\displaystyle=\sum_{ij}\frac{\partial}{\partial\mathbf{r}_{i_{1}}}\mathrm{e}^{-\|\mathbf{r}_{i}-\mathbf{r}_{j}^{\prime}\|^{2}/4\sigma^{2}}$$ $$\displaystyle=-\sum_{ij}\mathrm{e}^{-\|\mathbf{r}_{i}-\mathbf{r}_{j}^{\prime}\|^{2}/4\sigma^{2}}\frac{(\mathbf{r}_{i}-\mathbf{r}_{j}^{\prime})}{2\sigma^{2}}\delta_{ii_{1}}$$ $$\displaystyle=-\sum_{j}\mathrm{e}^{-\|\mathbf{r}_{i_{1}}-\mathbf{r}_{j}^{\prime}\|^{2}/4\sigma^{2}}\frac{(\mathbf{r}_{i_{1}}-\mathbf{r}_{j}^{\prime})}{2\sigma^{2}}.$$ This depends only on the atom located at $\mathbf{r}_{i_{1}}$ of the configuration $\rho$. Thus, differentiating with respect to the relative position $\mathbf{r}_{i_{2}}$ of any other atom of the configuration gives the relation in Eq. (2) for 2-body kernels: $$\frac{\partial^{2}k_{2}(\rho,\rho^{\prime})}{\partial\mathbf{r}_{i_{1}}\partial\mathbf{r}_{i_{2}}}=0.$$ We next show that the kernel defined in Eq. (6) is an $n$-body kernel in the sense of Eq. (2). This follows naturally from the result above, given that $k_{n}$ is defined as $k_{n}=k_{2}^{n-1}$. We can thus write down its first derivative as $$\frac{\partial k_{n}}{\partial\mathbf{r}_{i_{1}}}=(n-1)k_{2}^{n-2}\frac{\partial k_{2}}{\partial\mathbf{r}_{i_{1}}}.$$ Since the mixed second derivative of $k_{2}$ vanishes, the second derivative of $k_{n}$ is simply $$\displaystyle\frac{\partial^{2}k_{n}}{\partial\mathbf{r}_{i_{1}}\partial\mathbf{r}_{i_{2}}}$$ $$\displaystyle=(n-2)(n-1)k_{2}^{n-3}\frac{\partial k_{2}}{\partial\mathbf{r}_{i_{1}}}\frac{\partial k_{2}}{\partial\mathbf{r}_{i_{2}}}$$ and after $n-1$ differentiations we similarly obtain $$\displaystyle\frac{\partial^{n-1}k_{2}^{n-1}}{\partial\mathbf{r}_{i_{1}}\cdots\partial\mathbf{r}_{i_{n-1}}}$$ $$\displaystyle=(n-1)!\,k_{2}^{0}\,\frac{\partial k_{2}}{\partial\mathbf{r}_{i_{1}}}\dots\frac{\partial k_{2}}{\partial\mathbf{r}_{i_{n-1}}}.$$ Since $k_{2}^{0}=1$, the final derivative with respect to the $n$-th particle position $\mathbf{r}_{i_{n}}$ is zero as required by Eq. (2). IV.2 1D n’-body model To test the ideas behind the $n$-body kernels, we used a 1D $n^{\prime}$-particle model reference system where a central particle is kept fixed at the coordinate origin (consistent with the local configuration convention of Eq. (3)). The force on the central atom in the model is $$f=\sum_{i_{1}\dots i_{{n^{\prime}}-1}}J\,x_{i_{1}}\dots x_{i_{{n^{\prime}}-1}}$$ where $\{x_{i_{p}}\}_{p=1}^{{n^{\prime}}-1}$ are the relative positions of the ${n^{\prime}}-1$ particles, and $J$ is an interaction constant set to $0.5$ for the generation of Figure 1. IV.3 Database details The bulk Ni and Fe databases were obtained from simulations using a $4\times 4\times 4$ periodically repeated unit cell, modelling the electronic exchange and correlation interactions via the PBE/GGA approximation Perdew et al. (1996), and controlling the temperature (set at $500$ K) by means of a weakly-coupled Langevin thermostat (the DFT trajectories are available from the KCL research data management system at the link http://doi.org/10.18742/RDM01-92). The C database comprises bulk diamond and AB- and ABC-stacked graphene layer structures.
These structures were obtained from DFT simulations at varying temperatures and pressures, using a fixed 3$\times$3$\times$2 periodic cell geometry for graphite, and simulation cells ranging from 1$\times$1$\times$1 to 2$\times$2$\times$2 unit cells for diamond; the corresponding DFT trajectories can be found in the “libAtoms” data repository via the following link http://www.libatoms.org/Home/DataRepository. The amorphous Si database was obtained from a microcanonical DFTB 64-atom simulation carried out with periodic boundary conditions, with average kinetic energy corresponding to a temperature of $T=650$ K. The radial cutoffs used to create the local environments for the four elements considered are: 4.0 Å (Ni), 4.45 Å (Fe), 3.7 Å (C) and 4.5 Å (Si). IV.4 Details on the kernels used for testing All energy kernels presented in the work can be used to learn/predict forces after generating a standard derivative kernel (Ref. Williams and Rasmussen (2006), Section 9.4; cf. also Section II A of the main text). In particular, for each scalar energy kernel $k$ a matrix-valued force kernel $\mathbf{K}$ can be readily obtained by double differentiation with respect to the positions ($\mathbf{r}_{0}$ and $\mathbf{r}^{\prime}_{0}$) of the central atoms in the target and database configurations $\rho$ and $\rho^{\prime}$: $$\mathbf{K}(\rho,\rho^{\prime})=\dfrac{\partial^{2}k(\rho,\rho^{\prime})}{\partial\mathbf{r}_{0}\partial\mathbf{r}_{0}^{\prime\rm{T}}}.$$ The kernels $\tilde{k}_{2}$ and $\tilde{k}_{3}$ (Eqs. (19,20)) were chosen as simple squared exponentials in the tests shown. Denoting by $\mathbf{q}$ (or $q$) the vector (or scalar) containing the effective degrees of freedom of the atomic $n$-plet considered (see Eq. (21)), the two kernels read: $$\displaystyle\tilde{k}_{2}(q_{i},q_{j}^{\prime})$$ $$\displaystyle=\mathrm{e}^{-(q_{i}-q^{\prime}_{j})^{2}/2\sigma^{2}}$$ $$\displaystyle\tilde{k}_{3}(\mathbf{q}_{i_{1},i_{2}},\mathbf{q}^{\prime}_{j_{1},j_{2}})$$ $$\displaystyle=\sum_{\mathbf{P}\in\mathcal{P}_{c}}\mathrm{e}^{-\|\mathbf{q}_{i_{1},i_{2}}-\mathbf{P}\mathbf{q}^{\prime}_{j_{1},j_{2}}\|^{2}/2\sigma^{2}},$$ where $\mathcal{P}_{c}$ ($|\mathcal{P}_{c}|=3$) is the set of cyclic permutations of three elements. Summing over the permutation group is needed to guarantee permutation symmetry of the energy. As discussed in the main text, the exact form of these low-order kernels is not essential as the large database limit is quickly reached. The many-body force kernel referred to in Fig. 5 was built as a covariant discrete summation of the many-body energy base-kernel (7) over the $O_{48}$ crystallographic point group, using the procedure of Ref. Glielmo et al. (2017). This procedure yields an approximation to the full covariant integral of the many-body kernel (7) given in Eq. (5). All kernel hyperparameters used in this work were independently optimised by cross validation for each dataset.
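As a final illustration of the appendix formulas above, the squared-exponential kernels $\tilde{k}_{2}$ and $\tilde{k}_{3}$ can be transcribed directly into code. The snippet below is a sketch (not the authors' implementation, and with an arbitrary kernel width); it also verifies that summing over the cyclic permutation set $\mathcal{P}_{c}$ makes the triplet kernel insensitive to a cyclic relabelling of the descriptor components.

```python
# Transcription of the squared-exponential kernels k2_tilde and k3_tilde above.
import numpy as np

CYCLIC = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # the permutation set P_c, |P_c| = 3

def k2_tilde(q, qp, sigma=0.3):
    """Squared-exponential kernel on a single distance."""
    return np.exp(-(q - qp) ** 2 / (2.0 * sigma**2))

def k3_tilde(q, qp, sigma=0.3):
    """Squared-exponential triplet kernel summed over cyclic permutations,
    enforcing permutation symmetry of the resulting energy."""
    q, qp = np.asarray(q, float), np.asarray(qp, float)
    return sum(np.exp(-np.sum((q - qp[list(p)]) ** 2) / (2.0 * sigma**2))
               for p in CYCLIC)

# Quick check: cyclically relabelling the triplet descriptor of the second
# configuration leaves the kernel value unchanged.
qa, qb = [2.3, 2.5, 3.1], [2.4, 2.6, 3.0]
assert np.isclose(k3_tilde(qa, qb), k3_tilde(qa, [qb[1], qb[2], qb[0]]))
```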
Automating Staged Rollout with Reinforcement Learning Shadow Pritchard (University of Tulsa, Tulsa, Oklahoma, USA; swp7196@utulsa.edu), Vidhyashree Nagaraju (University of Tulsa, Tulsa, Oklahoma, USA; vidhyashree-nagaraju@utulsa.edu) and Lance Fiondella (University of Massachusetts Dartmouth, Dartmouth, Massachusetts, USA; lfiondella@umassd.edu) (2022) Abstract. Staged rollout is a strategy of incrementally releasing software updates to portions of the user population in order to accelerate defect discovery without incurring catastrophic outcomes such as system-wide outages. Some past studies have examined how to quantify and automate staged rollout, but they stop short of explicitly considering multiple product or process metrics simultaneously. This paper demonstrates the potential to automate staged rollout with multi-objective reinforcement learning in order to dynamically balance stakeholder needs such as the time to deliver new features and the downtime incurred by failures due to latent defects. Keywords: DevOps, Staged Rollout, Reinforcement Learning, Software Reliability (ICSE-NIER'22, May 21–29, 2022, Pittsburgh, PA, USA; doi: 10.1145/3510455.3512782) 1. Introduction Researchers recognize the importance of software performance engineering and application performance management (Brunnert et al., 2015) during the development and operation phases, including interoperability between different tools and techniques. However, it is also important to consider process and product quality metrics in an integrated manner, as well as any tradeoffs imposed. For example, staged rollout (Zhao et al., 2018) seeks to accelerate deployment of new features while preserving safety, yet this strategy of introducing an update to a subset of users may pose tradeoffs such as the time to deliver a new feature and the downtime experienced by users. Techniques to automate the staged rollout process could reduce the level of expertise required to balance these stakeholder needs. Tarvo et al. (Tarvo et al., 2015) defined and collected metrics on software updates, but required the user to make release decisions. Zhao et al. (Zhao et al., 2018) introduced a framework to release software to subsets of users based on time, power, and risk-based scheduling, which rely on predetermined thresholds or time windows. The thesis of Velayutham (Velayutham, 2021) treated staged rollout as a time series problem with autoregressive integrated moving average and long short-term memory models to automate decision making, but tradeoffs were not explicitly considered. Potential reinforcement learning approaches to model staged rollout include non-stationary (Khetarpal et al., 2020) methods, which encompass time-varying rewards. Abdallah and Kaisers (Abdallah and Kaisers, 2016) introduced repeated Q-learning to update reward tables more frequently for seldom visited states in order to improve performance under non-stationary conditions. Multi-objective reinforcement learning (Liu et al., 2014) attempts to balance rewards associated with multiple objectives through a weighted, linear reward (Krass, 1990). Additional methods to balance multiple objectives include geometric steering (Vamplew et al., 2017) and hypervolume action selection (Van Moffaert et al., 2013).
This paper presents a model of the staged rollout problem as a non-stationary Markov decision process (MDP). Multi-objective Q-learning with upper confidence bound exploration is applied to solve the MDP. The approach is demonstrated on a data set from the software reliability engineering literature, illustrating reinforcement learning as a promising approach to make dynamic decisions during staged rollout. The remainder of the paper is organized as follows: Section 2 formulates a state model of the staged rollout process, discussing tradeoffs and reward function modeling. Section 3 summarizes Q-learning with upper confidence bound as well as naive policy enumeration. Section 4 develops metrics to assess policies identified by reinforcement learning. Section 5 presents preliminary results. Section 6 describes plans for future research. 2. Staged Rollout Modeling and Tradeoff Assessment This section presents a state model of the staged rollout problem that can be interpreted as a Markov decision process (MDP), where a multi-objective reinforcement learning agent makes decisions to balance two primary factors, including (i) downtime and (ii) delivery time according to stakeholder preference. For example, safety critical software industries place greater emphasis on minimizing failures, whereas many other industries prioritize minimizing delivery time to get a product to market quickly. 2.1. Staged Rollout Model Staged rollout of software (Zhao et al., 2018) has been recognized as a strategy to field new functionality on an ongoing basis without incurring failures that induce system outages, widespread unavailability of services, economic losses, and user dissatisfaction. The rationale for staged rollout is to publish an updated software possessing new functionality for use by a subset of the user base to avoid the major problems described above, but also to accelerate the discovery of defects. The development team then attempts to correct the source of the problem and begins the process of staged rollout anew. Figure 1 shows a simple state diagram of a staged rollout process. State $Dev$ represents the development state, where software is tested by an internal team. An elementary model of the traditional approach to software updates simply transitions to the $Ops$ state, once the software is deemed satisfactory with respect to functional requirements, reliability, and other desired attributes, where the $Ops$ state exposes the software to the entire base of $n_{Ops}$ users. Staged rollout, instead, transitions from the $Dev$ state to state $i_{1}$, which represents the first stage of staged rollout, where the software is published for use by $p_{i_{1}}$ percent or $n_{i_{1}}=p_{i_{1}}\times n_{Ops}$ of the user base. Multiple stages of staged rollout between development and full deployment are also possible. This more general case possesses $m$ intermediate staged rollout states, where it may be reasonable to assume that each transition from $i_{j}$ to $i_{j+1}$ increases the fraction of the user base exposed to the software such that $p_{i_{j}}<p_{i_{j+1}}$ and $n_{i_{j}}<n_{i_{j+1}}$. Failure in any state transitions to the $Dev$ state, where root cause analysis and defect removal are attempted. The state model described in Figure 1 enables explicit consideration of the tradeoffs between downtime and delivery time. Downtime is determined by the state in which the failure occurs and is proportional to the fraction of the user base $n_{i_{j}}$ and mean time to resolve ($MTTR$). 
Thus, failure in the $j^{th}$ state of staged rollout ($i_{j}$) contributes less to downtime than failure in the state $i_{j+1}$. However, the defect exposure rate in the $j^{th}$ state of staged rollout is also less than the defect exposure rate in the $(j+1)^{th}$ state, meaning that the downtime experienced, and the time required to discover and remove all defects are modeled as competing constraints. Thus, minimizing downtime by remaining in the $Dev$ state until all defects have been detected and removed will likely delay delivery time. Similarly, unrestrained transition to the Ops state immediately after each defect is resolved is likely to exacerbate downtime. Therefore, it may not be possible to simultaneously minimize downtime and delivery time, posing a multi-objective problem. Moreover, organizations implementing staged rollout may express different levels of tolerance for these undesirable outcomes. Subsequently, it is unlikely that a single optimal policy or ”one size fits all” approach to staged rollout exists. Instead, it is necessary to select transition times $t_{j,j+1}=\frac{1}{\lambda_{j,j+1}}$ that balance downtime and delivery time in a manner that is satisfactory to the customer. Intuitively, a high defect discovery rate in the $Dev$ state is likely to indicate that additional defects will be uncovered. Hence, staged rollout should not be performed because it would risk greater downtime. Therefore, the problem is to select numerical values of transition rates $\lambda_{j,j+1}=\frac{1}{t_{j,j+1}}$ that achieve the desired balance between downtime and delivery time. In the case where these rates vary as a function of time, the problem is said to be non-stationary. Since the transitions $\lambda_{Dev,i_{1}}$ and $\lambda_{i_{1},Ops}$ of Figure 1 are decisions, the staged rollout state diagram is a Markov decision process to which reinforcement learning can be applied in order to solve for a range of policies to take actions (make transition decisions) from a given state. Traditional software reliability models (Farr, 1996) assume that the rate of defect discovery is nonhomogeneous. For example, if the rate of defect discovery is high in the early stages of testing, then the optimal action will be to stay in the $Dev$ state, where the penalty for failure is lowest. However, later in testing when fewer undiscovered defects remain, the optimal action will be to transition to state $i_{1}$ or $Ops$ in order to reduce delivery time. Therefore, the model is non-stationary because the rate at which defects are discovered changes as testing progresses. Thus, this change in the defect detection rate will also cause the reward to change and the optimal policy for a given state will change over time. 2.2. Process Tradeoffs and Reward Modeling This section defines quantitative metrics for delivery time and downtime in terms of the state model shown in Figure 1 as well as how these metrics are incorporated into a multi-objective reward model suitable for application of reinforcement learning. 2.2.1. Delivery Time To model the impact of staged rollout on delivery time, we assume that one unit of time in the $Dev$ state advances the timeline by one unit, whereas time in the staged rollout and $Ops$ state accelerate the rate at which time advances proportional to the percentage of the user base. Therefore, staged rollout may be regarded as a modern form of accelerated life testing (Nelson, 2009) for software. 
For example, if the complete user base is composed of $n_{Ops}=10,000$ users and staged rollout exposes new functionality to $p_{i_{1}}=0.1$ or $10\%$ of the user base, then $n_{i_{1}}=1000$. Similarly, if $n_{Dev}=50$, then a simple method to compute the acceleration factor in each state of staged rollout is the ratio between the number of users in a state and the baseline in the $Dev$ state, such that the acceleration factors in the staged rollout and $Ops$ states are $\frac{n_{i_{1}}}{n_{Dev}}=20$ and $\frac{n_{Ops}}{n_{Dev}}=200$ respectively. This simplifying assumption can be improved, since testers are familiar with the functionality and intentionally stress the program to expose defects. Modeling these ratios is a research question that requires staged rollout data. Nevertheless, the simplifying assumptions made here enable a quantitative framework upon which to improve. The preliminary assumption of linear acceleration factors described above provides a concrete starting point to measure the cumulative time required to reach a release decision. Specifically, delivery time may be defined as the time to discover and eliminate defects associated with a test set plus the time to transition from the $Dev$ to $Ops$ state ($t_{Dev,i_{1}}+t_{i_{1},Ops}$), because no additional defects are discovered under the simplifying assumption that the final defect is resolved immediately. Modeling advances that explicitly consider the time between defect discovery and resolution (Lo and Huang, 2006; Nafreen et al., 2020) can further enhance the realism of the staged rollout deployment model. 2.2.2. Downtime To model the impact of staged rollout on downtime, we assume that failure in the $Dev$ state does not incur downtime, since only internal testing is performed at this stage. However, downtime incurred in the staged rollout and $Ops$ states is proportional to the fraction of the user base multiplied by the mean time to resolve defects, such that the accumulated downtime increases by $p_{i_{1}}\times MTTR$ or $p_{Ops}\times MTTR$. This simplifying assumption may be conservative, since not all users exposed to the functionality will necessarily experience the failure. Similar to delivery time, the downtime experienced must be modeled from staged rollout data and has important implications for identifying an optimal deployment policy for transition times, since conservative assumptions may unnecessarily increase delivery time. Thus, our preliminary model expresses the total downtime as the weighted sum $MTTR\times\sum_{i=1}^{n}p_{s(i)}$, where $s(i)$ denotes the state in which the $i^{th}$ failure occurs. 2.2.3. Reward Modeling To model staged rollout as a reinforcement learning problem, this section defines the reward function according to techniques from multi-objective reinforcement learning (Liu et al., 2014). Specifically, the reward at each time step is a linear combination of the delivery time and downtime measures defined in Sections 2.2.1 and 2.2.2. To facilitate the selection of weights in an intuitive manner, our ongoing research seeks to specify constrained optimization problems in natural forms such as (i) minimizing delivery time while limiting downtime to a specified constraint, or (ii) minimizing downtime while limiting delivery time to a specified constraint.
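To make the bookkeeping of Sections 2.2.1–2.2.3 concrete, the following minimal Python sketch (an illustration under the simplifying assumptions above; the state names, helper names and reward sign convention are ours, not the authors' implementation) computes the linear acceleration factor, the downtime increment charged when a failure occurs in a given state, and the scalarized two-objective reward.

    # Parameter values taken from the worked example above.
    MTTR = 10.0
    N_USERS = {"Dev": 50, "i1": 1000, "Ops": 10000}
    P_USERS = {s: n / N_USERS["Ops"] for s, n in N_USERS.items()}

    def acceleration(state):
        """Linear acceleration factor: users exposed in `state` over users in Dev,
        so testing in i1 and Ops advances the defect-exposure timeline 20x and 200x."""
        return N_USERS[state] / N_USERS["Dev"]

    def downtime_increment(state):
        """Downtime charged when a failure occurs in `state`: zero during internal
        testing (Dev), otherwise the exposed user fraction times MTTR."""
        return 0.0 if state == "Dev" else P_USERS[state] * MTTR

    def step_reward(state, failed, w0, dt=1.0):
        """Scalarized multi-objective reward for one time step of length dt.
        w0 weights the delivery-time objective and 1 - w0 the downtime objective;
        both enter as penalties, so larger (less negative) is better."""
        w1 = 1.0 - w0
        delivery_penalty = dt                                   # wall-clock time elapsed
        downtime_penalty = downtime_increment(state) if failed else 0.0
        return -(w0 * delivery_penalty + w1 * downtime_penalty)

    # A failure in the first rollout stage, with equal weight on both objectives:
    print(acceleration("i1"), downtime_increment("i1"), step_reward("i1", True, w0=0.5))

Accumulating downtime_increment over the observed failures reproduces the total downtime $MTTR\times\sum_{i=1}^{n}p_{s(i)}$ of Section 2.2.2.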
3. Algorithms This section describes the Q-learning (Watkins, 1989) algorithm to estimate the rewards of state-action pairs, the upper confidence bound exploration strategy, and a naive policy enumeration approach to establish a baseline with which to compare the performance of reinforcement learning. 3.1. Q-Learning Reinforcement learning (Sutton and Barto, 2018) employs a Markov decision process to estimate the reward of each state-action pair, denoted $E[r(s,a)]$, which determines a policy $\pi(s)$ that informs the best action to take in any given state. Q-learning (Watkins, 1989) is a model-free algorithm to estimate the reward of a state-action pair according to the following update equation (1) $$Q(s_{t},a)=Q(s_{t},a)+\alpha(r+\gamma\max_{a^{\prime}\in\mathbb{A}}Q(s_{t+1},a^{\prime})-Q(s_{t},a))$$ where $s_{t}$ is the state at time step $t$, $a$ the action taken, $\alpha\in(0,1)$ the learning rate, $r$ the reward, $\gamma\in(0,1)$ the time discount, and $a^{\prime}$ the action in the next state ($t+1$) chosen from all available actions ($\mathbb{A}$). A time discount close to one expresses a willingness to defer rewards farther into the future, whereas small values of $\gamma$ increase the preference for immediate rewards. The learning rate $\alpha$ is a multiplier which determines how much emphasis to place on the most recently observed reward when updating the reward estimate of a state-action pair. 3.2. Upper Confidence Bound (UCB) The upper confidence bound exploration strategy combines the $Q$-value with an adaptation of statistical upper confidence limits to choose the next action in a given state: (2) $$a^{\prime}=\arg\max_{a\in\mathbb{A}}\left[Q(s,a)+c\sqrt{\log\left(\frac{t+1}{n(s,a)}\right)}\right]$$ where $c>0$ is a scaling constant, $t$ is the present time step, and $n(s,a)$ is the number of times that the state-action pair has been selected previously. Thus, the UCB exploration strategy selects an action according to the estimated reward as well as the uncertainty embodied in the finite sample size associated with state-action pairs. The upper confidence term is larger for smaller values of $n(s,a)$, encouraging exploration of less frequently taken state-action pairs. Small values of $c$ tend toward exploitation of the action with the highest $Q$-value, whereas large values of $c$ encourage exploration of the least frequently taken actions. 3.3. Naive Policy Enumeration Naive policy enumeration does not use reinforcement learning, is not capable of being used in an online fashion, and is therefore not a competitive alternative. It is a deterministic approach, based on emulation with the full data, to approximate the Pareto optimal front of tradeoffs between delivery time and downtime from a range of stationary policies, in order to objectively compare the performance of a reinforcement learning approach in terms of its distance from optimality. Specifically, the state transition policy vector $\bm{\lambda}=\langle\lambda_{Dev,i_{1}},\lambda_{i_{1},Ops}\rangle$ for Figure 1 is applied with a range of deterministic values, such as the cross product of candidate values, where $\lambda_{Dev,i_{1}}$ and $\lambda_{i_{1},Ops}$ respectively determine the times $t_{Dev,i_{1}}=1/\lambda_{Dev,i_{1}}$ and $t_{i_{1},Ops}=1/\lambda_{i_{1},Ops}$ to spend in the $Dev$ and staged rollout states without failure before transitioning to the staged rollout or $Ops$ state. Thus, naive enumeration applies a range of deterministic policy vectors to the same data set and computes a tuple containing the downtime and delivery time for each policy vector.
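Before turning to the comparison metrics, we give a minimal tabular sketch of the Q-learning update (Eq. (1)) and the UCB action selection (Eq. (2)) of Sections 3.1 and 3.2 (illustrative Python written for this summary, not the authors' implementation; the two-action set, the guard for unvisited pairs and the tie handling are our own simplifications, while the parameter values match those used later in Section 5).

    import math
    from collections import defaultdict

    ACTIONS = ["stay", "advance"]            # remain in the current state or roll out further
    ALPHA, GAMMA, C = 0.15, 0.999999, 0.15   # learning rate, time discount, UCB scale

    Q = defaultdict(float)                   # Q[(state, action)], initialized to zero
    N = defaultdict(int)                     # visit counts n(s, a)

    def ucb_action(state, t):
        """Equation (2): choose the action maximizing the Q-value plus an
        exploration bonus that grows for rarely visited state-action pairs."""
        def score(a):
            n = max(N[(state, a)], 1)        # guard: treat unvisited pairs as visited once
            return Q[(state, a)] + C * math.sqrt(max(math.log((t + 1) / n), 0.0))
        return max(ACTIONS, key=score)

    def q_update(s, a, r, s_next):
        """Equation (1): temporal-difference update of the state-action value."""
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        N[(s, a)] += 1

A driver loop would, at each time step, call ucb_action for the current state, advance the staged rollout model accordingly, form the reward of Section 2.2.3, and pass the observed transition to q_update.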
A plot consisting of only the tuples that correspond to non-dominated policies (lowest delivery time for a fixed downtime, or lowest downtime for a fixed delivery time) constitutes the Pareto optimal front for comparison with reinforcement learning strategies. 4. Metrics To impose greater rigor and promote objective comparison of reinforcement learning approaches, we define two quantitative metrics: range, which measures the flexibility of an approach, and average suboptimality, which may be regarded as a performance-oriented metric. Both of these metrics are computed relative to a baseline Pareto optimal curve determined from naive policy enumeration. The discussion that follows introduces each metric, provides a mathematical formula, and explains what it quantifies as well as how the metric supports objective comparison. 4.1. Range The range of an approach ($a$) with respect to an objective ($o$) is (3) $$range_{a,o}=\frac{\max_{a,o}-\min_{a,o}}{\max_{naive,o}-\min_{naive,o}},$$ where $\max_{a,o}$ and $\min_{a,o}$ denote the maximum and minimum values of objective $o$ attained by the policies of approach $a$. Thus, Equation (3) is the range of values an approach achieves with respect to an objective divided by the range of the naive approach, which is treated as a baseline for the sake of normalization. The range of the naive approach is therefore $1.0$. Approaches that attain a range greater than $1.0$ for an objective produce a wider Pareto front than the naive approach, which directly impacts policy selection, since an approach cannot identify a Pareto optimal policy below the minimum or above the maximum it attains for a specific objective. 4.2. Suboptimality The suboptimality of a specific policy obtained by an approach with respect to an objective is (4) $$subopt_{p,a,o}=\frac{o_{p,a}}{o_{p,naive}}$$ where $o_{p,a}$ is the value of the objective achieved by policy $p$ obtained with approach $a$ and $o_{p,naive}$ is the value of the objective achieved by a policy determined from the naive approach. Thus, Equation (4) is simply the ratio of the value achieved by an approach for a specified objective divided by the equivalent value for the naive approach when all other objectives are held constant. In other words, we divide the value associated with a point on the Pareto curve of an approach by the equivalent point on the curve for the naive approach. Often, there is no naive policy that possesses the exact same value as the point for the policy under consideration. We therefore approximate suboptimality by linearly interpolating between two points on the naive Pareto curve in order to obtain a precisely matching value. The average suboptimality ($E[subopt_{p,a,o}]$) is a summary statistic, which computes the suboptimality for each point (individual policy) of a specified approach and objective and then calculates the average of these values. 5. Results To demonstrate the potential to automate staged rollout with reinforcement learning, the SYS1 data set (Lyu, 1996), which documents $n=136$ times at which unique defects were detected during approximately 25 hours of testing, was employed. The weight on delivery time ($w_{0}$) was varied in the interval $(0,1)$, while the remaining weight was placed on downtime such that $w_{1}=1-w_{0}$. For each combination of weights, Q-learning with the upper confidence bound exploration strategy given in Equation (2) was applied with parameters $c=0.15$, learning rate $\alpha=0.15$, and time discount $\gamma=0.999999$. It should be noted that Q-learning with the UCB strategy learned as progress was made along the testing timeline of the SYS1 data set.
Therefore, this method did not require separate data sets for training. Naive policy enumeration was performed with pairs of values from the cross product of $t_{Dev,i_{1}}=\{1,100,200,\dots,10000\}$ and $t_{i_{1},Ops}=\{1,100,200,\dots,10000\}$. For the sake of illustration, parameters of the staged rollout model were set to $MTTR=10$, $n_{Dev}=50$, $n_{i_{1}}=1000$, and $n_{Ops}=10000$. Figure 2 compares the range of policies (points) identified by UCB with those determined by naive policy enumeration, which can only serve as a baseline because it is deterministic, unlike reinforcement learning, which makes decisions as data becomes available. Figure 2 indicates that UCB policies with low delivery times and downtime greater than $300$ are competitive with naive enumeration. However, there is a visible gap between the downtime achieved by UCB and that achieved by naive enumeration for downtimes less than $300$. One possible explanation is that UCB must perform some amount of exploration. While parameter tuning could certainly reduce this gap, a more important enhancement will be to incorporate software-engineering-specific testing factors that reinforcement learning can use to drive staged rollout decisions. Moreover, one may regard the results obtained by naive policy enumeration as "lucky", since they simply specify fixed values for times to wait before transitioning between states when no failure is experienced. The average suboptimality metric given in Equation (4) summarizes this gap as a single number. For example, the average suboptimality of UCB with respect to downtime and delivery time were $2.79$ and $2.72$ respectively, indicating the potential margin for improvement. Figure 2 also shows that the lowest downtime achieved by a policy that UCB could identify was approximately $70$ (upper-left-most grey dot on the Pareto curve). Thus, naive enumeration was able to achieve less downtime at the expense of higher delivery times. The range metric given in Equation (3) summarizes the width of the UCB policies relative to naive enumeration; the ranges were $0.80$ and $0.83$ for downtime and delivery time respectively, indicating that policies identified by UCB were approximately $80\%$ as wide as the corresponding naive policies. 6. Future Plans Our preliminary results obtained for Q-learning combined with the upper confidence bound approach suggest that reinforcement learning is a viable approach to automate staged rollout. Our ongoing work will consider alternative state-of-the-art reinforcement learning methods to match or exceed naive policies and provide a greater range of flexibility. Techniques to specify a desired balance of objectives as a constrained optimization problem will provide a layer of abstraction to automatically select and apply a suitable policy on the Pareto front. Toward this end, simple and efficient techniques such as binary search can be used to identify a policy that best matches stakeholder needs. Acknowledgment This material is based upon work supported by the National Science Foundation under Grant Number 1749635 and the Office of Naval Research under Award Number N00014-22-1-2012. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or Office of Naval Research. References Abdallah and Kaisers (2016) Sherief Abdallah and Michael Kaisers. 2016. Addressing environment non-stationarity by repeating Q-learning updates.
The Journal of Machine Learning Research 17, 1 (2016), 1582–1612. Brunnert et al. (2015) Andreas Brunnert, André van Hoorn, Felix Willnecker, Alexandru Danciu, Wilhelm Hasselbring, Christoph Heger, Nikolas Herbst, Pooyan Jamshidi, Reiner Jung, Joakim von Kistowski, et al. 2015. Performance-oriented DevOps: A research agenda. arXiv preprint arXiv:1508.04752 (2015). Farr (1996) W. Farr. 1996. Handbook Of Software Reliability Engineering. McGraw-Hill, New York, NY, chapter Software Reliability Modeling Survey, 71–117. Khetarpal et al. (2020) Khimya Khetarpal, Matthew Riemer, Irina Rish, and Doina Precup. 2020. Towards continual reinforcement learning: A review and perspectives. arXiv preprint arXiv:2012.13490 (2020). Krass (1990) Dmitry Krass. 1990. Contributions to the theory and applications of Markov decision processes. Ph.D. Dissertation. The Johns Hopkins University. Liu et al. (2014) Chunming Liu, Xin Xu, and Dewen Hu. 2014. Multiobjective reinforcement learning: A comprehensive overview. IEEE Transactions on Systems, Man, and Cybernetics: Systems 45, 3 (2014), 385–398. Lo and Huang (2006) J. Lo and C. Huang. 2006. An integration of fault detection and correction processes in software reliability analysis. Journal of Systems and Software 79, 9 (2006), 1312–1323. Lyu (1996) Michael Lyu. 1996. Handbook of software reliability engineering (2 ed.). IEEE computer society press CA. Nafreen et al. (2020) Maskura Nafreen, Melanie Luperon, Lance Fiondella, Vidhyashree Nagaraju, Ying Shi, and Thierry Wandji. 2020. Connecting Software Reliability Growth Models to Software Defect Tracking. In International Symposium on Software Reliability Engineering. IEEE, 138–147. Nelson (2009) Wayne Nelson. 2009. Accelerated testing: statistical models, test plans, and data analysis. Vol. 344. John Wiley & Sons. Sutton and Barto (2018) Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA. Tarvo et al. (2015) Alexander Tarvo, Peter F Sweeney, Nick Mitchell, VT Rajan, Matthew Arnold, and Ioana Baldini. 2015. CanaryAdvisor: a statistical-based tool for canary testing. In International Symposium on Software Testing and Analysis. 418–422. Vamplew et al. (2017) Peter Vamplew, Rustam Issabekov, Richard Dazeley, Cameron Foale, Adam Berry, Tim Moore, and Douglas Creighton. 2017. Steering approaches to Pareto-optimal multiobjective reinforcement learning. Neurocomputing 263 (2017), 26–38. Van Moffaert et al. (2013) Kristof Van Moffaert, Madalina M Drugan, and Ann Nowé. 2013. Hypervolume-based multi-objective reinforcement learning. In International Conference on Evolutionary Multi-Criterion Optimization. Springer, 352–366. Velayutham (2021) Girish Velayutham. 2021. Artificial Intelligence assisted Canary Testing of Cloud Native RAN in a mobile telecom system. Watkins (1989) Christopher John Cornish Hellaby Watkins. 1989. Learning from delayed rewards. (1989). Zhao et al. (2018) Zhenyu Zhao, Mandie Liu, and Anirban Deb. 2018. Safely and quickly deploying new features with a staged rollout framework using sequential test and adaptive experimental design. In International Conference on Computational Intelligence and Applications. IEEE, 59–70.
Andreev-Lifshitz Hydrodynamics Applied to an Ordinary Solid under Pressure Matthew R. Sears    Wayne M. Saslow wsaslow@tamu.edu Department of Physics, Texas A&M University, College Station, TX 77843-4242 (December 7, 2020) Abstract We have applied the Andreev-Lifshitz hydrodynamic theory of supersolids to an ordinary solid. This theory includes an internal pressure $P$, distinct from the applied pressure $P_{a}$ and the stress tensor $\lambda_{ik}$. Under uniform static $P_{a}$, we have $\lambda_{ik}=(P-P_{a})\delta_{ik}$. For $P_{a}\neq 0$, Maxwell relations imply that $P\sim P_{a}^{2}$. The theory also permits vacancy diffusion but treats vacancies as conserved. It gives three sets of propagating elastic modes; it also gives two diffusive modes, one largely of entropy density and one largely of vacancy density (or, more generally, defect density). For the vacancy diffusion mode (or, equivalently, the lattice diffusion mode) the vacancies behave like a fluid within the solid, with the deviations of internal pressure associated with density changes nearly canceling the deviations of stress associated with strain. We briefly consider pressurization experiments in solid ${}^{4}$He at low temperatures in light of this lattice diffusion mode, which for small $P_{a}$ has diffusion constant $D_{L}\sim P_{a}^{2}$. The general principles of the theory – that both volume and strain should be included as thermodynamic variables, with the result that both $P$ and $\lambda_{ik}$ appear – should apply to all solids under pressure, especially near the solid-liquid transition. The lattice diffusion mode provides an additional degree of freedom that may permit surfaces with different surface treatments to generate different responses in the bulk. pacs: 67.80.B-, 67.80.bd, 05.70.Ln, 63.10.+a I Introduction Since the late 1960’s there have been theoretical suggestions that solids might display flow behavior similar to what is found in superfluids.AL69 ; Thouless69 ; Chester70 ; Leggett70 For that reason there has been a great deal of interest in solid ${}^{4}$He as a candidate supersolid.BalCau08 The first experimental indication of superflow was the appearance of a non-classical moment of inertia (NCRI), first observed by Chan’s group, since confirmed by many other laboratories, and strongly linked to disorder.KC1 ; KC2 ; RR1 ; Shira1 ; Koj1 ; Penzev ; RR2 ; RR3 ; Lin07 ; ClarkWestChan07 In addition, the shear modulus shows anomalous behavior,DayBeamishNature07 although not enough to explain the NCRI experiments.ChanScience08 Non-NCRI superflow has been searched for but not observed.Day06 We also note recent experiments that argue against any supersolid signature above approximately 55 mK.ShevDayBeam10 Further works casting doubt on supersolidity are a study of bcc ${}^{4}$He that shows unusual NCRI behavior at higher temperatures,SatanPEPolturak and a study showing that the NCRI behavior due to plasticity has different properties than due to quenching.Reppy10 In a recent experiment on a pancake-shaped sample, a capacitance gauge monitored the pressure as a function of temperature $T$.RitRep09 Samples were produced by both the slow-cooling blocked capillary method and by the more rapid quench-cooling method, which gives more disordered samples. In one set of measurements the sample was quench-cooled below 1 K in 144 s, during which time the pressure decreased. This is perhaps an indication that vacancies, formed during the quench, were leaving the sample. 
For a blocked-capillary sample the temperature was lowered below 500 mK while the pressure was monitored vs $T$. The sample was then annealed at 1.65 K, where the pressure increased, perhaps an indication that vacancies now were entering the sample. A second cooldown yielded, by a reduced $T^{2}$ term in the pressure, an indication that the sample was less disordered, but that disorder remained. Even at a constant temperature of 19 mK the pressure continued to relax, which is consistent with vacancies equilibrating. The fact that the observed relaxation times do not saturate at the temperatures studied indicates that the temperature is not yet low enough that quantum relaxation processes dominate thermal relaxation processes. This experiment can perhaps be interpreted under the assumption that the system is not supersolid. We have therefore undertaken a theoretical study of the macroscopic flow properties of a one-component ordinary solid. Our basis is the theory of Andreev and Lifshitz (AL) for the macroscopic behavior of a supersolid. They included volume $V$ as an extensive variable, in addition to $W_{ik}\equiv Vw_{ik}$, where $w_{ik}$ is the non-symmetrized strain. This permitted them to continuously go to the superfluid limit as $w_{ik}$ becomes irrelevant. The point of the present work is that on eliminating the superfluid variables, the theory should apply to an ordinary solid.AL69 We employ a variation on the notation of Ref. Saslow77, , which gives a more explicit derivation of the equations of motion and extends Ref. AL69, to include nonlinear terms.SaslowNote ; Liu Note also the theory of Fleming and CohenFC76 for an ordinary solid, which gives equations with a similar structure, and similar modes, but uses a very different notation (and does not consider an applied pressure $P_{a}$). Both Ref. AL69, and Ref. FC76, implicitly assume that uniform vacancy number-changing bulk processes are negligible, and neglect interstitials and impurities. Recently Yoo and DorseyYooDorsey considered the effect of a lattice diffusion mode on light scattering by a supersolid, but also briefly considering an ordinary solid. As noted by Martin, Parodi, and Pershan,MartinParodi the normal system has eight degrees of freedom, given by two scalar thermodynamic quantities (which can be taken to be the mass density $\rho$ and the entropy density $s$) and two vector quantities: the lattice vector $u_{i}$ and the velocity $v_{i}$ associated with the momentum density $g_{i}=\rho v_{i}$. (With $m_{4}$ the atomic mass and $n$ the number density of ${}^{4}$He atoms, we have $\rho=m_{4}n$.) As a consequence there are eight normal modes. For a uniform infinite system these modes are three pairs of propagating elastic waves and two diffusive modes, one primarily of the temperature $T$ and the other primarily of $\partial_{i}u_{i}$. In the absence of lattice defects, for a variation $\delta u_{i}$ the relationship $$\displaystyle\partial_{i}(\delta u_{i})\approx-\delta\rho/\rho$$ (1) holds, giving the system one fewer degree of freedom, and thus one fewer mode. One can think of this missing mode, associated with the dynamical violation of (1), as being associated with vacancies, as noted in Ref. MartinParodi, . The present work obtains the diffusion constant and the physical properties of this diffusive mode, for both zero and non-zero $P_{a}$. (${}^{4}$He must be under $P_{a}\approx$ 25 atmospheres to solidify.) 
We find that the physical character of the mode is that it involves essentially zero stress deviation, because the fluid-like stress (associated with changes in mass density) nearly cancels the solid-like stress (associated with changes in strain). Allowing vacancies to move permits mass change without lattice motion.BardeenHerring This allows one to take the fluid limit of zero crystallinity, and study the evolution of the sound velocity as the system evolves from the perfect solid to perfect liquid. By perfect solid we mean one with no defects and a one-to-one relationship between lattice points and atoms; by perfect liquid we mean one with no lattice structure or, equivalently, one with no sensitivity to an imaginary lattice structure. A gel has properties of both, but is multi-component.DLJohnson82 Section II gives the form of AL supersolid theory when restricted to a normal solid, including the possibility of lattice defects. Although we specifically have vacancies in mind, ${}^{3}$He impurities could be accounted for if its density were included as an additional thermodynamic variable, which would require extension of the AL theory. Note also the case of (two-component) superionic conductors, which includes certain high-temperature alkali halides, where the larger halide ions remain in a lattice but the lattice of the smaller alkali ions “melts.” Section III discusses elasticity and internal pressure for a crystal under static and uniform applied pressure $P_{a}$, and calculates internal pressure $P$ and strain. We find that $P\sim P_{a}^{2}$, so that for small $P_{a}$ the effect of $P$ is very small; see eq. (34). For small $P_{a}$ the strain is largely linear in $P_{a}$, as expected, but there is a $P_{a}^{2}$ correction. Section IV derives the normal modes for the ordinary solid. Section V considers how such modes can be generated (including the possible effect of different surface treatments), and applies the theory to the pressurization experiments.RitRep09 Section VI provides a summary and our conclusions. Appendix A gives the thermodynamics and dynamics of the AL theory for the supersolid. Appendix B calculates some thermodynamic derivatives that appear in the normal modes in terms of $P_{a}$. II Andreev-Lifshitz Normal Solid with Defects We employ the primary quantities energy density $\epsilon$, lattice displacement $u_{i}$, and non-symmetrized strain $$\displaystyle w_{ik}=\partial_{i}u_{k}.$$ (2) We consider a normal solid by setting $\rho_{s}=0$, $\rho_{n}=\rho$, ${\vec{v}}_{n}=\vec{v}$, and eliminating the superfluid equation from the equations for the supersolid (given in Appendix A). II.1 Thermodynamics The appropriate thermodynamic equations are $$\displaystyle d\epsilon$$ $$\displaystyle=$$ $$\displaystyle Tds+\lambda_{ik}dw_{ik}+\mu d\rho+\vec{v}\cdot d\vec{g},$$ (3) $$\displaystyle\epsilon$$ $$\displaystyle=$$ $$\displaystyle-P+Ts+\lambda_{ik}w_{ik}+\mu\rho+\vec{v}\cdot\vec{g},$$ (4) $$\displaystyle 0$$ $$\displaystyle=$$ $$\displaystyle-dP+sdT+w_{ik}d\lambda_{ik}+\rho d\mu+\vec{g}\cdot d\vec{v}.$$ (5) Here $\lambda_{ik}$ of AL is an elastic tensor density (with units of pressure $P$), and $\mu$ is the chemical potential (with units of velocity squared); $\lambda_{ik}$ is the same as $\sigma_{ik}$ of Ref. LLElasticity, . 
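For completeness, we note that (5) is not independent of (3) and (4): taking the differential of (4), $$d\epsilon=-dP+Tds+sdT+\lambda_{ik}dw_{ik}+w_{ik}d\lambda_{ik}+\mu d\rho+\rho d\mu+\vec{v}\cdot d\vec{g}+\vec{g}\cdot d\vec{v},$$ and subtracting (3) term by term leaves $$0=-dP+sdT+w_{ik}d\lambda_{ik}+\rho d\mu+\vec{g}\cdot d\vec{v},$$ which is the Gibbs-Duhem-type relation (5).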
II.2 Dynamics The appropriate linearized equations of motion for the independent variables $s$, $u_{i}$, $\rho$, and $v_{i}$ are $$\displaystyle\partial_{t}s+\partial_{i}f_{i}$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (6) $$\displaystyle\partial_{t}u_{i}$$ $$\displaystyle=$$ $$\displaystyle U_{i},$$ (7) $$\displaystyle\partial_{t}\rho+\partial_{i}g_{i}$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (8) $$\displaystyle\partial_{t}g_{i}+\partial_{k}\Pi_{ik}$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (9) where the fluxes $f_{i}$ (of entropy), $\Pi_{ik}$ (of momentum), $g_{i}$ (of mass), and the “source” $U_{i}$ (terminology introduced here) are given by $$\displaystyle f_{i}$$ $$\displaystyle=$$ $$\displaystyle sv_{i}-\frac{\kappa_{ij}}{T}\partial_{j}T-\frac{\alpha_{ij}}{T}% \partial_{l}\lambda_{lj},$$ (10) $$\displaystyle U_{i}$$ $$\displaystyle=$$ $$\displaystyle v_{i}+\frac{\alpha_{ij}}{T}\partial_{j}T+\beta_{ij}\partial_{l}% \lambda_{lj},$$ (11) $$\displaystyle\Pi_{ik}$$ $$\displaystyle=$$ $$\displaystyle(P\delta_{ik}-\lambda_{ik})-\eta_{iklm}\partial_{m}v_{l},$$ (12) $$\displaystyle g_{i}$$ $$\displaystyle=$$ $$\displaystyle\rho v_{i}.$$ (13) AL use $\sigma_{ik}\approx-\Pi_{ik}$.NotationFootnote The term in (11) proportional to $\beta_{ij}$ allows the lattice velocity $\dot{u}_{i}$ to differ from the velocity $v_{i}$ associated with mass flow. It leads, as we show, to a lattice diffusion mode for which $\dot{u}_{i}\neq v_{i}$ and neither is zero. Both the $\alpha_{ij}$ and $\beta_{ij}$ terms can be rewritten as flux terms. Linearizing about equilibrium, with primes denoting deviations from equilibrium, yields $$\displaystyle\partial_{t}u_{i}+\partial_{j}S_{ij}=v_{i}^{\prime},$$ (14) where $$\displaystyle S_{ij}\equiv-\frac{\alpha_{ij}}{T}T^{\prime}-\beta_{il}\lambda_{% jl}^{\prime},$$ (15) In (14), $v_{i}$ can be thought of as a “lattice source”, and $S_{ij}$ as a “lattice flux.” The $\beta_{il}$ term gives, in principle, anisotropic vacancy diffusion. Recall that a diffusion constant $D$ is proportional to a characteristic velocity times a characteristic mean-free path, so it has units of m${}^{2}$/sec. In terms of a $D$, the dissipative coefficients have the following units: $\kappa_{ij}$ has units of $s$ times $D$; $\alpha_{ij}$ has units of $D$; $\beta_{ij}$ has units of inverse pressure times $D$; and $\eta_{iklm}$ has units of $\rho$ times $D$. III Crystal Under Pressure III.1 Internal Pressure and Elasticity The momentum conservation equation (9) implicitly contains the term $\lambda_{ik}-P\delta_{ik}$, which determines the force on the surface of the solid. An internal pressure $P$ does not appear in the thermodynamics of Ref. LLElasticity, , which does not consider either a lattice under applied pressure or the presence of defects. However, the extensive energy $E=\epsilon V$, which depends on the extensive variables $(S,V,N,W_{ik}\equiv Vw_{ik},V\vec{g})$, has second derivatives that satisfy the Maxwell relation $$-\frac{\partial P}{\partial W_{ik}}=\frac{\partial\lambda_{ik}}{\partial V},$$ (16) where the appropriate variables are held constant. For solid ${}^{4}$He under an applied pressure $P_{a}$, this makes $P$ non-zero. In principle we may let $E$ depend on the number of vacancies $N_{V}$, with associated “chemical potential” $\phi_{V}=\partial E/\partial N_{V}$ (with units of energy, rather than velocity squared). 
Then the additional Maxwell relation $$-\frac{\partial P}{\partial N_{V}}=\frac{\partial\phi_{V}}{\partial V}$$ (17) follows, with the appropriate variables held constant. If the vacancies are not in equilibrium (i.e., $\phi_{V}\neq 0$), this also makes $P$ non-zero. The general results of the present work (e.g., a nonzero lattice diffusion constant) can thus be made applicable to a solid not under $P_{a}$ but having vacancies out of local thermal equilibrium. Terms found here to depend on $P_{a}$ may in that case depend on the difference between actual concentration of vacancies and the equilibrium concentration of vacancies. However, we expect a $P_{a}$ of $\sim 25$ atm to dominate the effect of vacancies, and thus we neglect their effect on $P$. Although Refs. AL69, , FC76, and MartinParodi, introduce the internal pressure $P$, they do not calculate $P$ or its thermodynamic derivatives. Ref. AL69, and the present work neglect the possibility of interstitial atoms.DefectFootnote For a reference that considers interstitials, see Ref. Zippelius, . As employed by Ref. AL69, , this pressure term, in contrast to $\lambda_{ik}$ alone (Ref. LLElasticity, does not include $P$), permits one to continuously approach the superfluid limit, when the lattice disappears. In the present case, it permits one to continuously approach the ordinary liquid limit. The consequences of a nonzero $P$ include, but are not limited to, a mode where vacancies are permitted to diffuse. Thermodynamic derivatives of $P$ are essential for defect diffusion, and also affect the elastic modes. Moreover, they are needed to obtain the pure liquid limit for longitudinal sound on letting the crystallinity go to zero. We first use a Maxwell relation to find an explicit expression for $P$ as a function of strain. III.2 Internal Pressure $P$ Since holding $(V,N)$ constant is equivalent to holding $(V,\rho)$ constant, and similarly for $(S,N)$ and $(\sigma=s/\rho,N)$, we use these sets interchangeably. We rewrite (16) as $$\displaystyle-\left.\frac{\partial P}{\partial W_{ik}}\right|_{V,S,N}=-\frac{1% }{V}\left.\frac{\partial P}{\partial w_{ik}}\right|_{V,\sigma,\rho}=\left.% \frac{\partial\lambda_{ik}}{\partial V}\right|_{W_{ik},S,N}.$$ (18) For constant $W_{ik}$ we have $$\displaystyle 0=dW_{ik}=w_{ik}dV+Vdw_{ik},$$ (19) so that $$\displaystyle\left.\frac{dw_{ik}}{\partial V}\right|_{W_{ik},S,N}=-\frac{w_{ik% }}{V}.$$ (20) Then $$\displaystyle\left.\frac{\partial\lambda_{ik}}{\partial V}\right|_{W_{ik},S,N}% =\left.\frac{\partial\lambda_{ij}}{\partial V}\right|_{w_{ik},\sigma,N}-\frac{% w_{jl}}{V}\left.\frac{\partial\lambda_{ik}}{\partial w_{jl}}\right|_{V,\sigma,% \rho},$$ (21) and (18) gives $$\displaystyle\left.\frac{\partial P}{\partial w_{ik}}\right|_{V,\sigma,\rho}=$$ $$\displaystyle-V\left.\frac{\partial\lambda_{ik}}{\partial V}\right|_{W_{ik},S,N}$$ $$\displaystyle=$$ $$\displaystyle-V\left.\frac{\partial\lambda_{ij}}{\partial V}\right|_{w_{ik},% \sigma,N}+{w_{jl}}\left.\frac{\partial\lambda_{ik}}{\partial w_{jl}}\right|_{V% ,\sigma,\rho}.$$ (22) We employ Ref. LLElasticity, for the elasticity tensor $\lambda_{ik}$ in an isotropic solid. 
Using superscript $(0)$ to denote the equilibrium value of $\lambda_{ik}$ and the strain $w_{ik}$, we have $$\displaystyle\lambda_{ik}^{(0)}=\left(K-\frac{2}{3}\mu_{V}\right)\delta_{ik}w_% {ll}^{(0)}+\mu_{V}\left(w_{ik}^{(0)}+w_{ki}^{(0)}\right),$$ (23) where $K$ and $\mu_{V}$ are the bulk and shear moduli, and both $\lambda_{ik}^{(0)}$ and $w_{ik}^{(0)}$ are to be determined under a given applied pressure $P_{a}$. Eq. (22) then gives $$\displaystyle\left.\frac{\partial P}{\partial w_{ik}}\right|_{V,\sigma,\rho}=$$ $$\displaystyle\left(K^{*}-\frac{2}{3}\mu_{V}^{*}\right)\delta_{ik}w_{ll}^{(0)}+% \mu_{V}^{*}\left(w_{ik}^{(0)}+w_{ki}^{(0)}\right),$$ (24) where $$\displaystyle K^{*}=K-V\left.\frac{\partial K}{\partial V}\right|_{w_{ik},% \sigma,N},\quad\mu_{V}^{*}=\mu_{V}-V\left.\frac{\partial\mu_{V}}{\partial V}% \right|_{w_{ik},\sigma,N}.$$ (25) Under uniform $P_{a}$ we expect an isotropic response, so $$\displaystyle w_{ik}^{(0)}=\frac{\delta_{ik}}{3}w_{ll}^{(0)}.$$ (26) Then (24) becomes $$\displaystyle\left.\frac{\partial P}{\partial w_{ik}}\right|_{V,\sigma,\rho}=$$ $$\displaystyle K^{*}\delta_{ik}w_{ll}^{(0)}.$$ (27) Integration of (27) with respect to $w_{ik}$ gives the part of the internal pressure dependent on the strain to be $$\displaystyle P=\frac{1}{2}K^{*}\left(w_{ll}^{(0)}\right)^{2},$$ (28) where we take the integration constant to be zero.NonzeroPFootnote For ${w_{11}^{(0)}}={w_{22}^{(0)}}={w_{33}^{(0)}}$, we then have $$\displaystyle P=\frac{9}{2}K^{*}{w_{11}^{(0)}}^{2}.$$ (29) This result applies to the case of a strongly crystalline material. In the opposite limit where the crystallinity disappears and the particles are weakly interacting, part of $P$ would be given by the ideal gas law. III.3 Strain $w_{ik}$ As discussed above, under an applied pressure the force on the surface of a solid is $$\displaystyle\lambda_{ik}^{(0)}-P\delta_{ik}=-P_{a}\delta_{ik}.$$ (30) Taking the trace yields $$\displaystyle\frac{\lambda_{ll}^{(0)}}{3}-P=-P_{a}.$$ (31) Substitution from (23) and (29) gives $$\displaystyle 3K{w_{11}^{(0)}}-\frac{9}{2}K^{*}{w_{11}^{(0)}}^{2}=-P_{a}.$$ (32) Since an applied pressure should cause a negative strain, only the solution for ${w_{11}^{(0)}}<0$ is physical. For solid ${}^{4}$He, we expect both ${w_{11}^{(0)}}$ and $P_{a}/K$ to be small. The solution of (32) to second order in $P_{a}$ is $$\displaystyle{w_{11}^{(0)}}\approx-\frac{P_{a}}{3K}+\frac{P_{a}^{2}K^{*}}{6K^{% 3}}.$$ (33) The first term is what one would get on neglecting $P$ in (31). To second order in $P_{a}/K$, eq. (29) then gives $$\displaystyle P=K^{*}\frac{P_{a}^{2}}{2K^{2}},$$ (34) a result that appears to be new. Further, $\lambda_{ik}^{(0)}=\delta_{ik}\lambda_{11}^{(0)}$, where $$\displaystyle\lambda_{11}^{(0)}=-P_{a}+K^{*}\frac{P_{a}^{2}}{2K^{2}}.$$ (35) The first term in $\lambda_{11}^{(0)}$ is what one obtains on neglecting $P$ in (31), and in agreement with Ref. LLElasticity, . IV Normal Modes of Andreev-Lifshitz Normal Solid with Defects As noted earlier, this system has eight variables: $s$, $\rho$, $g_{i}$ and $u_{i}$. Disturbances from equilibrium will be denoted by primes, so we use $s^{\prime}$, $\rho^{\prime}$, $g^{\prime}_{i}\approx\rho v^{\prime}_{i}$, and $u^{\prime}_{i}$. There are correspondingly eight normal modes. For an infinite system we assume a disturbance of the form $\exp[i(\vec{k}\cdot\vec{r}-\omega t)]$, where the real wavevector $\vec{k}$ is considered to be known, but $\omega$ is unknown. For the disturbance to decay in time, $Im(\omega)<0$. 
Six modes come in three degenerate pairs, with $g_{i}^{\prime}$ and $u_{i}^{\prime}$ strongly coupled, and correspond to ordinary elasticity. The other two modes are diffusive, with temperature diffusion nearly decoupled from lattice diffusion. To ensure this decoupling we set the (off-diagonal) temperature-lattice transport coefficient $\alpha_{ij}=0$, and set the distinct but similar-looking thermal expansion coefficient $\alpha=0$.LLElasticity ; LLStatistical We consider an isotropic solid, for which $\kappa_{ij}=\kappa\delta_{ij}$ and $\beta_{ij}=\beta\delta_{ij}$, this $\beta$ not to be confused with the identical symbol sometimes used for the thermal expansion coefficient.LLFluid We also neglect the tensor viscosity $\eta_{iklm}$, which to lowest order in $k$ does not contribute to the modes. The fluctuation of the tensor $\Pi_{ik}^{\prime}$ (12) has a term from the viscosity $\sim\eta_{iklm}k_{m}v_{l}^{\prime}$ and a term from the stress tensor $\sim\lambda^{\prime}_{ik}\approx(\partial\lambda_{ik}/\partial w_{jl})w_{jl}^{\prime}$. Then, by (7), $\lambda^{\prime}_{ik}\sim w^{\prime}_{jl}\sim k_{j}v^{\prime}_{l}/\omega$. Thus, for both propagating modes ($\omega\sim k$) and diffusive modes ($\omega\sim k^{2}$), the term in $\Pi_{ik}^{\prime}$ due to viscosity is, at the least, of order $k$ relative to the term $\lambda_{ik}^{\prime}$, and is therefore neglected in the long wavelength limit. IV.1 Thermal Diffusion For the normal solid it is convenient to work with $\rho$ and $\sigma=s/\rho$ as variables, because $\sigma$ diffuses but does not flow, and therefore is nearly conserved. To see this note that, to lowest order in deviations from equilibrium, eq. (6) and (8) yield $$\partial_{t}\sigma^{\prime}=\frac{1}{\rho}\partial_{i}\left(\frac{\kappa}{T}% \partial_{i}T^{\prime}\right)\approx\frac{\kappa}{T(\partial\sigma/\partial T)% _{\rho}}\nabla^{2}\sigma^{\prime},$$ (36) where we have used $\alpha=0$.LLFootnote This equation describes entropy diffusion, with $\sigma^{\prime}\neq 0$ and $$\omega=-iD_{T}k^{2},\qquad D_{T}=\frac{\kappa}{\rho T(\partial\sigma/\partial T% )_{\rho}}.$$ (37) For this mode $u^{\prime}_{i}=v^{\prime}_{i}=\rho^{\prime}=0$. If $\alpha$ is small but non-zero the frequency will not change to lowest order in $\alpha$, but from the equations for $\rho$, $\vec{g}$, and $\vec{u}$ these quantities would develop amplitudes proportional to $\sigma^{\prime}$ and $\alpha$, and thus have negligible amplitude as $\alpha\rightarrow 0$. We consider only the case where the effects of $\alpha$ can be neglected. IV.2 Elastic Modes We obtain the elastic modes by taking $\sigma^{\prime}=0$ and neglecting dissipative and nonlinear terms in (7)-(9). Thus, eq. (7) gives $\dot{u}_{i}^{\prime}=v_{i}^{\prime}$. In the remainder of this work, all thermodynamic derivatives are taken at constant $\sigma$, and derivatives with respect to $\rho$ are taken at constant $w_{ik}$ and vice-versa, unless otherwise specified. Further, when derivatives with respect to a specific component of $w_{ik}$ are taken, the other components of $w_{ik}$ are held fixed. Then by (12) and (13), eqs. 
(9) and (8) becomeALfootnote $$\displaystyle 0$$ $$\displaystyle=\rho\ddot{u}^{\prime}_{i}+\frac{\partial P}{\partial\rho}% \partial_{i}\rho^{\prime}+\frac{\partial P}{\partial w_{jl}}\partial_{i}w^{% \prime}_{jl}-\frac{\partial\lambda_{ik}}{\partial\rho}\partial_{k}\rho^{\prime% }-\frac{\partial\lambda_{ik}}{\partial w_{jl}}\partial_{k}w^{\prime}_{jl},$$ (38) $$\displaystyle 0$$ $$\displaystyle=\dot{\rho}^{\prime}+\rho\partial_{i}\dot{u}^{\prime}_{i}.$$ (39) Clearly, $\sigma^{\prime}$ does not couple to the other variables. On linearizing, eq. (39) gives $\rho^{\prime}=-\rho\partial_{i}u^{\prime}_{i}$, so with (2), eq. (38) becomes $$\displaystyle 0=\rho\ddot{u}^{\prime}_{i}-\rho\frac{\partial P}{\partial\rho}% \partial_{i}\partial_{k}u^{\prime}_{k}+\frac{\partial P}{\partial w_{jl}}% \partial_{i}\partial_{j}u^{\prime}_{l}$$ $$\displaystyle\qquad\qquad\qquad\qquad+\rho\frac{\partial\lambda_{ik}}{\partial% \rho}\partial_{k}\partial_{j}u^{\prime}_{j}-\frac{\partial\lambda_{ik}}{% \partial w_{jl}}\partial_{k}\partial_{j}u^{\prime}_{l}.$$ (40) The second term gives the pure fluidlike (longitudinal) response, which occurs for $P\neq 0$ (e.g., an imperfect solid or a solid under $P_{a}$), and the fifth term gives the pure solidlike (longitudinal and transverse) response. Appendix B shows that, for uniform static $P_{a}$, certain quantities are isotropic. This permits us to define $$\displaystyle\frac{\partial P}{\partial w_{jl}}\equiv\frac{\partial P}{% \partial w}\delta_{jl},\quad\frac{\partial\lambda_{jl}}{\partial\rho}\equiv% \frac{\partial\lambda}{\partial\rho}\delta_{jl},\quad\frac{\partial\lambda}{% \partial w}\equiv K+\frac{4}{3}\mu_{V}.$$ (41) Appendix B also shows that $$\displaystyle\left.\frac{\partial\lambda_{ik}}{\partial w_{jl}}\right|_{\rho,% \sigma}=$$ $$\displaystyle\frac{\partial\lambda}{\partial w}\delta_{ik}\delta_{jl}+\mu_{V}% \left(\delta_{ij}{\delta_{kl}}+\delta_{kj}\delta_{il}-2\delta_{ik}\delta_{jl}% \right).$$ (42) Thus (40) gives $$\displaystyle 0$$ $$\displaystyle\approx\rho\ddot{u}^{\prime}_{i}-\rho\frac{\partial P}{\partial% \rho}\partial_{i}\partial_{k}u^{\prime}_{k}+\frac{\partial P}{\partial w}% \partial_{i}\partial_{k}u_{k}^{\prime}$$ $$\displaystyle+\rho\frac{\partial\lambda}{\partial\rho}\partial_{i}\partial_{k}% u_{k}^{\prime}-\left(\frac{\partial\lambda}{\partial w}-\mu_{V}\right)\partial% _{i}\partial_{j}u^{\prime}_{j}-\mu_{V}\nabla^{2}u^{\prime}_{i}.$$ (43) On letting $\partial_{i}\rightarrow ik_{i}$ and $\partial_{t}\rightarrow-i\omega$, eq. (43) becomes $$\displaystyle 0$$ $$\displaystyle\approx(-\rho\omega^{2}+\mu_{V})k^{2}u^{\prime}_{i}$$ $$\displaystyle+\left[\rho\frac{\partial P}{\partial\rho}-\frac{\partial P}{% \partial w}-\rho\frac{\partial\lambda}{\partial\rho}+\left(\frac{\partial% \lambda}{\partial w}-\mu_{V}\right)\right]k_{i}(\vec{k}\cdot\vec{u}).$$ (44) Longitudinal Mode: If $\vec{k}\cdot\vec{u}\neq 0$, then (44) shows that $u_{i}$ is along $k_{i}$, so the mode is longitudinal. Moreover, eq. 
(44) gives the normal mode frequencies $$\displaystyle\omega^{2}$$ $$\displaystyle=\left[\frac{\partial P}{\partial\rho}-\frac{1}{\rho}\frac{% \partial P}{\partial w}-\frac{\partial\lambda}{\partial\rho}+\frac{1}{\rho}% \frac{\partial\lambda}{\partial w}\right]k^{2}$$ $$\displaystyle=\left[c^{2}_{lL}+c^{2}_{lS}\right]k^{2}\equiv c_{l}^{2}k^{2},$$ (45) where $$\displaystyle c_{lL}^{2}\equiv$$ $$\displaystyle\frac{\partial P}{\partial\rho}-\frac{\partial\lambda}{\partial% \rho},\qquad c_{lS}^{2}\equiv\frac{1}{\rho}\frac{\partial\lambda}{\partial w}-% \frac{1}{\rho}\frac{\partial P}{\partial w}.$$ (46) The liquid-like velocity $c_{lL}$ contains thermodynamic derivatives with respect to the density $\rho$, and the solid-like velocity $c_{lS}$ contains thermodynamic derivatives with respect to the strain $w_{ik}$. Eq. (45) gives a velocity for longitudinal sound that is similar to that found in Ref. MartinParodi, . Appendix B finds the four derivatives in (46) in terms of $P_{a}$, which to second order in $P_{a}/K$ give $$\displaystyle c_{lL}^{2}=$$ $$\displaystyle\frac{P_{a}}{\rho}\left(\frac{K^{*}}{K}-1\right)+\frac{P_{a}^{2}K% ^{*}}{2\rho K^{2}}\left(1-\frac{K^{*}}{K}+\frac{\rho}{K^{*}}\frac{\partial K^{% *}}{\partial\rho}\right),$$ (47) $$\displaystyle c_{lS}^{2}=$$ $$\displaystyle\frac{K+\frac{4}{3}\mu_{V}}{\rho}+\frac{P_{a}}{\rho}\frac{K^{*}}{% K}-\frac{P_{a}^{2}{K^{*}}^{2}}{2\rho K^{3}},$$ (48) where $K^{*}$ is defined in (25). For $P_{a}=0$ we have $c_{l}^{2}=c_{lL}^{2}+c_{lS}^{2}=[K+({4}/{3})\mu_{V}]/\rho$, which agrees with Ref. LLElasticity, for an ordinary solid. Transverse Mode: If $\vec{k}\cdot\vec{u}=0$, so that the mode is transverse, then (44) gives the normal mode frequencies $$\omega^{2}=\frac{\mu_{V}}{\rho}k^{2}.$$ (49) From (39), for the transverse mode $\rho^{\prime}=0$. Eq. (49) agrees with Ref. LLElasticity, for an ordinary solid. For both longitudinal and transverse mode frequencies, eq. (36) is satisfied by $\sigma^{\prime}=0$. IV.3 Lattice Diffusion The lattice diffusion mode is the most subtle of the modes. For this mode, as for the elastic modes, we consider that $\sigma$ is constant, but we do not take $v^{\prime}_{i}=\dot{u}^{\prime}_{i}$. Rather, we assume that $\omega=-iD_{L}k^{2}$, where the lattice mode diffusion constant $D_{L}>0$ is to be determined, and we keep the dissipative terms in the equations of motion for $v^{\prime}_{i}$, $u^{\prime}_{i}$, and $\rho^{\prime}$. With $\beta_{ij}=\beta\delta_{ij}$ (i.e., an isotropic solid), $\alpha_{ij}=0$, and setting $\sigma^{\prime}=0$, eqs. (8) and (7) give $$\displaystyle-i\omega\rho^{\prime}$$ $$\displaystyle=-\rho(ik_{i})v^{\prime}_{i},$$ (50) $$\displaystyle-i\omega u^{\prime}_{i}$$ $$\displaystyle=v^{\prime}_{i}+\beta(ik_{k})\lambda^{\prime}_{ik}.$$ (51) If we assume that the mode is longitudinal, with $v_{i}^{\prime}\sim k_{i}$ (the consistency of this assumption to be determined below), then the first of these equations implies that $v^{\prime}_{i}\sim k_{i}\rho^{\prime}$. Therefore in (9) the term $\partial_{t}g_{i}\sim\omega\rho v_{i}^{\prime}$ is of order $k^{2}$ relative to the $k\rho^{\prime}$ dependence of $\partial_{k}\Pi_{ik}^{\prime}$, and is neglected in the long wavelength limit. As a consequence, $\partial_{k}\Pi_{ik}^{\prime}\approx 0$: the contributions from the liquid-like part $P^{\prime}\delta_{ik}$ and from the solid-like part $-\lambda^{\prime}_{ik}$ nearly cancel. 
This can only occur for an imperfect solid or a solid under applied pressure $P_{a}$, which has both liquid-like and solid-like responses. Thus, neglecting the $\partial_{t}g_{i}\sim\omega\rho v_{i}^{\prime}$ term and neglecting the viscosity $\eta_{iklm}$ (as discussed above), eq. (9) gives $$ik_{i}\rho^{\prime}\frac{\partial P}{\partial\rho}-ik_{k}\rho^{\prime}\frac{% \partial\lambda_{ik}}{\partial\rho}=-k_{k}k_{j}\frac{\partial\lambda_{ik}}{% \partial w_{jl}}u^{\prime}_{l}+\frac{\partial P}{\partial w_{jl}}k_{i}k_{j}u_{% l}^{\prime}.$$ (52) Substitution from (41) and (42) gives $$\displaystyle ik_{i}\rho^{\prime}\frac{\partial P}{\partial\rho}-ik_{i}\rho^{% \prime}\frac{\partial\lambda}{\partial\rho}$$ $$\displaystyle\quad=-\left(\frac{\partial\lambda}{\partial w}-\mu_{V}\right)k_{% i}k_{l}u^{\prime}_{l}-\mu_{V}k^{2}u^{\prime}_{i}+\frac{\partial P}{\partial w}% k_{i}k_{l}u_{l}^{\prime}.$$ (53) All but one term in (53) is along $k_{i}$, and the remaining term is along $u^{\prime}_{i}$. Therefore we deduce that $u^{\prime}_{i}$ is along $k_{i}$, and thus $k_{i}k_{l}u^{\prime}_{l}=k^{2}u^{\prime}_{i}$. Then (53) becomes $$\displaystyle i\rho^{\prime}\left(\frac{\partial P}{\partial\rho}-\frac{% \partial\lambda}{\partial\rho}\right)=-k_{l}u^{\prime}_{l}\left(\frac{\partial% \lambda}{\partial w}-\frac{\partial P}{\partial w}\right).$$ (54) Further, eq. (51) gives, on taking $\lambda^{\prime}_{ik}=(\partial\lambda_{ik}/\partial w_{jl})w_{jl}^{\prime}+(% \partial\lambda_{ik}/\partial\rho)\rho^{\prime}$, and taking $u_{i}^{\prime}$ along $k_{i}$, $$\displaystyle\left[-i\omega+\beta\frac{\partial\lambda}{\partial w}k^{2}\right% ]u^{\prime}_{i}=v^{\prime}_{i}+ik_{i}\beta\frac{\partial\lambda}{\partial\rho}% \rho^{\prime}.$$ (55) Since $u_{i}^{\prime}$ is along $k_{i}$, eq. (55) implies $v_{i}^{\prime}$ also along $k_{i}$. Hence the mode is longitudinal. We now use (50) and the sound velocities of (46) to eliminate $\rho^{\prime}$ from (54) and (55). Then (54) multiplied by $\omega$ gives $$i\rho k_{i}v^{\prime}_{i}c_{lL}^{2}=-\omega k_{i}u^{\prime}_{i}\rho c_{lS}^{2},$$ (56) and (55) multiplied by $\omega$ gives $$\displaystyle\left[-i\omega+\beta\frac{\partial\lambda}{\partial w}k^{2}\right% ]\omega u^{\prime}_{i}=\left[\omega+i\beta\rho\frac{\partial\lambda}{\partial% \rho}k^{2}\right]v^{\prime}_{i}.$$ (57) Since $u^{\prime}_{i}$ and $v^{\prime}_{i}$ are along $k_{i}$, eq. (56) implies that for the diffusive mode $$\displaystyle v_{i}^{\prime}=i\omega u_{i}^{\prime}(c^{2}_{lS}/c^{2}_{lL})=-% \dot{u}_{i}^{\prime}(c^{2}_{lS}/c^{2}_{lL}),$$ (58) which is independent of $\omega$. We interpret this as the lattice velocity $\dot{u}_{i}^{\prime}$ being out of phase relative to the matter velocity $v_{i}^{\prime}$ so that the fluid and lattice stresses cancel. 
Combining (56) and (57) then yields $$\displaystyle\omega\left(c_{lS}^{2}+c_{lL}^{2}\right)=$$ $$\displaystyle-ik^{2}\beta\rho\left[c_{lS}^{2}\frac{\partial\lambda}{\partial% \rho}+c_{lL}^{2}\frac{1}{\rho}\frac{\partial\lambda}{\partial w}\right]$$ $$\displaystyle=$$ $$\displaystyle-ik^{2}\beta\rho\left[\frac{\partial\lambda}{\partial w}\frac{% \partial P}{\partial\rho}-\frac{\partial\lambda}{\partial\rho}\frac{\partial P% }{\partial w}\right].$$ (59) Therefore $$\displaystyle D_{L}=i\frac{\omega}{k^{2}}=$$ $$\displaystyle\beta\rho\left[\frac{\frac{\partial\lambda}{\partial w}\frac{% \partial P}{\partial\rho}-\frac{\partial\lambda}{\partial\rho}\frac{\partial P% }{\partial w}}{\frac{\partial}{\partial w}(\lambda-P)-\rho\frac{\partial}{% \partial\rho}(\lambda-P)}\right].$$ (60) For $P_{a}=0$, eq. (60) agrees with Ref. YooDorsey, YooFootnote and with Ref. Zippelius, .ZippeliusFootnote For either a pure liquid or a pure solid, $D_{L}\rightarrow 0$: • For a pure liquid, derivatives with respect to strain go to zero: $\partial\lambda/\partial w\rightarrow 0$ and $\partial P/\partial w\rightarrow 0$. Therefore $D_{L}\rightarrow 0$. • For a pure solid, derivatives with respect to density (at constant strain) go to zero: $\partial\lambda/\partial\rho\rightarrow 0$ and $\partial P/\partial\rho\rightarrow 0$. Therefore $D_{L}\rightarrow 0$. If the system is not supersolid, and if the samples are not perfect, then it is consistent to interpret the observations of Ref. RitRep09, in terms of this lattice diffusion mode. Substitution for the four derivatives in (60) from Appendix B gives, to lowest order in $P_{a}/K$, $$\displaystyle D_{L}=$$ $$\displaystyle\frac{\beta VP_{a}^{2}}{K^{2}}\left[\frac{V}{2}\frac{\partial^{2}% K}{\partial V^{2}}+\frac{K}{K+\frac{4}{3}\mu_{V}}\frac{\partial K}{\partial V}% -\frac{V}{K+\frac{4}{3}\mu_{V}}\left(\frac{\partial K}{\partial V}\right)^{2}% \right],$$ (61) where derivatives with respect to $V$ are taken at constant $(w_{ik},\sigma,N)$. The form (61) does not apply to case of a pure liquid (whereas (60) is general), because it does not permit $P$ to have terms independent of strain. Recall that we have assumed that it is valid to expand $K$ around $P_{a}=0$. If, in $D_{L}$, all other dependences on $P_{a}$ can be neglected, then (61) implies that $D_{L}\sim P_{a}^{2}$. V Longitudinal Response of Normal Solid Recall that $\beta$ has units of $D$ divided by pressure. As $T\rightarrow 0$ we expect that, by the Arrhenius equation, $D\rightarrow 0$ as $\exp[-\Delta/k_{B}T]$, where $\Delta$ is a hopping energy, because the hopping rate should yield such a dependence. Therefore, if the wavevector $k$ is replaced by $d^{-1}$, where $d$ is a characteristic distance (the plate separation in Ref. RitRep09, ), then the characteristic response time $\tau\sim\omega^{-1}\sim(Dk^{2})^{-1}\sim d^{2}/D$. Hence the view that the experimental results of Ref. RitRep09, are due to a lattice diffusion mode leads to the conclusion that $\tau$ varies as $\exp[\Delta/k_{B}T]$. Indeed, such a dependence is observed, with $\Delta\sim 30$ mK. It would be useful to test for the predicted $d^{2}$-dependence. For instance, the present theory predicts that changing the plate separation in the pancake cell of Ref. RitRep09, from 100 $\mu$m to 200 $\mu$m should yield a relaxation time approximately four times longer. This mode provides a means for vacancy flow to equilibrate vacancy concentrations. It is consistent with the observation of Ref. 
RitRep09, that pressure decreases during an anneal, and when the system relaxes at constant temperature. We interpret this to mean that vacancies diffuse to or from the surface. We now turn to how a normal solid will respond to the two devices usually employed to generate a disturbance: a heater and a transducer. Since there are three longitudinal modes (thermal diffusion, lattice diffusion, and elastic waves), it would appear that there is need for an additional independent generator. Perhaps surface properties introduce a new boundary condition that amounts to having an independent generator. For example, the material against the solid ${}^{4}$He may cause the ${}^{4}$He surface to prefer vacancies, as opposed to atoms. Thus the surface treatment may affect the behavior of both heaters and transducers. This argument applies to any two ordinary solids, and there may be some for which this can be readily tested. Hence two macroscopically identical heaters or transducers made of different materials, or of the same material but with different surface treatment, would not show identical behavior. Since $v_{i}-\dot{u}_{i}\approx 0$ for the temperature mode and the elastic modes, one way to characterize the response of a surface is in terms of $v_{i}-\dot{u}_{i}$. Thus $(v_{i}-\dot{u}_{i})/P^{\prime}$ for a longitudinally moving transducer and $(v_{i}-\dot{u}_{i})/T^{\prime}$ for a heater would characterize differences in the response to different surface conditions, and the extent to which they can generate the lattice diffusion mode. VI Summary and Conclusions We have applied the Andreev-Lifshitz theory of supersolid dynamics to an ordinary solid with lattice defects – specifically, with vacancies in mind. At the thermodynamic level, this theory includes an internal pressure $P$, distinct from the applied pressure $P_{a}$ and the stress tensor $\lambda_{ik}$. For the Andreev-Lifshitz theory this is necessary to permit a continuous variation from a supersolid to a superfluid. Under uniform static $P_{a}$, we have $\lambda_{ik}=(P-P_{a})\delta_{ik}$. For $P_{a}\neq 0$, Maxwell relations imply that $P\sim P_{a}^{2}$. These results are not conventional; Ref. LLElasticity, does not include $V$ as a distinct extensive thermodynamic variable, nor its thermodynamically conjugate variable $P$. In the present work many derivatives involving $V$ are at fixed strain $w_{ik}$, which is also unconventional, since normally one assumes that $\delta w_{ii}=-\delta\rho/\rho$.LLElasticity Nevertheless, the variables of Andreev and Lifshitz must be taken if vacancies are to be permitted. For an isotropic model, the normal modes were obtained. There are, as expected, two sets of propagating transverse modes, with velocities as expected. There also are, as expected, a set of propagating longitudinal modes, but with velocities containing both solid-like and liquid-like contributions, and which depend upon $P_{a}$. In addition there are two diffusive longitudinal modes: a well-known mode that dominantly involves temperature, and another mode involving lattice defects (i.e., vacancies). Our analysis of the physical nature of this mode shows that it is surprisingly complex. It involves the mass density $\rho$, the lattice velocity $\dot{u}_{i}$, and the mass-flow velocity $v_{i}$, with the fluid-like pressure $P$ associated with $\rho$ essentially canceling the solid-like stress $\lambda$ associated with $u_{i}$. 
In a separate workSearsSasALSS10 we discuss the normal modes of the full Andreev and Lifshitz theory for a supersolid, which has nine variables. As Ref. AL69, established at $T=0$, there are four pairs of propagating modes. Three pairs are essentially the elastic modes we have studied here, with a weak coupling to the superfluid. The fourth pair is basically a fourth sound mode, where the normal fluid is entrained by the lattice. These propagating modes, in the presence of a finite $P_{a}$, and their generation by transducers and heaters, have been considered in Ref.SearsSasSSGen, . We also find a rather complex additional mode, not considered in Ref. AL69, , which is diffusive.SearsSasALSS10 Although the additional supersolid diffusive mode is similar to the normal solid diffusive mode found in the present work (e.g., zero net stress, and distinct mass and lattice motion), its mode structure differs significantly. The supersolid diffusive mode is characterized by three velocities: ${v^{\prime}_{n}}_{i}$, ${v^{\prime}_{s}}_{i}$, and $\dot{u}^{\prime}_{i}$, associated respectively with the normal mass, superfluid mass, and the lattice. For supersolid ${}^{4}$He with $P_{a}\ll K$, we find that ${v^{\prime}_{s}}_{i}\gg{v^{\prime}_{n}}_{i}\gg\dot{u}^{\prime}_{i}$. We also find that $g^{\prime}=\rho_{n}v_{n}^{\prime}+\rho_{s}v_{s}^{\prime}\approx 0$. If ${}^{4}$He is a genuine supersolid, then this mode provides an alternate explanation for the exponential time-dependence of the pressure decay observed by Ref. RitRep09, . We close with the following comment. Ref. AL69, predicted that supersolidity will occur because of quantum diffusion, a situation that occurs at such low temperatures that the relevant bulk diffusion processes are temperature-independent. Ref. RitRep09, observe temperature-dependent relaxation; therefore their system is not at a low enough temperature to be in the quantum diffusive regime. Note that quantum spin tunneling is an established phenomenon, wherein the magnetic relaxation rate saturates at low enough temperatures.Chudnovsky ; Tejada ; ChudTejBook VII Acknowledgements This work was partially supported by the Department of Energy through grant DE-FG02-06ER46278. References (1) A. F. Andreev and I. M. Lifshitz, Sov. Phys. JETP 29, 1107 (1969). (2) D. J. Thouless, Ann. Phys. (N.Y.) 52, 403 (1969). This contains the remark that, for a lattice of bosons, vacancies could be “in the lowest Bloch state with a finite probability, so the system would be ‘super’ but not ‘fluid’ ”. (3) G.V. Chester, Phys. Rev. A 2, 256 (1970). (4) A. J. Leggett, Phys. Rev. Lett. 25, 1543 (1970). (5) For a recent review, see S. Balibar and F. Caupin, J. Phys. Cond. Mat. 20, 173201(2008). (6) E. Kim and M. Chan, Nature (London) 427, 225 (2004). (7) E. Kim and M. Chan, Science 305, 1941 (2004). (8) A. S. C. Rittner and J. D. Reppy, Phys. Rev. Lett. 97, 165301 (2006). (9) M. Kondo, S. Takada, Y. Shibayama, and K. Shirahama, J. Low Temp. Phys. 148, 695 (2007). (10) Y. Aoki, J. C. Graves, and H. Kojima, Phys. Rev. Lett. 99, 015301 (2007). (11) A. Penzev, Y. Yasuta, and M. Kubota, J. Low Temp. Phys. 148, 677 (2007). (12) A. S. C. Rittner and J. D. Reppy, Phys. Rev. Lett. 98, 175302 (2007). (13) A. S. C. Rittner and J. D. Reppy, Phys. Rev. Lett. 101, 155301(2008). (14) X. Lin, A. C. Clark, M. H. W. Chan, Nature 449, 1025 (2007). (15) A. C. Clark, J. T. West, and M. H. W. Chan, Phys. Rev. Lett. 99, 135302 (2007). (16) J. Day and J. Beamish, Nature 450, 853 (2007). (17) M. H. W. 
Chan, Science 319, 1207 (2008). (18) James Day and John Beamish, Phys. Rev. Lett. 96, 105304 (2006) (19) O. Syshchenko, J. Day, and J. Beamish, Phys. Rev. Lett. 104, 195301 (2010). (20) A. Eyal, O. Pelleg, L. Embon, E. Polturak, Phys. Rev. Lett. 105, 025301 (2010). (21) J. D. Reppy, Phys. Rev. Lett. 104, 255351 (2010). (22) A. S. C. Rittner and J. D. Reppy, J. Phys. Conf. Ser. 150, 032089 (2009). (23) W. M. Saslow, Phys. Rev. B 15, 173 (1977). (24) Ref. Saslow77, finds equations of motion identical to those of Ref.AL69, , but also includes nonlinear terms. (25) M. Liu, Phys. Rev. B. 18, 1165 (1978). This work notes that Ref. Saslow77, employs the non-Galilean $\vec{j}_{s}=\rho_{s}\vec{v}_{s}$ in place of the Galilean $\vec{j}_{s}=\rho_{s}(\vec{v}_{s}-\vec{v}_{n})$. This does not affect the equations of motion until the normal modes are calculated. (26) P. D. Fleming and C. Cohen, Phys. Rev. B 13, 500 (1976). (27) C.-D. Yoo and A. T. Dorsey, Phys. Rev. B 81, 134518 (2010). (28) P. C. Martin, P. Parodi, and P. S. Pershan, Phys. Rev. A 6, 2401 (1972). (29) J. Bardeen and C. Herring, Imperfections in Nearly Perfect Crystals, John Wiley and Sons, Inc., New York, N. Y., 1952, p 261. This work treats a metal as a bicomponent system, with atoms and vacancies as the two components. It notes that mass motion relative to the lattice is not an assured result of including vacancies, as long as the vacancies are in local thermal equilibrium. (30) D. L. Johnson, J. Chem. Phys. 77, 1531 (1982). (31) L. D. Landau and E. M. Lifshitz, Theory of Elasticity, 3rd ed., Pergamon, Oxford (1986). (32) The present work uses the notation of Ref. Saslow77, , which follows Ref. AL69, for the dissipative coefficients. On the other hand, Ref. YooDorsey, seems to use the notation of Ref. Zippelius, for the dissipative coefficients, but follows Ref. AL69, in using $\vec{j}$ for the momentum density. The present work, Ref. FC76, , Ref. Zippelius, , and Ref. MartinParodi, use $\vec{g}$ for the momentum density. (33) If $N_{L}$ is the number of lattice sites and $N_{os}$ is the number of on-site atoms, then $N_{L}=N_{os}+N_{V}$. In addition, if there are $N_{i}$ interstitial atoms, then $N=N_{os}+N_{i}$. With the energy differential taking the form $dE=\dots+\phi_{L}dN_{L}+\phi_{os}dN_{os}+\phi_{i}dN_{i}$, we then have $dE=\dots+\phi_{L}dN_{V}+(\phi_{L}+\phi_{os})dN_{os}+\phi_{i}dN_{i}$. If the on-site and interstitials are in equilibrium, then $\phi_{L}+\phi_{os}=\phi_{i}$. Further, if the vacancies (subject to no conservation law) are in equilibrium, then $\phi_{V}=0$. If both the vacancies and the on-site and interstitial atoms are in equilibrium, then both $\phi_{V}=0$ and $\phi_{os}=\phi_{i}$. We do not consider interstitials and only consider the case where $P_{a}$ dominates the effect on $P$. (34) A. Zippelius, B. I. Halperin and D. R. Nelson, Phys. Rev. B 22, 2514 (1980). (35) In typical treatments of elasticity in solids it is implicit that $P=0$, even for $P_{a}\neq 0$. To produce $P=0$, even for $w_{ll}^{(0)}\neq 0$, the present application of AL theory requires $K^{*}=0$. For that to hold, by eq. (25) $K$ must vary linearly with $V$ at fixed strain $w_{ik}$; this appears to be unlikely. (36) L. D. Landau and E. M. Lifshitz, Statistical Physics, 2nd ed., Addison-Wesley, Reading, MA (1969). (37) L. D. Landau and E. M. Lifshitz, Fluid Mechanics, 2nd ed., Pergamon, Oxford (1987). (38) See section 50 of Ref. LLFluid, . (39) In the linear approximation the corresponding two equations of Ref. 
AL69, (see its eq.(19)), on taking $\rho_{s}\rightarrow 0$, agree with the above two equations. (40) To obtain agreement with Ref. YooDorsey, , we drop higher-order terms in the velocity, take $T=0$ (as in the present work), and neglect the static strain in eq. (5), which gives $dP\approx\rho d\mu$. Then $c_{l}^{2}$ of (45) agrees with $c_{NS}^{2}$ found in Ref. YooDorsey, , and $D_{L}$ of (60) agrees with $D_{2}$ found in Ref. YooDorsey, . However, for finite $P_{a}$ it is inconsistent to neglect the static strain (i.e., $\rho(\partial\mu/\partial\rho)\sim(P_{a}/K)^{2}\sim w_{ik}^{(0)}(\partial% \lambda_{ik}/\partial\rho)$). (41) To obtain agreement with Ref. Zippelius, , we take $\partial\lambda/\partial w$ to dominate the denominator of (60) and make the identifications, valid for zero static strain (such as $P_{a}=0$ and vacancies in equilibrium), that $\rho(\partial\lambda/\partial\rho)\rightarrow\gamma_{R}$ and $\rho(\partial P/\partial\rho)\rightarrow\chi_{R}^{-1}$. (42) M. Sears and W. M. Saslow, “Andreev-Lifshitz Supersolid Diffusive Mode”, submitted to Physical Review B. (43) M. Sears and W. M. Saslow, “Generation Efficiencies for Longitudinal Propagating Modes in a Supersolid”, accepted by Physical Review B. (44) E. M. Chudnovsky, Sov. Phys. JETP 50, 1035 (1979). (45) J. Tejada, X. X. Zhang, and E. M. Chudnovsky, Phys. Rev. B 47, 14977 (1993). (46) E. M. Chudnovsky and J. Tejada, Macroscopic Quantum Tunneling of the Magnetic Moment, Cambridge University Press, Cambridge (1998). (47) The term $-u_{i}\partial_{j}v_{j}$, proportional to the lattice position $u_{i}$, would cause $\dot{u}_{i}$ to depend upon the choice of origin; this is not translationally invariant. Appendix A Andreev-Lifshitz Supersolid A.1 Thermodynamics Consider a general frame of reference, with non-zero superfluid velocity $\vec{v}_{s}$ and normal fluid velocity $\vec{v}_{n}$. Let $\vec{u}$ be the local displacement of the crystal sites relative to their equilibrium, and take the strain to be given by $w_{ik}=\partial_{i}u_{k}$. Then by thermodynamics the differential of the energy density $\epsilon$ is given by $$d\epsilon=Tds+\lambda_{ik}dw_{ik}+\mu d\rho+\vec{j}_{s}\cdot d\vec{v}_{s}+\vec% {v}_{n}\cdot d\vec{g}.$$ (62) Here $\lambda_{ik}$ is an elastic tensor density (with the same units as pressure $P$), $\mu$ is the chemical potential (with units of velocity squared), $\vec{j}_{s}=\vec{g}-\rho\vec{v}_{n}$ (a requirement of Galilean relativity), $\rho=\rho_{n}+\rho_{s}$ (the sum of the normal and superfluid densities), and $\vec{g}=\rho_{n}\vec{v}_{n}+\rho_{s}\vec{v}_{s}$. By thermodynamic extensivity we also have $$\epsilon=-P+Ts+\lambda_{ik}w_{ik}+\mu\rho+\vec{j}_{s}\cdot\vec{v}_{s}+\vec{v}_% {n}\cdot\vec{g}$$ (63) and the Gibbs-Duhem relation $$0=-dP+sdT+w_{ik}d\lambda_{ik}+\rho d\mu+\vec{v}_{s}\cdot d\vec{j}_{s}+\vec{g}% \cdot d\vec{v}_{n}.$$ (64) The system will be in equilibrium when the thermodynamic forces $\partial_{i}T$, $\partial_{i}\lambda_{ik}$, $\partial_{i}\mu$, $\partial_{i}{v_{n}}_{j}$, and $\partial_{i}{j_{s}}_{i}$ are all zero. A.2 Dynamics The thermodynamic variables $\epsilon$, $s$, $u_{i}$, $\rho$, $\vec{v}_{s}$, and $\vec{g}$ are taken to satisfy equations of motion that are first order in time and that satisfy appropriate properties under space rotation and inversion, and under time-reversal. 
Thus $\epsilon$, $\rho$, and $\vec{g}$ satisfy conservation laws (a flux but no source), the phase gradient $\vec{v}_{s}$ is proportional to a gradient (a type of flux, with no source), and the displacement $u_{i}$ has a source but no flux. Thus $$\displaystyle\partial_{t}\epsilon+\partial_{i}Q_{i}$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (65) $$\displaystyle\partial_{t}s+\partial_{i}f_{i}$$ $$\displaystyle=$$ $$\displaystyle\frac{R}{T},\quad(R\geq 0),$$ (66) $$\displaystyle\partial_{t}u_{i}$$ $$\displaystyle=$$ $$\displaystyle U_{i},$$ (67) $$\displaystyle\partial_{t}\vec{v}_{s}+\vec{\nabla}\theta$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (68) $$\displaystyle\partial_{t}\rho+\partial_{i}g_{i}$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (69) $$\displaystyle\partial_{t}g_{i}+\partial_{k}\Pi_{ik}$$ $$\displaystyle=$$ $$\displaystyle 0.$$ (70) (The source $U_{i}$ was implicit in previous theories.AL69 ; Saslow77 ) The unknown fluxes $Q_{i}$, $f_{i}$, $\phi$, and $\Pi_{ik}$, and the unknown sources $R$ and $U_{i}$, are determined by subjecting them to the condition that, when applied to the thermodynamic equation (62), the density $R$ of the rate of dissipated energy be non-negative. Note that $g_{i}$ is already known, and $Q_{i}$ and $R$ will not be needed. For $f_{i}$, $U_{i}$, $\theta$, and $\Pi_{ik}$ we have, when terms non-linear in velocities and strains are neglected, $$\displaystyle f_{i}$$ $$\displaystyle=sv_{ni}-\frac{\kappa_{ij}}{T}\partial_{j}T-\frac{\alpha_{ij}}{T}% \partial_{l}\lambda_{lk},$$ (71) $$\displaystyle U_{i}$$ $$\displaystyle=v_{ni}+\frac{\alpha_{ij}}{T}\partial_{j}T+\beta_{ij}\partial_{l}% \lambda_{lk},$$ (72) $$\displaystyle\theta$$ $$\displaystyle=\mu-\zeta_{ik}\partial_{k}v_{ni}-\chi\partial_{k}j_{sk},$$ (73) $$\displaystyle\Pi_{ik}$$ $$\displaystyle=(P\delta_{ik}-\lambda_{ik})-\eta_{iklm}\partial_{m}v_{nl}-\zeta_% {ik}\partial_{l}j_{sl}.$$ (74) In each of these equations, the last two terms are dissipative and the preceding terms are reactive. Refs. AL69, and Saslow77, obtain a nonlinear term in $U_{i}$, which may be obtained by letting $v_{i}\rightarrow v_{i}-v_{j}\partial_{j}u_{i}$. On the other hand, Ref. YooDorsey, obtains two nonlinear terms, which may be obtained by letting $v_{i}\rightarrow v_{i}-v_{j}\partial_{j}u_{i}-u_{i}\partial_{j}v_{j}$.YooDorseyFootnote Appendix B Relevant Thermodynamic Derivatives In what follows, the quantities $(\partial P/\partial w_{jl})_{\rho,\sigma}$, $(\partial P/\partial\rho)_{w_{ik},\sigma}$, $(\partial\lambda_{ik}/\partial w_{jl})_{\rho,\sigma}$, and $(\partial\lambda_{ik}/\partial\rho)_{w_{ik},\sigma}$ are obtained in terms of $P_{a}$ and the elastic constants. (1) With ${w_{11}^{(0)}}={w_{22}^{(0)}}={w_{33}^{(0)}}$, eq. (27) gives $$\displaystyle\left.\frac{\partial P}{\partial w_{ik}}\right|_{V,s,\rho}=$$ $$\displaystyle 3K^{*}\delta_{ik}w_{11}^{(0)}\equiv\frac{\partial P}{\partial w}% \delta_{ik},$$ (75) where $\partial P/\partial w$ is defined for later convenience. 
Substitution for $w_{11}^{(0)}$ from (33) gives, to second order in $P_{a}/K$, $$\displaystyle\frac{\partial P}{\partial w}\approx-P_{a}\frac{K^{*}}{K}+\frac{P% _{a}^{2}{K^{*}}^{2}}{2K^{3}}.$$ (76) (2) From (29) we have $$\displaystyle\left.\frac{\partial P}{\partial\rho}\right|_{\sigma,w_{ik}}=% \frac{9}{2}{w_{11}^{(0)}}^{2}\left.\frac{\partial K^{*}}{\partial\rho}\right|_% {\sigma,w_{ik}}.$$ (77) By (25), $$\displaystyle\left.\frac{\partial K^{*}}{\partial\rho}\right|_{\sigma,w_{ik}}=$$ $$\displaystyle\left[\frac{\partial}{\partial\rho}\left(K-V\left.\frac{\partial K% }{\partial V}\right|_{\sigma,w_{ik},N}\right)\right]_{\sigma,w_{ik}}$$ $$\displaystyle=$$ $$\displaystyle\frac{V^{2}}{\rho}\left.\frac{\partial^{2}K}{\partial V^{2}}% \right|_{\sigma,w_{ik},N}.$$ (78) Thus (77) can be written as $$\displaystyle\left.\frac{\partial P}{\partial\rho}\right|_{\sigma,w_{ik}}=% \frac{9V^{2}}{2\rho}{w_{11}^{(0)}}^{2}\left.\frac{\partial^{2}K}{\partial V^{2% }}\right|_{\sigma,w_{ik},N}.$$ (79) To second order in $P_{a}/K$, eqs. (33), (77) and (79) give $$\displaystyle\left.\frac{\partial P}{\partial\rho}\right|_{\sigma,w_{ik}}% \approx\frac{1}{2}\frac{P_{a}^{2}}{K^{2}}\left.\frac{\partial K^{*}}{\partial% \rho}\right|_{\sigma,w_{ik}}=\frac{V^{2}P_{a}^{2}}{2\rho K^{2}}\left.\frac{% \partial^{2}K}{\partial V^{2}}\right|_{\sigma,w_{ik},N}.$$ (80) (3) From (23) we have $$\displaystyle\left.\frac{\partial\lambda_{ik}}{\partial w_{jl}}\right|_{\rho,% \sigma}=$$ $$\displaystyle\left(K-\frac{2}{3}\mu_{V}\right)\delta_{ik}\delta_{jl}+\mu_{V}% \left(\delta_{ij}{\delta_{kl}}+\delta_{kj}\delta_{il}\right).$$ (81) We now define $$\displaystyle\frac{\partial\lambda}{\partial w}\equiv K+\frac{4}{3}\mu_{V},$$ (82) so that $$\displaystyle\left.\frac{\partial\lambda_{ik}}{\partial w_{jl}}\right|_{\rho,% \sigma}=$$ $$\displaystyle\frac{\partial\lambda}{\partial w}\delta_{ik}\delta_{jl}+\mu_{V}% \left(\delta_{ij}{\delta_{kl}}+\delta_{kj}\delta_{il}-2\delta_{ik}\delta_{jl}% \right).$$ (83) (4) From (23) we also have $$\displaystyle\left.\frac{\partial\lambda_{ik}}{\partial\rho}\right|_{w_{ik},% \sigma}=$$ $$\displaystyle\left(\left.\frac{\partial K}{\partial\rho}\right|_{w_{ik},\sigma% }-\frac{2}{3}\left.\frac{\partial\mu_{V}}{\partial\rho}\right|_{w_{ik},\sigma}% \right)\delta_{ik}w_{ll}^{(0)}$$ $$\displaystyle+\left.\frac{\partial\mu_{V}}{\partial\rho}\right|_{w_{ik},\sigma% }\left(w_{ik}^{(0)}+w_{ki}^{(0)}\right).$$ (84) With $$\displaystyle\frac{\partial K}{\partial\rho}_{w_{ik},\sigma}=-\frac{V}{\rho}% \frac{\partial K}{\partial V}_{w_{ik},\sigma,N}=\frac{K^{*}-K}{\rho},$$ (85) and with a similar relation for $\mu_{V}$, eq. (84) gives $$\displaystyle\left.\frac{\partial\lambda_{ik}}{\partial\rho}\right|_{w_{ik},% \sigma}=$$ $$\displaystyle\left(\frac{K^{*}-K}{\rho}-\frac{2}{3}\frac{\mu_{V}^{*}-\mu_{V}}{% \rho}\right)\delta_{ik}w_{ll}^{(0)}$$ $$\displaystyle+\frac{\mu_{V}^{*}-\mu_{V}}{\rho}\left(w_{ik}^{(0)}+w_{ki}^{(0)}% \right).$$ (86) With (26) and ${w_{11}^{(0)}}={w_{22}^{(0)}}={w_{33}^{(0)}}$, $$\displaystyle\left.\frac{\partial\lambda_{ik}}{\partial\rho}\right|_{w_{ik},% \sigma}=$$ $$\displaystyle\left(\frac{K^{*}-K}{\rho}\right)\delta_{ik}w_{ll}^{(0)}=3\left(% \frac{K^{*}-K}{\rho}\right)\delta_{ik}w_{11}^{(0)}$$ $$\displaystyle\equiv$$ $$\displaystyle\frac{\partial\lambda}{\partial\rho}\delta_{ik},$$ (87) where $\partial\lambda/\partial\rho$ is defined for later convenience. To second order in $P_{a}/K$, eq. 
(33) gives $$\displaystyle\frac{\partial\lambda}{\partial\rho}\approx\Big{(}1-\frac{K^{*}}{% K}\Big{)}\left[\frac{P_{a}}{\rho}-\frac{P_{a}^{2}K^{*}}{2\rho K^{2}}\right].$$ (88)
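As a quick numerical illustration of how the Appendix B derivatives enter the results of Sec. IV, the following minimal Python sketch evaluates the longitudinal sound speeds of eqs. (47)-(48) and the lattice diffusion constant of eq. (61). Every input value below is a hypothetical placeholder chosen only to exercise the formulas (none of it is measured solid-helium data), and the variable names are ours rather than the paper's.

```python
# Minimal numeric sketch (not from the paper): evaluate eqs. (47), (48) and (61)
# for purely illustrative, hypothetical parameter values.

rho     = 200.0     # mass density [kg/m^3]                        (placeholder)
V       = 1.0e-6    # sample volume [m^3]                          (placeholder)
K       = 3.0e7     # bulk modulus at fixed strain [Pa]            (placeholder)
mu_V    = 1.5e7     # shear modulus [Pa]                           (placeholder)
dK_dV   = -2.0e13   # dK/dV at fixed (w_ik, sigma, N) [Pa/m^3]     (placeholder)
d2K_dV2 = 5.0e19    # d^2K/dV^2 at fixed (w_ik, sigma, N) [Pa/m^6] (placeholder)
beta    = 1.0e-15   # dissipative coefficient, units of D/pressure (placeholder)
P_a     = 1.0e5     # applied pressure [Pa]                        (placeholder)

# K* = K - V dK/dV (cf. eq. (25) as used in Appendix B); dK*/drho from eq. (78).
K_star      = K - V * dK_dV
dKstar_drho = (V**2 / rho) * d2K_dV2

# Longitudinal sound speeds, eqs. (47) and (48), to second order in P_a/K.
c_lL2 = (P_a / rho) * (K_star / K - 1.0) \
        + (P_a**2 * K_star) / (2.0 * rho * K**2) \
          * (1.0 - K_star / K + (rho / K_star) * dKstar_drho)
c_lS2 = (K + 4.0 / 3.0 * mu_V) / rho + (P_a / rho) * (K_star / K) \
        - (P_a**2 * K_star**2) / (2.0 * rho * K**3)

# Lattice diffusion constant, eq. (61), to lowest order in P_a/K.
K43 = K + 4.0 / 3.0 * mu_V
D_L = (beta * V * P_a**2 / K**2) * ((V / 2.0) * d2K_dV2
                                    + (K / K43) * dK_dV
                                    - (V / K43) * dK_dV**2)

print(f"c_l^2 = c_lL^2 + c_lS^2 = {c_lL2 + c_lS2:.4e} m^2/s^2")
print(f"(K + 4/3 mu_V)/rho       = {K43 / rho:.4e} m^2/s^2  (P_a -> 0 limit of c_l^2)")
print(f"D_L                      = {D_L:.4e} m^2/s  (scales as P_a^2)")
```

Setting $P_{a}=0$ in this sketch recovers $c_{l}^{2}=[K+(4/3)\mu_{V}]/\rho$ and $D_{L}=0$, the two limits quoted in the text.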
Closing the Gap Between Short and Long XORs for Model Counting
Shengjia Zhao, Computer Science Department, Tsinghua University, zhaosj12@mails.tsinghua.edu.cn
Sorathan Chaturapruek, Computer Science Department, Stanford University, sorathan@cs.stanford.edu
Ashish Sabharwal, Allen Institute for AI, Seattle, WA, ashishs@allenai.org
Stefano Ermon, Computer Science Department, Stanford University, ermon@cs.stanford.edu
Abstract
Many recent algorithms for approximate model counting are based on a reduction to combinatorial searches over random subsets of the space defined by parity or XOR constraints. Long parity constraints (involving many variables) provide strong theoretical guarantees but are computationally difficult. Short parity constraints are easier to solve but have weaker statistical properties. It is currently not known how long these parity constraints need to be. We close the gap by providing matching necessary and sufficient conditions on the required asymptotic length of the parity constraints. Further, we provide a new family of lower bounds and the first non-trivial upper bounds on the model count that are valid for arbitrarily short XORs. We empirically demonstrate the effectiveness of these bounds on model counting benchmarks and in a Satisfiability Modulo Theory (SMT) application motivated by the analysis of contingency tables in statistics.
Introduction
Model counting is the problem of computing the number of distinct solutions of a given Boolean formula. It is a classical problem that has received considerable attention from a theoretical point of view (?; ?), as well as from a practical perspective (?; ?). Numerous probabilistic inference and decision making tasks, in fact, can be directly translated to (weighted) model counting problems (?; ?). As a generalization of satisfiability testing, the problem is clearly intractable in the worst case. Nevertheless, there has been considerable success in both exact and approximate model counting algorithms, motivated by a number of applications (?). Recently, approximate model counting techniques based on randomized hashing have emerged as one of the leading approaches (?; ?; ?; ?; ?; ?). While approximate, these techniques provide strong guarantees on the accuracy of the results in a probabilistic sense. Further, these methods all reduce model counting to a small number of combinatorial searches on a randomly projected version of the original formula, obtained by augmenting it with randomly generated parity or XOR constraints. This approach allows one to leverage decades of research and engineering in combinatorial reasoning technology, such as fast satisfiability (SAT) and SMT solvers (?). While modern solvers have witnessed tremendous progress over the past 25 years, model counting techniques based on hashing tend to produce instances that are difficult to solve.
In order to achieve strong (probabilistic) accuracy guarantees, existing techniques require each randomly generated parity constraint to be relatively long, involving roughly half of the variables in the original problem. Such constraints, while easily solved in isolation using Gaussian Elimination, are notoriously difficult to handle when conjoined with the original formula (?; ?; ?; ?). Shorter parity constraints, i.e., those involving relatively few variables, are friendlier to SAT solvers, but their statistical properties are not well understood. ? (?) showed that long parity constraints are not strictly necessary, and that one can obtain the same accuracy guarantees using shorter XORs, which are computationally much friendlier. They provided a closed-form expression allowing an easy computation of an XOR length that suffices, given various parameters such as the number of problem variables, the number of constraints being added, and the size of the solution space under consideration. It is, however, currently not known how tight their sufficiency condition is, how it scales with various parameters, or whether it is in fact a necessary condition. We resolve these open questions by providing an analysis of the optimal asymptotic constraint length required for obtaining high-confidence approximations to the model count. Surprisingly, for formulas with $n$ variables, we find that when $\Theta(n)$ constraints are added, a constraint length of $\Theta(\log n)$ is both necessary and sufficient. This is a significant improvement over standard long XORs, which have length $\Theta(n)$. Constraints of logarithmic length can, for instance, be encoded efficiently with a polynomial number of clauses. We also study upper bounds on the minimum sufficient constraint length, which evolve from $\mathrm{O}(\log n)$ to $\mathrm{O}(n^{\gamma}\log^{2}n)$ to $n/2$ across various regimes of the number of parity constraints. As a byproduct of our analysis, we obtain a new family of probabilistic upper and lower bounds that are valid regardless of the constraint length used. These upper and lower bounds on the model count come within a constant factor of each other as the constraint density approaches the aforementioned optimal value. The bounds degrade gracefully as we reduce the constraint length and the corresponding computational budget. While lower bounds for arbitrary XOR lengths were previously known (?; ?), the upper bound we prove in this paper is the first non-trivial upper bound in this setting. Remarkably, even though we rely on random projections and therefore only look at subsets of the entire space (a local view, akin to traditional sampling), we are able to say something about the global nature of the space, i.e., a probabilistic upper bound on the number of solutions. We evaluate these new bounds on standard model counting benchmarks and on a new counting application arising from the analysis of contingency tables in statistics. These data sets are common in many scientific domains, from sociological studies to ecology (?). We provide a new approach based on SMT solvers and a bit-vector arithmetic encoding. Our approach scales very well and produces accurate results on a wide range of benchmarks. It can also handle additional constraints on the tables, which are very common in scientific data analysis problems, where prior domain knowledge translates into such constraints (e.g., certain entries must be zero because the corresponding event is known to be impossible).
We demonstrate the capability to handle structural zeroes (?) in real experimental data. Preliminaries: Counting by Hashing Let $x_{1},\cdots,x_{n}$ be $n$ Boolean variables. Let $S\subseteq\{0,1\}^{n}$ be a large, high-dimensional set111We restrict ourselves to the binary case for the ease of exposition. Our work can be naturally extended to categorical variables.. We are interested in computing $|S|$, the number of elements in $S$, when $S$ is defined succinctly through conditions or constraints that the elements of $S$ satisfy and membership in $S$ can be tested using an NP oracle. For example, when $S$ is the set of solutions of a Boolean formula over $n$ binary variables, the problem of computing $|S|$ is known as model counting, which is the canonical $\#$-P complete problem (?). In the past few years, there has been growing interest in approximate probabilistic algorithms for model counting. It has been shown (?; ?; ?; ?; ?) that one can reliably estimate $|S|$, both in theory and in practice, by repeating the following simple process: randomly partition $S$ into $2^{m}$ cells and select one of these lower-dimensional cells, and compute whether $S$ has at least 1 element in this cell (this can be accomplished with a query to an NP oracle, e.g., invoking a SAT solver). Somewhat surprisingly, repeating this procedure a small number of times provides a constant factor approximation to $|S|$ with high probability, even though counting problems (in $\#$-P) are believed to be significantly harder than decision problems (in NP). The correctness of the approach relies crucially on how the space is randomly partitioned into cells. All existing approaches partition the space into cells using parity or XOR constraints. A parity constraint defined on a subset of variables checks whether an odd or even number of the variables take the value 1. Specifically, $m$ parity (or XOR) constraints are generated, and $S$ is partitioned into $2^{m}$ equivalence classes based on which parity constraints are satisfied. The way in which these constraints are generated affects the quality of the approximation of $|S|$ (the model count) obtained. Most methods randomly generate $m$ parity constraints by adding each variable to each constraint with probability $f\leq 1/2$. This construction can also be interpreted as defining a hash function, mapping the space $\{0,1\}^{n}$ into $2^{m}$ hash bins (cells). Formally, Definition 1. Let $A\in\{0,1\}^{m\times n}$ be a random matrix whose entries are Bernoulli i.i.d. random variables of parameter $f\leq 1/2$, i.e., $\Pr[A_{ij}=1]=f$. Let $b\in\{0,1\}^{m}$ be chosen uniformly at random, independently from $A$. Then, $\mathcal{H}^{f}_{m\times n}=\{h_{A,b}:\{0,1\}^{n}\rightarrow\{0,1\}^{m}\}$, where $h_{A,b}(x)=Ax+b\mod{2}$ and $h_{A,b}\in_{R}\mathcal{H}_{m\times n}^{f}$ is chosen randomly according to this process, is a family of $f$-sparse hash functions. The idea to estimate $|S|$ is to define progressively smaller cells (by increasing $m$, the number of parity constraints used to define $h$), until the cells become so small that no element of $S$ can be found inside a (randomly) chosen cell. The intuition is that the larger $|S|$ is, the smaller the cells will have to be, and we can use this information to estimate $|S|$. Based on this intuition, we give a hashing-based counting procedure (Algorithm 1, SPARSE-COUNT), which relies on an NP oracle $\mathcal{O}_{S}$ to check whether $S$ has an element in the cell. It is adapted from the SPARSE-WISH algorithm of ? (?). 
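To make Definition 1 concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) that draws a hash function $h_{A,b}$ from the $f$-sparse family and applies it to an assignment; each row of $A$ is one randomly generated parity constraint of average length $nf$.

```python
import numpy as np

def draw_sparse_hash(n, m, f, rng):
    """Draw h_{A,b} from H^f_{m x n} as in Definition 1: A has i.i.d. Bernoulli(f)
    entries and b is a uniformly random 0/1 vector."""
    A = (rng.random((m, n)) < f).astype(np.uint8)
    b = rng.integers(0, 2, size=m, dtype=np.uint8)
    return A, b

def apply_hash(A, b, x):
    """h_{A,b}(x) = A x + b (mod 2); x falls in the chosen cell iff this is the zero vector."""
    return (A @ x + b) % 2

rng = np.random.default_rng(0)
n, m, f = 20, 5, 0.1                              # f controls the average XOR length n*f
A, b = draw_sparse_hash(n, m, f, rng)

x = rng.integers(0, 2, size=n, dtype=np.uint8)    # an arbitrary assignment
print("hash bin of x:", apply_hash(A, b, x))
print("average constraint length:", A.sum(axis=1).mean())
```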
The algorithm takes as input $n$ families of $f$-sparse hash functions $\{\mathcal{H}^{f_{i}}_{i\times n}\}_{i=0}^{n}$, used to partition the space into cells. In practice, line 7 is implemented using a SAT solver as an NP-oracle. In a model counting application, this is accomplished by adding to the original formula $i$ parity constraints generated as in Definition 1 and checking the satisfiability of the augmented formula. Typically, $\{\mathcal{H}^{\frac{1}{2}}_{i\times n}\}$ is used, corresponding to XORs where each variable is added with probability $1/2$ (hence with average length $n/2$). We call these long parity constraints. In this case, it can be shown that SPARSE-COUNT will output a factor $16$ approximation of $|S|$ with probability at least $1-\Delta$ (?). Unfortunately, checking satisfiability (i.e., $S(h^{i}_{A,b})\geq 1$, line 7) has been observed to be very difficult when many long parity constraints are added to a formula (?; ?; ?; ?). Note, for instance, that while a parity constraint of length one simply freezes a variable right away, a parity constraint of length $k$ can be propagated only after $k-1$ variables have been set. From a theoretical perspective, parity constraints are known to be fundamentally difficult for the resolution proof system underlying SAT solvers (cf. exponential scaling of Tseitin tautologies (?)). A natural question, therefore, is whether short parity constraints can be used in SPARSE-COUNT and provide reliable bounds for $|S|$. Intuitively, for the method to work we want the hash functions $\{\mathcal{H}^{f_{i}}_{i\times n}\}$ to have a small collision probability. In other words, we want to ensure that when we partition the space into cells, configurations from $S$ are divided into cells evenly. This gives a direct relationship between the original number of solutions $|S|$ and the (random) number of solutions in one (randomly chosen) cell, $S(h)$. More precisely, we say that the hash family shatters $S$ if the following holds: Definition 2. For $\epsilon>0$, a family of hash functions $\mathcal{H}^{f}_{i\times n}$ $\epsilon$-shatters a set $S$ if $\Pr[S(h)\geq 1]\geq 1/2+\epsilon$ when $h\in_{R}\mathcal{H}^{f}_{i\times n}$, where $S(h)=|\{x\in S\mid h(x)=0\}|$. The crucial property we need to obtain reliable estimates is that the hash functions (equivalently, parity constraints) are able to shatter sets $S$ with arbitrary “shape”. This property is both sufficient and necessary for SPARSE-COUNT to provide accurate model counts with high probability: Theorem 1. (Informal statement) A necessary and sufficient condition for SPARSE-COUNT to provide a constant factor approximation to $|S|$ is that each family $\mathcal{H}^{f_{i}}_{i\times n}$ $\epsilon$-shatters all sets $S^{\prime}$ of size $|S^{\prime}|=2^{i+c}$ for some $c\geq 2$. A formal statement, along with all proofs, is provided in a companion technical report (?). Long parity constraints, i.e., $1/2$-sparse hash functions, are capable of shattering sets of arbitrary shape. When $h\in_{R}\mathcal{H}^{\frac{1}{2}}_{i\times n}$, it can be shown that configurations $x\in\{0,1\}^{n}$ are placed into hash bins (cells) pairwise independently, and this guarantees shattering of sufficiently large sets of arbitrary shape. Recently, ? (?) showed that sparser hash functions can be used for approximate counting as well. In particular, $f^{*}$-sparse hash functions, for sufficiently large $f^{*}\lneqq 1/2$, were shown to have good enough shattering capabilities. 
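The following is a simplified sketch in the spirit of the counting loop described above; it is not the paper's exact Algorithm 1, and it replaces the NP-oracle by explicit enumeration over a toy solution set. The number of parity constraints $i$ is increased until a randomly chosen cell is empty in most of $T$ trials, and $2^{i}$ for the last surviving $i$ is reported.

```python
import numpy as np

def draw_hash(n, m, f, rng):
    # f-sparse hash h(x) = Ax + b mod 2, as in Definition 1.
    A = (rng.random((m, n)) < f).astype(np.uint8)
    b = rng.integers(0, 2, size=m, dtype=np.uint8)
    return A, b

def cell_nonempty(S, A, b):
    # Stand-in for the NP-oracle query "S(h) >= 1": S is an explicit list here;
    # in practice a SAT solver checks the formula conjoined with the parity constraints.
    return any(((A @ x + b) % 2 == 0).all() for x in S)

def sparse_count_sketch(S, n, f_of_i, T=9, seed=0):
    rng = np.random.default_rng(seed)
    estimate = 1
    for i in range(1, n + 1):
        hits = sum(cell_nonempty(S, *draw_hash(n, i, f_of_i(i), rng)) for _ in range(T))
        if hits <= T // 2:        # cells have become too small to contain solutions
            break
        estimate = 2 ** i
    return estimate

# Toy example: S = all assignments whose last n-4 bits are zero, so |S| = 16.
n = 12
S = [np.array([int(c) for c in np.binary_repr(v, 4)] + [0] * (n - 4), dtype=np.uint8)
     for v in range(16)]
print(sparse_count_sketch(S, n, f_of_i=lambda i: 0.5))  # long XORs; expect a constant-factor estimate of 16
```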
It is currently not known whether $f^{*}$ is the optimal constraint density. Asymptotically Optimal Constraint Density We analyze the asymptotic behavior of the minimum constraint density $f$ needed for SPARSE-COUNT to produce correct bounds with high confidence. As noted earlier, the bottleneck lies in ensuring that $f$ is large enough for a randomly chosen hash bin to receive at least one element of the set $S$ under consideration, i.e., the hash family shatters $S$. Definition 3. Let $n,m\in\mathbb{N},n\geq m$. For any fixed $\epsilon>0$, the minimum constraint density $f=\tilde{f}_{\epsilon}(m,n)$ is defined as the pointwise smallest function such that for any constant $c\geq 2$, $\mathcal{H}^{f}_{m\times n}$ $\epsilon$-shatters all sets $S\in\{0,1\}^{n}$ of size $2^{m+c}$. We will show (Theorem 2) that for any $\epsilon>0$, $\tilde{f}_{\epsilon}(m,n)=\Omega(\frac{\log m}{m})$, and this is asymptotically tight when $\epsilon$ is small enough and $m=\Theta(n)$, which in practice is often the computationally hardest regime of $m$. Further, for the regime of $m=\Theta(n^{\beta})$ for $\beta<1$, we show that $\tilde{f}_{\epsilon}(m,n)=\mathrm{O}(\frac{\log^{2}m}{m})$. Combined with the observation that $\tilde{f}_{\epsilon}(m,n)=\Theta(1)$ when $m=\Theta(1)$, our results thus reveal how the minimum constraint density evolves from a constant to $\Theta(\frac{\log m}{m})$ as $m$ increases from a constant to being linearly related to $n$. The minimum average constraint length, $n\cdot\tilde{f}_{\epsilon}(m,n)$, correspondingly decreases from $n/2$ to $\mathrm{O}(n^{1-\beta}\log^{2}n)$ to $\Theta(\log n)$, showing that in the computationally hardest regime of $m=\Theta(n)$, the parity constraints can in fact be represented using only $2^{\Theta(\log n)}$, i.e., a polynomial number of SAT clauses. Theorem 2. Let $n,m\in\mathbb{N},n\geq m,$ and $\kappa>1$. The minimum constraint density, $\tilde{f}_{\epsilon}(m,n)$, behaves as follows: 1. Let $\epsilon>0$. There exists $M_{\kappa}$ such that for all $m\geq M_{\kappa}$: $$\tilde{f}_{\epsilon}(m,n)>\frac{\log{m}}{\kappa\,m}$$ 2. Let $\epsilon\in(0,\frac{3}{10}),\alpha\in(0,1),$ and $m=\alpha n$. There exists $N$ such that for all $n\geq N$: $$\tilde{f}_{\epsilon}(m,n)\leq\left({3.6-\frac{5}{4}\log_{2}{\alpha}}\right)% \frac{\log{m}}{m}$$ 3. Let $\epsilon\in(0,\frac{3}{10}),\alpha,\beta\in(0,1),$ and $m=\alpha n^{\beta}$. There exists $N_{\kappa}$ such that for all $n\geq N_{\kappa}$: $$\tilde{f}_{\epsilon}(m,n)\leq\frac{\kappa\,(1-\beta)}{2\beta}\frac{\log^{2}{m}% }{m}$$ The lower bound in Theorem 2 follows from analyzing the shattering probability of an $m+c$ dimensional hypercube $S_{c}=\{x\mid x_{j}=0\ \ \forall j>m+c\}$. Intuitively, random parity constraints of density smaller than $\frac{\log m}{m}$ do not even touch the $m+c$ relevant (i.e., non-fixed) dimensions of $S_{c}$ with a high enough probability, and thus cannot shatter $S_{c}$ (because all elements of $S_{c}$ would be mapped to the same hash bin). For the upper bounds, we exploit the fact that $\tilde{f}(m,n)$ is at most the $f^{*}$ function introduced by ? (?) and provide an upper bound on the latter. Intuitively, $f^{*}$ was defined as the minimum function such that the variance of $S(h)$ is relatively small. The variance was upper bounded by considering the worst case “shape” of $|S|$: points packed together unrealistically tightly, all fitting together within Hamming distance $w^{*}$ of a point. 
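To see what these rates mean at a concrete problem size, the small sketch below evaluates the density expressions of Theorem 2 in the regime $m=\alpha n$. The logarithm base and the constant $\kappa$ are illustrative choices of ours; the theorem is stated asymptotically, for sufficiently large $m$ and $n$, so the exact constants should not be over-interpreted.

```python
import math

def density_lower_bound(m, kappa=1.5):
    # Theorem 2, part 1: for sufficiently large m, the minimum density exceeds log(m)/(kappa*m).
    return math.log2(m) / (kappa * m)

def density_upper_bound_linear(m, alpha):
    # Theorem 2, part 2 (m = alpha*n): a density of (3.6 - (5/4)*log2(alpha)) * log(m)/m suffices.
    return (3.6 - 1.25 * math.log2(alpha)) * math.log2(m) / m

n, alpha = 1000, 0.5
m = int(alpha * n)
lo, hi = density_lower_bound(m), density_upper_bound_linear(m, alpha)
print(f"n={n}, m={m}: constraint density between ~{lo:.4f} and ~{hi:.4f}")
print(f"average XOR length between ~{n*lo:.0f} and ~{n*hi:.0f} variables, versus n/2 = {n//2} for long XORs")
```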
For the case of $m=\alpha n$, we observe that $w^{*}$ must grow as $\Theta(n)$, and divide the expression bounding the variance into two parts: terms corresponding to points that are relatively close (within distance $\lambda n$ for a particular $\lambda$) are shown to contribute a vanishingly small amount to the variance, while terms corresponding to points that are farther apart are shown to behave as if they contribute to $S(h)$ in a pairwise independent fashion. The $\frac{\log m}{m}$ bound is somewhat natural and also arises in the analysis of the rank of sparse random matrices and random sparse linear systems (?). For example, this threshold governs the asymptotic probability that a matrix $A$ generated as in Definition 1 has full rank (?). The connection arises because, in our setting, the rank of the matrix $A$ affects the quality of hashing. For example, an all-zero matrix $A$ (of rank $0$) would map all points to the same hash bucket. Improved Bounds on the Model Count In the previous sections, we established the optimal (smallest) constraint density that provides a constant factor approximation on the model count $|S|$. However, depending on the size and structure of $S$, even adding constraints of density $f^{\ast}\ll 0.5$ can lead to instances that cannot be solved by modern SAT solvers (see Table 1 below). In this section we show that for $f<f^{\ast}$ we can still obtain probabilistic upper and lower bounds. The bounds constitute a trade off between small $f$ for easily solved NP queries and $f$ close to $f^{*}$ for guaranteed constant factor approximation. To facilitate discussion, we define $S(h)=|\{x\in S\mid h(x)=0\}|=|S\cap h^{-1}(0)|$ to be the random variable indicating how many elements of $S$ survive $h$, when $h$ is randomly chosen from $\mathcal{H}^{f}_{m\times n}$ as in Definition 1. Let $\mu_{S}=\mathbb{E}[S(h)]$ and $\sigma^{2}_{S}=\mathrm{Var}[S(h)]$. Then, it is easy to verify that irrespective of the value of $f$, $\mu_{S}=|S|2^{-m}$. $\mathrm{Var}[S(h)]$ and $\Pr[S(h)\geq 1]$, however, do depend on $f$. Tighter Lower Bound on $|S|$ Our lower bound is based on Markov’s inequality and the fact that the mean of $S(h)$, $\mu_{S}=|S|2^{-m}$, has a simple linear relationship to $|S|$. Previous lower bounds (?; ?) are based on the observation that if at least half of $T$ repetitions of applying $h$ to $S$ resulted in some element of $S$ surviving, there are likely to be at least $2^{m-2}$ solutions in $S$. Otherwise, $\mu_{S}$ would be too small (specifically, $\leq 1/4$) and it would be unlikely to see solutions surviving often. Unlike previous methods, we not only check whether the estimated $\Pr[S(h)\geq 1]$ is at least $1/2$, but also consider an empirical estimate of $\Pr[S(h)\geq 1]$. This results in a tighter lower bound, with a probabilistic correctness guarantee derived using Chernoff’s bound. Lemma 1. Let $S\subseteq\{0,1\}^{n},f\in[0,1/2],$ and for each $m\in\{1,2,\ldots,n\}$, let hash function $h_{m}$ be drawn randomly from $\mathcal{H}^{f}_{m\times n}$. Then, $$\displaystyle\left|S\right|$$ $$\displaystyle\geq\max_{m=1}^{n}\ 2^{m}\Pr[S(h_{m})\geq 1].$$ (1) Our theoretical lower bound is $L=2^{m}\mathbb{\Pr}\left[S(h)\geq 1\right]$, which satisfies $L\leq|S|$ by the previous Lemma. In practice we cannot compute $\Pr[S(h)\geq 1]$ with infinite precision, so our practical lower bound is $\hat{L}=2^{m}\Pr_{\text{est}}\left[S(h)\geq 1\right],$ which will be defined in a moment. 
We would like to have a statement of the form $\Pr\left[\left|S\right|\geq\hat{L}\right]\geq 1-\delta.$ It turns out that we can guarantee that $|S|$ is larger than the estimated lower bound shrunk by a $(1+\kappa)$ factor with high probability. Let $Y_{k}=\mathbb{I}\left[\left|h_{k}^{-1}(0)\cap S\right|\geq 1\right]$ and $Y=\sum_{k=1}^{T}Y_{k}.$ Let $\nu=\mathbb{E}\left[Y\right]=T\mathbb{E}\left[Y_{1}\right]=T\Pr\left[S(h)\geq 1% \right].$ We define our estimator to be $\Pr_{\text{est}}\left[S(h)\geq 1\right]=Y/T$. Using Chernoff’s bound, we have the following result. Theorem 3. Let $\kappa>0$. If $\Pr_{\text{est}}\left[S(h)\geq 1\right]\geq c$, then $$\displaystyle\Pr\left[|S|\geq\frac{2^{m}c}{(1+\kappa)}\right]\geq 1-\exp\left(% -\frac{\kappa^{2}cT}{(1+\kappa)(2+\kappa)}\right).$$ New Upper Bound for $|S|$ The upper bound expression for $f$ that we derive next is based on the contrapositive of the observation of ? (?) that the larger $|S|$ is, the smaller an $f$ suffices to shatter it. For $n,m,f,$ and $\epsilon(n,m,q,f)$ from (?), define: $$\displaystyle v(q)=\frac{q}{2^{m}}\left(1+\epsilon(n,m,q,f)\cdot(q-1)-\frac{q}% {2^{m}}\right)$$ (2) This quantity is an upper bound on the variance $\mathrm{Var}[S(h)]$ of the number of surviving solutions as a function of the size of the set $q$, the number of constraints used $m$, and the statistical quality of the hash functions, which is controlled by the constraint density $f$. The following Lemma characterizes the asymptotic behavior of this upper bound on the variance: Lemma 2. $q^{2}/v(q)$ is an increasing function of $q$. Using Lemma 2, we are ready to obtain an upper bound on the size of the set $S$ in terms of the probability $\Pr[S(h)\geq 1]$ that at least one configuration from $S$ survives after adding the randomly generated constraints. Lemma 3. Let $S\subseteq\{0,1\}^{n}$ and $h\in_{R}\mathcal{H}^{f}_{m\times n}$. Then $$|S|\leq\min\left\{z\mid\frac{1}{1+2^{2m}v(z)/z^{2}}>\Pr[S(h)\geq 1]\right\}$$ The probability $\Pr[S(h)\geq 1]$ is unknown, but can be estimated from samples. In particular, we can draw independent samples of the hash functions and get accurate estimates using Chernoff style bounds. We get the following theorem: Theorem 4. Let $S\subseteq\{0,1\}^{n}$. Let $\Delta\in(0,1)$. Suppose we draw $T=24\ln\frac{1}{\Delta}$ hash functions $h_{1},\cdots,h_{T}$ from $\mathcal{H}^{f}_{m\times n}$. If $\mathrm{Median}(\mathbb{I}[S(h_{1})=0],\cdots,\mathbb{I}[S(h_{T})=0])=0$ then $$|S|\leq\min\left\{z\mid 1-\frac{1}{1+2^{2m}v(z)/z^{2}}\geq\frac{3}{4}\right\}$$ with probability at least $1-\Delta$. This theorem provides us with a way of computing upper bounds on the cardinality of $S$ that hold with high probability. The bound is known to be tight (matching the lower bound derived in the previous section) if the family of hash function used is fully pairwise independent, i.e. $f=0.5$. Experimental Evaluation Model Counting Benchmarks 222Source code for this experiment can be found at https://github.com/ShengjiaZhao/XORModelCount We evaluate the quality of our new bounds on a standard model counting benchmark (ANOR2011) from ? (?). Both lower and upper bounds presented in the previous section are parametric: they depend both on $m$, the number of constraints, and $f$, the constraint density. Increasing $f$ is always beneficial, but can substantially increase the runtime. The dependence on $m$ is more complex, and we explore it empirically. 
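Before turning to the benchmarks, here is a minimal sketch of how the lower bound above can be computed from samples. The toy oracle that enumerates an explicit solution set, the parameter values, and the function names are ours; in the actual experiments the oracle is a SAT/SMT call on the augmented formula, and the corresponding upper bound of Theorem 4 additionally requires the variance expression $v(z)$, which we do not reproduce here.

```python
import math
import numpy as np

def estimate_lower_bound(is_cell_nonempty, n, m, f, T, kappa=0.5, seed=0):
    """Monte Carlo lower bound on |S| in the spirit of Lemma 1 / Theorem 3.
    Returns the shrunk bound 2^m * Pr_est[S(h) >= 1] / (1 + kappa) together with
    the Chernoff-style confidence level from Theorem 3."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(T):
        A = (rng.random((m, n)) < f).astype(np.uint8)   # f-sparse parity constraints
        b = rng.integers(0, 2, size=m, dtype=np.uint8)
        hits += bool(is_cell_nonempty(A, b))
    c = hits / T                                        # empirical Pr_est[S(h) >= 1]
    bound = (2 ** m) * c / (1 + kappa)
    confidence = 1 - math.exp(-kappa**2 * c * T / ((1 + kappa) * (2 + kappa)))
    return bound, confidence

# Toy oracle: S is an explicit set of 64 assignments over n = 16 variables.
n = 16
S = [np.array([int(c) for c in np.binary_repr(v, 6)] + [0] * (n - 6), dtype=np.uint8)
     for v in range(64)]
oracle = lambda A, b: any(((A @ x + b) % 2 == 0).all() for x in S)

bound, conf = estimate_lower_bound(oracle, n, m=4, f=0.25, T=50)
print(f"|S| >= {bound:.1f} with probability >= {conf:.3f}   (true |S| = 64)")
```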
To evaluate our new bounds, we consider a range of values for $f\in[0.01,0.5]$, and use a heuristic approach to first identify a promising value for $m$ using a small number of samples $T$, and then collect more samples for that $m$ to reliably estimate $P[S(h)\geq 1]$ and improve the bounds on $|S|$. We primarily compare our results to ApproxMC (?), which can compute a constant factor approximation to user specified precision with arbitrary confidence (at the cost of more computation time). ApproxMC is similar to SPARSE-COUNT, and uses long parity constraints. For both methods, Cryptominisat version 2.9.4 (?) is used as the NP-oracle $\mathcal{O}_{S}$ (with Gaussian elimination enabled), and the confidence parameter is set to $0.95$, so that bounds reported hold with $95\%$ probability. Our results on the Langford12 instance (576 variables, 13584 clauses, $10^{5}\approx\exp(11.5)$ solutions) are shown in Figure 1. The pattern is representative of all other instances we tested. The tradeoff between quality of the bounds and runtime, which is governed by $f$, clearly emerges. Instances with small $f$ values can be solved orders of magnitude faster than with full length XORs ($f\approx 0.5$), but provide looser bounds. Interestingly, lower bounds are not very sensitive to $f$, and we empirically obtain good bounds even for very small values of $f$. We also evaluate ApproxMC (horizontal and vertical lines) with parameter setting of $\epsilon=0.75$ and $\delta=0.05$, obtaining an 8-approximation with probability at least $0.95$. The runtime is $47042$ seconds. It can be seen that ApproxMC and our bounds offer comparable model counts for dense $f\approx 0.5$. However, our method allows to trade off computation time against the quality of the bounds. We obtain non-trivial upper bounds using as little as $0.1\%$ of the computational resources required with long parity constraints, a flexibility not offered by any other method. Table 1 summarizes our results on other instances from the benchmark and compares them with ApproxMC with a 12 hour timeout. We see that for the instances on which ApproxMC is successful, our method obtains approximate model counts of comparable quality and is generally faster. While ApproxMC requires searching for tens or even hundreds of solutions during each iteration, our method needs only one solution per iteration. Further, we see that long parity constraints can lead to very difficult instances that cannot be solved, thereby reinforcing the benefit of provable upper and lower bounds using sparse constraints (small $f$). Our method is designed to produce non-trivial bounds even when the computational budget is significantly limited, with bounds degrading gracefully with runtime. SMT Models for Contingency Table Analysis In statistics, a contingency table is a matrix that captures the (multivariate) frequency distribution of two variables with $r$ and $c$ possible values, resp. For example, if the variables are gender (male or female) and handedness (left- or right-handed), then the corresponding $2\times 2$ contingency table contains frequencies for the four possible value combinations, and is associated with $r$ row sums and $c$ column sums, also known as the row and column marginals. Fisher’s exact test (?) tests contingency tables for homogeneity of proportion. 
Given fixed row and column marginals, it computes the probability of observing the entries found in the table under the null hypothesis that the distribution across rows is independent of the distribution across columns. This exact test, however, quickly becomes intractable as $r$ and $c$ grow. Statisticians, therefore, often resort to approximate significance tests, such as the chi-squared test. The associated counting question is: Given $r$ row marginals $R_{i}$ and $c$ column marginals $C_{j}$, how many $r\times c$ integer matrices $M$ are there with these row and column marginals? When entries of $M$ are restricted to be in $\{0,1\}$, the corresponding contingency table is called binary. We are interested in counting both binary and integer tables. This counting problem for integer tables is #P-complete even when $r$ is 2 or $c$ is 2 (?). Further, for binary contingency tables with so-called structural zeros (?) (i.e., certain entries of $M$ are required to be $0$), we observe that the counting problem is still #P-complete. This can be shown via a reduction from the well-known permanent computation problem, which is #P-complete even for 0/1 matrices (?). Model counting for contingency tables is formulated most naturally using integer variables and arithmetic constraints for capturing the row and column sums. While integer linear programming (ILP) appears to be a natural fit, ILP solvers do not scale very well on this problem as they are designed to solve optimization problems and not feasibility queries. We therefore propose to encode the problem using a Satisfiability Modulo Theory (SMT) framework (?), which extends propositional logic with other underlying theories, such as bitvectors and real arithmetic. We choose a bitvector encoding where each entry $a_{ij}$ of $M$ is represented as a bitvector of size $\lceil\log_{2}\min(R_{i},C_{j})\rceil$. The parity constraints are then randomly generated over the individual bits of each bitvector, and natively encoded into the model as XORs. As a solver, we use Z3 (?). We evaluate our bounds on six datasets: Darwin’s Finches (df). The marginals for this binary contingency table dataset are from ? (?). This is one of the few datasets with known ground truth: $\log_{2}|S|\approx 55.8982$, found using a clever divide-and-conquer algorithm of David desJardins. The 0-1 label in cell $(x,y)$ indicates the presence or absence of one of 13 finch bird species $x$ at one of 17 locations $y$ in the Galápagos Islands. To avoid trivialities, we drop one of the species that appears in every island, resulting in $12\times 17=204$ binary variables. Climate Change Perceptions (icons). This $9\times 6$ non-binary dataset is taken from the alymer R package (?). It concerns lay perception of climate change. The dataset is based on a study reported by ? (?) in which human subjects are asked to identify which icons (such as polar bears) they find the most concerning. There are 18 structural zeros representing that not all icons were shown to all subjects. Social Anthropology (purum). This $5\times 5$ non-binary dataset (?) concerns marriage rules of an isolated tribe in India called the Purums, which is subdivided into $5$ sibs. Structured zeros represent marriage rules that prevent some sibs from marrying other sibs. Industrial Quality Control (iqd). This $4\times 7$ non-binary dataset (?) captures an industrial quality control setting. Cell $(x,y)$ is the number of defects in the $x$-th run attributable to machine $y$. 
It has 9 structured zeros, representing machines switched off for certain runs. Synthetic Data (synth). This $n\times n$ binary dataset contains blocked matrices (?). The row and column marginals are both $\{1,n-1,\ldots,n-1\}$. It can be seen that a blocked matrix has either has a value of 1 in entry $(1,1)$ or it has two distinct entries with value 1 in the first row and the first column, cell $(1,1)$ excluded. Instantiating the first row and the first column completely determines the rest of the table. It is also easy to verify that the desired count is $1+(n-1)^{2}$. Table 2 summarizes the obtained lower and upper bounds on the number of contingency tables, with a 10 minute timeout. For the datasets with ground truth, we see that very sparse parity constraints (e.g., $f=0.03$ for the Darwin finches dataset, as opposed to a theoretical minimum of $f^{*}=0.18$) often suffice in practice to obtain very accurate lower bounds. For the iqd dataset, we obtain upper and lower bounds within a small constant factor. For other datasets, there is a wider gap between the upper and lower bounds. However, the upper bounds we obtain are orders of magnitude tighter than the trivial log-upper bounds, which is the number of variables in a binary encoding of the problem. Conclusions We introduced a novel analysis of the randomized hashing schemes used by numerous recent approximate model counters and probabilistic inference algorithms. We close a theoretical gap, providing a tight asymptotic estimate for the minimal constraint density required. Our analysis also shows, for the first time, that even very short parity constraints can be used to generate non-trivial upper bounds on model counts. Thanks to this finding, we proposed a new scheme for computing anytime upper and lower bounds on the model count. Asymptotically, these bounds are guaranteed to become tight (up to a constant factor) as the constraint density grows. Empirically, given very limited computational resources, we are able to obtain new upper bounds on a variety of benchmarks, including a novel application for the analysis of statistical contingency tables. A promising direction for future research is the analysis of related ensembles of random parity constraints, such as low-density parity check codes (?). Acknowledgments This work was supported by the Future of Life Institute (grant 2015-143902). References [Achlioptas and Jiang 2015] Achlioptas, D., and Jiang, P. 2015. Stochastic integration via error-correcting codes. In Proc. Uncertainty in Artificial Intelligence. [Angluin and Valiant 1979] Angluin, D., and Valiant, L. 1979. Fast probabilistic algorithms for hamiltonian circuits and matchings. Journal of Computer and System Sciences 18(2):155–193. [Barrett, Stump, and Tinelli 2010] Barrett, C.; Stump, A.; and Tinelli, C. 2010. The Satisfiability Modulo Theories Library (SMT-LIB). www.SMT-LIB.org. [Belle, Van den Broeck, and Passerini 2015] Belle, V.; Van den Broeck, G.; and Passerini, A. 2015. Hashing-based approximate probabilistic inference in hybrid domains. In Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI). [Biere et al. 2009] Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T. 2009. Handbook of satisfiability. frontiers in artificial intelligence and applications, vol. 185. [Calabro 2009] Calabro, C. 2009. The Exponential Complexity of Satisfiability Problems. Ph.D. Dissertation, University of California, San Diego. [Chakraborty, Meel, and Vardi 2013a] Chakraborty, S.; Meel, K.; and Vardi, M. 
2013a. A scalable and nearly uniform generator of SAT witnesses. In Proc. of the 25th International Conference on Computer Aided Verification (CAV). [Chakraborty, Meel, and Vardi 2013b] Chakraborty, S.; Meel, K.; and Vardi, M. 2013b. A scalable approximate model counter. In Proc. of the 19th International Conference on Principles and Practice of Constraint Programming (CP), 200–216. [Chen et al. 2005] Chen, Y.; Diaconis, P.; Holmes, S. P.; and Liu, J. S. 2005. Sequential monte carlo methods for statistical analysis of tables. Journal of the American Statistical Association 100(469):109–120. [Chen 2007] Chen, Y. 2007. Conditional inference on tables with structural zeros. Journal of Computational and Graphical Statistics 16(2). [Cooper 2000] Cooper, C. 2000. On the rank of random matrices. Random Structures & Algorithms 16(2):209–232. [De Moura and Bjørner 2008] De Moura, L., and Bjørner, N. 2008. Z3: An efficient smt solver. In Tools and Algorithms for the Construction and Analysis of Systems. Springer. 337–340. [Dyer, Kannan, and Mount 1997] Dyer, M.; Kannan, R.; and Mount, J. 1997. Sampling contingency tables. Random Structures and Algorithms 10(4):487–506. [Ermon et al. 2013a] Ermon, S.; Gomes, C. P.; Sabharwal, A.; and Selman, B. 2013a. Optimization with parity constraints: From binary codes to discrete integration. In Proc. of the 29th Conference on Uncertainty in Artificial Intelligence (UAI). [Ermon et al. 2013b] Ermon, S.; Gomes, C. P.; Sabharwal, A.; and Selman, B. 2013b. Taming the curse of dimensionality: Discrete integration by hashing and optimization. In Proc. of the 30th International Conference on Machine Learning (ICML). [Ermon et al. 2014] Ermon, S.; Gomes, C. P.; Sabharwal, A.; and Selman, B. 2014. Low-density parity constraints for hashing-based discrete integration. In Proc. of the 31st International Conference on Machine Learning (ICML), 271–279. [Fisher 1954] Fisher, R. 1954. Statistical Methods for Research Workers. Oliver and Boyd. [Gogate and Dechter 2007] Gogate, V., and Dechter, R. 2007. Approximate counting by sampling the backtrack-free search space. In Proc. of the 22nd National Conference on Artifical Intelligence (AAAI), volume 22, 198–203. [Gogate and Domingos 2011] Gogate, V., and Domingos, P. 2011. Probabilistic theorem proving. In Uncertainty in Artificial Intelligence. [Golshan, Byers, and Terzi 2013] Golshan, B.; Byers, J.; and Terzi, E. 2013. What do row and column marginals reveal about your dataset? In Advances in Neural Information Processing Systems, 2166–2174. [Gomes et al. 2007] Gomes, C. P.; Hoffmann, J.; Sabharwal, A.; and Selman, B. 2007. Short XORs for model counting: From theory to practice. In Theory and Applications of Satisfiability Testing (SAT), 100–106. [Gomes, Sabharwal, and Selman 2006] Gomes, C. P.; Sabharwal, A.; and Selman, B. 2006. Model counting: A new strategy for obtaining good bounds. In Proc. of the 21st National Conference on Artificial Intelligence (AAAI), 54–61. [Guruswami 2010] Guruswami, V. 2010. Introduction to coding theory - lecture notes. [Ivrii et al. 2015] Ivrii, A.; Malik, S.; Meel, K. S.; and Vardi, M. Y. 2015. On computing minimal independent support and its applications to sampling and counting. Constraints 1–18. [Kolchin 1999] Kolchin, V. F. 1999. Random graphs. Number 53 in Encyclopedia of Mathematics and its Applications. Cambridge University Press. [Kroc, Sabharwal, and Selman 2011] Kroc, L.; Sabharwal, A.; and Selman, B. 2011. 
Leveraging belief propagation, backtrack search, and statistics for model counting. Annals of Operations Research 184(1):209–231. [O’Neil 2008] O’Neil, S. 2008. An Iconic Approach to Communicating Climate Change. Ph.D. Dissertation, School of Environmental Science, University of East Anglia. [Richardson and Domingos 2006] Richardson, M., and Domingos, P. 2006. Markov logic networks. Machine Learning 62(1):107–136. [Sang, Beame, and Kautz 2005] Sang, T.; Beame, P.; and Kautz, H. 2005. Solving Bayesian networks by weighted model counting. In Proc. of the 20th National Conference on Artificial Intelligence (AAAI), volume 1, 475–482. [Sang et al. 2004] Sang, T.; Bacchus, F.; Beame, P.; Kautz, H.; and Pitassi, T. 2004. Combining component caching and clause learning for effective model counting. In Theory and Applications of Satisfiability Testing (SAT). [Sheldon and Dietterich 2011] Sheldon, D. R., and Dietterich, T. G. 2011. Collective graphical models. In Advances in Neural Information Processing Systems, 1161–1169. [Sinclair 2011] Sinclair, A. 2011. Randomness and computation - lecture notes. [Soos, Nohl, and Castelluccia 2009] Soos, M.; Nohl, K.; and Castelluccia, C. 2009. Extending SAT solvers to cryptographic problems. In Theory and Applications of Satisfiability Testing (SAT). [Stockmeyer 1985] Stockmeyer, L. 1985. On approximation algorithms for #P. SIAM Journal on Computing 14(4):849–861. [Tseitin 1968] Tseitin, G. S. 1968. On the complexity of derivation in the propositional calculus. In Slisenko, A. O., ed., Studies in Constructive Mathematics and Mathematical Logic, Part II. [Valiant 1979a] Valiant, L. G. 1979a. The complexity of computing the permanent. Theoretical computer science 8(2):189–201. [Valiant 1979b] Valiant, L. 1979b. The complexity of enumeration and reliability problems. SIAM Journal on Computing 8(3):410–421. [West and Hankin 2008] West, L. J., and Hankin, R. K. 2008. Exact tests for two-way contingency tables with structural zeros. Journal of Statistical Software 28(11):1–19. [Zhao et al. 2015] Zhao, S.; Chaturapruek, S.; Sabharwal, A.; and Ermon, S. 2015. Closing the gap between short and long xors for model counting. Technical report, Stanford University. Appendix: Proofs Theorem 5 (Formal statement of Theorem 1). Let $\{\mathcal{H}^{f_{i}}_{i\times n}\}_{i=0}^{n}$ be families of $f_{i}$-sparse hash functions. • (Sufficiency) If there exist $c\geq 2$ and $\epsilon>0$ such that for all $i$, $\mathcal{H}^{f_{i}}_{i\times n}$ $\epsilon$-shatters all sets $S^{\prime}\subseteq\{0,1\}^{n}$ of size $|S^{\prime}|=2^{i+c}$, then for any set $S$, $0<\Delta<1$, and $\alpha\leq 2\,\left(\min(\epsilon,1/2-1/2^{c})\right)^{2}\ln 2$, SPARSE-COUNT$(\mathcal{O}_{S},\Delta,\alpha,\{\mathcal{H}^{f_{i}}_{i\times n}\})$ outputs a $2^{c+1}$ approximation of $|S|$ with probability at least $1-\Delta$. • (Necessity) If there exists an $i$ and a set $S$ of size $2^{i+c}$ such that for all $\epsilon>0$ $\mathcal{H}^{f_{i}}_{i\times n}$ does not $\epsilon$-shatter $S$, then for any choice of $\alpha>0$ and $0<\Delta<1$, SPARSE-COUNT$(\mathcal{O}_{S},\Delta,\alpha,\{\mathcal{H}^{f_{i}}_{i\times n}\})$ outputs a $2^{c}$ approximation of $|S|$ with probability at most $1/2$. Proof. 
For the sufficiency part, we show that there exist $c>0$ and $\delta>2$ such that for all $i$ two conditions hold: (a) for all sets $S\subseteq\{0,1\}^{n}$ of size $|S|\leq 2^{i-c}$ $$\Pr[S(h)=0]\geq 1-\frac{1}{\delta}$$ when $h$ is chosen from $\mathcal{H}^{f_{i}}_{i\times n}$ (b) for all sets $S\subseteq\{0,1\}^{n}$ of size $|S|\geq 2^{i+c}$ $$\Pr[S(h)\geq 1]\geq 1-\frac{1}{\delta}$$ when $h$ is chosen from $\mathcal{H}^{f_{i}}_{i\times n}$. Standard analysis following ? (?) then implies that for any set $S$ and $0<\Delta<1$, if $\alpha\leq 2(1-\frac{1}{\delta}-\frac{1}{2})^{2}\ln 2$, we have $$\frac{|S|}{2^{c+1}}\leq\text{SPARSE-COUNT}(\mathcal{O}_{S},\Delta,\alpha,\{% \mathcal{H}^{f_{i}}_{i\times n}\})\leq|S|2^{c}$$ with probability at least $1-\Delta$. The second condition (b) is implied by the shattering properties of $h$ in the assumptions for some $c=c^{\prime}$ and $\epsilon=1/2-\frac{1}{\delta}$. The first condition (a) is trivially satisfied for any $c\geq 2$ and $\delta=2^{c}$. Formally, we have $$\displaystyle\Pr[S(h)>0]$$ $$\displaystyle=\Pr[S(h)\geq 1]=\Pr[S(h)\geq 2^{c}\mu_{S}]$$ $$\displaystyle\leq\frac{1}{2^{c}}$$ from Markov’s inequality. Conditions (a) and (b) are therefore simultaneously met choosing $c=c^{\prime}$ and $\delta=\min(2^{c},\frac{1}{1/2-\epsilon})$. For the necessity part, let $S$ be a set of size $2^{i+c}$ as in the statement of the Theorem, i.e., not shattered by $\mathcal{H}^{f_{i}}_{i\times n}$. Let us condition on the event that the outer loop of SPARSE-COUNT$(\mathcal{O}_{S},\Delta,\alpha,\{\mathcal{H}^{f_{i}}_{i\times n}\})$ reaches iteration $i$. For any $T\geq 1$ (therefore, for any choice of $\Delta$ and $\alpha$), the while loop breaks at iteration $i$ with probability at least $1/2$ because by assumption $\Pr[S(h)\geq 1]\leq 1/2$ when $h$ is chosen from $\mathcal{H}^{f_{i}}_{i\times n}$. Otherwise, $\mathcal{H}^{f_{i}}_{i\times n}$ would $\epsilon$-shatter $S$ for some $\epsilon>0$. Therefore, the output satisfies $\frac{|S|}{2^{c}}\leq$ SPARSE-COUNT$(\mathcal{O}_{S},\Delta,\alpha,\{\mathcal{H}^{f_{i}}_{i\times n}\})$ with probability at most $1/2$. This also bounds the probability that the output is a $2^{c}$ approximation of $|S|$. ∎ Proofs of New Upper and Lower Bounds We continue with proofs for the bounds on $|S|$ for arbitrary constraint density $f$. Several of the following proofs will rely on the following notion: Definition 4 (? ?). Let $m,n\in\mathbb{N},m\leq n,f\leq{\frac{1}{2}},$ and $q\leq 2^{n}+1$. Then: $$\displaystyle w^{*}(n,q)$$ $$\displaystyle=\max\left\{w\mid\sum_{j=1}^{w}\binom{n}{j}\leq q-1\right\}$$ (3) $$\displaystyle r(n,q)$$ $$\displaystyle=\left(q-1-\sum_{w=1}^{w^{*}(n,q)}\binom{n}{w}\right)$$ (4) $$\displaystyle\epsilon(n,m,q,f)$$ $$\displaystyle=\frac{1}{q-1}\left[\sum_{w=1}^{w^{*}(n,q)}\binom{n}{w}\frac{1}{2% ^{m}}(1+(1-2f)^{w})^{m}\right.$$ $$\displaystyle\ \ \ \ \ \ \ \left.+\,\frac{r}{2^{m}}(1+(1-2f)^{w^{*}(n,q)+1})^{% m}\right]$$ (5) We observe that $w^{*}(n,q)$ is always at most $n$. We will often be interested in the case where $q=2^{m+c}$ for $c\geq 0$. New Lower Bound Proof of Lemma 1. Let hash function $h_{m}$ be drawn randomly from $\mathcal{H}^{f}_{m\times n}$. Recall that our random variable $S(h_{m})$ takes on a non-negative integer value. Then, for any $m$, by Markov’s inequality, $$\displaystyle\Pr[S(h_{m})\geq 1]\leq\mathbb{E}[S(h_{m})]=\frac{|S|}{2^{m}}.$$ Hence, for any $m$, $|S|\geq 2^{m}\Pr[S(h_{m})\geq 1]$. Taking the maximum over all choices of $m$ finishes the proof. ∎ Proof of Theorem 3. 
Let hash functions $h_{m}^{k}$ be drawn independently from $\mathcal{H}^{f}_{m\times n}$ for $k=1,\cdots,T$. Let $Y_{k}=\mathbb{I}\left[\left|(h_{m}^{k})^{-1}(0)\cap S\right|\geq 1\right]=% \mathbb{I}\left[S(h_{m}^{k})\geq 1\right]$ and $Y=\sum_{k=1}^{T}Y_{k}.$ Let $\nu=\mathbb{E}\left[Y\right]=T\mathbb{E}\left[Y_{1}\right]=T\Pr\left[S(h_{m}^{% k})\geq 1\right].$ Suppose $\mathbb{E}[Y]=T\Pr[S(h_{m}^{k})\geq 1]\leq cT/(1+\kappa)$. Define $Z=\sum_{k=1}^{T}Z_{k}$ where $Z_{k}$ are i.i.d. Bernoulli variables with probability $c/(1+\kappa)$ of being 1. Then $\mathbb{E}[Z]=cT/(1+\kappa)$. By Chernoff’s bound (?; ?), the probability of observing $\Pr_{\text{est}}\left[S(h_{m}^{k})\geq 1\right]=\frac{Y}{T}\geq c$ as in the Theorem statement is bounded above: $$\displaystyle\Pr[Y\geq cT]$$ $$\displaystyle\leq\Pr[Z\geq cT]$$ $$\displaystyle=\Pr[Z\geq(1+\kappa)\,\mathbb{E}[Z]]$$ $$\displaystyle\leq\exp\left(-\frac{\kappa^{2}}{2+\kappa}\frac{cT}{1+\kappa}\right)$$ The first inequality follows from the fact that $Y_{k}$ and $Z_{k}$ are Bernoulli i.i.d. random variables, with $\mathbb{E}[Y_{k}]\leq\mathbb{E}[Z_{k}]$. Hence, with a probability at least $1-\exp\left(-\frac{\kappa^{2}cT}{(1+\kappa)(2+\kappa)}\right)$, it must be the case that $\mathbb{E}[Y]>cT/(1+\kappa)$ and hence $\Pr[S(h)\geq 1]>c/(1+\kappa)$. From Lemma 1, it follows that with probability at least $1-\exp\left(-\frac{\kappa^{2}cT}{(1+\kappa)(2+\kappa)}\right)$, $|S|\geq 2^{m}c/(1+\kappa)$. ∎ New Upper Bound Proof of Lemma 2. Let $f(q)=q^{2}/v(q)$, where $$v(q)=\frac{q}{2^{m}}\left(1+\epsilon(n,m,q,f)\cdot(q-1)-\frac{q}{2^{m}}\right)$$ is defined as in Definition 2. We show that $f(q)<f(q+1)$ for all $q.$ Removing constant terms in $f(q)$ we see that it suffices to show that $$g(q)=\frac{q}{B_{1}+B_{2}-q},$$ (6) where $B_{1}=2^{m}+\sum_{w=1}^{w^{*}(n,q)}{n\choose w}(1+x^{w})^{m}$ and $B_{2}=\left(q-1-\sum_{w=1}^{w^{*}(n,q)}{n\choose w}\right)(1+x^{w^{*}(n,q)+1})% ^{m}$, is an increasing function of $q$. The relevant quantities are defined in Definition 4, and $x=1-2f$ for brevity. We show that $g(q)<g(q+1).$ We note that $$w^{*}(n,q+1)=w^{*}(n,q)+h(q),$$ (7) where $h(q)\in\left\{0,1\right\}$ and it is $1$ only when $\sum_{j=1}^{w^{*}(n,q)+1}{n\choose j}=q$ (by looking at the definition). Define $$\displaystyle t(q)=B_{1}+B_{2}-q=2^{m}+\sum_{w=1}^{w^{*}(n,q)}{n\choose w}(1+x% ^{w})^{m}+$$ $$\displaystyle\qquad\left(q-1-\sum_{w=1}^{w^{*}(n,q)}{n\choose w}\right)(1+x^{w% ^{*}(n,q)+1})^{m}-q.$$ (8) Case 1: $h(q)=0$. We have $g(q)<g(q+1)$ if and only if $$\frac{q}{t(q)}<\frac{q+1}{t(q)+(1+x^{w^{*}(n,q)+1})^{m}-1},$$ (9) which is true if and only if $$(1+x^{w^{*}(n,q)+1})^{m}q-q<t(q).$$ (10) Expanding the definition of $t(q),$ we get that the above inequality is true if and only if $$\displaystyle 0<\left(2^{m}-(1+x^{w^{*}(n,q)+1})^{m}\right)+$$ $$\displaystyle\qquad\sum_{w=1}^{w^{*}(n,q)}{n\choose w}\left((1+x^{w})^{m}-(1+x% ^{w^{*}(n,q)+1})^{m}\right),$$ (11) which is true because both terms are positive. Case 2: $h(q)=1$. 
This implies $\sum_{j=1}^{w^{*}(n,q)+1}{n\choose j}=q.$ We have $g(q)<g(q+1)$ if and only if $$\frac{q}{t(q)}<\frac{q+1}{t(q+1)},$$ (12) and we have $$t(q+1)=2^{m}+\sum_{w=1}^{w^{*}(n,q)+1}{n\choose w}(1+x^{w})^{m}-q-1.$$ (13) Expanding the definition of $t(q)$ and $t(q+1),$ we get that the above inequality is true if and only if $$\displaystyle 0<\left(2^{m}-(1+x^{w^{*}(n,q)+1})^{m}\right)+$$ $$\displaystyle\qquad\sum_{w=1}^{w^{*}(n,q)}{n\choose w}\left((1+x^{w})^{m}-(1+x% ^{w^{*}(n,q)+1})^{m}\right),$$ (14) which is true because both terms are positive (note this is the same inequality as before). ∎ Proof of Lemma 3. Let $Q\subseteq\{0,1\}^{n}$ be any set of size exactly $q$ and $h\in_{R}\mathcal{H}^{f}_{m\times n}$. Following ? (?), we can get a worst-case bound for the variance of $Q(h)$ as a function of $q$. Regardless of the structure of $Q$, we have $$\sigma^{2}(Q)\leq v(q)=\frac{q}{2^{m}}\left(1+\epsilon(n,m,q,f)(q-1)-\frac{q}{% 2^{m}}\right)$$ where $\sigma^{2}(Q)=Var[|\{x\in Q\mid h(x)=0\}|]$ is the variance of the random variable $Q(h)$, and $\epsilon(n,m,q,f)$ is from Definition 4. From Cantelli’s inequality $$P[Q(h)>0]\geq 1-\frac{\sigma^{2}(Q)}{\sigma^{2}(Q)+\left(\frac{|Q|}{2^{m}}% \right)^{2}}\geq 1-\frac{v(q)}{v(q)+\left(\frac{q}{2^{m}}\right)^{2}}$$ We claim that $$\frac{v(q)}{v(q)+\left(\frac{q}{2^{m}}\right)^{2}}$$ gives a lower bound on the shattering probability, is a decreasing function of $q$. By dividing numerator and denominator by $v(q)$, it is sufficient to show that $$\frac{\left(\frac{q}{2^{m}}\right)^{2}}{v(q)}$$ is increasing in $q$, which follows from Lemma 2. To prove the Lemma, suppose by contradiction that $$|S|>\min\left\{x:1-\frac{v(x)}{v(x)+\left(\frac{x}{2^{m}}\right)^{2}}>P\left[S% (h)>0\right]\right\}$$ Since $\frac{v(q)}{v(q)+\left(\frac{q}{2^{m}}\right)^{2}}$ is a decreasing function of $q$, and $|S|$ is assumed to be larger than the smallest element in the set above, it holds that $$1-\frac{v(|S|)}{v(|S|)+\left(\frac{|S|}{2^{m}}\right)^{2}}>P\left[S(h)>0\right]$$ (15) From Cantelli’s inequality $$P[S(h)>0]\geq 1-\frac{\sigma^{2}(S)}{\sigma^{2}(S)+\left(\frac{|S|}{2^{m}}% \right)^{2}}\geq 1-\frac{v(|S|)}{v(|S|)+\left(\frac{|S|}{2^{m}}\right)^{2}}$$ where the second inequality holds because $v(|S|)$ upper bounds the true variance $\sigma^{2}(S)$. The last inequality contradicts Eq. (15). ∎ Proof of Theorem 4. If $$|S|>\min\left\{x:1-\frac{v(x)}{v(x)+\left(\frac{x}{2^{m}}\right)^{2}}\geq\frac% {3}{4}\right\}$$ by the monotonicity of $\frac{v(q)}{v(q)+\left(\frac{q}{2^{m}}\right)^{2}}$ shown in the proof of Lemma  3, it holds that $$1-\frac{v(|S|)}{v(|S|)+\left(\frac{|S|}{2^{m}}\right)^{2}}\geq\frac{3}{4}$$ From Cantelli’s inequality $$\displaystyle\Pr[S(h)>0]$$ $$\displaystyle\geq$$ $$\displaystyle 1-\frac{\sigma^{2}(S)}{\sigma^{2}(S)+\left(\frac{q}{2^{m}}\right% )^{2}}$$ $$\displaystyle\geq$$ $$\displaystyle 1-\frac{v(|S|)}{v(|S|)+\left(\frac{|S|}{2^{m}}\right)^{2}}$$ $$\displaystyle\geq$$ $$\displaystyle\frac{3}{4}$$ Then $P[S(h)=0]\leq\frac{1}{4}$, and from Chernoff’s bound $$\displaystyle\Pr[\mathrm{Median}(\mathbb{I}[S(h_{1})=0],\cdots,\mathbb{I}[S(h_% {T})=0])=0]\leq$$ $$\displaystyle\exp\left(-\frac{T}{24}\right)\leq\Delta$$ Therefore $$|S|\leq\min\left\{x:1-\frac{v(x)}{v(x)+\left(\frac{x}{2^{m}}\right)^{2}}\geq% \frac{3}{4}\right\}$$ with probability at least $1-\Delta$. ∎ Proof of Theorem 2 Theorem 2 contains three statements: 1. Let $\epsilon>0$. $\kappa>1$. 
There exists $M_{\kappa}$ such that for all $m\geq M_{\kappa}$: $$\tilde{f}_{\epsilon}(m,n)>\frac{\log{m}}{\kappa\,m}$$ 2. Let $\epsilon\in(0,\frac{3}{10}),\alpha\in(0,1),$ and $m=\alpha n$. There exists $N$ such that for all $n\geq N$: $$\tilde{f}_{\epsilon}(m,n)\leq\left({3.6-\frac{5}{4}\log_{2}{\alpha}}\right)% \frac{\log{m}}{m}$$ 3. Let $\epsilon\in(0,\frac{3}{10}),\alpha,\kappa>1,\beta\in(0,1),$ and $m=\alpha n^{\beta}$. There exists $N_{\kappa}$ such that for all $n\geq N_{\kappa}$: $$\tilde{f}_{\epsilon}(m,n)\leq\frac{\kappa\,(1-\beta)}{2\beta}\frac{\log^{2}{m}% }{m}$$ We will prove them in turn in the following subsections. The arguments will often use the inequality $1+x\leq\exp(x)$ which holds for any $x\in\mathbb{R}$. We also use the expression ”for sufficiently large n” to mean the more formal statement $$\exists N>0,\forall n>N$$ Part I: Lower Bound Proof of Theorem 2 (part 1). Since the minimum constraint density must work for every set $S$, it must also work for the hypercube $S_{c}=\{0,1\}^{m+c}\times\{0\}^{n-m-c}$ with $2^{m+c}$ elements. This is a set where the first $m+c$ variables are “free”, while the remaining $n-m-c$ are fixed to $0$. Let $h$ be a hash function drawn from $\mathcal{H}_{m\times n}^{f}$. Let’s consider a parity constraint of the form $a_{i1}x_{1}\oplus\cdots\oplus a_{in}x_{n}=b_{i}$ as in Definition 1. If $a_{i1}=a_{i2}=...=a_{i(m+c)}=0$ and $b_{i}=1$, then $\{x\in S_{c}:ax=b\bmod 2\}=\emptyset$. If the constraint is constructed as in Definition 1, this happens with probability ${\frac{1}{2}}(1-f)^{m+c}$. Accumulating this probability over $m$ independent parity constraints and setting $f=\frac{\log m}{\kappa\,m}$ for any $\kappa>1$, we obtain: $$\displaystyle\Pr[S_{c}(h)>0]$$ $$\displaystyle\leq\left(1-{\frac{1}{2}}(1-f)^{m+c}\right)^{m}$$ $$\displaystyle=\left(1-{\frac{1}{2}}\left(1-\frac{\log m}{\kappa m}\right)^{m+c% }\right)^{m}$$ For any $\lambda>1$, it can be verified from the Taylor expansion of the exponential function that for any small enough $x>0$, $1-x\geq\exp(-\lambda x)$. Observe that for any fixed $\kappa>1$, $1-\frac{\log m}{\kappa\,m}>0$ as long as $m$ is large enough. It follows that for any $\gamma>1$, there exists an $M_{\kappa,\gamma}$ such that for all $m\geq M_{\kappa,\gamma}$, the above expression is upper bounded by: $$\displaystyle\left(1-{\frac{1}{2}}\exp\left(-\gamma\frac{\log m}{\kappa\,m}(m+% c)\right)\right)^{m}$$ $$\displaystyle=\left(1-{\frac{1}{2}}m^{-\gamma(m+c)/(\kappa\,m)}\right)^{m}$$ $$\displaystyle\leq\exp\left(-{\frac{1}{2}}m^{1-\gamma(m+c)/(\kappa m)}\right)$$ where the last inequality follows from $1+x\leq\exp(x)$. Since $\kappa>1$, we can choose $\gamma$ such that $1<\gamma<\kappa$. In this case, for large enough $m$, the last expression above is less than $1/2$. In other words, there exists an $M_{\kappa}$ such that for all $m\geq M_{\kappa}$, $\Pr[S_{c}(h)>0]<1/2$. It follows that for all such $m$, the minimum constraint density, $\tilde{f}(m,n)$, must be larger than $\frac{\log m}{\kappa\,m}$, finishing the proof. ∎ Part II: Upper bound when $m=\Theta(n)$ To prove the upper bound for $m=\Theta(n)$, we will need to first establish a few lemmas. In all the proofs below we will assume $m=\alpha n$, with $\alpha$ constant with respect to $n$. Let us denote the binary entropy function as $H(p)\triangleq-p\log_{2}p-(1-p)\log_{2}(1-p)$. It is well known that $H(0)=0$, $H({\frac{1}{2}})=1$, and it is monotonically increasing in the interval $[0,{\frac{1}{2}}]$. 
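As a quick numeric sanity check of the entropy function, and of the binomial-sum bound it controls (made precise in Proposition 1 below), the following sketch compares $\log_{2}$ of a partial binomial sum with $H(\lambda)\,n$ for one assumed choice of $n$ and $\lambda$; the specific values are illustrative only.

```python
# Sanity check of sum_{j <= lambda*n} C(n, j) <= 2^{H(lambda) * n} for one
# illustrative choice of n and lambda (assumed values, not from the paper).
from math import comb, log2

def H(p):
    """Binary entropy in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

n, lam = 200, 0.2
partial_sum = sum(comb(n, j) for j in range(int(lam * n) + 1))
print(log2(partial_sum), H(lam) * n)   # the left value should not exceed the right
```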
We use the following relationship between the sum of binomials and the binary entropy function: Proposition 1 (? ?, Lemma 5). For any $n\in\mathbb{N}$ and $\lambda\in[0,{\frac{1}{2}}]$, $$\sum_{j=0}^{\lambda n}{n\choose j}\leq 2^{H(\lambda)\,n}$$ Lemma 4. Let $\alpha\in(0,1)$, there exists a unique $\lambda^{*}<{\frac{1}{2}}$ such that $H(\lambda^{*})=\alpha$, where $H$ is the binary entropy function. For all $\lambda<\lambda^{*}$, $$\lim_{n\to\infty}\frac{\sum_{j=1}^{\lambda n}\binom{n}{j}}{2^{\alpha n}}=0$$ Proof. We can always find a unique $\lambda^{*}<{\frac{1}{2}}$ such that $H(\lambda^{*})=\alpha$. This is because $H(\lambda)$ increases monotonically from 0 to 1 as $\lambda$ increases from 0 to ${\frac{1}{2}}$, so $H^{-1}(\alpha)$ takes one and only one value in the range $(0,1/2)$. Furthermore, due to monotonicity, $H(\lambda)<\alpha=H(\lambda^{*})$ for all $\lambda<\lambda^{*}$. From Proposition 1, for any $\lambda<{\frac{1}{2}}$, the sum of binomials in the numerator of the desired quantity is at most $2^{H(\lambda)\,n}$. Hence, the fraction is at most $2^{(H(\lambda)-\alpha)\,n}$, which approaches $0$ as $n$ increases because $H(\lambda)<\alpha$. Since numerator and denominator are non-negative, the limit is zero and this concludes the proof. ∎ Corollary 1. Let $\alpha\in(0,1),c\geq 2,w^{*}(n,q)$ be as in Definition 4, and $\lambda^{*}<{\frac{1}{2}}$ be such that $H(\lambda^{*})=\alpha$. Then for all $\lambda<\lambda^{*}$, and any $n$ sufficiently large $$w^{*}(n,2^{m+c})=w^{*}(n,2^{\alpha n+c})\geq\lambda n$$ Proof of Corollary 1. By Lemma 4, for all $\lambda<H^{-1}(\alpha)$, when n is sufficiently large, $$\sum_{j=1}^{\lambda n}\binom{n}{j}<2^{\alpha n}<2^{m+c}-1$$ Thus, it follows immediately from the definition of $w^{*}$ that for sufficiently large $n$, $\lambda n\leq w^{*}(n,2^{m+c})$. ∎ Remark 1. Corollary 1, together with the trivial fact that $w^{*}(n,q)\leq n$, implies $w^{*}(n,q)=\Theta(n)$ when $m=\alpha n$ and $q=2^{m+c}$. Lemma 5. For all $\delta>0$ and $w\in\mathbb{R}$, the function $f_{\delta}(w)=\log{(1+\delta^{w})}$ is convex. Proof. We will show that the second derivative of $f_{\delta}(w)$ is non-negative: $$\displaystyle f_{\delta}^{\prime}(w)$$ $$\displaystyle=\frac{\delta^{w}\log{\delta}}{1+\delta^{w}}$$ $$\displaystyle f_{\delta}^{\prime\prime}(w)$$ $$\displaystyle=\frac{\delta^{w}(1+\delta^{w})(\log{\delta})^{2}-\delta^{2w}(% \log{\delta})^{2}}{(1+\delta^{w})^{2}}$$ $$\displaystyle=\frac{\delta^{w}(\log{\delta})^{2}}{(1+\delta^{w})^{2}}\geq 0$$ It follows that $f_{\delta}(w)$ is convex. ∎ Lemma 6. Let $t>0,0<\delta<1,k>-t\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}},$ and $w\geq 0$. Then for all $m$ sufficiently large, $$\frac{\left({\frac{1}{2}}+{\frac{1}{2}}\left(1-\frac{k\log{m}}{m}\right)^{w}% \right)^{m}}{m^{-tw}+(1+\delta)^{-m}}<1$$ (16) Lemma 6 is an attempt to simplify the expression below, which we will call $\zeta(w)$ and make its dependence on $k$ and $m$ implicit. $$\displaystyle\zeta(w)=\zeta(w,k,m)=\left({\frac{1}{2}}+{\frac{1}{2}}\left(1-% \frac{k\log{m}}{m}\right)^{w}\right)^{m}$$ Note that for large enough $m$ such that $\frac{k\log m}{m}\leq 1$, $\zeta(w)$ is monotonically non-increasing in $w$, a property we will use. This term is too complex to study in detail, therefore, we upper bound it by the sum of two simpler expressions. The intuition for this bound is shown in Figure 2. Lemma 6 can thus be restated as claiming $\zeta(w)\leq m^{-tw}+(1+\delta)^{-m}$ for $m$ sufficiently large. 
Towards this end, when $w<w_{0}$, we show that $m^{-tw}$ is the dominant term and that $\zeta(w)<m^{-tw}$. When $w>w_{0}$, we show that the term $(1+\delta)^{-m}$ dominates and that $\zeta(w)<(1+\delta)^{-m}$. Combining these two regimes, we deduce that $\zeta(w)$ must be upper bounded by their sum for all values of $w$. A formal proof follows. Proof. We will show that for $m$ sufficiently large, $\zeta(w)\leq m^{-tw}+(1+\delta)^{-m}$. Assume w.l.o.g. that $m$ is large enough to satisfy: $$\displaystyle 1-k\frac{\log{m}}{m}\geq 0$$ (17) Next we consider the location $w_{0}$ where the dominant term of $m^{-tw}+(1+\delta)^{-m}$ switches from $m^{-tw}$ to $(1+\delta)^{-m}$. This is where $$m^{-tw_{0}}=(1+\delta)^{-m}$$ which gives us $w_{0}=\frac{m\log{(1+\delta)}}{t\log{m}}$. At this $w_{0}$ we have $$\displaystyle\frac{\zeta(w_{0})}{(1+\delta)^{-m}}$$ $$\displaystyle\leq\frac{(1+\delta)^{m}}{2^{m}}\left(1+\left(1-\frac{k\log{m}}{m% }\right)^{w_{0}}\right)^{m}$$ $$\displaystyle\leq\frac{(1+\delta)^{m}}{2^{m}}\left(1+\exp\left(-\frac{k\log m}% {m}\,\frac{m\log{(1+\delta)}}{t\log{m}}\right)\right)^{m}$$ $$\displaystyle=\frac{(1+\delta)^{m}}{2^{m}}\left(1+\exp\left(-\frac{k}{t}\log{(% 1+\delta)}\right)\right)^{m}$$ $$\displaystyle=\frac{(1+\delta)^{m}}{2^{m}}\left(1+(1+\delta)^{-\frac{k}{t}}% \right)^{m}$$ $$\displaystyle=\left(\frac{(1+\delta)(1+(1+\delta)^{-\frac{k}{t}})}{2}\right)^{m}$$ Clearly if we choose $k$ such that $\frac{(1+\delta)(1+(1+\delta)^{-\frac{k}{t}})}{2}<1$, then the entire expression is less then $1$. This condition is satisfied if: $$k>-t\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}}$$ Recall our earlier observation that $\zeta(w)$ is monotonically non-increasing in $w$ for large enough $m$. Thus, for $m$ sufficiently large and any $w\geq w_{0}$, we have $\zeta(w)\leq\zeta(w_{0})<(1+\delta)^{-m}\leq m^{-tw}+(1+\delta)^{-m}$. Finally, let’s consider the case where $ww_{0}$, again assuming $m$ sufficiently large so that $\frac{k\log m}{m}<1$. Notice that $$\log{\zeta(w)}=m\,\log\left({\frac{1}{2}}+{\frac{1}{2}}\left(1-\frac{k\log{m}}% {m}\right)^{w}\right)$$ is convex with respect to $w$ because of Lemma 5. We have that for all positive $m$: $$\log{\zeta(0)}=\log(1)=\log{\left[m^{-t0}\right]}$$ and for all $m$ sufficiently large: $$\log{\zeta(w_{0})}<\log{\left[(1+\delta)^{-m}\right]}=\log{\left[m^{-tw_{0}}% \right]}$$ where the inequality is from the above analysis and the equality is by definition of $w_{0}$. For $w\in[0,w_{0}]$, we can write $w=(1-\lambda)0+\lambda w_{0}$ for some $\lambda\geq 0$. Therefore, for such $w$ and for $m$ sufficiently large, by convexity of $\log\zeta(w)$: $$\displaystyle\log\zeta(w)$$ $$\displaystyle\leq(1-\lambda)\log\zeta(0)+\lambda\log\zeta(w_{0})$$ $$\displaystyle\leq(1-\lambda)\log[m^{-t0}]+\lambda\log{[m^{-tw_{0}}]}$$ $$\displaystyle=\log{\left[m^{-tw}\right]}$$ i.e., $\zeta(w)\leq m^{-tw}<m^{-tw}+(1+\delta)^{-m}$, as desired. ∎ Lemma 7. Let $\alpha,\delta\in(0,1),k>-\frac{\log{\left(\frac{2}{1+\delta}-1\right)}}{\log{% \left(1+\delta\right)}}$, and $\lambda^{*}<{\frac{1}{2}}$ be such that $H(\lambda^{*})=\alpha\log_{2}{(1+\delta)}$. Then, for all $\lambda<\lambda^{*}$, $$\lim_{n\to\infty}\sum_{w=1}^{\lambda n}\binom{n}{w}\frac{1}{2^{m}}\left(1+% \left(1-2\frac{k\log{m}}{m}\right)^{w}\right)^{m}=0$$ Proof. 
By Lemma 6, we can select any $0<\delta<1$, $t>1$ and $k>-t\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}}$ so that when n (or equivalently $m=\alpha n$) is sufficiently large, $$\displaystyle\sum_{w=1}^{\lambda n}$$ $$\displaystyle\binom{n}{w}\frac{1}{2^{m}}\left(1+\left(1-2\frac{k\log{m}}{m}% \right)^{w}\right)^{m}$$ $$\displaystyle\leq\sum_{w=1}^{\lambda n}\binom{n}{w}\left(m^{-tw}+(1+\delta)^{-% m}\right)$$ $$\displaystyle\leq\sum_{w=1}^{\lambda n}\frac{n^{w}}{w!}{(\alpha n)}^{-tw}+% \frac{\sum_{w=1}^{\lambda n}\binom{n}{w}}{(1+\delta)^{m}}$$ where we used the inequality $\binom{n}{w}\leq\frac{n^{w}}{w!}$ for all $n\in\mathbb{N}^{*}$ and $0\leq w\leq n$. The first term of the sum can be driven to zero because when we choose any $t>1$ $$\displaystyle\lim_{n\to\infty}\sum_{w=1}^{\lambda n}\frac{n^{w}}{w!}{(\alpha n% )}^{-tw}$$ $$\displaystyle=\lim_{n\to\infty}\sum_{w=1}^{\lambda n}\frac{{\alpha}^{-tw}n^{(1% -t)w}}{w!}$$ $$\displaystyle\leq\lim_{n\to\infty}\sum_{w=0}^{\infty}\frac{{\alpha}^{-tw}n^{(1% -t)w}}{w!}-\frac{{\alpha}^{0}n^{0}}{0!}$$ $$\displaystyle=\lim_{n\to\infty}e^{\alpha^{-t}n^{1-t}}-1=0$$ This requires that there exists $t>1$ such that $k>-t\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}}$. For this, $k>-\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}}$ suffices. The second term can be driven to zero when $H(\lambda)<\alpha\log_{2}{(1+\delta)}$ as a direct consequence of Lemma 4. ∎ Lemma 8. Let $\alpha\in(0,1),m=\alpha n$, $c\in\mathbb{N},q=2^{m+c}$. Let $w^{*}$ and $\epsilon(n,m,q,f)$ be as in Definition 4. Then, for all $\gamma>1$, $k\geq{3.6-\frac{5}{4}\log_{2}{\alpha}}$ and $f=\frac{k\log{m}}{m}$, $\exists N_{k}>0$, so that $\forall n\geq N_{k}$, we have $$\epsilon(n,m,q,f)\leq\gamma\frac{2^{c}}{q-1}$$ Proof. For any $\delta\in(0,1)$, if we choose $\lambda^{*}<{\frac{1}{2}}$, such that $H(\lambda^{*})=\alpha\log_{2}{(1+\delta)}$, then $\forall\lambda<\lambda^{*}$ by Corollary 1, we have for any value of $n$ sufficiently large, $\lambda n\leq w^{*}(n,q)$. Thus: $$\displaystyle(q-1)\,\epsilon(n,m,q,f)$$ $$\displaystyle=\sum_{w=1}^{\lambda n}\binom{n}{w}\frac{1}{2^{m}}(1+(1-2f)^{w})^% {m}+$$ $$\displaystyle\qquad\sum_{w=\lambda n+1}^{w^{*}}\binom{n}{w}\frac{1}{2^{m}}(1+(% 1-2f)^{w})^{m}+$$ $$\displaystyle\qquad\frac{r}{2^{m}}(1+(1-2f)^{w^{*}+1})^{m}$$ $$\displaystyle\leq\sum_{w=1}^{\lambda n}\binom{n}{w}\frac{1}{2^{m}}(1+(1-2f)^{w% })^{m}+$$ $$\displaystyle\qquad\sum_{w=1}^{w^{*}}\binom{n}{w}\frac{1}{2^{m}}(1+(1-2f)^{% \lambda n})^{m}+$$ $$\displaystyle\qquad\frac{r}{2^{m}}(1+(1-2f)^{\lambda n})^{m}$$ $$\displaystyle=\sum_{w=1}^{\lambda n}\binom{n}{w}\frac{1}{2^{m}}(1+(1-2f)^{w})^% {m}+$$ $$\displaystyle\qquad\frac{q-1}{2^{m}}(1+(1-2f)^{\lambda n})^{m}$$ $$\displaystyle=A_{n}+B_{n}$$ where $A_{n}=\sum_{w=1}^{\lambda n}\binom{n}{w}\frac{1}{2^{m}}(1+(1-2f)^{w})^{m}$ and $B_{n}=\frac{q-1}{2^{m}}(1+(1-2f)^{\lambda n})^{m}$. 
By our choice of $\lambda$, and according to Lemma 7, if we choose any $k>-\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}}$ and $f=\frac{k\log{m}}{m}$, we can have $$\lim_{n\to\infty}A_{n}=0$$ For $B_{n}$ we have $$\displaystyle B_{n}$$ $$\displaystyle=\frac{q-1}{2^{m}}\left(1+\left(1-2\frac{k\log{m}}{m}\right)^{% \lambda n}\right)^{m}$$ $$\displaystyle\leq\frac{q-1}{2^{m}}\left(1+\exp\left(-\frac{2k\log{m}}{m}% \lambda n\right)\right)^{m}$$ $$\displaystyle\leq 2^{c}\left(1+m^{-\frac{2k\lambda}{\alpha}}\right)^{m}$$ $$\displaystyle\leq 2^{c}\exp\left(m^{1-\frac{2k\lambda}{\alpha}}\right)$$ where the inequalities follow from $1+x\leq\exp(x)$. If we choose $k$ such that $1-\frac{2k\lambda}{\alpha}<0$, or equivalently $k>\frac{\alpha}{2\lambda}$, we have $$\lim\sup_{n\to\infty}B_{n}\leq 2^{c}$$ If we choose a $k$ that is sufficiently large to satisfy both $k>-\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}}$ and $k>\frac{\alpha}{2\lambda}$, we have $$\lim\sup_{n\to\infty}A_{n}+B_{n}\leq 2^{c}$$ which implies that for all $\gamma>1$, and $n$ sufficiently large, $$\displaystyle\epsilon(n,m,q,f)$$ $$\displaystyle\leq\frac{A_{n}+B_{n}}{q-1}\leq\gamma\frac{2^{c}}{q-1}$$ Now we obtain an upper bound on the value of $k$ (the constraint density $f$ is proportional to $k$, so we’d like this number to be as small as possible). From the derivation above, we can choose any $0<\delta<1$, and any $k$ that satisfy the following inequalities: $$k>-\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}}$$ (18) $$k>\frac{\alpha}{2\lambda}$$ (19) The second inequality also depends on $\lambda$, which we are free to choose as long as it satisfies $\lambda<\lambda^{*}$, or $$H(\lambda)<\log_{2}{(1+\delta)}\alpha\equiv\sigma$$ We denote the latter term as $\sigma$ to lighten the notation. This is satisfied if $$\lambda<\frac{\sigma}{2\log_{2}(6/\sigma)}\leq H^{-1}(\sigma)$$ The latter inequality is adapted from Theorem 2.2 in (?) Therefore, considering that $\alpha<1$, the following condition is tighter than (19) $$k>\frac{\log_{2}(6/\sigma)}{\log{(1+\delta)}}$$ Combining with (18) we have the following condition on $k$: $$k>\max\left(-\frac{\log{(\frac{2}{1+\delta}-1)}}{\log{(1+\delta)}},\quad\frac{% \log_{2}(6/\sigma)}{\log_{2}(1+\delta)}\right)$$ We are allowed to choose $\delta$ to give us the best bound. This choice is asymptotically insignificant, so we choose an arbitrary but empirically well performing $\delta=3/4$, and derive $$k>\max\left(-\frac{\log{(\frac{2}{7/4}-1)}}{\log{(7/4)}},\quad\frac{\log_{2}{% \frac{6}{\log_{2}{7/4}}}-\log_{2}{\alpha}}{\log_{2}(7/4)}\right)$$ which is approximately $$k>\max\left(3.47,\quad 3.58-1.23\log_{2}{\alpha}\right)$$ which is implied by $$k\geq 3.6-\frac{5}{4}\log_{2}{\alpha}$$ (20) as desired. ∎ The bounds in Lemma 8 are graphically shown in Figure 3. When $m=\alpha n$, the plot on the left shows the minimum $f^{*}$ so that $\epsilon(n,m,q,f^{*})$ defined in Definition 4 is less than $2/2^{m}$. The plot on the right shows the empirical k so that the $f^{*}\equiv k\frac{\log{m}}{m}$. We also show the proved asymptotic bounds $k=3.6-\frac{5}{4}\log_{2}{\alpha}$ for comparison. As expected, the value of $k$ found empirically does not exceed the bound (20). Proof of Theorem 2 (part 2). By Corollary 1 and Theorem 2 of ? 
(?), for set $S$ with size $|S|=q=2^{m+c}$ and $h\in\mathcal{H}^{f}_{m\times n}$, a sufficient condition for ensuring that $S$ is $\varepsilon$-shattered, i.e., $\Pr[S(h)\geq 1]\geq{\frac{1}{2}}+\varepsilon$, is the ‘‘weak-concentration’’ condition given by:333Note that the notation used for $1/2+\varepsilon$ (for $\varepsilon>0$) by ? (?) is $1-1/\delta$ (for $\delta>2$). $$\displaystyle\epsilon(n,m,|S|,f)$$ $$\displaystyle\leq\frac{\mu/({\frac{1}{2}}+\varepsilon)-1}{|S|-1}=\frac{2^{c}/(% {\frac{1}{2}}+\varepsilon)-1}{q-1}$$ (21) By Lemma 8, when $\gamma>1$, $f>({3.6-\frac{5}{4}\log_{2}{\alpha}})\frac{\log{m}}{m},c\geq 2$, and $m$ is sufficiently large: $$\epsilon(n,m,|S|,f)\leq\gamma\frac{2^{c}}{q-1}$$ Hence, to satisfy requirement (21), it suffices to have: $$\gamma 2^{c}\leq\frac{2^{c}}{{\frac{1}{2}}+\varepsilon}-1$$ that is, $\gamma\leq 1/({\frac{1}{2}}+\varepsilon)-2^{-c}$. We can therefore choose a $\gamma>1$ whenever $1/({\frac{1}{2}}+\varepsilon)>1+2^{-c}$. Rearranging terms, this yields $\varepsilon<\frac{2^{c}-1}{2(2^{c}+1)}$. Hence, for $c\geq 2$, it suffices to have $\varepsilon<3/10$. This completes the proof of part two of Theorem 2. ∎ Part III: Upper Bound when $m=\Theta(n^{\beta})$ Similar to Part II, we will first establish a few lemmas. We will assume for the rest of the reasoning that $m=\alpha n^{\beta}$ for some constant $\alpha,\beta\in(0,1)$. Lemma 9. Let $\alpha,\beta\in(0,1),\gamma>0,m=\alpha n^{\beta}$, and $\lambda^{*}=\frac{\gamma}{1-\beta}$. Then for all $\lambda<\lambda^{*}$, $$\lim_{n\to\infty}\frac{\sum_{j=1}^{\lambda m/\log n}\binom{n}{j}}{2^{\gamma m}% }=0$$ Proof. For any $1\leq w\leq\frac{n}{2}$, $$\displaystyle\log\left(\sum_{j=1}^{w}\binom{n}{j}\right)$$ $$\displaystyle\leq\log\left(w\binom{n}{w}\right)\leq\log\left(w\left(\frac{ne}{% w}\right)^{w}\right)$$ $$\displaystyle\leq\log w+w\log\frac{ne}{w}\leq w\log\frac{2ne}{w}$$ When n is sufficiently large, $1\leq\lambda\frac{m}{\log{n}}\leq\frac{n}{2}$. Let $\epsilon>0$ be any constant. Substituting $w=\lambda m=\lambda\frac{\alpha n^{\beta}}{\log n}$: $$\displaystyle\log\left(\sum_{j=1}^{\lambda\alpha n^{\beta}/\log n}\binom{n}{j}\right)$$ $$\displaystyle\leq\frac{\lambda\alpha n^{\beta}}{\log n}\log\left(\frac{2en^{1-% \beta}\log n}{\lambda\alpha}\right)$$ $$\displaystyle\leq(1-\beta+\epsilon)\lambda\alpha n^{\beta}$$ for large enough $n$. We thus have, $$\displaystyle\lim_{n\to\infty}\frac{\sum_{j=1}^{\lambda m/\log n}\binom{n}{j}}% {2^{\gamma m}}$$ $$\displaystyle\leq\lim_{n\to\infty}2^{\left((1-\beta+\epsilon)\lambda-\gamma% \right)\alpha n^{\beta}}$$ Let $\lambda^{*}=\gamma/(1-\beta)$. It follows that for any $\lambda<\lambda^{*}$, $\exists\epsilon>0$, such that $$\displaystyle(1-\beta+\epsilon)\lambda<\frac{1-\beta+\epsilon}{1-\beta}\gamma<\gamma$$ and the above limit can be driven to zero. ∎ Corollary 2. Let $\alpha,\beta\in(0,1),m=\alpha n^{\beta},c\geq 2,$ $w^{*}$ as in Definition 4, and $\lambda^{*}=1/(1-\beta)$. Then, for all $\lambda<\lambda^{*}$, $\exists N>0$ such that $\forall n>N$, we have $$\frac{\lambda m}{\log{n}}\leq w^{*}(n,2^{m+c})$$ Lemma 10. Let $t>0,\delta\in(0,1),w\geq 0$, and $k>0$. Then for all values of $m$ sufficiently large, $$\frac{\left({\frac{1}{2}}+{\frac{1}{2}}\left(1-\frac{k\log^{2}{m}}{m}\right)^{% w}\right)^{m}}{m^{-tw}+(1+\delta)^{-m}}<1$$ Proof. Similar to Lemma 6, we will simplify notation by defining $$\displaystyle\zeta(w)=\left({\frac{1}{2}}+{\frac{1}{2}}\left(1-\frac{k\log^{2}% {m}}{m}\right)^{w}\right)^{m}$$ and assume that $1-\frac{k\log^{2}{m}}{m}>0$. 
Consider the bound at $$\displaystyle w_{0}=\frac{m\log{(1+\delta)}}{t\log{m}}$$ where $m^{-tw}=(1+\delta)^{-m}$, then separately consider the cases of $w$ being smaller and larger than $w_{0}$. When $w=w_{0}$, we have: $$\displaystyle\frac{\zeta(w_{0})}{(1+\delta)^{-m}}$$ $$\displaystyle\leq\frac{(1+\delta)^{m}}{2^{m}}\left(1+\exp\left(\frac{-w_{0}k% \log^{2}{m}}{m}\right)\right)^{m}$$ $$\displaystyle=\frac{(1+\delta)^{m}}{2^{m}}\left(1+\exp\left(\frac{-k\log{(1+% \delta)}\log{m}}{t}\right)\right)^{m}$$ $$\displaystyle=\frac{(1+\delta)^{m}}{2^{m}}\left(1+(1+\delta)^{-\frac{k}{t}\log% {m}}\right)^{m}$$ $$\displaystyle=\left(\frac{(1+\delta)(1+(1+\delta)^{-\frac{k}{t}\log{m}})}{2}% \right)^{m}$$ It is easy to see that for any $k>0,t>0$, when $m$ is sufficiently large the base of this exponential quantity, and hence the quantity itself, is smaller than $1$. Now consider the general case of $w>w_{0}$. Because $m>0$ and $\zeta(w)$ is monotonically non-increasing in $w$, we have: $$\displaystyle\frac{\zeta(w)}{m^{-tw}+(1+\delta)^{-m}}<\frac{\zeta(w_{0})}{(1+% \delta)^{-m}}$$ which, by the above argument, is smaller than $1$, as desired. The remaining case of $w\leq w_{0}$ is proved similar to the proof of Lemma 6. Due to the convexity of $\log{\zeta(w)}$ with respect to $w$, combined with the fact that for any $m>0$: $$\log{\zeta(0)}=\log{\left[m^{-t0}\right]}$$ and for $m$ sufficiently large: $$\log{\zeta(w_{0})}\leq\log{\left[m^{-tw_{0}}\right]}$$ we have that $\exists M$ such that $\forall m>M,0\leq w\leq w_{0}$ $$\displaystyle\log\zeta(w)\leq\log{\left[m^{-tw}\right]}$$ which implies $$\zeta(w)<m^{-tw}+(1+\delta)^{-m}$$ when $w\leq w_{0}$. Combined with the earlier similar result for $w>w_{0}$, this finishes the proof. ∎ Lemma 11. Let $\alpha,\beta,\delta\in(0,1),m=\alpha n^{\beta},\lambda^{*}=\frac{\log_{2}(1+% \delta)}{1-\beta},\lambda<\lambda^{*}$, and $k>0$. Then $$\lim_{n\to\infty}\sum_{w=1}^{\lambda m/\log n}\binom{n}{w}\frac{1}{2^{m}}\left% (1+\left(1-2\frac{k\log^{2}{m}}{m}\right)^{w}\right)^{m}=0$$ Proof. By Lemma 10, for any $t>0$ and large enough $n$ (and thus $m$), the desired expression is at most: $$\displaystyle\sum_{w=1}^{\lambda m/\log n}\binom{n}{w}\left(m^{-tw}+(1+\delta)% ^{-m}\right)$$ $$\displaystyle\leq\sum_{w=1}^{\lambda m/\log n}\frac{n^{w}m^{-tw}}{w!}+\frac{% \sum_{w=1}^{\lambda m/\log n}\binom{n}{w}}{(1+\delta)^{m}}$$ The second term here converges to zero as $n\to\infty$ by applying Lemma 9 with $\gamma$ set to $\log_{2}(1+\delta)$. For the first term, we get: $$\displaystyle\sum_{w=1}^{\lambda m/\log n}\frac{\alpha^{-tw}n^{(1-\beta t)w}}{% w!}$$ $$\displaystyle\leq\sum_{w=0}^{\infty}\frac{\alpha^{-tw}n^{(1-\beta t)w}}{w!}-% \frac{\alpha^{0}n^{0}}{0!}$$ $$\displaystyle=\exp(\alpha^{-t}n^{1-\beta t})-1$$ Choose any $t>1/\beta$. Then the second term converges to zero as well. ∎ Lemma 12. Let $\alpha,\beta\in(0,1),m=\alpha n^{\beta},c\in\mathbb{Z},$ and $q=2^{m+c}$. Let $w^{*}$ and $\epsilon(n,m,q,f)$ be as in Definition 4. Then, for all $\gamma>1$, $k>\frac{1-\beta}{2\beta}$ and $f=\frac{k\log^{2}{m}}{m}$, and values of $n$ greater than some $N>0$, $$\epsilon(n,m,q,f)\leq\gamma\frac{2^{c}}{q-1}$$ Proof. 
For any $0<\delta<1$, let $\lambda^{*}=\frac{\log_{2}(1+\delta)}{1-\beta}$, then $\forall\lambda<\lambda^{*}$, by Lemma 2, for all values of $n$ sufficiently large, $$\frac{\lambda m}{\log n}\leq w^{*}(n)=w^{*}$$ We can write: $$\displaystyle(q-1)\epsilon(n,m,q,f)$$ $$\displaystyle=\sum_{w=1}^{\lambda m/\log n}\binom{n}{w}\frac{1}{2^{m}}\left(1+% (1-2f)^{w}\right)^{m}+$$ $$\displaystyle\qquad\sum_{w=1+\lambda m/\log n}^{w^{*}}\binom{n}{w}\frac{1}{2^{% m}}\left(1+(1-2f)^{w}\right)^{m}+$$ $$\displaystyle\qquad\frac{r}{2^{m}}\left(1+(1-2f)^{w^{*}+1}\right)^{m}$$ $$\displaystyle\leq\sum_{w=1}^{\lambda m/\log n}\binom{n}{w}\frac{1}{2^{m}}\left% (1+(1-2f)^{w}\right)^{m}+$$ $$\displaystyle\qquad\sum_{w=1}^{w^{*}}\binom{n}{w}\frac{1}{2^{m}}\left(1+(1-2f)% ^{\lambda m/\log n}\right)^{m}+$$ $$\displaystyle\qquad\frac{r}{2^{m}}\left(1+(1-2f)^{\lambda m/\log n}\right)^{m}$$ $$\displaystyle=\sum_{w=1}^{\lambda m/\log n}\binom{n}{w}\frac{1}{2^{m}}\left(1+% (1-2f)^{w}\right)^{m}+$$ $$\displaystyle\qquad\frac{q-1}{2^{m}}\left(1+(1-2f)^{\lambda m/\log n}\right)^{m}$$ $$\displaystyle=A_{n}+B_{n}$$ By Lemma 11, from our choice of $\lambda$, for any $k>0$, $f=\frac{k\log^{2}{m}}{m}$, we have: $$\lim_{n\to\infty}A_{n}=0$$ For $B_{n}$, when n is sufficiently large, we have: $$\displaystyle B_{n}$$ $$\displaystyle=\frac{q-1}{2^{m}}\left(1+\left(1-2\frac{k\log^{2}{m}}{m}\right)^% {\lambda m/\log n}\right)^{m}$$ $$\displaystyle\leq 2^{c}\left(1+\exp\left(-2\frac{k\log^{2}{m}}{m}\frac{\lambda m% }{\frac{1}{\beta}(\log{m}-\log{\alpha})}\right)\right)^{m}$$ $$\displaystyle\leq 2^{c}\left(1+\exp\left(-2\frac{k\log{m}}{m}\lambda\beta m% \right)\right)^{m}$$ $$\displaystyle=2^{c}\left(1+m^{-2k\lambda\beta}\right)^{m}$$ $$\displaystyle\leq 2^{c}\exp\left(m^{-2k\lambda\beta}m\right)=2^{c}\exp\left(m^% {1-2k\lambda\beta}\right)$$ for all $\lambda>1$, if we choose $k$ such that $k>1/(2\lambda\beta)$, then $1-2k\lambda\beta<0$ and we have $$\lim\sup_{n\to\infty}B_{n}\leq 2^{c}$$ Combining the two results, as long as we choose $k>1/(2\lambda\beta)$, we have, $$\lim\sup_{n\to\infty}A_{n}+B_{n}\leq 2^{c}$$ which implies that for all $\gamma>1$, for sufficiently large n: $$\epsilon(n,m,q,f)\leq\frac{A_{n}+B_{n}}{q-1}\leq\gamma\frac{2^{c}}{q-1}$$ Since $\lambda<\log_{2}(1+\delta)/(1-\beta)$, we need $k$ to be larger than $(1-\beta)/(2\beta\log_{2}(1+\delta))$. Since $\delta$ can be chosen arbitrarily from $(0,1)$, it suffices to have: $$k>\frac{1-\beta}{2\beta}$$ as for any such $k$, we can always find a $\delta$ close enough to 1 such that the above condition is satisfied. ∎ Proof of Theorem 2 (part 3). This proof is almost exactly the same as that of Theorem 2 (part 2), following as a direct consequence of Lemma 12 along with Corollary 1 and Theorem 2 of ? (?). ∎
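To connect the statements above with the experiments reported earlier, the following is a minimal sketch (not the authors' implementation) of how Lemma 1 and Theorem 3 translate into an executable lower-bound procedure. Here has_solution is a hypothetical callback wrapping a SAT/SMT oracle that reports whether the formula remains satisfiable under a given set of XOR constraints, and the constraint sampler follows the $f$-sparse ensemble of Definition 1.

```python
# Sketch of the lower-bound procedure behind Lemma 1 / Theorem 3:
# draw T independent f-sparse parity constraints, query an NP-oracle for a
# surviving solution, and report the bound 2^m * c / (1 + kappa).
import math
import random

def sparse_parity_constraints(n, m, f, rng):
    """m random XOR constraints over n variables: each variable enters a
    constraint independently with probability f, and the parity bit is fair."""
    return [([i for i in range(n) if rng.random() < f], rng.random() < 0.5)
            for _ in range(m)]

def lower_bound_log2(has_solution, n, m, f, T, kappa, seed=0):
    """Estimate Pr[S(h) >= 1] from T draws and return a high-probability
    lower bound on log2 |S| (None if no hit was observed)."""
    rng = random.Random(seed)
    hits = sum(has_solution(sparse_parity_constraints(n, m, f, rng))
               for _ in range(T))
    c = hits / T                        # empirical estimate of Pr[S(h) >= 1]
    if c == 0:
        return None                     # no non-trivial bound at this m
    # Holds with probability >= 1 - exp(-kappa^2 c T / ((1+kappa)(2+kappa))).
    return m + math.log2(c / (1 + kappa))
```

Running this for several values of $m$ and keeping the largest returned bound reproduces the maximum over $m$ taken in Lemma 1.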
Domain Wall Fermion Lattice Simulation in Quaternion Basis Sadataka Furui furui@umb.teikyo-u.ac.jp School of Science and Engineering, Teikyo University 1-1 Toyosatodai, Utsunomiya 320-8551,Japan Abstract In the QCD analysis, when quarks are expressed in quaternion basis, the quark and its charge conjugate together are expressed by octonions and the octonion posesses the triality symmetry. Gluos are expressed by Plücker coordinates of spinors. Roles of triality in the proton charge form factor, three loop gluon self energy, technicolor, fine tuning and unparticle physics are discussed. pacs: 12.38.-t, 12.38.Gc, 12.60.-i, 11.10.Lm I Introduction Domain wall fermion(DWF) approximately preserves chiral symmetry and it transforms under $SU(3)$ color and $SU(2)$ spin as symmetries of internal coordinates. Although Pauli matrices which follows the SU(2) symmetry is frequently used, the symmetry of quaternion ${\mathcal{H}}$ which is invented by Hamilton is not considered seriously. By adding a new imaginary unit ${l}$ orthogonal to the quaternion basis ${\bf e_{1}}={i},{\bf e_{2}}={j},{\bf e_{3}}={k}$, one can construct octonion ${\mathcal{O}}={H}+{lH}$ which is spanned by $$\{1,{\bf i},{\bf j},{\bf k},{\bf l},{\bf i}{\bf l},{\bf j}{\bf l},{\bf k}{\bf l% }\}=\{1,{\bf e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{7}}\}$$ i.e. one real unit and 7 imaginary units Lo01 . In this 8 dimensional space, É. Cartan introduced a universal covering group of $SO(8)$, which is called $Spin(8)$. It has the triality automorphism. In this presentation, I show that the triality automorphism could be an important ingredient which can solve various puzzles in the infrared (IR) QCD. In sect.2, I introduce the quaternion, the octonion and triality automorphism and in sect.3 puzzles in IR QCD are discussed. The lattice simulation of proton charge form factor using quaternion bases, assuming correlation of domain wall fermions via exchange of self-dual gauge fields SF10 is shown in sect.4. Discussion and conclusion are given in sect.5. II Quaternion, Octonion and triality In 1877 Frobenius showed that an associative, quadratic real algebra ${\mathcal{A}}$ without divisors of zero has only three possibilities 1. ${\mathcal{A}}$ is isomorphic to ${\mathcal{R}}$ (Real). 2. ${\mathcal{A}}$ is isomorphic to ${\mathcal{C}}$ (Complex). 3. ${\mathcal{A}}$ is isomorphic to ${\mathcal{H}}$ (Quaternion). Quaternions are generalization of complex number ${\mathcal{C}}={\mathcal{R}}+i{\mathcal{R}}$, which are expressed as $q=w+x{\mbox{\boldmath$i$}}+y{\mbox{\boldmath$j$}}+z{\mbox{\boldmath$k$}}$. Automorphism group of ${\mathcal{H}}={\mathcal{R}}+{\mathcal{R}}^{3}$ is SO(3). A new imaginary unit $l$ that anticommutes with the bases of quaternions ${\mbox{\boldmath$i$}},{\mbox{\boldmath$j$}},{\mbox{\boldmath$k$}}$ compose octonions ${\mathcal{O}}={\mathcal{H}}+l{\mathcal{H}}$. Automorphism group of ${\mathcal{O}}={\mathcal{R}}+{\mathcal{R}}^{7}$ is not SO(7), but exceptional Lie group $G_{2}$. It contains tensor product of three ${\mathcal{R}}^{7}$ bases and three vectors. The triality automorphism is a transformation that rotates 24 dimensional bases defined by CartanCartan66 . 
$$\displaystyle\{\xi_{0},\xi_{1},\xi_{2},\xi_{3},\xi_{4}\},\quad\{\xi_{12},\xi_{% 31},\xi_{23},\xi_{14},\xi_{24},\xi_{34}\},$$ $$\displaystyle\{\xi_{123},\xi_{124},\xi_{314},\xi_{234},\xi_{1234}\},$$ $$\displaystyle\{x^{1},x^{2},x^{3},x^{4}\},\quad\{x^{1^{\prime}},x^{2^{\prime}},% x^{3^{\prime}},x^{4^{\prime}}\}$$ (1) There are three semi-spinors which have a quadratic form which is invariant with respect to the group of rotation $$\displaystyle\Phi={{}^{t}\phi}C\phi=\xi_{0}\xi_{1234}-\xi_{23}\xi_{14}-\xi_{31% }\xi_{24}-\xi_{12}\xi_{34}$$ $$\displaystyle\Psi={{}^{t}\psi}C\psi=-\xi_{1}\xi_{234}-\xi_{2}\xi_{314}-\xi_{3}% \xi_{124}+\xi_{4}\xi_{123}$$ and the vector $$F=x^{1}x^{1^{\prime}}+x^{2}x^{2^{\prime}}+x^{3}x^{3^{\prime}}+x^{4}x^{4^{% \prime}}$$ (3) With use of the quaternion bases $1,{\mbox{\boldmath$i$}},{\mbox{\boldmath$j$}},{\mbox{\boldmath$k$}}$, the spinors $\phi$ and $C\phi=\phi^{\prime}$ are defined as $$\displaystyle\phi$$ $$\displaystyle=$$ $$\displaystyle\xi_{0}+\xi_{14}{\mbox{\boldmath$i$}}+\xi_{24}{\mbox{\boldmath$j$% }}+\xi_{34}{\mbox{\boldmath$k$}}$$ $$\displaystyle C\phi$$ $$\displaystyle=$$ $$\displaystyle\xi_{1234}-\xi_{23}{\mbox{\boldmath$i$}}-\xi_{31}{\mbox{\boldmath% $j$}}-\xi_{12}{\mbox{\boldmath$k$}}.$$ (4) Similarly, $\psi$ and $C\psi=\psi^{\prime}$ are defined as $$\displaystyle\psi$$ $$\displaystyle=$$ $$\displaystyle\xi_{4}+\xi_{1}{\mbox{\boldmath$i$}}+\xi_{2}{\mbox{\boldmath$j$}}% +\xi_{3}{\mbox{\boldmath$k$}}$$ $$\displaystyle C\psi$$ $$\displaystyle=$$ $$\displaystyle\xi_{123}-\xi_{234}{\mbox{\boldmath$i$}}-\xi_{314}{\mbox{% \boldmath$j$}}-\xi_{124}{\mbox{\boldmath$k$}}.$$ (5) III Correlation of quarks via self-dual gauge field III.1 Proton charge form factor To calculate proton charge form factor with use of $16^{3}\times 32\times 16$ DWF produced by RBC/UKQCD collaboration DWF , I first perform Landau gauge fixing and then Coulomb gauge fixing of the gauge configuration. Instead of performing the residual gauge transformation of the Coulomb gauge, I rotate the fermion on the left domain wall and on the right domain wall such that they are correlated by the self dual gauge field which is parametrized as Corrigan and GoddardCG81 . The transition function of CG81 is $$g(\lambda\omega,\lambda\pi)=g(\omega,\pi),\quad det\,g=1.$$ where $\zeta=\frac{\pi_{1}}{\pi_{2}}$, $h(x,\zeta)$ is regular in $|\zeta|>1-\epsilon$ and $k(x,\zeta)$ is regular in $|\zeta|<1+\epsilon$. I adopt the Ansatz $$\displaystyle g_{0}$$ $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{cc}e^{-\nu}&0\\ 0&e^{\nu}\end{array}\right)\left(\begin{array}[]{cc}\zeta^{1}&\rho\\ 0&\zeta^{-1}\end{array}\right)\left(\begin{array}[]{cc}e^{\mu}&0\\ 0&e^{-\mu}\end{array}\right)$$ (6) $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{cc}e^{\gamma}\zeta^{1}&f(\gamma,\zeta)\\ 0&e^{-\gamma}\zeta^{-1}\end{array}\right)$$ In our 5-dimesional domain wall fermion case, $\gamma=\mu-\nu$ and $\mu,\nu$ contain the phase in the 5th direction $i\eta$. 
$$2\mu=i\omega_{2}/\pi_{2}-i\eta=(x_{1}+ix_{2})\zeta+ix_{0}-x_{3}-i\eta$$ $$2\nu=i\omega_{1}/\pi_{1}+i\eta=(x_{1}-ix_{2})\zeta+ix_{0}+x_{3}+i\eta$$ The quaternion reality condition of the transformation matrix $g(\gamma,\zeta)$ gives $$\displaystyle\left(\begin{array}[]{cc}a_{L_{s}-1}&b_{L_{s}-1}\\ c_{L_{s}-1}&d_{L_{s}-1}\end{array}\right)\left(\begin{array}[]{cc}\zeta^{1}e^{% \gamma}&f\\ 0&\zeta^{-1}e^{-\gamma}\end{array}\right)$$ $$\displaystyle=\left(\begin{array}[]{cc}\zeta^{1}e^{-\gamma}&\bar{f}\\ 0&\zeta^{-1}e^{\gamma}\end{array}\right)\left(\begin{array}[]{cc}a_{0}&b_{0}\\ c_{0}&d_{0}\end{array}\right),$$ (7) where $\displaystyle f=\frac{d_{0}e^{\gamma}-a_{0}e^{-\gamma}}{\psi}$, $\psi=c_{L_{s}-1}\zeta^{1}=c_{0}\zeta^{-1}$ and $\displaystyle\bar{f}=\overline{f(\bar{\gamma},-\frac{1}{\bar{\zeta}})}$. I search parameters using Mathematica and obtained Fig.1. The charge form factor of DWF was calculated also in Schroedinger functional method LHPC09 , but the charge radius was smaller than the experiment. The difference of the two is expected to be due to the treatment of final state interaction, as was the case in the $\eta$ decay into three mesonsRT81 . III.2 The gluon self energy É. Cartan defined the vector field as a plücker coordinate of fermion spinors. The trilinear form of fermion, antifermion and vector field in the quaternion bases is $$\displaystyle{\mathcal{F}}=\phi^{T}CX\psi$$ $$\displaystyle=x^{1}(\xi_{12}\xi_{314}-\xi_{31}\xi_{124}-\xi_{14}\xi_{123}+\xi_% {1234}\xi_{1})$$ $$\displaystyle+x^{2}(\xi_{23}\xi_{124}-\xi_{12}\xi_{234}-\xi_{24}\xi_{123}+\xi_% {1234}\xi_{2})$$ $$\displaystyle+x^{3}(\xi_{31}\xi_{234}-\xi_{23}\xi_{314}-\xi_{34}\xi_{123}+\xi_% {1234}\xi_{3})$$ $$\displaystyle+x^{4}(-\xi_{14}\xi_{234}-\xi_{24}\xi_{314}-\xi_{34}\xi_{124}+\xi% _{1234}\xi_{4})$$ $$\displaystyle+x^{1^{\prime}}(-\xi_{0}\xi_{234}+\xi_{23}\xi_{4}-\xi_{24}\xi_{3}% +\xi_{34}\xi_{2})$$ $$\displaystyle+x^{2^{\prime}}(-\xi_{0}\xi_{314}+\xi_{31}\xi_{4}-\xi_{34}\xi_{1}% +\xi_{14}\xi_{3})$$ $$\displaystyle+x^{3^{\prime}}(-\xi_{0}\xi_{124}+\xi_{12}\xi_{4}-\xi_{14}\xi_{2}% +\xi_{24}\xi_{1})$$ $$\displaystyle+x^{4^{\prime}}(\xi_{0}\xi_{123}-\xi_{23}\xi_{1}-\xi_{31}\xi_{2}-% \xi_{12}\xi_{3})$$ (8) Using this trilinear form, I construct three loop gluon self-energy diagram as shown in Figs.2 and 3 as transverse polarized and Figs.4 to 6 as Coulomb potential in the Coulomb gauge. Here the two exchanged vector fields are self-dual. The triality transformation transforms fermion field to other triality eigen states. If quark-gluon interaction is triality blind, the gluon created by a quark-anti quark pair in a triality sector will interact with quark-anti quark pairs of other triality. In finite temperature QCD, these diagrams give $g^{6}$ order term in the perturbative calculation of the pressure. Since all diagrams have the same phase, they are the candidate of compenating the $g^{2}$ order negative pressure term. When the conjecture of DD78 works, this kind of zero mode contribution dominates the plessure of the QCD ground state. III.3 Puzzles in the critical fermion number According to BZ82 , presence of infrared fixed point and the opening of the conformal window occurs in a region near a certain critical flavor number $N_{f}^{c}$. Lattice simulations in Schrödinger Functional(SF) method AFN08 says $N_{f}^{c}\sim 10$ while latttice simulation in MOM scheme SF10 and experimental data DBCK08 suggests presence of IR fixed point for $N_{f}^{c}=3$. 
In QCD there is axial anomaly, and to make the theory self-consistent, anomaly cancellation is required, but it is shown that it occurs when $N_{f}$ is larger than 10San09 . These puzzling features could be resolved, if the quark-gluon interaction is triality blind, consistent with the phenomenological analysis of weak decay processes Ma10 , and the effective $N_{f}$ in the SF scheme is three times larger than that in the MOM scheme. IV Discussion and Conclusion In the review of Shäfer and ShyryakSS98 , the effective four quark interaction with auxiliary scalar field $L_{a}$ and $R_{a}$ is given as $$\displaystyle(\psi^{\dagger}\tau_{a}^{-}\gamma_{-}\psi)^{2}\to 2(\psi^{\dagger% }\tau_{a}^{-}\gamma_{-}\psi)L_{a}-L_{a}L_{a}$$ $$\displaystyle(\psi^{\dagger}\tau_{a}^{-}\gamma_{+}\psi)^{2}\to 2(\psi^{\dagger% }\tau_{a}^{-}\gamma_{+}\psi)R_{a}-R_{a}R_{a}$$ (9) In this model, meson decay into three mesons occur through exchange of self-dual gauge fields and/or quark-pair creation, but in $\eta_{c}\to\eta\pi\pi$ and $\eta_{c}\to K\bar{K}\pi$ decay processes, the exchange of two self-dual gauge fields dominates. These decay could be measured in B-factory at KEK and yield useful information on instanton. If charged lepton interaction preserves triality, but quark interactions does not, the hierarchy problem (fine tuning in the definition of GUT scale is necessary) and the $U(1)$ problem ChLi84 could be resolved. Some unparticles Ge07 , which are believed to exist from astrophysical observations, could be quark-anti quark pairs that belong to different triality sectors from that of electrons or muons in the detector. Lattice simulations of larger lattice to confirm importance of the triality automorphism in IR QCD are under way. ACKNOWLEDGEMENTS The author thanks the organizer for the interesting conference in Madrid. Thanks are also due to computer centers at KEK and at Tsukuba Univertsity for supports. References (1) P. Lounesto ”Clifford Algebras and Spinors”, Cambridge University Press,(2001). (2) É. Cartan, ”The theory of Spinors”, Dover Pub. (1966), ”Lecons sur théorie des spineurs”, Hermann, Paris (1938). (3) S. Furui, arXiv:0912.5397[hep-lat], 1009.3865[hep-ph] (4) A. D’Adda and P. Di Vecchia, Phys. Lett. B73,162 (1978). (5) S.N. Syritsyn et al., (LHPC Collaboration) Phys. Rev. D81, 034507(2010) . arXiv:0907.4194 (6) C. Roisenel and T.N. Truong, Nucl. Phys. B187,293 (1981) . (7) T. Banks and A. Zaks, Nucl. Phys. B196 ,189(1982). (8) T. Appelquist, G.T. Fleming and E.T. Neil, Phys. Rev. Lett.100,171607 (2008), Phys. Rev. Lett.102,149902 (2009) (E). (9) A. Deur, V. Burkert, J.P. Chen and W. Korsch, Phys. Lett. B665,349 (2008) . (10) E.Corrigan and P. Goddard, Comm. Math. Phys.80 575 (1981) (11) C. Allton et al., Phys. Rev. D76, 014504 (2007); arXiv:hep-lat/0701013. (12) F. Sannino, Phys. Rev. D80,065011 (2009); arXiv:0911.0931[hep-th]. (13) E. Ma, Phys. Rev. D82,037301 (2010). (14) T. Schafer and E.V. Shyryak, Rev. Mod. Phys. 70, 323(1998). (15) T-P. Cheng and L-F. Li, Gauge theory of elementary particle physics Clarendon Press, Oxford (1984). (16) H. Georgi, Phys. Rev. Lett.98,221601 (2007).
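As a side illustration of the algebra used in Sect. II, the following short Python sketch implements Hamilton's quaternion product and checks the defining relations $i^{2}=j^{2}=k^{2}=-1$ and $ij=k=-ji$; it is a numerical aid only and plays no role in the lattice simulation itself.

```python
# Hamilton product on quaternions represented as tuples (w, x, y, z) = w + x i + y j + z k.
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

one, i, j, k = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # non-commutativity
```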
Fast Semidifferential-based Submodular Function Optimization (a shorter version of this paper appeared in Proc. International Conference on Machine Learning (ICML), Atlanta, 2013) Rishabh Iyer rkiyer@u.washington.edu Department of EE, University of Washington, Seattle Stefanie Jegelka stefje@eecs.berkeley.edu Department of EECS, University of California, Berkeley Jeff Bilmes bilmes@u.washington.edu Department of EE, University of Washington, Seattle Abstract We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and super-differentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new and generalize many old methods for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular minimization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to its natural and wide applicability, has recently been in ascendance within machine learning. We analyze theoretical properties of our algorithms for minimization and maximization, and show that many state-of-the-art maximization algorithms are special cases. Lastly, we complement our theoretical analyses with supporting empirical experiments. 1 Introduction In this paper, we address minimization and maximization problems of the following form: $$\displaystyle\mbox{Problem 1: }\min_{X\in\mathcal{C}}f(X),\qquad\mbox{Problem 2: }\max_{X\in\mathcal{C}}f(X)$$ where $f:2^{V}\to\mathbb{R}$ is a discrete set function on subsets of a ground set $V=\{1,2,\cdots,n\}$, and $\mathcal{C}\subseteq 2^{V}$ is a family of feasible solution sets. The set $\mathcal{C}$ could express, for example, that solutions must be an independent set in a matroid, a limited budget knapsack, or a cut (or spanning tree, path, or matching) in a graph. Without making any further assumptions about $f$, the above problems are trivially worst-case exponential time and moreover inapproximable. If we assume that $f$ is submodular, however, then in many cases the above problems can be approximated and in some cases solved exactly in polynomial time. A function $f:2^{V}\to\mathbb{R}$ is said to be submodular [9] if for all subsets $S,T\subseteq V$, it holds that $f(S)+f(T)\geq f(S\cup T)+f(S\cap T)$. Defining $f(j|S)\triangleq f(S\cup j)-f(S)$ as the gain of $j\in V$ with respect to $S\subseteq V$, then $f$ is submodular if and only if $f(j|S)\geq f(j|T)$ for all $S\subseteq T$ and $j\notin T$. Traditionally, submodularity has been a key structural property for problems in combinatorial optimization, and for applications in econometrics, circuit and game theory, and operations research. More recently, submodularity’s popularity in machine learning has been on the rise. On the other hand, a potential stumbling block is that machine learning problems are often large (e.g., “big data”) and are getting larger. For general unconstrained submodular minimization, the computational complexity often scales as a high-order polynomial. These algorithms are designed to solve the most general case and the worst-case instances are often contrived and unrealistic. Typical-case instances are much more benign, so simpler algorithms (e.g., graph-cut) might suffice. In the constrained case, however, the problems often become NP-complete.
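To make the definition concrete, the short sketch below (our illustration, not part of the original paper; the toy graph and its weights are arbitrary) brute-force checks the diminishing-returns characterization $f(j|S)\geq f(j|T)$ for all $S\subseteq T$ with $j\notin T$ on a small undirected graph-cut function, one of the standard examples of a submodular function.

```python
from itertools import combinations

# Toy undirected weighted graph on V = {0,...,4}; weights are illustrative only.
V = set(range(5))
edges = {(0, 1): 2.0, (0, 2): 1.0, (1, 2): 3.0, (1, 3): 1.5, (2, 4): 2.5, (3, 4): 1.0}

def f_cut(X):
    """Graph-cut function: total weight of edges crossing (X, V \\ X); submodular."""
    X = set(X)
    return sum(w for (u, v), w in edges.items() if (u in X) != (v in X))

def gain(f, j, S):
    """Marginal gain f(j | S) = f(S + j) - f(S)."""
    return f(set(S) | {j}) - f(set(S))

def is_submodular(f, V):
    """Check diminishing returns: f(j | S) >= f(j | T) for all S <= T and j not in T."""
    for j in V:
        rest = V - {j}
        subsets = [set(c) for r in range(len(rest) + 1) for c in combinations(rest, r)]
        for S in subsets:
            for T in subsets:
                if S <= T and gain(f, j, S) < gain(f, j, T) - 1e-12:
                    return False
    return True

print(is_submodular(f_cut, V))  # True: nonnegative undirected cut functions are submodular
```

The same brute-force check (exponential in $|V|$, so only usable on toy instances) returns False for, e.g., the supermodular function $f(X)=|X|^{2}$.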
Algorithms for submodular maximization are very different in nature from their submodular minimization cohorts, and their complexity too varies depending on the problem. In any case, there is an urgent need for efficient, practical, and scalable algorithms for the aforementioned problems if submodularity is to have a lasting impact on the field of machine learning. In this paper, we address the issue of scalability and simultaneously draw connections across the apparent gap between minimization and maximization problems. We demonstrate that many algorithms for submodular maximization may be viewed as special cases of a generic minorize-maximize framework that relies on discrete semidifferentials. This framework encompasses state-of-the-art greedy and local search techniques, and provides a rich class of very practical algorithms. In addition, we show that any approximate submodular maximization algorithm can be seen as an instance of our framework. We also present a complementary majorize-minimize framework for submodular minimization that makes two contributions. For unconstrained minimization, we obtain new nontrivial bounds on the lattice of minimizers, thereby reducing the possible space of candidate minimizers. This method easily integrates into any other exact minimization algorithm as a preprocessing step to reduce running time. In the constrained case, we obtain practical algorithms with bounded approximation factors. We observe these algorithms to be empirically competitive to more complicated ones. As a whole, the semidifferential framework offers a new unifying perspective and basis for treating submodular minimization and maximization problems in both the constrained and unconstrained case. While it has long been known [9] that submodular functions have tight subdifferentials, our results rely on a recently discovered property [18, 22] showing that submodular functions also have superdifferentials. Furthermore, our approach is entirely combinatorial, thus complementing (and sometimes obviating) related relaxation methods. 2 Motivation and Background Submodularity’s escalating popularity in machine learning is due to its natural applicability. Indeed, instances of Problems 1 and 2 are seen in many forms, to wit: MAP inference/Image segmentation: Markov Random Fields with pairwise attractive potentials are important in computer vision, where MAP inference is identical to unconstrained submodular minimization solved via minimum cut [3]. A richer higher-order model can be induced for which MAP inference corresponds to Problem 1 where $V$ is a set of edges in a graph, and $\mathcal{C}$ is a set of cuts in this graph — this was shown to significantly improve many image segmentation results [22]. Moreover, [6] efficiently solve MAP inference in a sparse higher-order graphical model by restating the problem as a submodular vertex cover, i.e., Problem 1 where $\mathcal{C}$ is the set of all vertex covers in a graph. Clustering: Variants of submodular minimization have been successfully applied to clustering problems [38, 36]. Limited Vocabulary Speech Corpora: The problem of finding a maximum size speech corpus with bounded vocabulary [32] can be posed as submodular function minimization subject to a size constraint. Alternatively, cardinality can be treated as a penalty, reducing the problem to unconstrained submodular minimization [23]. 
Size constraints: The densest $k$-subgraph and size-constrained graph cut problems correspond to submodular minimization with cardinality constraints, problems that are very hard [44]. Specialized algorithms for cardinality and related constraints were proposed, e.g., in [44, 35]. Minimum Power Assignment: In wireless networks, one seeks a connectivity structure that maintains connectivity at a minimum energy consumption. This problem is equivalent to finding a suitable structure (e.g., a spanning tree) minimizing a submodular cost function [45]. Transportation: Costs in real-world transportation problems are often non-additive. For example, it may be cheaper to take a longer route owned by one carrier rather than a shorter route that switches carriers. Such economies of scale, or “right of usage” properties, are captured in the “Categorized Bottleneck Path Problem” – a shortest path problem with submodular costs [1]. Similar costs have been considered for spanning tree and matching problems. Summarization/Sensor placement: Submodular maximization also arises in many subset extraction problems. Sensor placement [25], document summarization [31] and speech data subset selection [29], for example, are instances of submodular maximization. Determinantal Point Processes: Determinantal Point Processes (DPPs), which have found numerous applications in machine learning [26], are known to be log-submodular distributions. In particular, the MAP inference problem is a form of non-monotone submodular maximization. Indeed, there is strong motivation for solving Problems 1 and 2 but, as mentioned above, these problems do not come without computational difficulties. Much work has therefore been devoted to developing optimal or near optimal algorithms. Among the several algorithms [33] for the unconstrained variant of Problem 1, where $\mathcal{C}=2^{V}$, the best complexity to date is $O(n^{5}\gamma+n^{6})$ [40] ($\gamma$ is the cost of evaluating $f$). This has motivated studies on faster, possibly special case or approximate, methods [42, 23]. Constrained minimization problems, even for simple constraints such as a cardinality lower bound, are mostly NP-hard, and not approximable to within better than a polynomial factor. Approximation algorithms for these problems with various techniques have been studied in [44, 16, 12, 21]. Unlike submodular minimization, all forms of submodular maximization are NP-hard. Most such problems, however, admit constant-factor approximations, which are attained via very simple combinatorial algorithms [39, 4]. Majorization-minimization (MM) algorithms (here, MM also refers to minorization-maximization) are known to be useful in machine learning [15]. Notable examples include the EM algorithm [34] and the convex-concave procedure [46]. Discrete instances have been used to minimize the difference between submodular functions [37, 17], but these algorithms generally lack theoretical guarantees. This paper shows, by contrast, that for submodular optimization, MM algorithms have strong theoretical properties and empirically work very well. 3 Submodular semi-differentials We first briefly introduce submodular semidifferentials. Throughout this paper, we assume normalized submodular functions (i.e., $f(\emptyset)=0$).
The subdifferential $\partial_{f}(Y)$ of a submodular set function $f:2^{V}\to\mathbb{R}$ for a set $Y\subseteq V$ is defined [9] analogously to the subdifferential of a continuous convex function: $$\partial_{f}(Y)=\{y\in\mathbb{R}^{n}:\;f(X)-y(X)\geq f(Y)-y(Y)\;\text{ for all }X\subseteq V\}$$ (1) For a vector $x\in\mathbb{R}^{V}$ and $X\subseteq V$, we write $x(X)=\sum_{j\in X}x(j)$ — in this case, we say that $x$ is a normalized modular function. We shall denote a subgradient at $Y$ by $h_{Y}\in\partial_{f}(Y)$. The extreme points of $\partial_{f}(Y)$ may be computed via a greedy algorithm: Let $\sigma$ be a permutation of $V$ that assigns the elements in $Y$ to the first $|Y|$ positions ($\sigma(i)\in Y$ if and only if $i\leq|Y|$). Each such permutation defines a chain with elements $S_{0}^{\sigma}=\emptyset$, $S^{\sigma}_{i}=\{\sigma(1),\sigma(2),\dots,\sigma(i)\}$ and $S^{\sigma}_{|Y|}=Y$. This chain defines an extreme point $h^{\sigma}_{Y}$ of $\partial_{f}(Y)$ with entries $$\displaystyle h^{\sigma}_{Y}(\sigma(i))=f(S^{\sigma}_{i})-f(S^{\sigma}_{i-1}).$$ (2) Surprisingly, we can also define superdifferentials $\partial^{f}(Y)$ of a submodular function [22, 18] at $Y$: $$\partial^{f}(Y)=\{y\in\mathbb{R}^{n}:\;f(X)-y(X)\leq f(Y)-y(Y)\;\text{ for all }X\subseteq V\}$$ (3) We denote a generic supergradient at $Y$ by $g_{Y}$. It is easy to show that the polyhedron $\partial^{f}(Y)$ is non-empty. We define three special supergradients $\hat{g}_{Y}$ (“grow”), $\check{g}_{Y}$ (“shrink”) and $\bar{g}_{Y}$ as follows [18]: $$\hat{g}_{Y}(j)=\begin{cases}f(j\mid V\setminus\{j\})&\text{ for }j\in Y\\ f(j\mid Y)&\text{ for }j\notin Y\end{cases}$$ $$\check{g}_{Y}(j)=\begin{cases}f(j\mid Y\setminus\{j\})&\text{ for }j\in Y\\ f(j\mid\emptyset)&\text{ for }j\notin Y\end{cases}$$ $$\bar{g}_{Y}(j)=\begin{cases}f(j\mid V\setminus\{j\})&\text{ for }j\in Y\\ f(j\mid\emptyset)&\text{ for }j\notin Y\end{cases}$$ For a monotone submodular function, i.e., a function satisfying $f(A)\leq f(B)$ for all $A\subseteq B\subseteq V$, the sub- and supergradients defined here are nonnegative. 4 The discrete MM framework With the above semigradients, we can define a generic MM algorithm. In each iteration, the algorithm optimizes a modular approximation formed via the current solution $Y$. For minimization, we use an upper bound $$m^{g_{Y}}(X)=f(Y)+g_{Y}(X)-g_{Y}(Y)\;\geq f(X),$$ (4) and for maximization a lower bound $$m_{h_{Y}}(X)=f(Y)+h_{Y}(X)-h_{Y}(Y)\;\leq f(X).$$ (5) Both these bounds are tight at the current solution, satisfying $m^{g_{Y}}(Y)=m_{h_{Y}}(Y)=f(Y)$. In almost all cases, optimizing the modular approximation is much faster than optimizing the original cost function $f$. Algorithm 1 shows our discrete MM scheme for maximization (MMax) [and minimization (MMin)], for both constrained and unconstrained settings. Since we are minimizing a tight upper bound, or maximizing a tight lower bound, the algorithm must make progress. Lemma 4.1. Algorithm 1 monotonically improves the objective function value for Problems 1 and 2 at every iteration, as long as a linear function can be exactly optimized over $\mathcal{C}$. Proof. By definition, it holds that $f(X^{t+1})\leq m^{g_{X^{t}}}(X^{t+1})$.
Since $X^{t+1}$ minimizes $m^{g_{X^{t}}}$, it follows that $$f(X^{t+1})\leq m^{g_{X^{t}}}(X^{t+1})\leq m^{g_{X^{t}}}(X^{t})=f(X^{t}).$$ (6) The observation that Algorithm 1 monotonically increases the objective of maximization problems follows analogously. ∎ Contrary to standard continuous subgradient descent schemes, Algorithm 1 produces a feasible solution at each iteration, thereby circumventing any rounding or projection steps that might be challenging under certain types of constraints. In addition, it is known that for relaxed instances of our problems, subgradient descent methods can suffer from slow convergence [2]. Nevertheless, Algorithm 1 still relies on the choice of the semigradients defining the bounds. Therefore, we next analyze the effect of certain choices of semigradients. 5 Submodular function minimization For minimization problems, we use MMin with the supergradients $\hat{g}_{X},\check{g}_{X}$ and $\bar{g}_{X}$. In both the unconstrained and constrained settings, this yields a number of new approaches to submodular minimization. 5.1 Unconstrained Submodular Minimization We begin with unconstrained minimization, where $\mathcal{C}=2^{V}$ in Problem 1. Each of the three supergradients yields a different variant of Algorithm 1, and we will call the resulting algorithms MMin-I, II and III, respectively. We make one more assumption: of the minimizing arguments in Step 4 of Algorithm 1, we always choose a set of minimum cardinality. MMin-I is very similar to the algorithms proposed in [23]. Those authors, however, decompose $f$ and explicitly represent graph-representable parts of the function $f$. We do not require or consider such a restriction here. Let us define the sets $A=\{j:f(j|\emptyset)<0\}$ and $B=\{j:f(j|V\setminus\{j\})\leq 0\}$. Submodularity implies that $A\subseteq B$, and this allows us to define a lattice $\mathcal{L}=[A,B]$ (containing all sets $S$ satisfying $A\subseteq S\subseteq B$) whose least element is the set $A$ and whose greatest element is the set $B$. This sublattice $\mathcal{L}$ of $[\emptyset,V]$ retains all minimizers $X^{*}$ (i.e., $A\subseteq X^{*}\subseteq B$ for all $X^{*}$): Lemma 5.1. [9] Let $\mathcal{L}^{*}$ be the lattice of the global minimizers of a submodular function $f$. Then $\mathcal{L}^{*}\subseteq\mathcal{L}$, where we use $\subseteq$ to denote a sublattice. Lemma 5.1 has been used to prune down the search space of the minimum norm point algorithm from the power set of $V$ to a smaller lattice [2, 10]. Indeed, $A$ and $B$ may be obtained by using MMin-III: Lemma 5.2. With $X^{0}=\emptyset$ and $X^{0}=V$, MMin-III returns the sets $A$ and $B$, respectively. Initialized by an arbitrary $X^{0}$, MMin-III converges to $(X^{0}\cap B)\cup A$. Proof. When using $X^{0}=\emptyset$, we obtain $X^{1}=\operatorname*{argmin}_{X}f(\emptyset)+\sum_{j\in X}f(j)=A$. Since $A\subseteq B$, the algorithm will converge to $X^{1}=A$. At this point, no more elements will be added, since for all $i\notin A$ we have $\bar{g}_{X^{1}}(i)=f(i\mid\emptyset)>0$. Moreover, the algorithm will not remove any elements: for all $i\in A$, it holds that $\bar{g}_{X^{1}}(i)=f(i\mid V\setminus i)\leq f(i)\leq 0$. By a similar argumentation, the initialization $X^{0}=V$ will lead to $X^{1}=B$, where the algorithm terminates. If we start with any arbitrary $X^{0}$, MMin-III will remove the elements $j$ with $f(j|V\setminus j)>0$ and add the elements $j$ with $f(j|\emptyset)<0$.
Hence it will add the elements in $A$ that are not in $X^{0}$ and remove those element from $X^{0}$ that are not in $B$. Let the resulting set be $X^{1}$. As before, for all $i\in A$, it holds that $\bar{g}_{X^{1}}(i)=f(i\mid V\setminus i)\leq f(i)\leq 0$, so these elements will not be removed in any possible subsequent iteration. The elements $i\in X^{1}\setminus A$ were not removed, so $f(i\mid V\setminus i)\leq 0$. Hence, no more elements will be removed after the first iteration. Similarly, no elements will be added since for all $i\notin X^{1}$, it holds that $f(i\mid\emptyset)\geq f(i\mid V\setminus i)>0$. ∎ Lemma 5.2 implies that MMin-III effectively provides a contraction of the initial lattice to $\mathcal{L}$, and, if $X^{0}$ is not in $\mathcal{L}$, it returns a set in $\mathcal{L}$. Henceforth, we therefore assume that we start with a set $X^{0}\in\mathcal{L}$. While the known lattice $\mathcal{L}$ has proven useful for warm-starts, MMin-I and II enable us to prune $\mathcal{L}$ even further. Let $A_{+}$ be the set obtained by starting MMin-I at $X^{0}=\emptyset$, and $B_{+}$ be the set obtained by starting MMin-II at $X^{0}=V$. This yields a new, smaller sublattice $\mathcal{L}_{+}=[A_{+},B_{+}]$ that retains all minimizers: Theorem 5.3. For any minimizer $X^{*}\in\mathcal{L}$, it holds that $A\subseteq A_{+}\subseteq X^{*}\subseteq B_{+}\subseteq B$. Hence $\mathcal{L}^{*}\subseteq\mathcal{L}_{+}\subseteq\mathcal{L}$. Furthermore, when initialized with $X^{0}=\emptyset$ and $X^{0}=V$, respectively, both MMin-I and II converge in $O(n)$ iterations to a local minimum of $f$. By a local minimum, we mean a set $X$ that satisfies $f(X)\leq f(Y)$ for any set $Y$ that differs from $X$ by a single element. We point out that Theorem 5.3 generalizes part of Lemma 3 in [23]. For the proof, we build on the following Lemma: Lemma 5.4. Every iteration of MMin-I can be written as $X^{t+1}=X^{t}\cup\{j:f(j|X^{t})<0\}$. Similarly, every iteration of MMin-II can be expressed as $X^{t+1}=X^{t}\backslash\{j:f(j|X^{t}\setminus j)>0\}$. Proof. (Lemma 5.4) Throughout this paper, we assume that we select only the minimal minimizer of the modular function at every step. In other words, we do not choose the elements that have zero marginal cost. We observe that in iteration $t+1$ of MMin-I, we add the elements $i$ with $\hat{g}_{X^{t}}(i)<0$, i.e., $X^{t+1}=X^{t}\cup\{j:f(j|X^{t})<0\}$. No element will ever be removed, since $\hat{g}_{X^{t}}(i)=f(i\mid V\setminus i)\leq f(i\mid X^{t-1})\leq 0$. If we start with $X^{0}=\emptyset$, then after the first iteration, it holds that $X^{1}=\operatorname*{argmin}_{X}f(\emptyset)+\sum_{j\in X}f(j)$. Hence $X^{1}=A$. MMin-I terminates when reaching a set $A_{+}$, where $f(j|A_{+})\geq 0$, for all $j\notin A_{+}$. The analysis of MMin-II is analogous. In iteration $t+1$, we remove the elements $i$ with $\check{g}_{X^{t}}(i)>0$, i.e., $X^{t+1}=X^{t}\backslash\{j:f(j|X^{t}-j)>0\}$. Similarly to the argumentation above, MMin-II never adds any elements. If we begin with $X^{0}=V$, then $X^{1}=\mbox{arg min}_{X}f(V)+\sum_{j\in V\backslash X}f(j|V-\{j\})$, and therefore $X^{1}=B$. MMin-II terminates with a set $B_{+}$. ∎ Now we can prove Theorem 5.3. Proof. (Thm. 5.3) Since, by Lemma 5.4, MMin-I only adds elements and MMin-II only removes elements, at least one in each iteration, both algorithms terminate after $O(n)$ iterations. Let us now turn to the relation of $X^{*}$ to $A$ and $B$. 
Since $f(i)<0$ for all $i\in A$, the set $X^{1}=A$ found in the first iteration of MMin-I must be a subset of $X^{*}$. Consider any subset $X^{t}\subseteq X^{*}$. Any element $j$ for which $f(j\mid X^{t})<0$ must be in $X^{*}$ as well, because by submodularity, $f(j\mid X^{*})\leq f(j\mid X^{t})<0$. This means $f(X^{*}\cup j)<f(X^{*})$, which would otherwise contradict the optimality of $X^{*}$. The set of such $j$ is exactly $X^{t+1}$, and therefore $X^{t+1}\subseteq X^{*}$. This induction shows that MMin-I, whose first solution is $A\subseteq X^{*}$, always returns a subset of $X^{*}$. Analogously, $B\supseteq X^{*}$, and MMin-II only removes elements $j\notin X^{*}$. Finally, we argue that $A_{+}$ is a local minimum; the proof for $B_{+}$ is analogous. Algorithm MMin-I generates a chain $\emptyset=X^{0}\subseteq X^{1}\subseteq X^{2}\cdots\subseteq A_{+}=X^{T}$. For any $t\leq T$, consider $j\in X^{t}\setminus X^{t-1}$. Submodularity implies that $f(j|A_{+}\setminus j)\leq f(j|X^{t-1})<0$. The last inequality follows from the fact that $j$ was added in iteration $t$. Therefore, removing any $j\in A_{+}$ will increase the cost. Regarding the elements $i\notin A_{+}$, we observe that MMin-I has terminated, which implies that $f(i\mid A_{+})\geq 0$. Hence, adding $i$ to $A_{+}$ will not improve the solution, and $A_{+}$ is a local minimum. ∎ Theorem 5.3 has a number of nice implications. First, it provides a tighter bound on the lattice of minimizers of the submodular function $f$ that, to the best of our knowledge, has not been used or mentioned before. The sets $A_{+}$ and $B_{+}$ obtained above are guaranteed to be supersets and subsets of $A$ and $B$, respectively, as illustrated in Figure 2. This means we can start any algorithm for submodular minimization from the lattice $\mathcal{L}_{+}$ instead of the initial lattice $2^{V}$ or $\mathcal{L}$. When using an algorithm whose running time is a high-order polynomial of $|V|$, any reduction of the ground set $V$ is beneficial. Second, each iteration of MMin takes linear time. Therefore, its total running time is $O(n^{2})$. Third, Theorem 5.3 states that both MMin-I and II converge to a local minimum. This may be counter-intuitive if one considers that each algorithm either only adds or only removes elements. In consequence, a local minimum of a submodular function can be obtained in $O(n^{2})$, a fact that is of independent interest and that does not hold for local maximizers [8]. The following example illustrates that $\mathcal{L}_{+}$ can be a strict subset of $\mathcal{L}$ and therefore provides non-trivial pruning. Let $w_{1},w_{2}\in\mathbb{R}^{V}$, $w_{1}\geq 0$ be two vectors, each defining a linear (modular) function. Then the function $f(X)=\sqrt{w_{1}(X)}+w_{2}(X)$ is submodular. Specifically, let $w_{1}=[3,9,17,14,14,10,16,4,13,2]$ and $w_{2}=[-9,4,6,-1,10,-4,-6,-1,2,-8]$. Then we obtain $\mathcal{L}$ defined by $A=[1,6,7,10]$ and $B=[1,4,6,7,8,10]$. The tightened sublattice contains exactly the minimizer: $A_{+}=B_{+}=X^{*}=[1,6,7,8,10]$. As a refinement to Theorem 5.3, we can show that MMin-I and MMin-II converge to the local minima of lowest and highest cardinality, respectively. Lemma 5.5. The set $A_{+}$ is the smallest local minimum of $f$ (by cardinality), and $B_{+}$ is the largest. Moreover, every local minimum $Z$ is in $\mathcal{L}_{+}$: $Z\in\mathcal{L}_{+}$ for every local minimum $Z$. Proof. The proof proceeds analogously to the proof of Theorem 5.3. 
Let $Y_{s}$ be the local minimum of smallest cardinality, and $Y_{\ell}$ the largest one. First, we note that $X^{0}=\emptyset\subseteq Y_{s}$. For induction, assume that $X^{t}\subseteq Y_{s}$. For contradiction, assume there is an element $j\in X^{t+1}$ that is not in $Y_{s}$. Since $j\in X^{t+1}\setminus X^{t}$, it holds by construction that $f(j\mid Y_{s})\leq f(j\mid X^{t})<0$, implying that $f(Y_{s}\cup j)<f(Y_{s})$. This contradicts the local optimality of $Y_{s}$, and therefore it must hold that $X^{t+1}\subseteq Y_{s}$. Consequently, $A_{+}\subseteq Y_{s}$. But $A_{+}$ is itself a local minimum, and hence equality holds. The result for $B_{+}$ follows analogously. By the same argumentation as above for $Y_{s}$ and $Y_{\ell}$, we conclude that each local minimum $Z$ satisfies $A_{+}\subseteq Z\subseteq B_{+}$, and therefore $Z\in\mathcal{L}_{+}\subseteq\mathcal{L}$. ∎ As a corollary, Lemma 5.5 implies that if a submodular function has a unique local minimum, MMin-I and II must find this minimum, which is a global one. In the following we consider two extensions of MMin-I and II. First, we analyze an algorithm that alternates between MMin-I and MMin-II. While such an algorithm does not provide much benefit when started at $X^{0}=\emptyset$ or $X^{0}=V$, we see that with a random initialization $X^{0}=R$, the alternation ensures convergence to a local minimum. Second, we address the question of which supergradients to select in general. In particular, we show that the supergradients $\hat{g}$ and $\check{g}$ subsume alternative supergradients and provide the tightest results with MMin. Hence, our results are tight. Alternating MMin-I and II and arbitrary initializations. Instead of running only one of MMin-I and II, we can run one until it stops and then switch to the other. Assume we initialize both algorithms with a random set $X^{0}=R\in\mathcal{L}_{+}$. By Theorem 5.3, we know that MMin-I will return a superset $R^{1}\supseteq R$ (no element will be removed because all removable elements are not in $B$, and $R\subset B$ by assumption). When MMin-I terminates, it holds that $\hat{g}_{R^{1}}(j)=f(j|R^{1})\geq 0$ for all $j\notin R^{1}$, and therefore $R^{1}$ cannot be increased using $\hat{g}_{R^{1}}$. We will call such a set an I-minimum. Similarly, MMin-II returns a set $R_{1}\subseteq R$ from which, considering that $\check{g}_{R_{1}}(j)=f(j|R_{1}\setminus j)\leq 0$ for all $j\in R_{1}$, no elements can be removed. We call such a non-decreasable set a D-minimum. Every local minimum is both an I-minimum and a D-minimum. We can apply MMin-II to the I-minimum $R^{1}$ returned by MMin-I. Let us call the resulting set $R^{2}$. Analogously, applying MMin-I to $R_{1}$ yields $R_{2}\supseteq R_{1}$. Lemma 5.6. The sets $R_{2}$ and $R^{2}$ are local optima. Furthermore, $R_{1}\subseteq R_{2}\subseteq R^{2}\subseteq R^{1}$. Proof. It is easy to see that $A\subseteq R_{1}\subseteq B$, and $A\subseteq R^{1}\subseteq B$. By Lemma 5.4, MMin-I applied to $R_{1}$ will only add elements, and MMin-II on $R^{1}$ will only remove elements. Since $R^{1}$ is an I-minimum, adding an element $j\in V\setminus R^{1}$ to any set $X\subset R^{1}$ never helps, and therefore $R^{1}$ contains all of $R_{1}$, $R_{2}$ and $R^{2}$. Similarly, $R_{1}$ is contained in $R_{2}$, $R^{2}$ and $R^{1}$. In consequence, it suffices to look at the contracted lattice $[R_{1},R^{1}]$, and any local minimum in this sublattice is a local minimum on $[\emptyset,V]$.
Theorem 5.3 applied to the sublattice $[R_{1},R^{1}]$ (and the submodular function restricted to the sublattice) yields the inclusion $R_{2}\subseteq R^{2}$, so $R_{1}\subseteq R_{2}\subseteq R^{2}\subseteq R^{1}$, and both $R_{2}$ and $R^{2}$ are local minima. ∎ The following lemma provides a more general view. Lemma 5.7. Let $S_{1}\subseteq S^{1}$ be such that $S_{1}$ is an I-minimum and $S^{1}$ is a D-minimum. Then there exist local minima $S_{2}\subseteq S^{2}$ in $[S_{1},S^{1}]$ such that initializing with any $X^{0}\in[S_{1},S^{1}]$, an alternation of MMin-I and II converges to a local minimum in $[S_{2},S^{2}]$, and $$\displaystyle\min_{X\in[S_{1},S^{1}]}f(X)=\min_{X\in[S_{2},S^{2}]}f(X).$$ (7) Proof. Let $S_{2},S^{2}$ be the smallest and largest local minima within $[S_{1},S^{1}]$. By the same argumentation as for Lemma 5.6, using $X^{0}\in[S_{1},S^{1}]$ leads to a local minimum within $[S_{2},S^{2}]$. Since by definition all local optima in $[S_{1},S^{1}]$ are within $[S_{2},S^{2}]$, the global minimum within $[S_{1},S^{1}]$ will also be in $[S_{2},S^{2}]$. ∎ The above lemmas have a number of implications for minimization algorithms. First, many of the properties for initializing with $V$ or the empty set can be transferred to arbitrary initializations. In particular, the succession of MMin-I and II will terminate in $O(n^{2})$ iterations, regardless of what $X^{0}$ is. Second, Lemmas 5.6 and 5.7 provide useful pruning opportunities: we can prune down the initial lattice to $[R_{2},R^{2}]$ or $[S_{2},S^{2}]$, respectively. In particular, if any global optimizer of $f$ is contained in $[S_{1},S^{1}]$, it will also be contained in $[S_{2},S^{2}]$. Choice of supergradients. We close this section with a remark about the choice of supergradients. The following Lemma states how $\hat{g}_{X}$ and $\check{g}_{X}$ subsume alternative choices of supergradients and MMin-I and II lead to the tightest results possible. Lemma 5.8. Initialized with $X^{0}=\emptyset$, Algorithm 1 will converge to a subset of $A_{+}$ with any choice of supergradients. Initialized with $X^{0}=V$, the algorithm will converge to a superset of $B_{+}$ with any choice of supergradients. If $X^{0}$ is a local minimum, then the algorithm will not move with any supergradient. The proof of Lemma 5.8 is very similar to the proof of Theorem 5.3. 5.2 Constrained submodular minimization MMin straightforwardly generalizes to constraints more complex than $\mathcal{C}=2^{V}$, and Theorem 5.3 still holds for more general lattices or ring family constraints. Beyond lattices, MMin applies to any set of constraints $\mathcal{C}$ as long as we have an efficient algorithm at hand that minimizes a nonnegative modular cost function over $\mathcal{C}$. This subroutine can even be approximate. Such algorithms are available for cardinality bounds, independent sets of a matroid and many other combinatorial constraints such as trees, paths or cuts. As opposed to unconstrained submodular minimization, almost all cases of constrained submodular minimization are very hard [44, 21, 12], and admit at most approximate solutions in polynomial time. The next theorem states an upper bound on the approximation factor achieved by MMin-I for nonnegative, nondecreasing cost functions. An important ingredient in the bound is the curvature [5] of a monotone submodular function $f$, defined as $$\displaystyle\kappa_{f}=1-\min\nolimits_{j\in V}f(j\mid V\backslash j)\,/\,f(j)$$ (8) Theorem 5.9. Let $X^{*}\in\operatorname*{argmin}_{X\in\mathcal{C}}f(X)$. 
The solution $\widehat{X}$ returned by MMin-I satisfies $$\displaystyle f(\widehat{X})\leq\frac{|X^{*}|}{1+(|X^{*}|-1)(1-\kappa_{f})}f(X^{*})\leq\frac{1}{1-\kappa_{f}}f(X^{*})$$ If the minimization in Step 4 is done with approximation factor $\beta$, then $f(\widehat{X})\leq\beta/(1-\kappa_{f})f(X^{*})$. Before proving this result, we remark that a similar, slightly looser bound was shown for cuts in [22], by using a weaker notion of curvature. Note that the bound in Theorem 5.9 is at most $\frac{n}{1+(n-1)(1-\kappa_{f})}$, where $n=|V|$ is the dimension of the problem. Proof. We will use the shorthand $g\triangleq\hat{g}_{\emptyset}$. To prove Theorem 5.9, we use the following result shown in [20]: $$f(\widehat{X})\leq\frac{g(X^{*})/f(i)}{1+(1-\kappa_{f})(g(X^{*})/f(i)-1)}f(X^{*})$$ (9) for any $i\in V$. We now transfer this result to curvature. To do so, we use $i^{\prime}\in\arg\max_{i\in V}f(i)$, so that $g(X^{*})=\sum_{j\in X^{*}}f(j)\leq|X^{*}|f(i^{\prime})$. Observing that the function $p(x)=\frac{x}{1+(1-\kappa_{f})(x-1)}$ is increasing in $x$ yields that $$\displaystyle f(\widehat{X})\leq\frac{|X^{*}|}{1+(1-\kappa_{f})(|X^{*}|-1)}f(X^{*}).$$ (10) ∎ For problems where $\kappa_{f}<1$, Theorem 5.9 yields a constant approximation factor and refines bounds for constrained minimization that are given in [12, 44]. To our knowledge, this is the first curvature-dependent bound for this general class of minimization problems. A class of functions with $\kappa_{f}=1$ are matroid rank functions, implying that these functions are difficult instances for the MMin algorithms. But several classes of functions occurring in applications have more benign curvature. For example, concave over modular functions were used in [31, 22]. These comprise, for instance, functions of the form $f(X)=(w(X))^{a}$, for some $a\in[0,1]$ and a nonnegative weight vector $w$, whose curvature is $\kappa_{f}\approx 1-a(\frac{\min_{j}w(j)}{w(V)})^{1-a}>0$. A special case is $f(X)=|X|^{a}$, with curvature $\kappa_{f}=1-an^{a-1}$, or $f(X)=\log(1+w(X))$ satisfying $\kappa_{f}\approx 1-\frac{\min_{j}w(j)}{w(V)}$. The bounds of Theorem 5.9 hold after the first iteration. Nevertheless, empirically we often found that for problem instances that are not worst-case, subsequent iterations can improve the solution substantially. Using Theorem 5.9, we can bound the number of iterations the algorithm will take. To do so, we assume an $\eta$-approximate version, where we proceed only if $f(X^{t+1})\leq(1-\eta)f(X^{t})$ for some $\eta>0$. In practice, the algorithm usually terminates after 5 to 10 iterations for an arbitrarily small $\eta$. Lemma 5.10. MMin-I runs in $O(\frac{1}{\eta}T\log\frac{n}{1+(n-1)(1-\kappa_{f})})$ time, where $T$ is the time for minimizing a modular function subject to $X\in\mathcal{C}$. Proof. At the end of the first iteration, we obtain a set $X^{1}$ such that $f(X^{1})\leq\frac{n}{1+(n-1)(1-\kappa_{f})}f(X^{*})$. The $\eta$-approximate assumption implies that $f(X^{t+1})\leq(1-\eta)f(X^{t})\leq(1-\eta)^{t}f(X^{1})$. Using that $\log(1-\eta)\leq-\eta$ and Theorem 5.9, we see that the algorithm terminates after at most $O(\frac{1}{\eta}\log\frac{n}{1+(n-1)(1-\kappa_{f})})$ iterations. ∎ 5.3 Experiments We will next see that, apart from its theoretical properties, MMin is in practice competitive with more complex algorithms. We implement and compare algorithms using Matlab and the SFO toolbox [24].
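Before turning to the experiments, a minimal sketch of the unconstrained MMin-I and MMin-II updates may help fix ideas. This is our own Python illustration (the experiments reported here were run in Matlab with the SFO toolbox); it directly implements the update rules of Lemma 5.4 and is applied to the concave-over-modular example of Section 5.1, with 0-based indices in place of the 1-based labels used in the text.

```python
import math

def mmin_I(f, V):
    """MMin-I (Lemma 5.4): X^{t+1} = X^t union {j : f(j | X^t) < 0}.
    Started from the empty set it returns A_+, the smallest local minimum (Lemma 5.5)."""
    X = set()
    while True:
        add = {j for j in V - X if f(X | {j}) - f(X) < 0}
        if not add:
            return X
        X |= add

def mmin_II(f, V):
    """MMin-II (Lemma 5.4): X^{t+1} = X^t minus {j : f(j | X^t \\ j) > 0}.
    Started from V it returns B_+, the largest local minimum (Lemma 5.5)."""
    X = set(V)
    while True:
        rem = {j for j in X if f(X) - f(X - {j}) > 0}
        if not rem:
            return X
        X -= rem

# f(X) = sqrt(w1(X)) + w2(X), with the weights of the example in Section 5.1.
w1 = [3, 9, 17, 14, 14, 10, 16, 4, 13, 2]
w2 = [-9, 4, 6, -1, 10, -4, -6, -1, 2, -8]
f = lambda X: math.sqrt(sum(w1[i] for i in X)) + sum(w2[i] for i in X)
V = set(range(10))

A_plus, B_plus = mmin_I(f, V), mmin_II(f, V)
print(sorted(A_plus), sorted(B_plus))
# Both print [0, 5, 6, 7, 9], i.e. {1, 6, 7, 8, 10} in the paper's 1-based labels,
# so the lattice L_+ = [A_+, B_+] pins down the minimizer exactly for this instance.
```

Any exact minimization routine can then restrict its search to $[A_{+},B_{+}]$, which is the pruning used as a preprocessing step in the experiments below.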
Unconstrained minimization. We first study the results in Section 5.1 for contracting the lattice of possible minimizers. We measure the size of the new lattices relative to the ground set. Applying MMin-I and II (lattice $\mathcal{L}_{+}$) to Iwata’s test function [10], we observe an average reduction of $99.5\%$ in the lattice. MMin-III (lattice $\mathcal{L}$) obtains only about $60\%$ reduction. Averages are taken for $n$ between $20$ and $120$. In addition, we use concave over modular functions $\sqrt{w_{1}(X)}+\lambda w_{2}(V\backslash X)$ with randomly chosen vectors $w_{1},w_{2}$ in $[0,1]^{n}$ and $n=50$. We also consider the application of selecting limited vocabulary speech corpora. [32, 23] use functions of the form $\sqrt{w_{1}(\Gamma(X))}+w_{2}(V\backslash X)$, where $\Gamma(X)$ is the neighborhood function of a bipartite graph. Here, we choose $n=100$ and random vectors $w_{1}$ and $w_{2}$. For both function classes, we vary $\lambda$ such that the optimal solution $X^{*}$ moves from $X^{*}=\emptyset$ to $X^{*}=V$. The results are shown in Figure 3. In both cases, we observe a significant reduction of the search space. When used as a preprocessing step for the minimum norm point algorithm (MN) [10], this pruned lattice speeds up the MN algorithm accordingly, in particular for the speech data. The dotted lines represent the relative time of MN including the respective preprocessing, taken with respect to MN without preprocessing. Figure 3 also shows the average results over $10$ random choices of weights in both cases. In order to obtain accurate estimates of the timings, we run each experiment $5$ times and take the minimum of these timing values. Constrained minimization. For constrained minimization, we compare MMin-I to two methods: a simple algorithm (MU) that minimizes the upper bound $g(X)=\sum_{i\in X}f(i)$ [12] (this is identical to the first iteration of MMin-I), and a more complex algorithm (EA) that computes an approximation to the submodular polyhedron [13] and in many cases yields a theoretically optimal approximation. MU has the theoretical bounds of Theorem 5.9, while EA achieves a worst-case approximation factor of $O(\sqrt{n}\log n)$. We show two experiments: the theoretical worst-case and average-case instances. Figure 4 illustrates the results. Worst case. We use a very hard cost function [13] $$f(X)=\min\{|X|,|X\cap\bar{R}|+\beta,\alpha\},$$ (11) where $\alpha=n^{1/2+\epsilon}$ and $\beta=n^{2\epsilon}$, and $R$ is a random set such that $|R|=\alpha$. This function is the theoretical worst case. Figure 4 shows results for cardinality lower bound constraints; the results for other, more complex constraints are similar. As $\epsilon$ shrinks, the problem becomes harder. In this case, EA and MMin-I achieve about the same empirical approximation factors, which matches the theoretical guarantee of $n^{1/2-\epsilon}$. Average case. We next compare the algorithms on more realistic functions that occur in applications. Figure 4 shows the empirical approximation factors for minimum submodular-cost spanning tree, bipartite matching, and shortest path. We use four classes of randomized test functions: (1) concave (square root or log) over modular (CM), (2) clustered CM (CCM) of the form $f(X)=\sum_{i=1}^{k}\sqrt{w(X\cap C_{i})}$ for clusters $C_{1},\cdots,C_{k}$, (3) Best Set (BS) functions where the optimal feasible set $R$ is chosen randomly ($f(X)=I(|X\cap R|\geq 1)+\sum_{j\in R\backslash X}w_{j}$), and (4) worst-case-like functions (WC) similar to equation (11).
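As a rough illustration (ours, not the authors' test harness; the cluster layout, seed, and parameter values are arbitrary choices), the clustered concave-over-modular (CCM) class and the worst-case (WC) class of equation (11) might be instantiated as follows.

```python
import math, random

def make_ccm(clusters, w):
    """Clustered concave-over-modular test function: f(X) = sum_i sqrt(w(X & C_i))."""
    return lambda X: sum(math.sqrt(sum(w[j] for j in set(X) & C)) for C in clusters)

def make_worst_case(n, eps, seed=0):
    """Hard instance in the spirit of Eq. (11): f(X) = min{|X|, |X \\ R| + beta, alpha}, |R| = alpha."""
    alpha = max(1, int(round(n ** (0.5 + eps))))
    beta = max(1, int(round(n ** (2 * eps))))
    R = set(random.Random(seed).sample(range(n), alpha))
    return lambda X: min(len(X), len(set(X) - R) + beta, alpha)

rng = random.Random(1)
clusters = [set(range(0, 4)), set(range(4, 8)), set(range(8, 12))]
w = [rng.random() for _ in range(12)]
f_ccm = make_ccm(clusters, w)
f_wc = make_worst_case(40, eps=0.1)
print(round(f_ccm({0, 1, 5}), 3), f_wc({0, 1, 2, 3}))
```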
Functions of type (1) and (2) have been used in speech and computer vision [31, 22, 17] and have reduced curvature ($\kappa_{f}<1$). Functions of type (3) and (4) have $\kappa_{f}=1$. In all four cases, we consider both sparse and dense graphs, with random weight vectors $w$. The plots show averages over $20$ instances of these graphs. For sparse graphs, we consider grid like graphs in the form of square grids, grids with diagonals and cubic grids. For dense graphs, we sparsely connect a few dense cluster subgraphs. For matchings, we restrict ourselves to bipartite graphs, and consider both sparse and dense variants of these. First, we observe that in many cases, MMin clearly outperforms MU. This suggests the practical utility of more than one iteration. Second, despite its simplicity, MMin performs comparably to EA, and sometimes even better. In summary, the experiments suggest that the complex EA only gains on a few worst-case instances, whereas in many (average) cases, MMin yields near-optimal results (factor 1–2). In terms of running time, MMin is definitely preferable: on small instances (for example $n=40$), our Matlab implementation of MMin takes 0.2 seconds, while EA needs about 58 seconds. On larger instances ($n=500$), the running times differ on the order of seconds versus hours. 6 Submodular maximization Just like for minimization, for submodular maximization too we obtain a family of algorithms where each member is specified by a distinct schedule of subgradients. We will only select subgradients that are vertices of the subdifferential, i.e., each subgradient corresponds to a permutation of $V$. For any of those choices, MMax converges quickly. To bound the running time, we assume that we proceed only if we make sufficient progress, i.e., if $f(X^{t+1})\geq(1+\eta)f(X^{t})$. Lemma 6.1. MMax with $X^{0}=\operatorname*{argmax}_{j}f(j)$ runs in time $O(T\log_{1+\eta}n)$, where $T$ is the time for maximizing a modular function subject to $X\in\mathcal{C}$. Proof. Let $X^{*}$ be the optimal solution, then $$f(X^{*})\leq\sum_{i\in X^{*}}f(j)\leq n\max_{j\in V}f(j)=nf(X^{0}).$$ (12) Furthermore, we know that $f(X^{t})\geq(1+\eta)^{t}f(X^{0})$. Therefore, we have reached the maximum function value after at most $(\log n)/\log(1+\eta)$ iterations. ∎ In practice, we observe that MMax terminates within 3-10 iterations. We next consider specific subgradients and their theoretical implications. For unconstrained problems, we assume the submodular function to be non-monotone (the results trivially hold for monotone functions too); for constrained problems, we assume the function $f$ to be monotone nondecreasing. Our results rely on the observation that many maximization algorithms actually compute a specific subgradient and run MMax with this subgradient. To our knowledge, this observation is new. 6.1 Unconstrained Maximization Random Permutation (RA/RP). In iteration $t$, we randomly pick a permutation $\sigma$ that defines a subgradient at $X^{t-1}$, i.e., $X^{t-1}$ is assigned to the first $|X^{t-1}|$ positions. At $X^{0}=\emptyset$, this can be any permutation. Stopping after the first iteration (RP) achieves an approximation factor of $1/4$ in expectation, and $1/2$ for symmetric functions. Making further iterations (RA) only improves the solution. Lemma 6.2. 
When running Algorithm RP with $X^{0}=\emptyset$, it holds after one iteration that $\mathbf{E}(f(X^{1}))\geq\frac{1}{4}f(X^{*})$ if $f$ is a general non-negative submodular function, and $\mathbf{E}(f(X^{1}))\geq\frac{1}{2}f(X^{*})$ if $f$ is symmetric. Proof. Each permutation has the same probability $1/n!$ of being chosen. Therefore, it holds that $$\displaystyle\mathbf{E}(f(X^{1}))$$ $$\displaystyle=\mathbf{E}_{\sigma}(\max_{X\subseteq V}h^{\sigma}_{\emptyset}(X))$$ (13) $$\displaystyle=\frac{1}{n!}\sum_{\sigma}\max_{X\subseteq V}h^{\sigma}_{% \emptyset}(X)$$ (14) Let $\emptyset\subseteq S^{\sigma}_{1}\subseteq S^{\sigma}_{2}\cdots S^{\sigma}_{n}=V$ be the chain corresponding to a given permutation $\sigma$. We can bound $$\max_{X\subseteq V}h^{\sigma}_{\emptyset}(X)\geq\sum_{k=0}^{n}\frac{\binom{n}{% k}}{2^{n}}f(S^{\sigma}_{k})$$ (15) because $\max_{X\subseteq V}h^{\sigma}_{\emptyset}(X)\geq f(S^{\sigma}_{k}),\forall k$ and $\sum_{k=0}^{n}\frac{\binom{n}{k}}{2^{n}}=1$. Together, Equations (14) and (15) imply that $$\displaystyle\mathbf{E}(f(X^{1}))$$ $$\displaystyle\geq\mathbf{E}_{\sigma}(\max_{X\subseteq V}h^{\sigma}_{\emptyset}% (X))$$ (16) $$\displaystyle=\sum_{\sigma}\sum_{k=0}^{n}\frac{\binom{n}{k}}{2^{n}}f(S^{\sigma% }_{k})\frac{1}{n!}$$ (17) $$\displaystyle=\sum_{k=0}^{n}\frac{\binom{n}{k}}{n!2^{n}}\sum_{\sigma}f(S^{% \sigma}_{k})$$ (18) $$\displaystyle=\sum_{k=0}^{n}\frac{\binom{n}{k}}{n!2^{n}}k!(n-k)!\sum_{S:|S|=k}% f(S)$$ (19) $$\displaystyle=\sum_{S}\frac{f(S)}{2^{n}}$$ (20) $$\displaystyle=\mathbf{E}_{S}(f(S))$$ (21) By $\mathbf{E}_{S}(f(S))$, we denote the expected function value when the set $S$ is sampled uniformly at random, i.e., each element is included with probability $1/2$. [8] shows that $\mathbf{E}_{S}(f(S))\geq\frac{1}{4}f(X^{*})$. For symmetric submodular functions, the factor is $\frac{1}{2}$. ∎ Randomized local search (RLS). Instead of using a completely random subgradient as in RA, we fix the positions of two elements: the permutation must satisfy that $\sigma^{t}(|X^{t}|+1)\in\operatorname*{argmax}_{j}f(j|X^{t})$ and $\sigma^{t}(|X^{t}|-1)\in\operatorname*{argmin}_{j}f(j|X^{t}\backslash j)$. The remaining positions are assigned randomly. An $\eta$-approximate version of MMax with such subgradients returns an $\eta$-approximate local maximum that achieves an improved approximation factor of $1/3-\eta$ in $O(\frac{n^{2}\log n}{\eta}$) iterations. Lemma 6.3. Algorithm RLS returns a local maximum $X$ that satisfies $\max\{f(X),f(V\backslash X)\}\geq(\frac{1}{3}-\eta)f(X^{*})$ in $O(\frac{n^{2}\log n}{\eta}$) iterations. Proof. At termination ($t=T$), it holds that $\max_{j}f(j|X^{T})\leq 0$ and $\min_{j}f(j|X^{T}\setminus j)\geq 0$; this implies that the set $X^{t}$ is local optimum. To show local optimality, recall that the subgradient $h^{\sigma^{T}}_{X^{T}}$ satisfies $h^{\sigma^{T}}_{X^{T}}(X^{T})=f(X^{T})$, and $h^{\sigma^{T}}_{X^{T}}(Y)\geq h^{\sigma^{T}}_{X^{T}}(X^{T})$ for all $Y\subseteq V$. Therefore, it must hold that $max_{j\notin X^{T}}f(j|X^{T})=\max_{j\notin X^{T}}h^{\sigma^{T}}_{X^{T}}(j)\leq 0$, and $\min_{j\in X^{T}}f(j|X^{T}\backslash j)=h^{\sigma^{T}}_{X^{T}}(j)\geq 0$, which implies that the set $X^{T}$ is a local maximum. We now use a result by [8] showing that if a set $X$ is a local optimum, then $f(X)\geq\frac{1}{3}f(X^{*})$ if $f$ is a general non-negative submodular set function and $f(X)\geq\frac{1}{2}f(X^{*})$ if $f$ is a symmetric submodular function. 
If the set is an $\eta$-approximate local optimum, we obtain a $\frac{1}{3}-\eta$ approximation [8]. A complexity analysis similar to Theorem 6.1 reveals that the worst case complexity of this algorithm is $O(\frac{n^{2}\log n}{\eta})$. ∎ Note that even finding an exact local maximum is hard for submodular functions [8], and therefore it is necessary to resort to an $\eta$-approximate version, which converges to an $\eta$-approximate local maximum. Deterministic local search (DLS). A completely deterministic variant of RLS defines the permutation by an entirely greedy ordering. We define permutation $\sigma^{t}$ used in iteration $t$ via the chain $\emptyset=S^{\sigma^{t}}_{0}\subset S^{\sigma^{t}}_{1}\subset\ldots\subset S^{% \sigma^{t}}_{n}$ it will generate. The initial permutation is $\sigma^{0}(j)=\operatorname*{argmax}_{k\notin S^{\sigma^{0}}_{j-1}}f(k|S^{% \sigma^{0}}_{j-1})$ for $j=1,2,\ldots$. In subsequent iterations $t$, the permutation $\sigma^{t}$ is $$\displaystyle\sigma^{t}(j)=\begin{cases}\sigma^{t-1}(j)&\text{ if }t\text{ % even, }j\in X^{t-1}\\ \operatorname*{argmax}_{k}f(k|S^{\sigma^{t}}_{j-1})&\text{ if }t\text{ even, }% j\notin X^{t-1}\\ \operatorname*{argmin}_{k}f(k|S^{\sigma^{t}}_{j+1}\backslash k)&\text{ if }t% \text{ odd, }j\in X^{t-1}\\ \sigma^{t-1}(j)&\text{ if }t\text{ odd, }j\notin X^{t-1}.\end{cases}$$ This schedule is equivalent to the deterministic local search (DLS) algorithm by [8], and therefore achieves an approximation factor of $1/3-\eta$. Bi-directional greedy (BG). The procedures above indicate that greedy and local search algorithms implicitly define specific chains and thereby subgradients. Likewise, the deterministic bi-directional greedy algorithm by [4] induces a distinct permutation of the ground set. It is therefore equivalent to MMax with the corresponding subgradients and achieves an approximation factor of $1/3$. This factor improves that of the local search techniques by removing $\eta$. Moreover, unlike for local search, the $1/3$ approximation holds already after the first iteration. Lemma 6.4. The set $X^{1}$ obtained by Algorithm 1 with the subgradient equivalent to BG satisfies that $f(X)\geq\frac{1}{3}f(X^{*})$. Proof. Given an initial ordering $\tau$, the bi-directional greedy algorithm by [4] generates a chain of sets. Let $\sigma^{\tau}$ denote the permutation defined by this chain, obtainable by mimicking the algorithm. We run MMax with the corresponding subgradient. By construction, the set $S^{\tau}$ returned by the bi-directional greedy algorithm is contained in the chain. Therefore, it holds that $$\displaystyle f(X^{1})$$ $$\displaystyle\geq\max_{X\subseteq V}h^{\sigma^{\tau}}_{\emptyset}(X)$$ (22) $$\displaystyle\geq\max_{k}f(S^{\sigma^{\tau}}_{k})$$ (23) $$\displaystyle\geq f(S^{\tau})$$ (24) $$\displaystyle\geq\frac{1}{3}f(X^{*}).$$ (25) The first inequality follows since the subgradient is tight for all sets in the chain. For the second inequality, we used that $S^{\tau}$ belongs to the chain, and hence $S^{\tau}=S^{\sigma^{\tau}}_{j}$ for some $j$. The last inequality follows from the approximation factor satisfied by $S^{\tau}$ [4]. We can continue the algorithm, using any one of the adaptive schedules above to get a locally optimal solution. This can only improve the solution. ∎ Randomized bi-directional greedy (RG). Like its deterministic variant, the randomized bi-directional greedy algorithm by [4] can be shown to run MMax with a specific subgradient. 
Starting from $\emptyset$ and $V$, it implicitly defines a random chain of subsets and thereby (random) subgradients. A simple analysis shows that this subgradient leads to the best possible approximation factor of $1/2$ in expectation. Like its deterministic counterpart, the Randomized bi-directional Greedy algorithm (RG) by [4] induces a (random) permutation $\sigma^{\tau}$ based on an initial ordering $\tau$. Lemma 6.5. If the subgradient in MMax is determined by $\sigma^{\tau}$, then the set $X^{1}$ after the first iteration satisfies $\mathbf{E}(f(X^{1}))\geq\frac{1}{2}f(X^{*})$, where the expectation is taken over the randomness in $\sigma^{\tau}$. Proof. The permutation $\sigma^{\tau}$ is obtained by a randomized algorithm, but once $\sigma^{\tau}$ is fixed, the remainder of MMax is deterministic. By an argumentation similar to that in the proof of Lemma 6.4, it holds that $$\displaystyle\mathbf{E}(f(X))$$ $$\displaystyle\geq\mathbf{E}(\max_{X}h^{\sigma^{\tau}}_{\emptyset}(X))$$ (26) $$\displaystyle\geq\mathbf{E}(\max_{k}f(S^{\sigma^{\tau}}_{k}))$$ (27) $$\displaystyle\geq\mathbf{E}(f(S^{\sigma^{\tau}}))$$ (28) $$\displaystyle\geq\frac{1}{2}f(X^{*})$$ (29) The last inequality follows from a result in [4]. ∎ 6.2 Constrained Maximization In this final section, we analyze subgradients for maximization subject to the constraint $X\in\mathcal{C}$. Here we assume that $f$ is monotone. An important subgradient results from the greedy permutation $\sigma^{g}$, defined as $$\displaystyle\sigma^{g}(i)\in\operatorname*{argmax}_{j\notin S^{\sigma^{g}}_{i% -1}\text{ and }S^{\sigma^{g}}_{i-1}\cup\{j\}\in\mathcal{C}}f(j|S^{\sigma^{g}}_% {i-1}).$$ (30) This definition might be partial; we arrange any remaining elements arbitrarily. When using the corresponding subgradient $h^{\sigma^{g}}$, we recover a number of approximation results already after one iteration: Lemma 6.6. Using $h^{\sigma^{g}}$ in iteration 1 of MMax yields the following approximation bounds for $X^{1}$: • $\frac{1}{\kappa_{f}}(1-e^{-\kappa_{f}})$, if $\mathcal{C}=\{X\subseteq V:|X|\leq k\}$ • $\frac{1}{p+\kappa_{f}}$, for the intersection $\mathcal{C}\!=\!\cap_{i=1}^{p}\mathcal{I}_{i}$ of $p$ matroids • $\frac{1}{\kappa_{f}}(1-(\frac{K-\kappa_{f}}{K})^{k})$, for any down-monotone constraint $\mathcal{C}$, where $K$ and $k$ are the maximum and minimum cardinality of the maximal feasible sets in $\mathcal{C}$. Proof. We prove the first result for cardinality constraints. The proofs for the matroid and general down-monotone constraints are analogous. By the construction of $\sigma^{g}$, the set $S^{\sigma^{g}}_{k}$ is exactly the set returned by the greedy algorithm. This implies that $$\displaystyle f(X^{1})$$ $$\displaystyle\geq\operatorname*{argmax}_{X:|X|\leq k}h^{\sigma^{g}}_{\emptyset% }(X)$$ (31) $$\displaystyle\geq h^{\sigma^{g}}_{\emptyset}(S^{\sigma^{g}}_{k})$$ (32) $$\displaystyle=f(S^{\sigma^{g}}_{k})$$ (33) $$\displaystyle\geq\frac{(1-e^{-\kappa_{f}})}{\kappa_{f}}f(X^{*}).$$ (34) The last inequality follows from [39, 5]. ∎ A very similar construction of a greedy permutation provides bounds for budget constraints, i.e., $c(S)\triangleq\sum_{i\in S}c(i)\leq B$ for some given nonnegative costs $c$. In particular, define a permutation as: $$\displaystyle\sigma^{g}(i)\in\operatorname*{argmax}_{j\notin S^{\sigma^{g}}_{i% -1},c(S^{\sigma^{g}}_{i-1}\cup\{j\})\leq B}\frac{f(j|S^{\sigma^{g}}_{i-1})}{c(% j)}.$$ (35) The following result then follows from [30, 43]. Lemma 6.7. 
Using $\sigma^{g}$ in MMax under the budget constraints yields: $$\max\{\max_{i:c(i)\leq B}f(i),f(X^{1})\}\geq(1-1/\sqrt{e})f(X^{*}).$$ (36) Let $\sigma^{ijk}$ be a permutation with $i,j,k$ in the first three positions, and the remaining arrangement greedy. Running $O(n^{3})$ restarts of MM yields sets $X_{ijk}$ (after one iteration) with $$\max_{i,j,k\in V}f(X_{ijk})\,\geq(1-1/e)f(X^{*}).$$ (37) The proof is analogous to that of Lemma 6.6. Table 1 lists results for monotone submodular maximization under different constraints. It would be interesting if some of the constrained variants of non-monotone submodular maximization could be naturally subsumed in our framework too. In particular, some recent algorithms [27, 28] propose local search based techniques to obtain constant factor approximations for non-monotone submodular maximization under knapsack and matroid constraints. Unfortunately, these algorithms require swap operations along with inserting and deleting elements. We do not currently know how to phrase these swap operations via our framework and leave this relation as an open problem. While a number of algorithms cannot be naturally seen as an instance of our framework, we show in the following section that any polynomial time approximation algorithm for unconstrained or constrained variants of submodular optimization can be ultimately seen as an instance of our algorithm, via a polynomial-time computable subgradient. 6.3 Generality The correspondences between MMax and maximization algorithms hold even more generally: Theorem 6.8. For any polynomial-time unconstrained submodular maximization algorithm that achieves an approximation factor $\alpha$, there exists a schedule of subgradients (obtainable in polynomial time) that, if used within MMax, leads to a solution with the same approximation factor $\alpha$. The proof relies on the following observation. Lemma 6.9. Any submodular function $f$ satisfies $$\max_{X\in\mathcal{C}}f(X)=\max_{X\in\mathcal{C},h\in\mathcal{P}_{f}}h(X)=\max% _{X\in\mathcal{C},\sigma\in\Sigma}h^{\sigma}_{\emptyset}(X).$$ (38) Lemma 6.9 implies that there exists a permutation (and equivalent subgradient) with which MMax finds the optimal solution in the first iteration. Known hardness results [7] imply that this permutation may not be obtainable in polynomial time. Proof. (Lemma 6.9) The first equality in Lemma 6.9 follows from the fact that any submodular function $f$ can be written as $$f(X)=\max_{h\in P_{f}}h(X).$$ (39) For the second equality, we use the fact that a linear program over a polytope has a solution at one of the extreme points of the corresponding polytope. ∎ We can now prove Theorem 6.8 Proof. (Thm. 6.8) Let $Y$ be the set returned by the approximation algorithm; this set is polynomial-time computable by definition. Let $\tau$ be an arbitrary permutation that places the elements in $Y$ in the first $|Y|$ positions. The subgradient $h^{\tau}$ defined by $\tau$ is a subgradient both for $\emptyset$ and for $Y$. Therefore, using $X^{0}=\emptyset$ and $h^{\tau}$ in the first iteration, we obtain a set $X^{1}$ with $$\displaystyle f(X^{1})\geq h^{\tau}_{\emptyset}(X^{1})\geq h^{\tau}_{\emptyset% }(Y)=f(Y)\geq\alpha f(X^{*}).$$ (40) The equality follows from the fact that $Y$ belongs to the chain of $\tau$. ∎ While the above theorem shows the optimality of MMax in the unconstrained setting, a similar result holds for the constrained case: Corollary 6.10. 
Let $\mathcal{C}$ be any constraint such that a linear function can be exactly maximized over $\mathcal{C}$. For any polynomial-time algorithm for submodular maximization over $\mathcal{C}$ that achieves an approximation factor $\alpha$, there exists a schedule of subgradients (obtainable in polynomial time) that, if used within MMax, leads to a solution with the same approximation factor $\alpha$. The proof of Corollary 6.10 follows directly from the Theorem 6.8. Lastly, we pose the question of selecting the optimal subgradient in each iteration. An optimal subgradient $h$ would lead to a function $m_{h}$ whose maximization yields the largest improvement. Unfortunately, obtaining such an “optimal” subgradient is impossible: Theorem 6.11. The problem of finding the optimal subgradient $\sigma^{OPT}=\operatorname*{argmax}_{\sigma,X\subseteq V}h^{\sigma}_{X^{t}}(X)$ in Step 4 of Algorithm 1 is NP-hard even when $\mathcal{C}=2^{V}$. Given such an oracle, however, MMax using subgradient $\sigma^{OPT}$ returns a global optimizer. Proof. Lemma 6.9 implies that an optimal subgradient at $X^{0}=\emptyset$ or $X^{0}=V$ is a subgradient at an optimal solution. An argumentation as in Equation (40) shows that using this subgradient in MM leads to an optimal solution. Since this would solve submodular maximization (which is NP-hard), it must be NP-hard to find such a subgradient. To show that this holds for arbitrary $X^{t}$ (and correspondingly at every iteration), we use that the submodular subdifferential can be expressed as a direct product between a submodular polyhedron and an anti-submodular polyhedron [9]. Any problem involving an optimization over the sub-differential, can then be expressed as an optimization over a submodular polyhedron (which is a subdifferential at the empty set) and an anti-submodular polyhedron (which is a subdifferential at $V$) [9]. Correspondingly, Equation (38) can be expressed as the sum of two submodular maximization problems. ∎ 6.4 Experiments We now empirically compare variants of MMax with different subgradients. As a test function, we use the objective of [29], $f(X)=\sum_{i\in V}\sum_{j\in X}s_{ij}-\lambda\sum_{i,j\in X}s_{ij}$, where $\lambda$ is a redundancy parameter. This non-monotone function was used to find the most diverse yet relevant subset of objects in a large corpus. We use the objective with both synthetic and real data. We generate $10$ instances of random similarity matrices $\{s_{ij}\}_{ij}$ and vary $\lambda$ from $0.5$ to 1. Our real-world data is the Speech Training data subset selection problem [29] on the TIMIT corpus [11], using the string kernel metric [41] for similarity. We use $20\leq n\leq 30$ so that the exact solution can still be computed with the algorithm of [14]. We compare the algorithms DLS, BG, RG, RLS, RA and RP, and a baseline RS that picks a set uniformly at random. RS achieves a $1/4$ approximation in expectation [8]. For random algorithms, we select the best solution out of 5 repetitions. Figure 5 shows that DLS, BG, RG and RLS dominate. Even though RG has the best theoretical worst-case bounds, it performs slightly poorer than the local search ones and BG. Moreover, MMax with random subgradients (RP) is much better than choosing a set uniformly at random (RS). In general, the empirical approximation factors are much better than the theoretical worst-case bounds. Importantly, the MMax variants are extremely fast, about 200-500 times faster than the exact branch and bound technique of [14]. 
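To make the subgradient constructions of this section concrete, the sketch below (our Python illustration, not the implementation used for the timings above) builds the subdifferential extreme point of Eq. (2) from a permutation and runs MMax with random-permutation subgradients (the RA/RP schedule of Section 6.1) on the diversity objective used in these experiments; the similarity matrix, $\lambda$, and the stopping rule are illustrative choices.

```python
import random

def subgradient_from_permutation(f, sigma):
    """Extreme point of the subdifferential (Eq. (2)): h(sigma[i]) = f(S_i) - f(S_{i-1})."""
    h, S, prev = {}, set(), f(set())
    for j in sigma:
        S.add(j)
        cur = f(S)
        h[j] = cur - prev
        prev = cur
    return h

def mmax_random(f, V, iters=10, seed=0):
    """MMax with random-permutation subgradients (RA/RP): the current solution X occupies
    the first |X| positions of the permutation, the remaining order is random."""
    rng = random.Random(seed)
    X = set()
    for _ in range(iters):
        sigma = sorted(V, key=lambda j: (j not in X, rng.random()))
        h = subgradient_from_permutation(f, sigma)
        X_new = {j for j in V if h[j] > 0}   # maximizes the modular lower bound m_h of Eq. (5)
        if f(X_new) <= f(X):
            break
        X = X_new
    return X

# Diversity objective of Section 6.4: f(X) = sum_{i in V, j in X} s_ij - lam * sum_{i,j in X} s_ij.
n, lam = 20, 0.7
rng = random.Random(3)
s = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        s[i][j] = s[j][i] = rng.random()     # symmetric, nonnegative similarities
f = lambda X: sum(s[i][j] for i in range(n) for j in X) - lam * sum(s[i][j] for i in X for j in X)

best = mmax_random(f, set(range(n)))
print(len(best), round(f(best), 3))
```

Each modular lower bound is tight at the current solution, so, as in Lemma 4.1, the objective can only improve across iterations; the very first step, with a uniformly random permutation, corresponds to the RP estimate analyzed in Lemma 6.2.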
7 Discussion and Conclusions In this paper, we introduced a general MM framework for submodular optimization algorithms. This framework is akin to the class of algorithms for minimizing the difference between submodular functions [37, 17]. In addition, it may be viewed as a special case of a proximal minimization algorithm that uses Bregman divergences derived from submodular functions [19]. To our knowledge this is the first generic and unifying framework of combinatorial algorithms for submodular optimization. An alternative framework relies on relaxing the discrete optimization problem by using a continuous extension (the Lovász extension for minimization and multilinear extension for maximization). Relaxations have been applied to some constrained [16] and unconstrained [2] minimization problems as well as maximization problems [4]. Such relaxations, however, rely on a final rounding step that can be challenging — the combinatorial framework obviates this step. Moreover, our results show that in many cases, it yields good results very efficiently. Acknowledgments: We thank Karthik Mohan, John Halloran and Kai Wei for discussions. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1162606, and by a Google, a Microsoft, and an Intel research award. This material is also based upon work supported in part by the Office of Naval Research under contract/grant number N00014-11-1-068, NSF CISE Expeditions award CCF-1139158 and DARPA XData Award FA8750-12-2-0331, and gifts from Amazon Web Services, Google, SAP, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, FitWave, General Electric, Hortonworks, Intel, Microsoft, NetApp, Oracle, Samsung, Splunk, VMware and Yahoo!. References [1] I. Averbakh and O. Berman. Categorized bottleneck-minisum path problems on networks. Operations Research Letters, 16:291–297, 1994. [2] F. Bach. Learning with Submodular functions: A convex Optimization Perspective. Arxiv, 2011. [3] Y. Boykov and M.P. Jolly. Interactive graph cuts for optimal boundary and region segmentation of objects in n-d images. In ICCV, 2001. [4] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A tight (1/2) linear-time approximation to unconstrained submodular maximization. In FOCS, 2012. [5] M. Conforti and G. Cornuejols. Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics, 7(3):251–274, 1984. [6] A. Delong, O. Veksler, A. Osokin, and Y. Boykov. Minimizing sparse high-order energies by submodular vertex-cover. In In NIPS, 2012. [7] U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 1998. [8] U. Feige, V. Mirrokni, and J. Vondrák. Maximizing non-monotone submodular functions. SIAM J. COMPUT., 40(4):1133–1155, 2007. [9] S. Fujishige. Submodular functions and optimization, volume 58. Elsevier Science, 2005. [10] S. Fujishige and S. Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7:3–17, 2011. [11] J. Garofolo, Fisher Lamel, L., J. W., Fiscus, D. Pallet, and N. Dahlgren. Timit, acoustic-phonetic continuous speech corpus. In DARPA, 1993. [12] G. Goel, C. Karande, P. Tripathi, and L. Wang. Approximability of combinatorial problems with multi-agent submodular cost functions. In FOCS, 2009. [13] M.X. Goemans, N.J.A. Harvey, S. Iwata, and V. Mirrokni. Approximating submodular functions everywhere. In SODA, pages 535–544, 2009. 
[14] B. Goldengorin, G.A. Tijssen, and M. Tso. The maximization of submodular functions: Old and new proofs for the correctness of the dichotomy algorithm. University of Groningen, 1999. [15] D.R. Hunter and K. Lange. A tutorial on MM algorithms. The American Statistician, 2004. [16] S. Iwata and K. Nagano. Submodular function minimization under covering constraints. In In FOCS, pages 671–680. IEEE, 2009. [17] R. Iyer and J. Bilmes. Algorithms for approximate minimization of the difference between submodular functions, with applications. In UAI, 2012. [18] R. Iyer and J. Bilmes. The submodular Bregman and Lovász-Bregman divergences with applications. In NIPS, 2012. [19] R. Iyer, S. Jegelka, and J. Bilmes. Mirror descent like algorithms for submodular optimization. NIPS Workshop on Discrete Optimization in Machine Learning (DISCML), 2012. [20] S. Jegelka. Combinatorial Problems with submodular coupling in machine learning and computer vision. PhD thesis, ETH Zurich, 2012. [21] S. Jegelka and J. A. Bilmes. Approximation bounds for inference using cooperative cuts. In ICML, 2011. [22] S. Jegelka and J. A. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In CVPR, 2011. [23] S. Jegelka, H. Lin, and J. Bilmes. On fast approximate submodular minimization. In NIPS, 2011. [24] A. Krause. SFO: A toolbox for submodular function optimization. JMLR, 11:1141–1144, 2010. [25] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. JMLR, 9:235–284, 2008. [26] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. arXiv preprint arXiv:1207.6083, 2012. [27] J. Lee, V.S. Mirrokni, V. Nagarajan, and M. Sviridenko. Non-monotone submodular maximization under matroid and knapsack constraints. In STOC, pages 323–332. ACM, 2009. [28] Jon Lee, Maxim Sviridenko, and Jan Vondrák. Submodular maximization over multiple matroids via generalized exchange properties. In APPROX, 2009. [29] H. Lin and J. Bilmes. How to select a good training-data subset for transcription: Submodular active selection for sequences. In Interspeech, 2009. [30] H. Lin and J. Bilmes. Multi-document summarization via budgeted maximization of submodular functions. In NAACL, 2010. [31] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In ACL, 2011. [32] H. Lin and J. Bilmes. Optimal selection of limited vocabulary speech corpora. In Interspeech, 2011. [33] S Thomas McCormick. Submodular function minimization. Discrete Optimization, 12:321–391, 2005. [34] G.J. McLachlan and T. Krishnan. The EM algorithm and extensions. New York, 1997. [35] K. Nagano, Y. Kawahara, and K. Aihara. Size-constrained submodular minimization through minimum norm base. In ICML, 2011. [36] K. Nagano, Y. Kawahara, and S. Iwata. Minimum average cost clustering. In NIPS, 2010. [37] M. Narasimhan and J. Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. In UAI, 2005. [38] M. Narasimhan, N. Jojic, and J. Bilmes. Q-clustering. NIPS, 18:979, 2006. [39] G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher. An analysis of approximations for maximizing submodular set functions—i. Mathematical Programming, 14(1):265–294, 1978. [40] J.B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2):237–251, 2009. [41] J. Rousu and J. Shawe-Taylor. Efficient computation of gapped substring kernels on large alphabets. 
Journal of Machine Learning Research, 6(2):1323, 2006. [42] P. Stobbe and A. Krause. Efficient minimization of decomposable submodular functions. In NIPS, 2010. [43] M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32(1):41–43, 2004. [44] Z. Svitkina and L. Fleischer. Submodular approximation: Sampling-based algorithms and lower bounds. In FOCS, pages 697–706, 2008. [45] P.-J. Wan, G. Calinescu, X.-Y. Li, and O. Frieder. Minimum-energy broadcasting in static ad hoc wireless networks. Wireless Networks, 8:607–617, 2002. [46] A.L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). In NIPS, 2002.
Possible origin of the slow-diffusion region around Geminga

Kun Fang${}^{1}$ Xiao-Jun Bi${}^{1,2}$ Peng-Fei Yin${}^{1}$ ${}^{1}$ Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China ${}^{2}$ School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China fangkun@ihep.ac.cn bixj@ihep.ac.cn yinpf@ihep.ac.cn

Abstract The Geminga pulsar is surrounded by a multi-TeV $\gamma$-ray halo radiated by high energy electrons and positrons accelerated by the central pulsar wind nebula (PWN). The angular profile of the $\gamma$-ray emission reported by HAWC indicates an anomalously slow diffusion for cosmic-ray electrons and positrons in the halo region around Geminga. In this paper we study possible mechanisms for the origin of the slow diffusion. First, we consider the self-generated Alfvén waves due to the streaming instability of the electrons and positrons released by Geminga. However, even under a very optimistic scenario for the wave growth, we find that this mechanism cannot account for the extremely slow diffusion at the present day once the proper motion of the pulsar is taken into account. The reason is straightforward: the PWN is too weak to generate enough high energy electrons and positrons to stimulate strong turbulence at late times. We then propose that the strong turbulence is generated by the shock wave of the parent supernova remnant of Geminga. Geminga may still be in the downstream region of the shock wave and embedded in a slow-diffusion environment. The TeV halos around PSR B0656+14 and Vela X may also be explained under this assumption. keywords: cosmic rays – ISM: individual objects: Geminga nebula – ISM: supernova remnants – turbulence

1 Introduction The well-known $\gamma$-ray pulsar Geminga is surrounded by a multi-TeV $\gamma$-ray halo, which was first detected by Milagro (Abdo et al., 2007). In late 2017, the High-Altitude Water Cherenkov Observatory (HAWC) collaboration further reported the spatially resolved observation of the $\gamma$-ray halo (Abeysekara et al., 2017a). As these very-high-energy (VHE) $\gamma$ rays are emitted by electrons and positrons111Electrons will denote both electrons and positrons hereafter in this paper. mainly through inverse Compton scattering of the cosmic microwave background photons, the surface brightness profile of the $\gamma$-ray emission can be a good indicator of the propagation of electrons near the source. However, the derived diffusion coefficient of $\sim$60 TeV222The average energy of the $\gamma$ rays observed by HAWC is 20 TeV. Considering the inverse Compton scattering process and the energy spectrum of the parent electrons, 60 TeV electrons contribute most to the $\gamma$ rays of 20 TeV. electrons is hundreds of times smaller than the average value in the Galaxy as inferred from the boron-to-carbon ratio (B/C) measurements (Aguilar et al., 2016). This is evidence that the diffusion coefficient may be highly inhomogeneous on small scales. Investigating the origin of this slow-diffusion region is meaningful for understanding particle propagation near cosmic-ray sources. A plausible explanation is that the relatively large particle density near the source may lead to the resonant growth of Alfvén waves, which in turn scatter the particles and therefore suppress the diffusion velocity (Ptuskin et al., 2008; Malkov et al., 2013; D’Angelo et al., 2016).
Based on this mechanism, the diffusion coefficient around Geminga can be significantly reduced in the case of a hard injection spectrum of electrons and a weak ambient magnetic field (this calculation is presented in Appendix A; see also Evoli et al. (2018)). However, the precondition of this interpretation is that Geminga needs to be at rest, so that the large number of electrons produced in its early age could create a slow-diffusion environment. Due to its proper motion, however, Geminga is now about 70 pc away from its birthplace (Faherty et al., 2007), which means the observed slow-diffusion region cannot have been formed in the early stage of Geminga. The injection power of Geminga at the present day should be much weaker than that at early times, and we will show that the diffusion coefficient cannot be remarkably suppressed even in the absence of wave dissipation. Apart from the self-generated scenario, the slow-diffusion region around Geminga may also be a pre-existing structure. The diffusion coefficient inside a supernova remnant (SNR) should be significantly smaller than that of the interstellar medium (ISM), as this region has been swept by the blast wave and has acquired more turbulent energy. So if Geminga is still inside its associated SNR, it could be embedded in a region with a small diffusion coefficient, which may explain the observed $\gamma$-ray halo. As Geminga may have a 70 pc offset from its birthplace at the present day, some works consider that Geminga has already left its associated SNR behind. However, we will show below that if the progenitor of Geminga is in a rarefied environment, the present scale of the SNR could be large enough to enclose Geminga. In light of this explanation, the problems encountered in the self-confinement scenario could be avoided. In this work, we first test the self-confinement picture in Section 2 with very optimistic assumptions, including the disregard of wave dissipation. We consider the impact of the proper motion of Geminga, which is an unavoidable factor. Then in Section 3, we introduce in detail the new interpretation of the slow-diffusion halo, in which the electrons injected by Geminga diffuse in the turbulent environment inside its parent SNR. In Section 4, we give some further discussion of this topic, including a brief analysis of some other TeV slow-diffusion halos, and an alternative pre-existing scenario for the origin of the Geminga halo. Finally, we conclude in Section 5.

2 The self-confined diffusion scenario A large density gradient of cosmic-ray particles can induce the streaming instability, which may amplify the Alfvén waves in the background plasma (Skilling, 1971). To derive the diffusion coefficient in the vicinity of a source, we must simultaneously solve the equations of particle transport and of the evolution of the Alfvén waves. A full numerical solution of the coupled equations is presented in Appendix A. Here we only show an optimistic scenario for the turbulence growth where the energy loss of electrons and the Alfvén wave dissipation are ignored. The analysis shows clearly why the self-generated mechanism cannot stimulate strong turbulence. We neglect the radiative energy loss of electrons, which leads to a larger gradient of the number density of electrons.
Then the propagation equation is expressed as $$\frac{\partial N}{\partial t}-\nabla\cdot(D\nabla N)=Q\,,$$ (1) where $N$ is the differential number density of electrons, $D$ is the diffusion coefficient, and $Q$ is the source function. The energy density of Alfvén waves is denoted with $W$, which is defined by $\int W(k)dk=\delta B^{2}/B_{0}^{2}$, where $k$ is the wave number, $B_{0}$ is the regular magnetic field strength, and $\delta B$ is the turbulent magnetic field. Here we ignore the wave dissipation and only consider the growth of the Alfvén waves through streaming instability. The evolution of $W$ can be then calculated by $$\frac{\partial W}{\partial t}=\Gamma_{\rm cr}W=-\frac{4\pi v_{A}E_{\rm res}^{2% }}{3B_{0}^{2}k}\nabla N(E_{\rm res})\,,$$ (2) where $\Gamma_{\rm cr}=-4\pi v_{A}E_{\rm res}^{2}/(3B_{0}^{2}kW)\nabla N(E_{\rm res})$ is the growth rate according to Skilling (1971), $v_{A}$ is the Alfvén velocity, and $E_{\rm res}$ is the energy of electrons satisfying $r_{g}(E_{\rm res})=1/k$, where $r_{g}$ is the Larmor radius of electrons. The diffusion coefficient is related with $W$ by (Skilling, 1971) $$D(E_{\rm res})=\frac{1}{3}r_{g}c\cdot\frac{1}{kW(k)}\,.$$ (3) Combining Equation (2) and (3), we get $$\frac{1}{D^{2}}\frac{\partial D}{\partial t}=\frac{4\pi ev_{A}E}{Bc}\nabla N\,.$$ (4) If Geminga is initially in a typical environment of ISM with $\delta B\ll B_{0}$, the diffusion coefficient along the magnetic field lines should be about $(\delta B/B_{0})^{-4}$ times larger than the cross-field diffusion coefficient (Drury, 1983). So the propagation of electrons should be initially in a tube of regular magnetic field lines, corresponding to a one-dimensional diffusion. We set $x$ as the coordinate along the regular magnetic field lines, and we get the following expression from Equation (1) and (4): $$\frac{\partial}{\partial t}\left(N-\frac{Bc}{4\pi ev_{A}E}\frac{\partial\ln D}% {\partial x}\right)=\delta(x)\dot{Q}(t)\,,$$ (5) where we assume Geminga is a point-like source, and $\dot{Q}(t)$ is the time profile of electron injection. Then for any $x>0$, it can be derived from Equation (5) that $$N-\frac{Bc}{4\pi ev_{A}E}\frac{\partial\ln D}{\partial x}=0\,.$$ (6) Integrating Equation (6) from $x$ to $\infty$, we finally obtain $$D(x)=D_{\rm ISM}\,{\rm exp}\left(-\frac{4\pi ev_{A}E}{B_{0}c}\int_{x}^{\infty}% Ndx^{\prime}\right)\,,$$ (7) with $D_{\rm ISM}=D(\infty)$. Geminga has a proper motion of about 200 km s${}^{-1}$ (Faherty et al., 2007), and the direction of motion is suggested to be nearly transverse to the line of sight (Caraveo et al., 2003). These indicate that Geminga has left its birthplace for about 70 pc now. Meanwhile, the motion of Geminga is almost perpendicular to the Galactic disk (Gehrels & Chen, 1993). This means that Geminga has been cutting the magnetic field lines of ISM, as the magnetic field in the Galactic disk is dominated by the horizontal component (Han & Qiao, 1994). Thus, the electrons injected in the early age of Geminga should have escaped along the magnetic field lines which is almost perpendicular to the path of Geminga motion and can not help to generate the present slow-diffusion region. In other words, the slow-diffusion region around Geminga observed today must be formed in the recent age if the region is self-excited by Geminga. 
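For orientation, Equation (7) can be evaluated directly once the energy-differential column of freshly injected electrons, $\int_{x}^{\infty}N\,dx^{\prime}$, is specified; note that the dependence on $B_{0}$ cancels because $v_{A}\propto B_{0}$. The short sketch below is our own illustration in Gaussian cgs units; the column-density values fed to it are purely illustrative inputs, not results from the paper.

```python
import numpy as np

# Gaussian-cgs constants
e_esu = 4.803e-10          # elementary charge [esu]
c = 2.998e10               # speed of light [cm/s]
m_p = 1.673e-24            # proton mass [g]
erg_per_eV = 1.602e-12

def suppression_factor(column_N, E_eV, n_ion_cm3, B0_G):
    """Right-hand side of Eq. (7): D(x)/D_ISM = exp(-4*pi*e*v_A*E/(B0*c) * int_x^inf N dx').
    column_N is the energy-differential column of injected electrons [cm^-2 erg^-1]."""
    rho_i = n_ion_cm3 * m_p                       # ion mass density [g/cm^3]
    v_A = B0_G / np.sqrt(4.0 * np.pi * rho_i)     # Alfven speed [cm/s]
    E_erg = E_eV * erg_per_eV
    exponent = 4.0 * np.pi * e_esu * v_A * E_erg / (B0_G * c) * column_N
    return np.exp(-exponent)

# Illustrative (assumed) column densities; the result is independent of B0,
# because v_A/B0 = 1/sqrt(4*pi*rho_i).
for col in [1e3, 1e4, 1e5]:
    print(col, suppression_factor(col, E_eV=60e12, n_ion_cm3=0.02, B0_G=0.6e-6))
```

The next paragraph bounds this column density from above using particle conservation, which is what turns Equation (7) into a lower limit on $D$.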
We assume the electrons injected during the last third of the age of Geminga (228 kyr$\sim$342 kyr) contribute to the generation of the current slow-diffusion region; this should also be an optimistic assumption considering the very fast energy loss of high energy electrons. The injection time function is set to have the same profile as the spin-down luminosity of the pulsar, which leads to $\dot{Q}(t,E)=\dot{Q}_{0}(1+t/\tau_{0})^{-2}E^{-\gamma}$, where $\tau_{0}=10$ kyr (Hooper et al., 2017). We assume all the spin-down energy of the Geminga pulsar is converted into the injected electrons with energy from 1 GeV to 500 TeV, to determine the normalization $\dot{Q}_{0}$. As we have neglected the energy loss of electrons, the following relation can be obtained from particle conservation: $$2S\int_{x}^{\infty}N(x^{\prime},E)dx^{\prime}<\int_{t_{1}}^{t_{2}}\dot{Q}(t^{\prime},E)dt^{\prime}\,,$$ (8) where $t_{1}=228$ kyr, $t_{2}=342$ kyr, and $S$ is the cross-section of the magnetic flux tube, which is assumed to have a scale of 1 pc. Combining Equations (7) and (8), we can then calculate the lower limit of the diffusion coefficient. The Alfvén velocity is determined by $B_{0}$ and the ion density $\rho_{i}$ as $v_{A}=B_{0}/\sqrt{4\pi\rho_{i}}$. Equation (7) then indicates that $D(x)$ is independent of $B_{0}$ in our calculation. Considering the morphology of the bow-shock structure observed in X-ray (Caraveo et al., 2003) and the latest distance measurement of Geminga (Faherty et al., 2007), the ISM density around Geminga $\rho_{\rm ISM}$ is derived to be 0.02 atoms cm${}^{-3}$. Since the ionization around Geminga is very high (Caraveo et al., 2003), we have $\rho_{i}\approx\rho_{\rm ISM}$. The injection spectral index $\gamma$ cannot be well constrained, as HAWC provides only the energy-integrated result at present. In Figure 1, we present the lower limits of the diffusion coefficient for varying $\gamma$. When $\gamma\approx 2.24$ as provided by HAWC, we have $D({\rm 60\,TeV})>0.85\,D_{\rm ISM}({\rm 60\,TeV})$. For 60 TeV particles, the minimum of the lower limit appears at $\gamma=1.54$, where $D({\rm 60\,TeV})>0.21\,D_{\rm ISM}({\rm 60\,TeV})$. However, this is still far from the level of suppression required by the HAWC observation, which is only about $10^{-3}$ of the normal diffusion value in the ISM as determined by fitting the latest B/C value in Yuan et al. (2017). Therefore it is clearly shown that the self-confinement mechanism cannot serve as the main reason for the suppression of $D(\sim\rm 60\,TeV)$ around Geminga.

3 Electron and positron diffusion inside the SNR In the SNR shock frame, the upstream plasma loses part of its kinetic energy when streaming through the shock front, and this energy is transferred into turbulence and thermal energy behind the shock (Bell, 1978). Thus, the downstream region should be highly turbulent, although the turbulence may be gradually dissipated after the passage of the shock. So if Geminga is still inside its associated SNR, the slow-diffusion region around it may be explained. We first give an estimate of the possible scale of the Geminga parent SNR. We adopt the calculator provided by Leahy & Williams (2017). This tool is designed for modeling the evolution of SNRs, and consistently combines different models for the different stages of SNR evolution. The dynamical evolution of the SNR is determined by parameters such as the initial energy of the ejecta $E_{0}$ and the density of the ISM $n_{\rm ISM}$.
Figure 2 shows the radius of an SNR at the age of Geminga (342 kyr), with different $E_{0}$ and $n_{\rm ISM}$. The ejecta mass is fixed at 1.4$M_{\odot}$. We find that if the ambient density is relatively low and the initial energy is higher than the typical value of $1\times 10^{51}$ erg, the scale of the SNR can be as large as $\sim$ 100 pc. The corresponding shock temperature is about $10^{5}$ K, and the temperature inside the SNR should be higher. This is consistent with the high ionization degree around the pulsar wind nebula (PWN) of Geminga, as indicated by the measurement of the H$\rm\alpha$ luminosity (Caraveo et al., 2003). As mentioned above, Geminga has left its birthplace by about 70 pc now. Considering the scale of the observed Geminga halo ($\sim$ 20 pc), we may envisage a scenario in which Geminga has been chasing the shock wave of its SNR, as presented in the left of Figure 3. We assume the turbulence is generated just behind the shock with an initial wave spectrum $W_{\rm ini}(k)$, and the wave transport in the downstream region can be described by $$\left\{\begin{aligned} &\displaystyle\frac{\partial W}{\partial t}+u_{2}(t)\frac{\partial W}{\partial x}=-\Gamma_{\rm dis}W\,,\\ &\displaystyle W(t,0)=W_{\rm ini}(t)\,,\\ \end{aligned}\right.$$ (9) where the shock rest frame is adopted ($x>0$ for the downstream region), and $u_{2}$ is the downstream fluid velocity. The calculator of Leahy & Williams (2017) provides the time-dependent shock speed $v_{s}(t)$; $u_{2}(t)$ can then be estimated as $u_{2}\approx v_{s}/4$. We assume the wave dissipation is dominated by the non-linear Landau damping (Cesarsky & Kulsrud, 1981), since the ion-neutral damping is not significant in a highly ionized environment (Kulsrud & Cesarsky, 1971). The simplified dissipation rate of the Kolmogorov-type non-linear damping is given by (Ptuskin & Zirakashvili, 2003): $$\Gamma_{\rm dis}=(2C_{K})^{-3/2}v_{A}k^{3/2}W^{1/2}\,,$$ (10) where $C_{K}\approx 3.6$. The initial wave spectrum just behind the shock is assumed to be of the Kolmogorov type, $W_{\rm ini}(k,t)=W_{0}(t)k^{-5/3}$, where $W_{0}$ is the time-dependent normalization. As the turbulent energy downstream is extracted from the kinetic energy of the upstream plasma, we may have $\delta B^{2}\sim v_{s}^{2}$ just behind the shock. Therefore it is reasonable to assume that $W_{0}(t)\sim[v_{s}(t)]^{2}$. We search for the best $W_{0}(t)$ so that the average $D(\rm 60\,TeV)$ in the Geminga halo (50$-$90 pc from the SNR center) is consistent with the result of HAWC. Meanwhile, López-Coto & Giacinti (2018) pointed out that the HAWC observation of the Geminga halo favors an rms magnetic field of 3 $\mu$G, and a coherence length of the magnetic field of $\sim 1$ pc, which corresponds to an outer scale of $\sim 5$ pc for Kolmogorov turbulence. These indicate $B_{0}=0.6$ $\mu$G, and $\delta B/B_{0}>1$ in the present Geminga halo. The wave damping rate given by Equation (10) is proportional to $v_{A}$, which is small in our case as $v_{A}=9.2\times 10^{5}$ cm s${}^{-1}$. This means the downstream wave dissipation is relatively slow. We will show below that the damping of the waves inside the SNR is slower than the decline of $W_{\rm ini}$, so the inner region of the SNR can be more turbulent than the region near the shock. As Geminga is now embedded in a strongly turbulent environment with $\delta B/B_{0}>1$, it should not be very close behind the shock at present. This means the scale of the SNR should be larger than 90 pc now.
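As a concrete illustration of how Equations (9) and (10) can be integrated, the following sketch advects the wave energy density downstream with a first-order upwind scheme while applying the non-linear Landau damping locally. It is our own minimal implementation: the shock-speed history, the normalization of $W_{0}(t)$ and the weak pre-existing turbulence level are placeholder assumptions (in the paper $v_{s}(t)$ comes from the Leahy & Williams (2017) calculator and $W_{0}(t)$ is tuned to the HAWC value), so the numbers it prints are only indicative.

```python
import numpy as np

pc, kyr = 3.086e18, 3.156e10          # cm per parsec, seconds per kyr
C_K, v_A = 3.6, 9.2e5                 # Kolmogorov constant; Alfven speed [cm/s] from the text

def v_s(t_kyr):
    """Placeholder shock-speed history [cm/s]; in the paper this comes from the
    Leahy & Williams (2017) calculator, so this power law is only an assumption."""
    return 3.0e8 * (1.0 + t_kyr / 10.0) ** -0.6

def W_injected(t_kyr, k):
    """Assumed boundary condition of Eq. (9): Kolmogorov spectrum just behind the shock,
    with (deltaB/B0)^2 taken proportional to v_s^2 (the normalization is illustrative)."""
    dB2 = 3.0 * (v_s(t_kyr) / v_s(0.0)) ** 2      # (deltaB/B0)^2 at the shock
    k_out = 1.0 / (5.0 * pc)                      # ~5 pc outer scale quoted in the text
    return dB2 * (2.0 / 3.0) * k_out ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

def evolve_W(k, t_end_kyr=342.0, dx_pc=2.0, dt_kyr=1.0, x_max_pc=120.0):
    """First-order upwind integration of Eq. (9) with the damping rate of Eq. (10)."""
    x = np.arange(0.0, x_max_pc, dx_pc)
    W = np.full_like(x, 1e-3 * W_injected(0.0, k))   # weak pre-existing turbulence (assumed)
    for n in range(int(t_end_kyr / dt_kyr)):
        t = n * dt_kyr
        u2 = v_s(t) / 4.0                            # downstream speed in the shock frame
        adv = u2 * dt_kyr * kyr / (dx_pc * pc)       # Courant number (stays < 1 here)
        gamma = (2.0 * C_K) ** -1.5 * v_A * k ** 1.5 * np.sqrt(W)   # Eq. (10)
        W_new = W.copy()
        W_new[1:] = W[1:] - adv * (W[1:] - W[:-1]) - gamma[1:] * W[1:] * dt_kyr * kyr
        W_new[0] = W_injected(t, k)                  # injection boundary just behind the shock
        W = np.maximum(W_new, 0.0)
    return x, W

# resonant wavenumber of ~60 TeV electrons in B0 = 0.6 microgauss (r_g = E / e B0)
r_g = 60e12 * 1.602e-12 / (4.803e-10 * 0.6e-6)       # [cm]
x, W = evolve_W(k=1.0 / r_g)
D = (1.0 / 3.0) * r_g * 2.998e10 / ((1.0 / r_g) * W)  # Eq. (3): D = r_g c / (3 k W)
print(D[:10])                                         # [cm^2/s], profile just behind the shock
```

The last two lines convert the evolved spectrum into a diffusion coefficient at the wavenumber resonant with $\sim$60 TeV electrons via Equation (3).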
We note that Geminga is to the southeast of the Monogem Ring on the sky map (in Galactic coordinates), and the distance of the Monogem Ring is believed to be similar to that of Geminga. The ISM density in the south of the Monogem Ring is derived to be $0.034$ cm${}^{-3}$ (Knies et al., 2018), so we assume that the SNR of Geminga has a similar medium density $n_{\rm ISM}$. Meanwhile, we assume $E_{0}=2\times 10^{51}$ erg for the progenitor of Geminga and obtain a current shock radius of $\sim 110$ pc. We solve Equation (9) numerically with the upwind differencing scheme; $\Delta t$ and $\Delta r$ are set to 1 kyr and 2 pc so that the scheme is stable even for a large $u_{2}(t)$. The calculated spatial distribution of $D({\rm 60\,TeV})$ is shown in blue in the right panel of Figure 3. As we are only interested in the downstream region of the shock, we simply set $D=D_{\rm ISM}$ in the upstream region. The average $D({\rm 60\,TeV})$ of the shaded area, namely the Geminga halo, is $3.8\times 10^{27}$ cm${}^{2}$ s${}^{-1}$, as measured by HAWC. We can see that the diffusion coefficient just behind the shock is close to $D_{\rm ISM}$, which indicates the current shock can no longer accelerate cosmic rays. This is consistent with the fact that the parent SNR of Geminga is not detected. Besides, since we have assumed $W_{0}\sim v_{s}^{2}$, the downstream region near the shock should have been much more turbulent in the early age of the SNR than at the present day. At the age of 1000 yr of Geminga, our best $W_{0}(t)$ leads to $\delta B/B_{0}\sim 7$ just behind the shock, if the outer scale of the turbulence is 5 pc. A large $\delta B/B_{0}$ should be common for young SNRs to ensure the effective acceleration of high energy particles. In the right panel of Figure 3, we also show the distribution of $D({\rm 60\,TeV})$ in the case of a 90 pc current scale ($n_{\rm ISM}=0.034$ cm${}^{-3}$ and $E_{0}=0.8\times 10^{51}$ erg) with a light blue line, assuming the same $W_{0}(t)$ as in the former case. The average $D({\rm 60\,TeV})$ for this case is larger than the result of HAWC. We can also see from Figure 3 that the distribution of $D(\rm 60\,TeV)$ is very flat in the inner region of the SNR. This is because in the local rest frame, the downstream plasma has an outward velocity and has been following the shock wave. So the distribution of the diffusion coefficient should be ’compressed’ towards the shock, compared with the case in which the downstream plasma is at rest in the local frame.

4 Discussion

4.1 Other TeV halos The mechanism proposed in the previous section to explain the slow diffusion around Geminga can be examined in other similar TeV $\gamma$-ray halos around pulsars. Besides Geminga, other pulsars are observed to be surrounded by slow-diffusion halos in TeV. The spatial profile of the $\gamma$-ray emission around PSR B0656+14 was reported by HAWC along with that of Geminga, and the indicated diffusion coefficient is 5 times larger than in the Geminga case (Abeysekara et al., 2017a). Unlike Geminga, however, the associated SNR of PSR B0656+14, namely the Monogem Ring, is still observable in X-ray (Bunner et al., 1971; Plucinsky et al., 1996), as it is much younger than Geminga ($\sim$100 kyr). The position of PSR B0656+14 on the sky map is well inside the Monogem Ring, while the observation of the PWN of PSR B0656+14 suggests that the motion of the pulsar is almost parallel to the line of sight (Bîrzan et al., 2016).
Since Monogem Ring is an extended structure with a scale of $\sim 80$ pc (Knies et al., 2018), the TeV halo of PSR B0656+14 can still be included by the SNR as long as its radial velocity is not faster than 600 km s${}^{-1}$. If so, the origin of the slow-diffusion region may also be explained by the scenario of Section 3. Vela X, as the PWN of Vela pulsar, is close to the center of Vela SNR (Sushch et al., 2011). H.E.S.S has detected an extended TeV structure around Vela pulsar with a scale of $\sim 6$ pc (Aharonian et al., 2006; Abramowski et al., 2012), which is considered to be correlated with the X-ray filament (Hinton et al., 2011). The TeV halo is more extended than the X-ray filament, and the derived magnetic field is only $\sim 4$ $\mu$G, much smaller than that close to the pulsar (Hinton et al., 2011). So it is possible that the TeV structure is produced by the escaping electrons that are wandering in the turbulent environment inside the Vela SNR. Huang et al. (2018) also indicates that Vela X should be surrounded by a slow-diffusion environment, so that its lepton flux at the Earth will not conflict with the current experiments. On the other hand, we pay attention to another source PSR B1957+20, around which no TeV structure has been detected so far. PSR B1957+20 is an old millisecond pulsar, which is definitely travelling in the ISM now. The bow-shock PWN associated to the pulsar has been detected by Chandra in 0.3–8 keV (Stappers et al., 2003; Huang et al., 2012), and the magnetic field of PWN is estimated to be 17.7 $\mu$G (Huang et al., 2012). So the parent electrons of the X-ray emission should be as high as tens of TeV, which means this PWN can indeed accelerate electrons to VHE. Besides, Aharonian et al. (1997) pointed out that ground based telescopes should be able to detect VHE $\gamma$-ray emission around the pulsar if the spin-down luminosity of the pulsar is larger than $10^{34}(r/1\,{\rm kpc})^{2}$ erg s${}^{-1}$, where $r$ is the distance of the pulsar. For the case of PSR B1957+20, $r$ is inferred as 2.5 kpc (Cordes & Lazio, 2002), and the spin-down luminosity of the pulsar is $7.48\times 10^{34}$ erg s${}^{-1}$ (Manchester et al., 2005), meeting the criterion above. All these imply that if the accelerated electrons are effectively confined near the source, TeV structure ought to be revealed. However, according to the present work, the VHE electrons may not be able to bound themselves by self-generation waves, and the turbulence in the ISM is far from adequate to confine the escaping electrons, unlike the case inside the SNR. Thus, plenty of electrons might have effectively spread out and the non-detection of VHE emission can be understood. 4.2 Is Geminga inside a stellar-wind bubble? The scenario described in Section 3 may not be the unique possible case for a pre-existing slow-diffusion environment. It is also possible that Geminga is running into an unrelated turbulent region at present. Kim et al. (2007) discovered a large ring-like structure in H$\alpha$ emission that is centered at (97.14${}^{\circ}$, 21.33${}^{\circ}$) in equatorial coordinates, dubbed the ’Gemini H$\alpha$ Ring’. As can be seen in Figure 4, the most intense part of the Geminga TeV halo is included by the Gemini H$\alpha$ Ring in projection. There is evidence that the Gemini H$\alpha$ Ring is interacting with the Monogem Ring (Kim et al., 2007). As the distance of Monogem Ring is estimated to be $\sim 300$ pc (Knies et al., 2018), the Gemini H$\alpha$ Ring should have a similar distance. 
Knies et al. (2018) pointed out that the Gemini H$\alpha$ Ring is most likely a stellar-wind bubble, as there are several OB-type stars in the direction of the H$\alpha$ ring with distances of 200-350 pc. Meanwhile, the uncertainty of the trigonometric parallax of Geminga is still large, and the derived distance ranges from 190 pc to 370 pc (Faherty et al., 2007). Thus, Geminga is possibly inside the stellar-wind bubble now, and the shocked wind may provide Geminga with a turbulent environment. Besides, the high ionization and low density environment of Geminga are also consistent with the features of a stellar-wind bubble (Castor et al., 1975).

5 Conclusion We study the possible origin of the slow-diffusion region around Geminga observed by HAWC. Considering the proper motion of Geminga, we verify that the mechanism of self-generated Alfvén waves due to the streaming instability cannot produce such a low diffusion coefficient, even in the most optimistic scenario where the energy loss of electrons and the dissipation of the Alfvén waves are neglected. The reason is simply that Geminga is too weak to generate enough high energy electrons at this late age. We obtain an analytical lower limit on the diffusion coefficient at 60 TeV, which is at most suppressed to about 0.2 times the ISM value. This is much larger than the suppression required by the HAWC observation, which implies a diffusion coefficient hundreds of times smaller than that of the ISM. We further propose a scenario in which the slow diffusion is an environmental effect and Geminga is still inside its parent SNR, which is not identified at present. We show that if the ambient density is low, the scale of the SNR can be large enough to enclose both Geminga and the TeV halo. We assume the magnetic turbulence is generated just behind the shock of the SNR with the Kolmogorov form, and that the energy density of the turbulence is proportional to the square of the shock velocity. We solve the transport equation of the plasma waves downstream of the shock, and obtain the distribution of the diffusion coefficient inside the SNR. We find that Geminga can reasonably be embedded in a turbulent environment now, and the diffusion coefficient at 60 TeV can accommodate the result of HAWC. Another possible interpretation is also briefly presented, in which Geminga is now running into the stellar-wind bubble that creates the Gemini H$\alpha$ Ring. We further discuss some other sources with TeV halos, such as PSR B0656+14 and Vela X. These cases also favor our new interpretation. It remains unclear whether slow-diffusion regions around pulsars are universal or not. As ground-based Cherenkov instruments have identified plenty of VHE $\gamma$-ray sources associated with pulsars (of course many of them are PWNe rather than halos produced by escaping electrons) (Abeysekara et al., 2017b; H. E. S. S. Collaboration et al., 2018), we expect to investigate more cases in future work. Besides, if energy-resolved observations of the Geminga halo become available in the future, they will be very helpful for further judging the origin of the slow-diffusion region.

Acknowledgement We thank Prof. Hui Li for helpful discussions. This work is supported by the National Key Program for Research and Development (No. 2016YFA0400200) and by the National Natural Science Foundation of China under Grants No. U1738209, 11851303.

References Abdo et al. (2007) Abdo, A. A., Allen, B., Berley, D., et al. 2007, ApJ, 664, L91 Abeysekara et al. (2017a) Abeysekara, A.
U., Albert, A., Alfaro, R., et al. 2017a, Science, 358, 911 Abeysekara et al. (2017b) —. 2017b, ApJ, 843, 40 Abramowski et al. (2012) Abramowski, A., Acero, F., Aharonian, F., et al. 2012, A&A, 548, A38 Aguilar et al. (2016) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2016, Physical Review Letters, 117, 231102 Aharonian et al. (2006) Aharonian, F., Akhperjanian, A. G., Bazer-Bachi, A. R., et al. 2006, A&A, 448, L43 Aharonian et al. (1997) Aharonian, F. A., Atoyan, A. M., & Kifune, T. 1997, MNRAS, 291, 162 Bell (1978) Bell, A. R. 1978, MNRAS, 182, 147 Bîrzan et al. (2016) Bîrzan, L., Pavlov, G. G., & Kargaltsev, O. 2016, ApJ, 817, 129 Bunner et al. (1971) Bunner, A. N., Coleman, P. L., Kraushaar, W. L., & McCammon, D. 1971, ApJ, 167, L3 Caraveo et al. (2003) Caraveo, P. A., Bignami, G. F., De Luca, A., et al. 2003, Science, 301, 1345 Castor et al. (1975) Castor, J., McCray, R., & Weaver, R. 1975, ApJ, 200, L107 Cesarsky & Kulsrud (1981) Cesarsky, C. J., & Kulsrud, R. M. 1981, in IAU Symposium, Vol. 94, Origin of Cosmic Rays, ed. G. Setti, G. Spada, & A. W. Wolfendale, 251 Cordes & Lazio (2002) Cordes, J. M., & Lazio, T. J. W. 2002, ArXiv Astrophysics e-prints, astro-ph/0207156 D’Angelo et al. (2016) D’Angelo, M., Blasi, P., & Amato, E. 2016, Phys. Rev. D, 94, 083003 Drury (1983) Drury, L. O. 1983, Reports on Progress in Physics, 46, 973 Evoli et al. (2018) Evoli, C., Linden, T., & Morlino, G. 2018, ArXiv e-prints, arXiv:1807.09263 Faherty et al. (2007) Faherty, J., Walter, F. M., & Anderson, J. 2007, Ap&SS, 308, 225 Fang et al. (2018) Fang, K., Bi, X.-J., Yin, P.-F., & Yuan, Q. 2018, ApJ, 863, 30 Gehrels & Chen (1993) Gehrels, N., & Chen, W. 1993, Nature, 361, 706 H. E. S. S. Collaboration et al. (2018) H. E. S. S. Collaboration, Abdalla, H., Abramowski, A., et al. 2018, A&A, 612, A1 Han & Qiao (1994) Han, J. L., & Qiao, G. J. 1994, A&A, 288, 759 Hinton et al. (2011) Hinton, J. A., Funk, S., Parsons, R. D., & Ohm, S. 2011, ApJ, 743, L7 Hooper et al. (2017) Hooper, D., Cholis, I., Linden, T., & Fang, K. 2017, Phys. Rev. D, 96, 103013 Huang et al. (2012) Huang, R. H. H., Kong, A. K. H., Takata, J., et al. 2012, ApJ, 760, 92 Huang et al. (2018) Huang, Z.-Q., Fang, K., Liu, R.-Y., & Wang, X.-Y. 2018, ApJ, 866, 143 Kim et al. (2007) Kim, I.-J., Min, K.-W., Seon, K.-I., et al. 2007, ApJ, 665, L139 Knies et al. (2018) Knies, J. R., Sasaki, M., & Plucinsky, P. P. 2018, MNRAS, 477, 4414 Kulsrud & Cesarsky (1971) Kulsrud, R. M., & Cesarsky, C. J. 1971, Astrophys. Lett., 8, 189 Leahy & Williams (2017) Leahy, D. A., & Williams, J. E. 2017, AJ, 153, 239 López-Coto & Giacinti (2018) López-Coto, R., & Giacinti, G. 2018, MNRAS, 479, 4526 Malkov et al. (2013) Malkov, M. A., Diamond, P. H., Sagdeev, R. Z., Aharonian, F. A., & Moskalenko, I. V. 2013, ApJ, 768, 73 Manchester et al. (2005) Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, AJ, 129, 1993 Plucinsky et al. (1996) Plucinsky, P. P., Snowden, S. L., Aschenbach, B., et al. 1996, ApJ, 463, 224 Ptuskin & Zirakashvili (2003) Ptuskin, V. S., & Zirakashvili, V. N. 2003, A&A, 403, 1 Ptuskin et al. (2008) Ptuskin, V. S., Zirakashvili, V. N., & Plesser, A. A. 2008, Advances in Space Research, 42, 486 Skilling (1971) Skilling, J. 1971, ApJ, 170, 265 Stappers et al. (2003) Stappers, B. W., Gaensler, B. M., Kaspi, V. M., van der Klis, M., & Lewin, W. H. G. 2003, Science, 299, astro-ph/0302588 Sushch et al. (2011) Sushch, I., et al. 2011, A&A, 525, A154 Yuan et al. (2017) Yuan, Q., Lin, S.-J., Fang, K., & Bi, X.-J. 2017, Phys. Rev. 
D, 95, 083007

Appendix A Numerical solution of the self-confinement scenario Neglecting the proper motion of Geminga, we numerically solve the complete forms of Equations (1) and (2) in the following. The radiative cooling of electrons and the damping of the Alfvén waves are now included, and Equations (1) and (2) are rewritten as $$\frac{\partial N}{\partial t}-\nabla\cdot(D\nabla N)-\frac{\partial}{\partial E}(bN)=Q\,$$ (11) and $$\frac{\partial W}{\partial t}+v_{A}\nabla W=(\Gamma_{\rm cr}-\Gamma_{\rm dis})W\,.$$ (12) The calculation of the cooling rate $b$ is identical to that in Fang et al. (2018), and the Kolmogorov-type wave dissipation rate is still adopted. In fact, for the ISM that is not influenced by the source, the wave growth and convection can be neglected, while the wave dissipation still exists. We therefore add a compensatory growth term everywhere to keep the diffusion coefficient far from the source intact. We solve Equations (11) and (12) iteratively in the one-dimensional scenario. For Equation (11), we apply the operator splitting method to treat the diffusion operator and the energy-loss operator separately. For each operator, we derive the differencing scheme with the finite volume method. This is especially important for the diffusion operator, as $D$ can change abruptly in space. One may refer to Fang et al. (2018) for the details of the differencing schemes. The initial value of $N$ is zero everywhere. For the boundary conditions, we set the maximum injection energy to be 500 TeV. The typical scale of the Galactic random field is 100 pc, which means the one-dimensional diffusion is only valid within this scale around the source. If particles escape farther, the diffusion should switch to three-dimensional, and $N$ will decline sharply. So we set the spatial outer boundary at 100 pc, namely $N(100\,{\rm pc})=0$. The radius of the one-dimensional flux tube is assumed to be 1 pc. As for Equation (12), we discretize it with the well-known upwind scheme. The initial $W$ is determined by the diffusion coefficient in the ISM. To ensure the accuracy of the solutions, $D\Delta t/(\Delta x)^{2}$ and $v_{A}\Delta t/\Delta x$ should not be much larger than 1, where $\Delta t$ is the time step and $\Delta x$ is the radial step for both Equations (11) and (12). As $D_{\rm ISM}(60\,{\rm TeV})\approx 3\times 10^{30}$ cm${}^{2}$ s${}^{-1}$, we set $\Delta t=0.1$ yr and $\Delta x=1$ pc. The spectral index $\gamma$ of the injection spectrum and the mean magnetic field $B_{0}$ are important parameters for the self-confinement scenario. The former affects the growth rate of the Alfvén waves, while the latter determines the damping rate of the waves. In the left panel of Figure 5, we show the calculated $D({\rm 60\,TeV})$ at the current age of Geminga for different $\gamma$ and $B_{0}$. For the case of a harder $\gamma$ (1.5) and a smaller $B_{0}$ (0.6 $\mu$G), the diffusion coefficient around Geminga is significantly suppressed even at the current age, and the average $D$ within 20 pc is comparable to the result of HAWC. As can be seen in the right panel of Figure 5, the diffusion coefficient declines quickly at early ages; then the wave damping dominates and the diffusion coefficient gradually rises. However, in addition to the unrealistic assumption that Geminga is at rest, the assumption of one-dimensional diffusion is not always valid. The turbulence needs to be weak, namely $\delta B\ll B_{0}$.
The right panel of Figure 5 shows that at early ages of Geminga, the diffusion coefficient can be suppressed to a very low value, corresponding to strong turbulence. When $\delta B$ approaches $B_{0}$, the diffusion mode should switch to three-dimensional, and the wave growth due to the streaming instability is significantly reduced compared with the one-dimensional case. This implies that the diffusion coefficient cannot be reduced to values as low as those calculated here; a self-consistent calculation would be complex and is beyond the scope of this work.
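As a rough illustration of the appendix's numerical procedure, the fragment below performs one operator-splitting update of Equation (11) on a grid $N[i_E,i_x]$: an explicit finite-volume diffusion step with a spatially varying $D$, followed by an upwind energy-loss step and source injection. It is only a sketch under our own grid layout and naming; the actual differencing schemes (and the treatment of the inner boundary and of Equation (12)) follow Fang et al. (2018) and are not reproduced here.

```python
import numpy as np

def split_step(N, D, b, Q, dx, dE, dt):
    """One operator-splitting update of Eq. (11) on a grid N[iE, ix].
    Explicit schemes, so stability needs roughly D*dt/dx**2 < 0.5 and b*dt/dE < 1."""
    # (i) diffusion with spatially varying D: interface fluxes F_{i+1/2} = -D_face * dN/dx
    D_face = 0.5 * (D[:, 1:] + D[:, :-1])
    flux = -D_face * (N[:, 1:] - N[:, :-1]) / dx
    N1 = N.copy()
    N1[:, 1:-1] -= dt / dx * (flux[:, 1:] - flux[:, :-1])
    N1[:, -1] = 0.0                      # outer boundary: N(100 pc) = 0
    # (the source cell ix = 0 is handled only through the injection term in this sketch)
    # (ii) energy losses: b(E) = -dE/dt > 0, so particles drift toward lower energies
    bN = b[:, None] * N1
    N2 = N1.copy()
    N2[:-1, :] += dt / dE * (bN[1:, :] - bN[:-1, :])   # upwind in energy
    # (iii) source injection
    return N2 + dt * Q
```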
Orthogonal Polynomials for Seminonparametric Instrumental Variables Model††thanks: Many of the results of this paper were presented as part of a larger project at University of Chicago and Cowles Foundation, Yale University, econometrics research seminar in the spring of 2010, as well as the 2010 World Congress of the Econometric Society in Shanghai. We would like to thank participants of those seminars for valuable comments and questions. We would also like to thank the editors and an anonymous referee for valuable comments. This work was partially supported by a grant from the Simons Foundation ($\#$284262 to Yevgeniy Kovchegov). Yevgeniy Kovchegov111Department of Mathematics, Oregon State University, Kidder Hall, Corvallis, OR 97331; Email: kovchegy@math.oregonstate.edu; Phone: 541-737-1379; Fax: 541-737-0517.                            Neşe Yıldız222Corresponding author: Department of Economics, University of Rochester, 231 Harkness Hall, Rochester, NY 14627; Email: nese.yildiz@rochester.edu; Phone: 585-275-5782; Fax: 585-256-2309. () Abstract We develop an approach that resolves a polynomial basis problem for a class of models with discrete endogenous covariate, and for a class of econometric models considered in the work of Newey and Powell [17], where the endogenous covariate is continuous. Suppose $X$ is a $d$-dimensional endogenous random variable, $Z_{1}$ and $Z_{2}$ are the instrumental variables (vectors), and $Z=\left(\begin{array}[]{c}Z_{1}\\ Z_{2}\end{array}\right)$. Now, assume that the conditional distributions of $X$ given $Z$ satisfy the conditions sufficient for solving the identification problem as in Newey and Powell [17] or as in Proposition 1.1 of the current paper. That is, for a function $\pi(z)$ in the image space there is a.s. a unique function $g(x,z_{1})$ in the domain space such that $$E[g(X,Z_{1})~{}|~{}Z]=\pi(Z)\qquad Z-a.s.$$ In this paper, for a class of conditional distributions $X|Z$, we produce an orthogonal polynomial basis $\{Q_{j}(x,z_{1})\}_{j=0,1,\ldots}$ such that for a.e. $Z_{1}=z_{1}$, and for all $j\in\mathbb{Z}_{+}^{d}$, and a certain $\mu(Z)$, $$P_{j}(\mu(Z))=E[Q_{j}(X,Z_{1})~{}|~{}Z],$$ where $P_{j}$ is a polynomial of degree $j$. This is what we call solving the polynomial basis problem. Assuming the knowledge of $X|Z$ and an inference of $\pi(z)$, our approach provides a natural way of estimating the structural function of interest $g(x,z_{1})$. Our polynomial basis approach is naturally extended to Pearson-like and Ord-like families of distributions. MSC Numbers: 33C45, 62, 62P20. KEYWORDS: Orthogonal polynomials, Stein’s method, nonparametric identification, instrumental variables, semiparametric methods. 1 Introduction In this paper we start with a small step of extending the set of econometric models for which nonparametric or semiparametric identification of structural functions is guaranteed to hold by showing completeness when the endogenous covariate is discrete with unbounded support. Note that the case of discrete endogenous covariate $X$ with unbounded support is not covered by the sufficiency condition given in [17]. Then, using the theory of differential equations we develop a novel orthogonal polynomial basis approach for a large class of the distributions given in Theorem 2.2 in [17], and in the case of discrete endogenous covariate $X$ for which the identification problem is solved in this paper. Our approach is new in economics and provides a natural link between identification and estimation of structural functions. 
We also discuss how our polynomial basis results can be extended to the case when the conditional distribution of $X|Z$ belongs to either the modified Pearson or modified Ord family. Experimental data are hard to find in many social sciences. As a result, social scientists often have to devise statistical methods to recover causal effects of variables (covariates) on outcomes of interest. When the structural relationship between a dependent variable and the explanatory variables (i.e. $g(x,z_{1})$) is parametrically specified Instrumental variables (IV) method is typically used to get consistent and asymptotically normal estimators for the finite dimensional vector of parameters, and thus, the structural function of interest.333A keyword search for “instrumental variables” in JSTOR returned more than 20,000 entries. However the parametric estimators are not robust to misspecification of the underlying structural relationship, $g(x,z_{1})$. For example, in the context of the analysis of consumer behavior recent empirical studies have suggested the need to allow for a more flexible role for the total budget variable to capture the observed consumer behavior at the microeconomic level. (See [3] and the references therein.) Failure of robustness of parametric methods raises the question whether it is possible to extend the instrumental variables estimation to non-parametric framework. This question was first studied in [17]. Thus far, however, the development of theoretical analysis and empirical implementation of nonparametric instrumental variables methods have been slow. This may have to do with the fact that identification is very hard to attain in these models. In addition, although there are some results about convergence rates of nonparametric estimators of the structural function, or on asymptotic distribution of the structural function evaluated at finitely many values of covariates444See [9, 6, 5, 7, 12]. to date the asymptotic distribution of the estimator for the structural function is still unknown. In this paper we suggest a semiparametric approach. This suggestion is motivated by the fact that sufficient conditions for nonparametric identification are closely related to the conditional distribution of the endogenous covariate given the instruments, which can be estimated non-parametrically since it only depends on observable quantities. We suggest a way of nonparametrically estimating the structural function while assuming that the conditional distribution of the endogenous covariate given instruments belongs to a large family for which identification of the structural function is guaranteed to hold. Ours is not the first paper which suggests taking a related semiparametric approach to attack this problem. [10] and [3] both take a semiparametric approach in analyzing the Engel curve relationship. The semiparametric approach in [10] is different from the one taken by [3], and is more closely related to the one taken in this paper. In particular, [3] assume $g(X,Z_{1})=h(X-\phi(Z_{1}^{T}\theta_{1}))+Z_{1}^{T}\theta_{2}$, with $\theta_{1},\theta_{2}$ as finite dimensional parameters, $\phi$ having a known functional form, and $h$ non-parametric, but leave the distribution of $X$ given $Z$ to be more flexible than in [10]. In contrast, [10] leave specification of $g$ more flexible, but assume that the joint distribution of $X$ and $Z_{2}$ conditional on $Z_{1}$ is normal. 
The Engel curve relationship describes the expansion path for commodity demands as the household’s budget increases. In Engel curve analysis $Y$ denotes budget share of the household spent on a subgroup of goods, $X$ denotes log total expenditure allocated by the household to the subgroup of goods of interest, $Z_{1}$ are variables describing other observed characteristics of households, and $U$ represents unobserved heterogeneity across households. The (log) total expenditure variable, $X$, is a choice variable in the household’s allocation of income across consumption goods and savings. Thus, household’s optimization suggests that $X$ is jointly determined with household’s demands for particular goods and is, therefore, likely to be an endogenous regressor, or a regressor that is related to $U$, in the estimation of Engel curves. This means that the conditional mean of $Y$ estimated by nonparametric least squares regression cannot be used to estimate the economically meaningful structural Engel curve relationship. Fortunately, as argued in [3], household’s allocation model does suggest exogenous sources of income that will provide suitable instrumental variables for total expenditure in the Engel curve regression. In particular, log disposable household income is believed to be exogenous because the driving unobservables like ability are assumed to be independent of the preference orderings which play an important role in household’s allocation decision and are included in $U$ (see [10]). Consequently, log disposable income is usually taken as the excluded instrument, $Z_{2}$. [10] demonstrates that log expenditure and log disposable income variables are both well characterized by joint normality, conditional on other variables describing household characteristics. Under the assumption that the joint distribution of $X$ and $Z_{2}$ conditional on $Z_{1}$ is normal [10] provide a semiparametric estimator for the structural Engel curve and give convergence rates for their estimator. In parametric models normality is typically associated with nice behavior, but in a nonparametric regression with endogenous regressors the situation is very different. Indeed, it is well established that joint normality can lead to very slow rates of convergence (see [3, 8, 19]). In contrast to [10] we suggest an estimation method that is directly related to the information contained in the identification condition and that covers any conditional distribution of $X$ given $Z$ (not just normal distribution) that belongs to a large family for which identification of the structural function is known to hold. By exploiting this information our method eliminates one step of estimation. As a result, we expect estimators that are based on our method will have a faster rate of convergence. Specifically, the case where the joint distribution of $X$ and $Z_{2}$ conditional on $Z_{1}$ is normal as in [10] fits right into the orthogonal polynomial framework of this paper. This correspondence will be pointed out in a remark in Subsection 2.2. The follow-up paper that includes a least square analysis for normal conditional distributions is being prepared by the authors. Our approach to choosing the orthogonal polynomials for approximating structural function is semiparametric and is motivated by the form of the conditional density (either with respect to Lebesgue or counting measure) of covariates given instruments. 
Using the form of this density function we can derive a second-order Stein operator (called Stein-Markov operator in [18]) whose eigenfunctions are orthogonal polynomials (in covariates) under certain sufficient conditions. This step utilizes the generator approach from Stein’s theory originated in Barbour [2] and extensively studied in Schoutens [18]. One could use the eigenfunctions of the Stein-Markov operator to approximate the structural functions of interest in such models. Since the conditional expectations of these orthogonal basis functions given instruments are known up to a certain function of the instruments (namely, they are polynomials in $\mu(Z)$, which will be defined below), this approach is likely to simplify estimation. The in-depth information on Stein’s method and Stein operators can be found in [1, 2, 4, 18, 20] and references therein. A common way of estimating the structural function, which depends on the endogenous regressor $X$, starts with picking a basis, $\{Q_{j}\}_{j=1}^{\infty}$, for the space the structural function of interest belongs to. Finitely many elements of this basis is used to approximate the structural function. To estimate the coefficients on the elements of the basis, both the left hand side, or dependent variable, and the finite linear combination of the basis functions are first projected on the space defined by the instrument $Z$, and then the projection of the dependent variable is regressed onto the linear combination of the projections of basis functions. When this is done, typically, the choice of basis functions has little to do with the conditional distribution of $X|Z$, and hence, with the conditions that ensure identification of the structural function. As a result, the projections of the basis functions on the instrument are not known analytically, but have to be estimated by non-parametric regression. In this paper, we propose a method that links the condition for identification of the structural function to the choice of the basis used to approximate this function in estimation stage. We do this by exploiting the form of the conditional density of covariates given instruments. As suggested above we propose the use of the eigenfunctions of the Stein-Markov operator to approximate the structural function. Since the conditional expectations of these orthogonal basis functions given instruments are known up to a certain function of the instruments, this would eliminate one step of the estimation of the structural function. It should be stressed, however, even assuming the conditional density of covariates given instruments is known up to finite dimensional parameters, does not imply that the conditional expectations of arbitrary basis functions given instruments are necessarily known analytically. The paper is organized as follows. Subsection 1.1 discusses the identification result for the case of discrete endogenous covariate $X$ with unbounded support. Section 2 contains the orthogonal polynomial approach for the basis problem. Finally, Section 3 contains the concluding remarks. 1.1 An identification result As it will be shown in Subsection 2.3, our approach to choosing orthogonal basis works for many cases in which the endogenous variable is discrete and has unbounded support. To be able to talk about such cases we state an identification result that covers those cases. This theorem as well as Theorem 2.2 of [17] follow from Theorem 1 on p.132 of [15]. 
We let $X$ denote the endogenous random variable and $Z=\left(\begin{array}[]{c}Z_{1}\\ Z_{2}\end{array}\right)$ denote the vector of instrumental variables.

Proposition 1.1. Let $X$ be a random variable whose conditional density (w.r.t. either Lebesgue or counting measure) given $Z$ is $$p(x|Z=z):=p(x|z)=t(z)s(x,z_{1})\prod\limits_{j=1}^{d}[\mu_{j}(z)-m_{j}]^{\tau_{j}(x,z_{1})}\qquad\tau(x,z_{1})\in\mathbb{Z}_{+}^{d},$$ where $t(z)>0$, $s(x,z_{1})>0$, $\tau(x,z_{1})=(\tau_{1}(x,z_{1}),\dots,\tau_{d}(x,z_{1}))$ is one-to-one in $x$, the support of $\mu(Z)=(\mu_{1}(Z),\dots,\mu_{d}(Z))$ given $Z_{1}$ contains a non-trivial open set in $\mathbb{R}^{d}$, and $\mu_{j}(Z)>m_{j}$ ($Z$-a.s.) for each $j=1,\dots,d$. Then $$E[g(X,Z_{1})|Z_{1},Z_{2}]=0\quad Z\text{-a.s.}\quad\text{ implies }\quad g(X,Z_{1})=0\quad(X,Z_{1})\text{-a.s.}$$

Proof. (For the case in which $X$ is discrete an alternative proof can be found in [14].) Note that $$p(x|z)=t(z)s(x,z_{1})\exp{\left[\sum_{i=1}^{d}\tau_{i}(x,z_{1})\log{(\mu_{i}(z)-m_{i})}\right]}.$$ Then, letting $A(\eta)=0$ and $\eta_{i}=\log{(\mu_{i}(z)-m_{i})}$, we see that the result follows from [16]. See also [15]. ∎

The above proposition extends Theorem 2.2 in [17], where it was shown that if, with probability one conditional on $Z$, the distribution of $X$ is absolutely continuous w.r.t. Lebesgue measure and its conditional density is given by $$f_{X|Z}(x|z)=t(z)s(x,z_{1})\exp{\left[\mu(z)\cdot\tau(x,z_{1})\right]},$$ (1.1) where $t(z)>0$, $s(x,z_{1})>0$, $\tau(x,z_{1})$ is one-to-one in $x$, and the support of $\mu(Z)$ given $Z_{1}$ contains a non-trivial open set, then for each $g(x,z_{1})$ with finite expectation, $E[g(X,Z_{1})|Z]=0$ ($Z$-a.s.) implies that $g(X,Z_{1})=0$ $(X,Z_{1})$-a.s. The condition requiring the support of $\mu(Z)$ given $Z_{1}$ to contain a nontrivial open set in $\mathbb{R}^{d}$, in both our Proposition 1.1 and Theorem 2.2 in [17], can be weakened to requiring that the support of $\mu(Z)$ given $Z_{1}$ be a countable set that is dense in a nontrivial open set in $\mathbb{R}^{d}$.

2 Polynomial basis results

Once again, let $X$ be a $d$-dimensional endogenous random variable, let $Z_{1}$ and $Z_{2}$ be the instrumental variables (vectors), and let $Z=\left(\begin{array}[]{c}Z_{1}\\ Z_{2}\end{array}\right)$. Now, assume that the conditional distributions of $X$ given $Z$ satisfy conditions sufficient for solving the identification problem, as in Theorem 2.2 of [17] or as in Proposition 1.1 of the current paper. Then, for a function $\pi(z)$ in the image space there is a unique function $g(x,z_{1})$ in the domain space such that $$E[g(X,Z_{1})\,|\,Z]=\pi(Z)\qquad Z\text{-a.s.}$$ In this section we will use Stein-Markov operators to solve the polynomial basis problem for a class of conditional distributions $X|Z$. Specifically, we will develop an approach to finding an orthogonal polynomial basis $\{Q_{j}(x,z_{1})\}_{j=0,1,\ldots}$ such that, for a.e. $Z_{1}=z_{1}$, for all $j\in\mathbb{Z}_{+}^{d}$, and for the function $\mu(Z)$ defined in Section 1, $$P_{j}(\mu(Z))=E[Q_{j}(X,Z_{1})\,|\,Z],$$ where $P_{j}$ is a polynomial of degree $j$. See [1, 4, 18, 20] for comprehensive studies and reviews of Stein-Markov operators and Stein's method. In the examples with no instrumental variable $Z_{1}$, i.e. $Z=Z_{2}$, the polynomials $Q_{j}(x,z_{1})$ will be denoted by $Q_{j}(x)$.
2.1 Sturm-Liouville Equations and Stein operators Let open set $\Omega(z)\in\mathbb{R}^{d}$ be the support of $X$ given $Z=z$, and let $\partial\Omega(z)$ denote the boundary of $\Omega(z)$. Consider a continuous conditional density function $f_{X|Z}(x|z)=s(x,z_{1})t(z)e^{\mu(z)^{T}\tau(x,z_{1})}$ as in Theorem 2.2 in [17] with $x=(x_{1},\dots,x_{d})^{T}$ and $\mu(z)=\big{(}\mu_{1}(z),\dots,\mu_{d}(z)\big{)}^{T}$ in $\mathbb{R}^{d}$, and $t(z)>0$. Assume that for $a.e.\;Z_{1}=z_{1}$, $~{}\tau(x,z_{1})=\big{(}\tau_{1}(x,z_{1}),\ldots,\tau_{d}(x,z_{1})\big{)}^{T}$ is a twice differentiable invertible one-to-one function from $\Omega(z)\subseteq\mathbb{R}^{d}$ to $\mathbb{R}^{d}$ with nonzero partial derivatives, and $s(x,z_{1}):\mathbb{R}^{d}\rightarrow\mathbb{R}$ is a differentiable function in $x$. Next denote by $\nabla_{x,\tau}$ the following first order linear operator $$\nabla_{x,\tau}f(x):=\left({\partial\over\partial x_{1}}\left[{f(x)\over{% \partial\tau_{1}(x,z_{1})\over\partial x_{1}}}\right],\dots,{\partial\over% \partial x_{d}}\left[{f(x)\over{\partial\tau_{d}(x,z_{1})\over\partial x_{d}}}% \right]\right)$$ We differentiate $f_{X|Z}(x|z)$ to obtain $$\nabla_{x,\tau}f_{X|Z}(x|z)={\nabla_{x,\tau}s(x,z_{1})\over s(x,z_{1})}f_{X|Z}% (x|z)+\mu(Z)^{T}f_{X|Z}(x|z)\qquad\text{ for all }x\in\Omega(z).$$ The following statement holds for almost every $Z=z$. For a function $Q(x,z_{1})$ that is differentiable in $x$ and satisfies $~{}Q(x,z_{1})s(x,z_{1})\Big{/}{\partial\tau_{i}(x,z_{1})\over\partial x_{i}}=0% ~{}$ for each $i$ and each $x\in\partial\Omega(z)$,666If $\partial\Omega(z)$ contains a singularity or a point at infinity, this statement should be taken to hold in the limit. we integrate by parts to obtain $$E[AQ(X,Z_{1})|Z]=-\mu(Z)^{T}E[Q(X,Z_{1})|Z]\qquad Z~{}a.s.,$$ (2.1) where $$AQ(x,z_{1})=\frac{1}{s(x,z_{1})}\nabla_{x,\tau}[s(x,z_{1})Q(x,z_{1})]={\big{(}% \nabla_{x,\tau}s(x,z_{1})\big{)}Q(x,z_{1})\over s(x,z_{1})}+\sum_{i=1}^{d}{~{}% \frac{\partial Q(x,z_{1})}{\partial x_{i}}~{}\over{\partial\tau_{i}(x,z_{1})% \over\partial x_{i}}}.$$ (2.2) Now, for a given $z$, let $L^{2}(\mathbb{R}^{d},s(x,z_{1}))$ denote the space of Lebesgue measurable $u(x,z_{1})$ in $x$ such that $\int\limits_{\Omega(z)}u^{2}(x,z_{1})s(x,z_{1})\mathrm{d}x<\infty$, with the inner product $$\big{<}u,v\big{>}_{s}:=\int\limits_{\Omega(z)}u(x,z_{1})v(x,z_{1})s(x,z_{1})% \mathrm{d}x.$$ Next define the following Sturm-Liouville operator: $${\cal A}Q:={1\over s(x,z_{1})}\nabla_{x,\tau}\Big{[}s(x,z_{1})\nabla_{x}Q(x,z_% {1}))\Big{]}={\nabla_{x,\tau}s(x,z_{1})\cdot\nabla_{x}Q(x,z_{1})\over s(x,z_{1% })}+\sum_{i=1}^{d}{1\over{\partial\tau_{i}(x,z_{1})\over\partial x_{i}}}\frac{% \partial^{2}Q(x,z_{1})}{\partial x_{i}^{2}},$$ where $~{}\nabla_{x}:=\left({\partial\over\partial x_{1}},\dots,{\partial\over% \partial x_{d}}\right)^{T}$ is standard gradient. Here $A$ is a Stein operator for the distribution that has Lebesgue density equal to ${s(x,z_{1})\over\int s(x,z_{1})\mathrm{d}x}$, and $\mathcal{A}$ is the corresponding Stein-Markov operator. Then, integration by parts shows $\cal{A}$ is a self-adjoint operator with respect to $\big{<}\cdot,\cdot\big{>}_{s}$. 
Specifically, $~{}\big{<}{\cal A}u,v\big{>}_{s}=\big{<}u,{\cal A}v\big{>}_{s}$ provided the following standard boundary conditions $$\sum_{i=1}^{d}\int\limits_{\partial\Omega(z)}\left[\left({\partial\over% \partial x_{i}}u(x,z_{1})\right)v(x,z_{1})-\left({\partial\over\partial x_{i}}% v(x,z_{1})\right)u(x,z_{1})\right]{s(x,z_{1})\over{\partial\tau_{i}(x,z_{1})% \over\partial x_{i}}}~{}d\Gamma(x)=0$$ (2.3) $Z~{}a.s.$ for all $u(x,z_{1})$ and $v(x,z_{1})$ in $\mathcal{C}^{2}(\mathbb{R}^{d})\cap L^{2}(\mathbb{R}^{d},s(x,z_{1}))$ for almost every $~{}Z_{1}=z_{1}$. Trivially, the above boundary conditions (2.3) are satisfied if $$\left({\partial\over\partial x_{i}}u(x,z_{1})\right)v(x,z_{1})-\left({\partial% \over\partial x_{i}}v(x,z_{1})\right)u(x,z_{1})\equiv 0\quad\text{ on }~{}% \partial\Omega(z).$$ (2.4) In the case of a singularity or a point at infinity on the boundary the above boundary conditions (2.4) will need to hold in the limit. The eigenvalues $\lambda_{j}$ of $\cal{A}$ are all real, and the corresponding eigenfunctions $Q_{j}(x,z_{1})$ solve the following Sturm-Liouville differential equation $$\sum_{i=1}^{d}{s(x,z_{1})\over{\partial\tau_{i}(x,z_{1})\over\partial x_{i}}}% \frac{\partial^{2}Q_{j}(x,z_{1})}{\partial x_{i}^{2}}+\sum_{i=1}^{d}\frac{% \partial}{\partial x_{i}}\!\!\left({s(x,z_{1})\over{\partial\tau_{i}(x,z_{1})% \over\partial x_{i}}}\right)\frac{\partial Q_{j}(x,z_{1})}{\partial x_{i}}-% \lambda_{j}s(x,z_{1})Q_{j}(x,z_{1})=0.$$ (2.5) These $~{}Q_{j}(x,z_{1})~{}$ form a basis of $L^{2}(\mathbb{R}^{d},s(x,z_{1}))$, orthogonal with respect to $\big{<}\cdot,\cdot\big{>}_{s}$. 2.1.1 A special case Assume that for $a.e.~{}Z_{1}=z_{1}$, $s(x,z_{1})\in C^{\infty}(\mathbb{R}^{d})$ w.r.t. variable $x$, for each nonnegative integer $j=(j_{1},\dots,j_{d})$. Consider a special case when $~{}Q_{j}(x,z_{1})={(-1)^{j_{1}+\dots+j_{d}}\over s(x,z_{1})}{\partial^{j_{1}+% \dots+j_{d}}\over\partial x^{j_{1}}_{1}\dots\partial x^{j_{d}}_{d}}s(x,z_{1})~{}$ are the orthogonal eigenfunctions in $L^{2}(\mathbb{R}^{d},s(x,z_{1}))$, then their projections $$P_{j}(Z):=E[Q_{j}(X,Z_{1})|Z]=\prod_{k=1}^{d}\mu_{k}(Z)^{j_{k}}=\mu(Z)^{j}$$ due to integration by parts under the boundary conditions requiring the corresponding boundary integral to be zero. Example: In particular, using the Rodrigues’ formula for the Sturm-Liouville boundary value problem, we can show that when $$s(x,z_{1})=\gamma(z_{1})\exp{\left[\alpha(z_{1})\frac{x^{T}x}{2}+\beta(z_{1})% \right]},$$ with $\alpha(z_{1})<0$ for each $z_{1}$, there is a series of eigenvalues $\lambda_{0},\lambda_{1},\lambda_{2},...$ that lead to solutions $\{Q_{j}(x,z_{1})\}_{j=0}^{\infty}$, where each $~{}Q_{j}(x,z_{1})={(-1)^{j_{1}+\dots+j_{d}}\over s(x,z_{1})}{\partial^{j_{1}+% \dots+j_{d}}\over\partial x^{j_{1}}_{1}\dots\partial x^{j_{d}}_{d}}s(x,z_{1})~{}$ is a multidimensional Hermite-type orthogonal polynomial basis for $L^{2}(\mathbb{R}^{d},s(x,z_{1}))$.777When $s(x,z_{1})$ is of this form $Q_{j}(x,z_{1})$ are polynomials. In general equation (2.5) may have solutions for other $s(x,z_{1})$ that are not necessarily polynomials. 2.2 The orthogonal polynomial basis results for continuous $X$ We assume that $d=1$ in this subsection with the exception of Example 2 below. 
Then $$\frac{\partial f_{X|Z}(x|z)}{\partial x}=\frac{\frac{\partial s(x,z_{1})}{% \partial x}}{s(x)}f_{X|Z}(x|z)+\mu(z)\frac{\partial\tau(x,z_{1})}{\partial x}f% _{X|Z}(x|z).$$ and $$AQ(x,z_{1})=\frac{\partial}{\partial x}\left(\frac{s(x,z_{1})Q(x,z_{1})}{\frac% {\partial\tau(x,z_{1})}{\partial x}}\right)\frac{1}{s(x,z_{1})}=\frac{\frac{% \partial Q(x,z_{1})}{\partial x}}{\frac{\partial\tau(x,z_{1})}{\partial x}}+% \frac{\frac{\partial s(x,z_{1})}{\partial x}}{s(x,z_{1})}\frac{Q(x,z_{1})}{% \frac{\partial\tau(x,z_{1})}{\partial x}}-\frac{Q(x,z_{1})\frac{\partial^{2}% \tau(x,z_{1})}{\partial x^{2}}}{\left[\frac{\partial\tau(x,z_{1})}{\partial x}% \right]^{2}}$$ as in (2.2). Once again, equation (2.1) is satisfied if $~{}Q(x,z_{1})s(x,z_{1})\Big{/}\frac{\partial\tau(x,z_{1})}{\partial x}=0~{}$ on $\partial\Omega(z)$ for $a.e.\;Z=z$. here, for $d=1$, Stein-Markov operator is $$\mathcal{A}Q(x,z_{1}):=A{\partial Q(x,z_{1})\over\partial x}=\frac{\frac{% \partial^{2}Q(x,z_{1})}{\partial x^{2}}}{\frac{\partial\tau(x,z_{1})}{\partial x% }}+\left(\frac{\frac{\partial s(x,z_{1})}{\partial x}}{s(x,z_{1})}\frac{1}{% \frac{\partial\tau(x,z_{1})}{\partial x}}-\frac{\frac{\partial^{2}\tau(x,z_{1}% )}{\partial x^{2}}}{\left[\frac{\partial\tau(x,z_{1})}{\partial x}\right]^{2}}% \right)\frac{\partial Q(x,z_{1})}{\partial x}.$$ We would like to find eigenfunctions $Q_{j}$ and eigenvalues $\lambda_{j}$ of ${\cal A}$ such that $~{}\mathcal{A}Q_{j}=\lambda_{j}Q_{j}$. We define $$\phi(x,z_{1}):=-\frac{1}{\frac{\partial\tau(x,z_{1})}{\partial x}}\quad\text{ % and }\quad\psi(x,z_{1}):=-\frac{1}{\frac{\partial\tau(x,z_{1})}{\partial x}}% \left[\frac{\frac{\partial s(x,z_{1})}{\partial x}}{s(x,z_{1})}-\frac{\frac{% \partial^{2}\tau(x,z_{1})}{\partial x^{2}}}{\frac{\partial\tau(x,z_{1})}{% \partial x}}\right],$$ Then Sturm-Liouville differential equation (2.5) can be rewritten as $$\phi(x,z_{1})\frac{\partial^{2}Q(x,z_{1})}{\partial x^{2}}+\psi(x,z_{1})\frac{% \partial Q(x,z_{1})}{\partial x}+\lambda Q(x,z_{1})=0.$$ (2.6) with the boundary conditions (2.4) rewritten as $$\displaystyle c_{1}Q(\alpha_{1}(z_{1}),z_{1})+c_{2}\frac{\partial Q(\alpha_{1}% (z_{1}),z_{1})}{\partial x}=0$$ $$\displaystyle c_{1}^{2}+c_{2}^{2}>0,$$ (2.7) $$\displaystyle d_{1}Q(\alpha_{2}(z_{1}),z_{1})+d_{2}\frac{\partial Q(\alpha_{2}% (z_{1}),z_{1})}{\partial x}=0$$ $$\displaystyle d_{1}^{2}+d_{2}^{2}>0,$$ where $~{}\Omega(z)=\big{(}\alpha_{1}(z_{1}),\alpha_{2}(z_{1})\big{)}$ denotes the support of $X$ conditioned on $Z_{1}=z_{1}$. The solution to this Sturm-Liouville type problem exists when one of the three sufficient conditions listed below is satisfied. See [21] and [18].888[18] and [21] give results for Hermite, Laguerre and Jacobi polynomials, the other cases are obtained by defining $\tilde{x}=ax+b$ and applying the results in [18] and [21]. Also note that these conditions are sufficient for the solutions to be polynomials. Solutions that are not polynomials, but nevertheless form an orthogonal basis might exist under less restrictive conditions. Moreover, in the cases we list below, the solutions are orthogonal polynomials with respect to the weight function $~{}s(x,z_{1})$, and for each $j$, the corresponding eigenfunction $Q_{j}(x,z_{1})$ is proportional to $$\frac{1}{s(x,z_{1})}\frac{\partial^{j}}{\partial x^{j}}\left(s(x,z_{1})[\phi(x% ,z_{1})]^{j}\right).$$ Here $Q_{0}$ is a constant eigenfunction corresponding to $\lambda_{0}=0$. Finally, iterating equation (2.1) proves the following important result. Theorem 2.1. 
Suppose $Q_{j}(x,z_{1})$ are an orthogonal polynomial basis $Z~{}a.s.$ Then functions $P_{j}(Z)=E[Q_{j}(X,Z_{1})|Z]$ are $j^{th}$ order polynomials in $\mu(Z)$ with its coefficients being functions of $Z_{1}$. Proof. Observe that $P_{0}\equiv Q_{0}$ is a constant. Consider $j>0$, since $f_{X|Z}(x|z)$ satisfies the unique identification condition stated in Theorem 2.2 of [17] (that in turn is a Corollary of Theorem 1 of [15]), $E[\mathcal{A}Q_{j}(X,Z_{1})|Z]=\lambda_{j}E[Q_{j}(X,Z_{1})|Z]\not=0$. Therefore $\lambda_{j}\not=0$, and since $~{}\mathcal{A}Q_{j}=\lambda_{j}Q_{j}$, $$P_{j}(Z)=E[Q_{j}(X,Z_{1})|Z]={1\over\lambda_{j}}E[\mathcal{A}Q_{j}(X,Z_{1})|Z]% ={1\over\lambda_{j}}E\left[A{\partial\over\partial x}Q_{j}(X,Z_{1})\Big{|}Z% \right],$$ where $~{}{\partial\over\partial x}Q_{j}(x,z_{1})=\sum\limits_{i=0}^{j-1}a_{i}Q_{i}(x% ,z_{1})$ is a polynomial of degree $j-1$ in $x$. Therefore $$P_{j}(Z)={a_{0}P_{0}\over\lambda_{j}}+\sum\limits_{i=1}^{j-1}{a_{i}\over% \lambda_{j}}E[AQ_{i}(X,Z_{1})~{}|Z]={a_{0}P_{0}\over\lambda_{j}}-\mu(Z)\sum% \limits_{i=1}^{j-1}{a_{i}\over\lambda_{j}}P_{i}(Z)$$ by (2.1). The statement of the theorem follows by induction. ∎ Next we list the sufficient conditions for the eigenfunctions $\{Q_{j}(x,z_{1})\}_{j=0}^{\infty}$ to be orthogonal polynomials in $x$ that form a basis in $L^{2}(\mathbb{R}^{d},s(x,z_{1}))$, together with the corresponding examples of continuous conditional densities $f_{X|Z}(x|z)$. 1. Hermite-like polynomials: $\phi$ is a non-zero constant, $\psi$ is linear and the leading term of $\psi$ has the opposite sign of $\phi$. In this case, let $\phi(x,z_{1})=c(z_{1})\neq 0$, then $\tau(x,z_{1})=-\frac{1}{c(z_{1})}x+d(z_{1})$. Then, $\psi(x,z_{1})=c(z_{1})\frac{\frac{\partial s(x,z_{1})}{\partial x}}{s(x,z_{1})% }=a(z_{1})x+b(z_{1})$. Thus, we have $\frac{\frac{\partial s(x,z_{1})}{\partial x}}{s(x,z_{1})}=\frac{a(z_{1})}{c(z_% {1})}x+\frac{b(z_{1})}{c(z_{1})}$. Let $\alpha(z_{1}):=a(z_{1})/c(z_{1})$ and $\beta(z_{1}):=b(z_{1})/c(z_{1})$, where $\alpha(z_{1})<0$ $\forall z_{1}$, since $a(z_{1})$ and $c(z_{1})$ always have opposite signs. Solving for $s(x,z_{1})$ we get $s(x,z_{1})=\gamma(z_{1})\exp{\big{(}\alpha(z_{1})x^{2}/2+\beta(z_{1})x\big{)}}$. Example 1: Given a function $\sigma(z_{1})\not=0$, and suppose $d=1$. Consider $$f_{X|Z}(x|z)=\frac{1}{\sqrt{2\pi\sigma^{2}(z_{1})}}\exp{\left\{-\frac{(x-% \tilde{\mu}(z))^{2}}{2\sigma^{2}(z_{1})}\right\}}.$$ Then $~{}~{}t(z)=\frac{1}{\sqrt{2\pi\sigma^{2}(z_{1})}}\exp{\left\{-{z_{2}^{2}\over 2% \sigma^{2}(z_{1})}\right\}}$, $~{}~{}s(x,z_{1})=\exp{\left\{-{x^{2}\over 2\sigma^{2}(z_{1})}\right\}}$, $~{}~{}\mu(z)=\tilde{\mu}(z)/\sigma^{2}(z_{1})$, and ${\tau(x,z_{1})=x}$. The orthogonal polynomials $Q_{j}(x,z_{1})$ are $$Q_{j}(x,z_{1})=(-1)^{j}e^{{x^{2}\over 2\sigma^{2}(z_{1})}}\frac{\mathrm{d}^{j}% }{\mathrm{d}x^{j}}e^{-{x^{2}\over 2\sigma^{2}(z_{1})}},$$ $P_{j}(z)={\tilde{\mu}(z)^{j}\over\sigma^{2j}(z_{1})}=\big{[}\mu(z)\big{]}^{j}$ and $\lambda_{j}=-j$ for each $j>1$. 
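To make the computational content of Example 1 concrete, here is a minimal numerical sketch (ours, not part of the paper; the data-generating process, the truncation at two basis terms, and all parameter values are illustrative assumptions). It simulates an endogenous design with $X|Z\sim N(Z,\sigma^{2})$, so that $Q_{j}(x)=\sigma^{-j}\mathrm{He}_{j}(x/\sigma)$ with $\mathrm{He}_{j}$ the probabilists' Hermite polynomials and $E[Q_{j}(X)|Z]=(Z/\sigma^{2})^{j}$. Because these projections are known polynomials in $\mu(Z)$, a single least-squares fit of $Y$ on them recovers the structural coefficients, while a naive regression of $Y$ on $Q_{j}(X)$ is distorted by the endogeneity.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)
n, sigma = 200_000, 1.0                  # illustrative sample size and sigma(z_1)

def Q(j, x):
    # Q_j(x) = sigma^{-j} He_j(x / sigma): the Hermite-type eigenfunctions of Example 1
    c = np.zeros(j + 1); c[j] = 1.0
    return He.hermeval(x / sigma, c) / sigma**j

# Endogenous design (assumption): X and the error U share the shock V, but E[U | Z] = 0.
Z = rng.normal(0.0, 1.0, n)
V = rng.normal(0.0, sigma, n)
X = Z + V                                # X | Z ~ N(Z, sigma^2), hence mu(Z) = Z / sigma^2
b = np.array([1.0, 2.0, -0.5])           # structural g(x) = b0 + b1*Q_1(x) + b2*Q_2(x)
U = 1.5 * V + rng.normal(0.0, 0.5, n)
Y = b[0] + b[1] * Q(1, X) + b[2] * Q(2, X) + U

# Proposed route: E[Q_j(X) | Z] = mu(Z)^j is known analytically, so one regression suffices.
mu_Z = Z / sigma**2
W_iv = np.column_stack([np.ones(n), mu_Z, mu_Z**2])
b_iv = np.linalg.lstsq(W_iv, Y, rcond=None)[0]

# Naive regression on Q_j(X) ignores the endogeneity of X and is inconsistent here.
W_naive = np.column_stack([np.ones(n), Q(1, X), Q(2, X)])
b_naive = np.linalg.lstsq(W_naive, Y, rcond=None)[0]

print("true       :", b)
print("known-P_j  :", b_iv.round(3))
print("naive OLS  :", b_naive.round(3))
```

In this sketch the "first stage" requires no nonparametric regression because the projections of the basis functions on the instrument are the known polynomials $\mu(Z)^{j}$; this is precisely the one-step saving discussed in the Introduction.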
Remark: In [10] it is assumed that $$\displaystyle\left(\begin{array}[]{c}X\\ Z_{2}\end{array}\right)|Z_{1}=z_{1}~{}~{}\sim N\left(\left(\begin{array}[]{c}% \mu_{X}(z_{1})\\ \mu_{Z_{2}}(z_{1})\end{array}\right),\left[\begin{array}[]{cc}\sigma^{2}_{X}(z% _{1})&\sigma_{XZ_{2}}(z_{1})\\ \sigma_{XZ_{2}}(z_{1})&\sigma^{2}_{Z_{2}}(z_{1})\end{array}\right]\right).$$ This corresponds to Example 1 above with $$\tilde{\mu}(z_{1},z_{2})=\mu_{X}(z_{1})+\frac{\sigma_{XZ_{2}}(z_{1})}{\sigma^{% 2}_{X}(z_{1})}(z_{2}-\mu_{Z_{2}}(z_{1}))$$ and $$\sigma^{2}(z_{1})=\left[1-\frac{\sigma^{2}_{XZ_{2}}(z_{1})}{\sigma^{2}_{X}(z_{% 1})\sigma^{2}_{Z_{2}}(z_{1})}\right]\sigma^{2}_{X}(z_{1}).$$ Example 2: Suppose $d>1$. For $x=(x_{1},\dots,x_{d})^{T}$ and $z_{2}=(z^{\prime}_{1},\dots,z^{\prime}_{d})^{T}$, let $f_{X|Z}(x|z)={\sqrt{\det M}\over(2\pi)^{d\over 2}}e^{-{(x-z_{2})^{T}M(x-z_{2})% \over 2}}$, where $M=M(z_{1})$ is the inverse of the variance-covariance $d\times d$ matrix function with $~{}\det M(z_{1})>0$. Then $~{}t(z)={\sqrt{\det M}\over(2\pi)^{d\over 2}}e^{-{z^{T}Mz\over 2}}$, $~{}s(x,z_{1})=e^{-{x^{T}Mx\over 2}}$, $\mu(z)=Mz_{2}$, and $\tau(x,z_{1})=x$. For each nonnegative integer-valued $j=(j_{1},\dots,j_{d})$, the orthogonal polynomial $Q_{j}(x,z_{1})$ is given by $$Q_{j}(x,z_{1})=(-1)^{j_{1}+\dots+j_{d}}e^{x^{T}Mx\over 2}{\partial^{j_{1}+% \dots+j_{d}}\over\partial^{j}x^{j_{1}}_{1}\dots\partial x^{j_{d}}_{d}}e^{-{x^{% T}Mx\over 2}}.$$ Then $$P_{j}(Z)=E[Q_{j}(X)|Z]~{}=(e_{1}^{T}MZ_{2})^{j_{1}}\dots(e_{d}^{T}MZ_{2})^{j_{% d}}~{}=\big{(}e_{1}\cdot\mu(Z)\big{)}^{j_{1}}\dots\big{(}e_{d}\cdot\mu(Z)\big{% )}^{j_{d}}=\big{[}\mu(z)\big{]}^{j},$$ where $e_{1},\ldots,e_{d}$ denote standard basis vectors, and for any vector $w=(w_{1},w_{2},\ldots,w_{d})^{T}$, $~{}w^{j}:=w_{1}^{j_{1}}w_{2}^{j_{2}}\ldots w_{d}^{j_{d}}$ . 2. Laguerre-like polynomials: $\phi$ and $\psi$ are both linear, the roots of $\phi$ and $\psi$ are different, and the leading terms of $\phi$ and $\psi$ have the same sign if the root of $\psi$ is less than the root of $\phi$ or vice versa. Suppose $\phi(x,z_{1})=a(z_{1})x+b(z_{1})$ and $\psi(x,z_{1})=c(z_{1})x+d(z_{1})$ with $b(z_{1})/a(z_{1})\neq d(z_{1})/c(z_{1})$. Then $$\frac{\partial\tau(x,z_{1})}{\partial x}=\frac{1}{-a(z_{1})x-b(z_{1})},$$ so $$\tau(x,z_{1})=\frac{1}{a(z_{1})}\log[{a(z_{1})x+b(z_{1})|}+C(z_{1}).$$ Moreover, $$\psi(x,z_{1})=[a(z_{1})x+b(z_{1})]\frac{\frac{\partial s(x,z_{1})}{\partial x}% }{s(x,z_{1})}+a(z_{1})=c(z_{1})x+d(z_{1})\Leftrightarrow\frac{\frac{\partial s% (x,z_{1})}{\partial x}}{s(x,z_{1})}=\frac{c(z_{1})x+d^{*}(z_{1})}{a(z_{1})x+b(% z_{1})},$$ where $d^{*}(z_{1})=d(z_{1})-a(z_{1})$. This means that $$s(x,z_{1})=\rho(z_{1})\exp{\left\{\int\frac{c(z_{1})x+d^{*}(z_{1})}{a(z_{1})x+% b(z_{1})}\mathrm{d}x\right\}}.$$ Example: Suppose $d=1$. Let $\delta,r>0$ and a function $g:\mathbb{R}\rightarrow\mathbb{R}$ be given, and let $\Gamma(\cdot)$ denote the gamma function. Consider $$f_{X|Z}(x|z)={1\over\Gamma(r+z_{2})}\delta^{r+z_{2}}\big{(}x-g(z_{1})\big{)}^{% r+z_{2}-1}e^{-\delta(x-g(z_{1}))}~{}~{}\text{ for }~{}x>g(z_{1}),$$ where $Z_{2}>-r$. Then $~{}~{}t(z)={1\over\Gamma(r+z_{2})}\delta^{r+z_{2}}$, $~{}~{}s(x,z_{1})=\big{(}x-g(z_{1})\big{)}^{r-1}e^{-\delta(x-g(z_{1}))}$, $\mu(z)=z_{2}$, and $~{}~{}\tau(x,z_{1})=\log{\big{(}x-g(z_{1})\big{)}}$, since $\big{(}x-g(z_{1})\big{)}^{z_{2}}=e^{z_{2}\log{(x-g(z_{1}))}}$. In this case, $\phi(x,z_{1})=-\big{(}x-g(z_{1})\big{)}$ and $\psi(x,z_{1})=\delta\big{(}x-g(z_{1})\big{)}-r$. 
The orthogonal polynomials $Q_{j}(x,z_{1})$ are $$Q_{j}(x,z_{1})={{\big(x-g(z_{1})\big)^{-(r-1)}e^{\delta(x-g(z_{1}))}}\over j!}\,\frac{\mathrm{d}^{j}}{\mathrm{d}x^{j}}\left[\big(x-g(z_{1})\big)^{j+r-1}e^{-\delta(x-g(z_{1}))}\right],$$ and, for $j>1$, $P_{j}(z)=z_{2}(z_{2}-1)\cdots(z_{2}-j+1)$ and $\lambda_{j}=-\delta j$.

3. Jacobi-like polynomials: $\phi$ is quadratic, $\psi$ is linear, $\phi$ has two distinct real roots, the root of $\psi$ lies between the two roots of $\phi$, and the leading terms of $\phi$ and $\psi$ have the same sign. In this case, $$\frac{\partial\tau(x,z_{1})}{\partial x}=-\frac{1}{(x-r_{1}(z_{1}))(x-r_{2}(z_{1}))},$$ with $r_{1}\neq r_{2}$ and $x$ not equal to either one of them. In this case, however, $\tau$ is not one-to-one in $x$, and the condition given in Theorem 2.2 of [17] does not hold unless specific support conditions are met. Solving the last differential equation we get $$\tau(x,z_{1})=\frac{1}{r_{1}(z_{1})-r_{2}(z_{1})}\left[\log{|x-r_{2}(z_{1})|}-\log{|x-r_{1}(z_{1})|}\right]+c(z_{1}).$$ Plugging this into the formula for $\psi$ yields $$\psi(x,z_{1})=(x-r_{1}(z_{1}))(x-r_{2}(z_{1}))\left[\frac{\frac{\partial s(x,z_{1})}{\partial x}}{s(x,z_{1})}+\frac{2x-r_{1}(z_{1})-r_{2}(z_{1})}{(x-r_{1}(z_{1}))(x-r_{2}(z_{1}))}\right]=a(z_{1})x+b(z_{1}).$$ Rearranging terms gives us $$\frac{\frac{\partial s(x,z_{1})}{\partial x}}{s(x,z_{1})}=-\frac{2x-r_{1}(z_{1})-r_{2}(z_{1})}{(x-r_{1}(z_{1}))(x-r_{2}(z_{1}))}+\frac{1}{r_{1}(z_{1})-r_{2}(z_{1})}\left[\frac{a(z_{1})r_{1}(z_{1})+b(z_{1})}{x-r_{1}(z_{1})}-\frac{a(z_{1})r_{2}(z_{1})+b(z_{1})}{x-r_{2}(z_{1})}\right]=:\kappa(x,z_{1}).$$ Let $\alpha(x,z_{1}):=\int\kappa(x,z_{1})\mathrm{d}x$. Then $$\alpha(x,z_{1})=-\log{|(x-r_{1}(z_{1}))(x-r_{2}(z_{1}))|}+\frac{a(z_{1})r_{1}(z_{1})+b(z_{1})}{r_{1}(z_{1})-r_{2}(z_{1})}\log{|x-r_{1}(z_{1})|}-\frac{a(z_{1})r_{2}(z_{1})+b(z_{1})}{r_{1}(z_{1})-r_{2}(z_{1})}\log{|x-r_{2}(z_{1})|},$$ and $$s(x,z_{1})=\rho(z_{1})\exp{[\alpha(x,z_{1})]}.$$

Example: Suppose for simplicity that there is no $Z_{1}$ (so that $z=z_{2}$), and $$f_{X|Z}(x|z)={1\over{\cal B}(a+z,b-z)}x^{a+z-1}(1-x)^{b-z-1}~~\text{ for }x\in(0,1),$$ where $\mathcal{B}(\cdot,\cdot)$ denotes the beta function. Suppose the following condition is satisfied: $$\lim\limits_{x\rightarrow 0+}x^{a+Z}Q(x)=\lim\limits_{x\rightarrow 1-}(1-x)^{b-Z}Q(x)=0\qquad Z\text{-a.s.}$$ (2.8) We also assume the support of $Z$ is contained in $(-a,b)$. Then $\mu(z)=z$, $t(z)={{\cal B}(a,b)\over{\cal B}(a+z,b-z)}$, and $s(x)={1\over{\cal B}(a,b)}x^{a-1}(1-x)^{b-1}$. Finally, $\tau(x)=\log{\left({x\over 1-x}\right)}$ since $\left({x\over 1-x}\right)^{z}=\exp{\left[z\log{\left({x\over 1-x}\right)}\right]}$. Then $\phi(x)=-x(1-x)$ and $\psi(x)=(a+b)x-a$. The orthogonal polynomials $Q_{j}$ are scaled Jacobi polynomials and satisfy the following hypergeometric differential equation of Gauss: $$x(1-x)Q_{j}^{\prime\prime}+(a-(a+b)x)Q_{j}^{\prime}+j(j+a+b-1)Q_{j}=0$$ for each degree $j=0,1,\dots$. See Section 4.21 of [21], and [22].
These scaled Jacobi polynomials can be expressed with the hypergeometric functions $$Q_{j}(x):=P_{j}^{(a-1,b-1)}(1-2x)={(\alpha)_{j}\over j!}\cdot~{}_{2}F_{1}(-j,j% +a+b-1;a;x)~{},$$ where $(\alpha)_{j}:=\alpha(\alpha+1)\cdots(\alpha+j-1)$, and for $c\notin\mathbb{Z}_{-}$, ${}_{2}F_{1}(a,b;c;x):=\sum_{j=0}^{\infty}\frac{(a)_{j}(b)_{j}x^{j}}{(c)_{j}j!}$. Note that these $Q_{j}$’s satisfy equation (2.8). Moreover, the eigenvalues are $~{}\lambda_{j}=-j(j+a+b-1)$ and for $j>1$, $$P_{j}(Z)=E[Q_{j}(X)|Z]=-{Z\over\lambda_{j}}E[Q_{j}^{\prime}(X)|Z].$$ 2.3 The orthogonal polynomial basis results for discrete $X$ Here we show that the orthogonal polynomial basis results of the previous section go through when $X$ is discrete and satisfies the conditions in Theorem 1.1. Suppose for simplicity $X$ is one-dimensional with its conditional distribution given by $$P(X=x|Z=z):=p(x|z)=t(z)s(x,z_{1})[\mu(z)-m]^{x}$$ (2.9) for $$x\in a+\mathbb{Z}_{+}=\{a,a+1,a+2,\dots\},$$ where $\mu(Z)>m\;a.s.$, and a given $-\infty\leq a<\infty$. For a function $h$, define respectively the backwards and forwards difference operators as $$\displaystyle\nabla h(x)$$ $$\displaystyle:=$$ $$\displaystyle h(x)-h(x-1),$$ $$\displaystyle\Delta h(x)$$ $$\displaystyle:=$$ $$\displaystyle h(x+1)-h(x).$$ Let $Ah(x,z_{1}):=\frac{s(x-1,z_{1})}{s(x,z_{1})}\nabla h(x,z_{1})-\left[m+\frac{s(% x-1,z_{1})}{s(x,z_{1})}\right]h(x,z_{1})$, and let $s(a-1,z_{1})=0$ for almost every $Z=z$. Lemma 2.1. Suppose $g$ is such that $E[g(X,Z_{1})]<\infty$. Then $$E[Ag(X,Z_{1})|Z]=-\mu(Z)E[g(X,Z_{1})|Z]\qquad(Z-a.s.)$$ Proof. $$\displaystyle E[Ag(X,Z_{1})|Z]$$ $$\displaystyle=$$ $$\displaystyle\sum_{x\in a+\mathbb{Z}_{+}}\frac{s(x-1,Z_{1})}{s(x,Z_{1})}[g(x,Z% _{1})-g(x-1,Z_{1})]t(z)s(x,Z_{1})[\mu(Z)-m]^{x}$$ $$\displaystyle-$$ $$\displaystyle\sum_{x\in a+\mathbb{Z}_{+}}\left[m+\frac{s(x-1,Z_{1})}{s(x,Z_{1}% )}\right]g(x,Z_{1})t(z)s(x,Z_{1})[\mu(Z)-m]^{x}$$ $$\displaystyle=$$ $$\displaystyle[m-\mu(Z)]\sum_{x\in a+\mathbb{Z}_{+}}g(x-1,Z_{1})t(z)s(x-1,Z_{1}% )[\mu(Z)-m]^{x-1}$$ $$\displaystyle-$$ $$\displaystyle m\sum_{x\in a+\mathbb{Z}_{+}}g(x,Z_{1})t(z)s(x,Z_{1})[\mu(Z)-m]^% {x}=-\mu(Z)E[g(X,Z_{1}|Z].$$ ∎ Note that the result holds when the support of $p(x|z)=P(x=x|Z=z)$ is $$a-\mathbb{Z}_{+}=\{\dots,a-2,a-1,a\}$$ with $-\infty<a<\infty,~{}$ $~{}Ah(x,z_{1}):=\frac{s(x+1,z_{1})}{s(x,z_{1})}\Delta h(x,z_{1})-\left[m+\frac% {s(x+1,z_{1})}{s(x,z_{1})}\right]h(x,z_{1}),~{}$ and $s(a+1,z_{1})=0$ for almost every $Z=z$. From the above lemma we see that equation (2.1) holds, and iterating on that equation yields $$E[A^{k}g(X)|Z]=(-\mu(Z))^{k}E[g(X)|Z].$$ (2.10) The corresponding Stein-Markov operator ${\cal A}$ is defined as ${\cal A}h=A\Delta h$. The eigenfunctions of ${\cal A}$ are orthogonal polynomials $Q_{j}$ such that $${\cal A}Q_{j}(x,z_{1})=\lambda_{j}Q_{j}(x,z_{1}).$$ See [21], [18]. Then by (2.1) and (2.10) we have $$\lambda_{j}E[Q_{j}(X,Z_{1})|Z]=E[A\Delta Q_{j}(X)|Z]=-\mu(Z)E[\Delta Q_{j}(X,Z% _{1})|Z],$$ so that $$E[Q_{j}(X,Z_{1})|Z]={-\mu(Z)\over\lambda_{j}}E[\Delta Q_{j}(X,Z_{1})|Z]$$ for $j>1$. Thus, we know recursively that $P_{j}(Z):=E[Q_{j}(X,Z_{1})|Z]$ is a $j$-th degree polynomial in $\mu(Z)$, as in Theorem 2.1 of the preceding subsection. We now present the following specific examples. 1. 
Charlier polynomials: Suppose there is no $Z_{1}$, and $X|Z$ has a Poisson distribution with density $p(x|z)={e^{-(\tilde{m}_{0}+z)}[\tilde{m}_{0}+z]^{x}\over x!}=e^{-z}{e^{-\tilde% {m}_{0}}\tilde{m}_{0}^{x}\over x!}\left[1+{z\over\tilde{m}_{0}}\right]^{x}$, for $x\in\mathbb{N}$, so that $t(z)=e^{-z}$, $s(x)={e^{-\tilde{m}_{0}}\tilde{m}_{0}^{x}\over x!}$, $m_{0}=1$, and $\mu(z)={z\over\tilde{m}_{o}}$. Then $Ah(x)=h(x)-{x\over\tilde{m}_{0}}h(x-1)$ is the Stein operator. The eigenfunctions of the Stein-Markov operator are the Charlier polynomials $Q_{j}(x)=C_{j}(x;\tilde{m}_{0})(x)=\sum_{r=0}^{j}\binom{j}{r}(-1)^{j-r}\tilde{% m}_{0}^{-r}x(x-1)\dots(x-r+1)$ which are orthogonal w.r.t. Poisson-Charlier weight measure $\rho(x):={e^{-\tilde{m}_{0}}\tilde{m}_{0}^{x}\over x!}\sum_{k=0}^{\infty}% \delta_{k}(x)$, where $\delta_{k}(x)$ equals 1 if $k=x$, and 0 otherwise. See [18]. Finally, $P_{j}(Z)=E[Q_{j}(X)|Z]=\sum_{r=0}^{j}\sum_{x=r}^{\infty}e^{-(\tilde{m}_{0}+Z)}% {(\tilde{m}_{0}+Z)^{x}\over(x-r)!}\binom{j}{r}(-1)^{j-r}\tilde{m}_{0}^{-r}={Z^% {j}\over\tilde{m}_{0}^{j}}.$ 2. Meixner polynomials: Suppose there is no $Z_{1}$, and for $x\in\mathbb{N}$ and $\alpha$ an integer greater than or equal to 1, $p(x|z)=\binom{x+\alpha-1}{x}p^{\alpha}[1-p+\mu(z)]^{x}t(z)$, where $t(z)=\left[\sum_{x=0}^{\infty}{\Gamma(x+\alpha)\over x!\Gamma(\alpha)}p^{% \alpha}[1-p+\mu(z)]^{x}\right]^{-1}$. The above lemma applies with $s(x)=\binom{x+\alpha-1}{x}p^{\alpha}$, $m_{0}=1-p$. Then $Ah(x)=(1-p)h(x)-{x\over x+\alpha}h(x-1)$ is the Stein operator. The eigenfunctions of the Stein-Markov operator are the Meixner polynomials $Q_{j}(x)=M_{j}(x;\alpha,p)(x)=\sum_{k=0}^{j}(-1)^{k}\binom{j}{k}\binom{x}{k}k!% (x-\alpha)_{j-k}p^{-k}$, where $(a)_{j}:=a(a+1)\ldots(a+j-1)$. which are orthogonal w.r.t. weight measure $\rho(x):=s(x)\sum_{k=0}^{\infty}\delta_{k}(x)$. 2.4 Extension to Pearson-like and Ord-like Families Suppose there is no $Z_{1}$, i.e. $Z=Z_{2}$. Suppose $\phi(x)$ is a polynomial of degree at most two and $\psi(x)$ is a decreasing linear function on an interval $(a,b)$. Also $\phi(x)>0$ for $a<x<b$, $\phi(a)=0$ if $a$ is finite, and $\phi(b)=0$ if $b$ is finite. If $\xi$ is a random variable with either Lebesgue density or density with respect to counting measure $f(x)$ on $(a,b)$ that satisfies $$D[\phi(x)f(x)]=\psi(x)f(x),$$ (2.11) where $D$ denotes derivative when $\xi$ is continuous, and the forward difference operator $\Delta$ when $\xi$ is discrete. Then the above relation (2.11) describes the Pearson family when $\xi$ is continuous and Ord family, when $\xi$ is discrete. Many continuous distributions fall into the Pearson family, and many discrete ones fall into Ord’s family. See [18] and the references therein. Suppose $\xi$ is a random variable in either Pearson or Ord family. Following [18], define its Stein operator as $$AQ(x)=\phi(x)D^{*}Q(x)+\psi(x)Q(x)$$ for all $Q$ such that $~{}E[Q(\xi)]<\infty~{}$ and $~{}E[D^{*}Q(\xi)]<\infty$, where $D^{*}$ denotes the derivative when $\xi$ is continuous and the backwards difference operator $\nabla$ when $\xi$ is discrete. Then $E[AQ(\xi)]=0$. Let the corresponding Stein-Markov operator, $\cal{A}$, be defined as $\mathcal{A}Q:=ADQ$. Now, consider a Stein operator $AQ(x)=\phi(x)D^{*}Q(x)+\psi(x)Q(x)$ together with the corresponding Stein-Markov operator $\cal{A}$ for some random variable in either Pearson or Ord family. Let $Q_{j}$ be the orthogonal polynomial eigenfunctions of $\cal{A}$. 
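As a quick numerical sanity check of the Charlier example (again a sketch of ours, not the authors'; the parameter value $\tilde{m}_{0}$, the instrument value $z$, and the Monte Carlo design are illustrative assumptions), one can draw $X|Z=z\sim\mathrm{Poisson}(\tilde{m}_{0}+z)$ and compare the sample mean of $C_{j}(X;\tilde{m}_{0})$ with the stated projection $(z/\tilde{m}_{0})^{j}$.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

def charlier(j, x, m0):
    # C_j(x; m0) = sum_r binom(j, r) (-1)^(j-r) m0^(-r) x(x-1)...(x-r+1)
    val = np.zeros_like(x, dtype=float)
    for r in range(j + 1):
        falling = np.ones_like(x, dtype=float)
        for i in range(r):
            falling *= (x - i)
        val += comb(j, r) * (-1) ** (j - r) * m0 ** (-r) * falling
    return val

m0, z = 2.0, 1.5                     # illustrative values (assumptions)
x = rng.poisson(m0 + z, 2_000_000)   # draws from X | Z = z ~ Poisson(m0 + z)

for j in range(1, 5):
    mc = charlier(j, x, m0).mean()
    exact = (z / m0) ** j            # stated projection P_j(z) = (z / m0)^j
    print(f"j={j}:  Monte Carlo {mc:+.4f}   exact {exact:+.4f}")
```

The agreement follows from the Poisson factorial moments $E[(X)_{r}|Z=z]=(\tilde{m}_{0}+z)^{r}$, which collapse the double sum in the closed-form expression for $P_{j}(Z)$ above.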
Consider random variables $X$ and $Z$, where the conditional distribution of $X$ given $Z$ is such that the Stein operator of $X$ given $Z$ equals $$A_{\mu}Q=\phi D^{*}Q+(\psi+c\mu(Z))Q,$$ where $c$ is a constant. Then $E[A_{\mu}Q(X)|Z]=0$. Now, since the $Q_{j}$ are eigenfunctions of $\cal{A}$, $$\lambda_{j}E[Q_{j}(X)|Z]=E[\mathcal{A}Q_{j}(X)|Z]=E[ADQ_{j}(X)|Z]=E[(A-A_{\mu})DQ_{j}(X)|Z]=-c\mu(Z)E[DQ_{j}(X)|Z].$$ Letting $P_{j}(Z):=E[Q_{j}(X)|Z]$ we see that the $P_{j}$ are $j$th-order polynomials in $\mu(Z)$, since $DQ_{j}(x)$ can be expressed as a linear combination of $Q_{0}(x),Q_{1}(x),\ldots,Q_{j-1}(x)$ in the above equation, which is analogous to (2.1). Thus our main result, Theorem 2.1, applies whenever the Stein operator of $X|Z$ can be expressed as $A_{\mu}Q=\phi D^{*}Q+(\psi+c\mu(Z))Q$. The question then arises for which, if any, conditional distributions of $X|Z$ the Stein operator is of this form. It should be pointed out that this approach extends to multidimensional discrete $X|Z$ and to other types of distributions with well-defined Stein operators. We now give some examples of such discrete distributions.

Examples:

1. Binomial distribution: It is known that $$AQ(x)=(1-p)x\nabla Q(x)+[pN-x]Q(x)$$ is the Stein operator for a binomial random variable with parameters $N$ and $p$. In this case, $\phi(x)=(1-p)x$ and $\psi(x)=pN-x$. See [18]. Suppose $X|Z\sim Bin(N+\mu(Z),p)$, with $\mu(Z)\in\mathbb{Z}_{+}$. Then $$A_{\mu}Q(x)=(1-p)x\nabla Q(x)+[pN+p\mu(Z)-x]Q(x).$$ Let $Q_{-1}(x):=0$, $Q_{0}(x):=1$, and, for $j\geq 1$, $Q_{j}(x)=K_{j}(x,N,p)=\sum_{l=0}^{j}(-1)^{j-l}\binom{N-x}{j-l}\binom{x}{l}p^{j-l}(1-p)^{l}$, the Krawtchouk polynomials, which are orthogonal with respect to the binomial $Bin(N,p)$ distribution.

2. Pascal / Negative binomial distribution: It is known that $$AQ(x)=x\nabla Q(x)+[(1-p)\alpha-px]Q(x)$$ is the Stein operator for a negative binomial random variable with parameters $\alpha$ and $p$. In this case, $\phi(x)=x$ and $\psi(x)=(1-p)\alpha-px$. See [18]. Suppose $$P(X=x|Z=z)=p(x|z)=\binom{x+\alpha+\mu(z)-1}{x}p^{\alpha+\mu(z)}(1-p)^{x},$$ for $x\in\mathbb{N}$. Then $$A_{\mu}Q(x)=x\nabla Q(x)+[(1-p)\alpha+(1-p)\mu(Z)-px]Q(x).$$ In this case, $Q_{j}=M_{j}(x;\alpha,p)$, where $M_{j}(x;\alpha,p)$ denote the Meixner polynomials defined in the previous section, which are orthogonal with respect to the Pascal distribution with parameter vector $(\alpha,p)$.

3 Conclusion

In this paper we introduced an identification problem for nonparametric and semiparametric models in the case when the conditional distribution of $X$ given $Z$ belongs to the family of generalized power series distributions (a term we borrow from [13]). Using an approach based on differential equations, specifically Sturm-Liouville theory, we solved the orthogonal polynomial basis problem for the conditional expectation transformation $E[g(X)|Z]$. Finally, we discussed how our polynomial basis results can be extended to the case when the conditional distribution of $X|Z$ belongs to either the modified Pearson or the modified Ord family. In deriving our results we encountered a second-order differential (or difference, in the case of discrete $X$) equation with boundary values, that is, a Sturm-Liouville type equation. In this paper we focused on cases in which the solutions to the Sturm-Liouville problem, which are the eigenfunctions of the operator $\mathcal{A}$, form an orthogonal polynomial basis. Our approach is more general than this.
In particular, one might question for what conditional distributions the eigenfunctions of the Stein-Markov operator $\cal{A}$ are orthogonal basis functions, but not necessarily orthogonal polynomials. Our paper does not address this question. Addressing this question is left for future research. Finally, the work of applying the orthogonal polynomial basis approach for estimating structural functions is nearing completion. References [1] Barbour, A. D. and Chen, L.H.Y. (2005): An introduction to Stein’s method, Singapore University Press [2] Barbour, A. D. (1990): Stein’s method for diffusion approximations, Probability Theory and Related Fields 84 Vol. 3, 297-322. [3] Blundell, R., X. Chen, and D. Kristensen (2007): Semi-Nonparametric IV Estimation of Shape-Invariant Engel Curves, Econometrica, 75, 1613-1669. [4] Chen, L.H.Y., Goldstein, L., and Shao, Q.M (2011): Normal approximation by Stein’s method, Springer [5] Chen X. and D. Pouzo (2012): Estimation of Nonparametric Conditional Moment Models With Possibly Nonsmooth Generalized Residuals, Econometrica, 80, 277-321. [6] Chen X. and M. Reiss (2011): On Rate Optimality for Ill-posed Inverse Problems in Econometrics, Econometric Theory, 27, 497-521. [7] Chernozhukov, V., P. Gagliardini and O. Scaillet (2008): Nonparametric Instrumental Variable Estimation of Quantile Structural Effects, Working Paper, HEC University of Geneva and Swiss Finance Institute. [8] Darolles, S., J. P. Florens, and E. Renault (2006): Nonparametric Instrumental Regression, Econometrica, 79, 1541-1565. [9] Hall, P. and J.L. Horowitz (2005): Nonparametric methods for inference in the presence of instrumental variables, Annals of Statistics 33, 2904-2929. [10] Hoderlein, S. and H. Holzmann (2011): Demand analysis as an ill-posed problem with semiparametric specification, Econometric Theory, 27, 460-471. [11] Hörmander, L. (1973): An Introduction to Complex Analysis in Several Variables (second ed.), North-Holland Mathematical Library, Vol. 7. [12] Horowitz, J. L. and S. Lee (2012): Uniform Confidence Bands for Functions Estimated Nonparametrically with Instrumental Variables, Journal of Econometrics, 168, 175-188. [13] Johnson, N. L., S. Kotz and A. W. Kemp (1992): Univariate Discrete Distributions(second ed.), Wiley Series in Probability and Statistics [14] Kovchegov, Y. V. and N. Yıldız (2011): Identification via completeness for discrete covariates and orthogonal polynomials, Oregon State University Technical Report. [15] Lehmann, E. L. (1959): Testing Statistical Hypotheses, Wiley, New York [16] Lehmann, E. L., S. Fienberg (Contributor) and G. Casella (1998): Theory of Point Estimation, Springer Texts in Statistics [17] Newey, W. K. and J. L. Powell (2003): Instrumental Variable Estimation of Nonparametric Models, Econometrica, 71, 1565-1578. [18] Schoutens, W. (2000): Stochastic Processes and Orthogonal Polynomials, Lecture notes in statistics (Springer-Verlag), Vol. 146. [19] Severini, T.A. and G. Tripathi (2006): Some identification issues in nonparametric linear models with endogenous regressors, Econometric Theory 22, 258-278. [20] Stein, C. (1986): Approximate computation of expectations, Institute of Mathematical Statistics Lecture Notes, Monograph Series [21] Szegö, G. (1975): Orthogonal Polynomials (fourth ed.), AMS Colloquium Publications, Vol. 23. [22] Whittaker, E. T. and G. N. Watson (1935): A Course of Modern Analysis (fourth ed.), Cambridge Mathematical Library.
Bijections for Baxter Families and Related Objects

Stefan Felsner (partially supported by DFG grant FE-340/7-1), Institut für Mathematik, Technische Universität Berlin. felsner@math.tu-berlin.de    Éric Fusy, Laboratoire d'Informatique (LIX), École Polytechnique. fusy@lix.polytechnique.fr    Marc Noy, Departament de Matemàtica Aplicada II, Universitat Politècnica de Catalunya. marc.noy@upc.edu    David Orden (research partially supported by grants MTM2005-08618-C02-02 and S-0505/DPI/0235-02), Departamento de Matemáticas, Universidad de Alcalá. david.orden@uah.es

Abstract

The Baxter number $B_{n}$ can be written as $B_{n}=\sum_{k=0}^{n}\Theta_{k,n-k-1}$ with $$\Theta_{k,\ell}=\frac{2}{(k+1)^{2}\,(k+2)}{k+\ell\choose k}{k+\ell+1\choose k}{k+\ell+2\choose k}.$$ These numbers first appeared in the enumeration of so-called Baxter permutations; $B_{n}$ is the number of Baxter permutations of size $n$, and $\Theta_{k,\ell}$ is the number of Baxter permutations with $k$ descents and $\ell$ rises. With a series of bijections we identify several families of combinatorial objects counted by the numbers $\Theta_{k,\ell}$. Apart from Baxter permutations, these include plane bipolar orientations with $k+2$ vertices and $\ell+2$ faces, 2-orientations of planar quadrangulations with $k+2$ white and $\ell+2$ black vertices, certain pairs of binary trees with $k+1$ left and $\ell+1$ right leaves, and a family of triples of non-intersecting lattice paths. This last family allows us to determine the value of $\Theta_{k,\ell}$ as an application of the lemma of Gessel and Viennot. The approach also allows us to count certain other subfamilies, e.g., alternating Baxter permutations, objects with symmetries and, via a bijection with a class of plane bipolar orientations, also Schnyder woods of triangulations, which are known to be in bijection with 3-orientations.

Mathematics Subject Classifications (2000). 05A15, 05A16, 05C10, 05C78

1 Introduction

This paper deals with combinatorial families enumerated by either the Baxter numbers or the summands $\Theta_{k,\ell}$ of the usual expression of the Baxter numbers. Many of the enumeration results have been known, even with bijective proofs. Our contribution to these cases lies in the integration into a larger context and in simplified bijections. We use specializations of the general bijections to count certain subfamilies, e.g., alternating Baxter permutations, objects with symmetries, and Schnyder woods, i.e., 3-orientations of triangulations. This introduction does not include definitions of the objects we deal with, nor bibliographic citations; these are gathered in notes throughout the article. Therefore, we restrict it to a kind of commented table of contents.

2  Separating Decompositions and Book Embeddings.  Separating decompositions of plane quadrangulations are defined. It is shown that separating decompositions are in bijection with 2-orientations. Separating decompositions induce book embeddings of the underlying quadrangulation on 2 pages. These special book embeddings decompose into twin pairs of alternating trees, i.e., pairs of alternating trees with reverse reduced fingerprints. Actually, there is a bijection between twin pairs of alternating trees and separating decompositions.

3  Alternating Trees and other Catalan Families.  A bijection between alternating trees and full binary trees with the same fingerprint is obtained.
Fingerprint and bodyprint yield a bijection between full binary trees with $k$ left and $\ell$ right leaves and certain pairs of non-intersecting lattice paths. The lemma of Gessel and Viennot allows us to identify their number as the Narayana number $N(k+\ell-1,k)$.

4  Twin Pairs of Trees and the Baxter Numbers.  Twin pairs of alternating trees are in bijection with twin pairs of binary trees; these in turn are shown to be in bijection with certain rectangulations and with triples of non-intersecting lattice paths. Via the lemma of Gessel and Viennot this implies that there are $$\Theta_{k,\ell}=\frac{2}{(k+1)^{2}\,(k+2)}{k+\ell\choose k}{k+\ell+1\choose k}{k+\ell+2\choose k}$$ twin pairs of binary trees with $k+1$ left and $\ell+1$ right leaves. The bijections of the previous sections yield a list of families enumerated by the number $\Theta_{k,\ell}$.

5  More Baxter Families.  We prove bijectively that $\Theta_{k,\ell}$ counts Baxter permutations with $k$ descents and $\ell$ rises. The bijections involve the Min- and Max-tree of a permutation and the rectangulations from the previous section. Some remarks on the enumeration of alternating Baxter permutations are added.

5.2  Plane Bipolar Orientations.  We explain a bijection between separating decompositions and bipolar orientations. The idea is to interpret the quadrangulation supporting the separating decomposition as an angular map.

5.3  Digression: Duality, Completion Graph, and Hamiltonicity.  Combining ideas involving the angular map and the existence of a 2-book embedding for quadrangulations, we derive a Hamiltonicity result.

6  Symmetries.  The bijections between families counted by $\Theta_{k,\ell}$ have the nice property that they commute with a half-turn rotation. This is exploited to count symmetric structures.

7  Schnyder Families.  Schnyder woods and 3-orientations of triangulations are known to be in bijection. We add a bijection between Schnyder woods and bipolar orientations with a special property. Tracing this special property through the bijections, we are able to find the number of Schnyder woods on $n$ vertices via Gessel and Viennot. This reproves a formula first obtained by Bonichon.

2 Separating Decompositions and Book Embeddings

In the context of this paper a quadrangulation is a plane graph $Q=(V\cup\{s,t\},E)$ with only quadrangular faces. More precisely, $Q$ is a maximal bipartite plane graph with $n+2$ vertices, prescribed color classes black and white, and two distinguished black vertices $s$ and $t$ on the outer face. Note that $Q$ has $n$ faces and $2n$ edges.

Definition 2.1. An orientation of the edges of $Q$ is a 2-orientation if every vertex, except the two special vertices $s$ and $t$, has outdegree two.

An easy double-counting argument on the edges of $Q$ shows that $s$ and $t$ are sinks in every 2-orientation.

Definition 2.2. An orientation and coloring of the edges of $Q$ with colors red and blue is a separating decomposition if:

(1) All edges incident to $s$ are red and all edges incident to $t$ are blue.

(2) Every vertex $v\neq s,t$ is incident to an interval of red edges and an interval of blue edges. If $v$ is white, then, in clockwise order, the first edge in the interval of a color is outgoing and all the other edges of the interval are incoming. If $v$ is black, the outgoing edge is the clockwise last in its color. (cf.
Figure LABEL:fig:vertex-2-cond) \PsFigCap 28vertex-2-condEdge orientations and colors at white and black vertices. Theorem 2.1. Let $Q=(V\cup\{s,t\},E)$ be a plane quadrangulation. Separating decompositions and 2-orientations of $Q$ are in bijection. Proof. A separating decomposition clearly yields a 2-orientation, just forget the coloring. For the converse, let $(v,w)$ be an oriented edge, and define the left-right path††margin: left-right path of the edge as the directed path starting with $(v,w)$ and taking a left-turn in black vertices and a right-turn in white vertices. Claim A.  Every left-right path ends in one of the special vertices. Proof of the claim. Suppose a left-right path closes a cycle $C$. The length of $C$ is an even number $2k$. Let $r$ be the number of vertices interior to $C$. Consider the inner quadrangulation of $C$. By Euler’s formula it has $2r+3k-2$ edges. However, when we sum up the outdegrees of the vertices we find that $k$ vertices on $C$ contribute 1 while all other vertices contribute 2 which gives a total of $2r+3k$, contradiction. $\triangle$ Now, let the special vertex where a left-right path ends determine the color of all the edges along the path. Claim B.  The two left-right paths starting at a vertex do not meet again. Proof of the claim. Suppose that the two paths emanating from $v$ meet again at $w$. The two paths form a cycle $C$ of even length $2k$ with $r$ inner vertices. By Euler’s formula the inner quadrangulation of $C$ has $2r+3k-2$ edges. From the left-right rule we know that one neighbor of $v$ on $C$ has an edge pointing into the interior of $C$, from which it follows that there are at least $k-1$ edges pointing from $C$ into its interior. Hence, there are at least $2r+3k-1$ edges, contradiction. $\triangle$ Consequently, the two outgoing edges of a vertex $v$ receive different colors. It follows that the orientation and coloring of edges is a separating decomposition. From the proof we obtain an additional property of a separating decomposition: (3)The red edges form a directed tree rooted in $s$ and the blue edges form a tree rooted in $t$. Note. De Fraysseix and Ossona de Mendez [14] defined a separating decomposition via properties (1), (2) and (3), i.e., they included the tree-property into the definition. They also proved Claim A and B and concluded Theorem 2.1. In [14] it is also shown that every quadrangulation admits a 2-orientation. An embedding of a plane graph is called a 2-book embedding if the vertices are arranged on a single line so that all edges are either below or above the line. As we show next, a separating decomposition $S$ easily yields a 2-book embedding of the underlying quadrangulation $Q$. Observe that each inner face of $Q$ has exactly two bicolored angles, i.e., angles where edges of different color meet; this follows from the rules given in Definition 2.2. Define the equatorial line††margin: equatorial line of $S$ as the union of all diagonals connecting bicolored angles of an inner face. The definition of a separating decomposition implies that each inner vertex of $Q$ has degree two in the equatorial line, while $s$ and $t$ have degree zero and the other two outer vertices have degree one. This implies that the equatorial line consists of an edge-disjoint union of a path and possibly a collection of cycles. Lemma 2.1. 
Given a quadrangulation $Q$ endowed with a separating decomposition $S$, the equatorial line of $S$ consists of a single path that traverses every inner vertex and every inner face of $Q$ exactly once. Proof. Assume that the equatorial line has a cycle $C$. Consider a plane drawing of $Q\cup C$. The cycle $C$ splits the drawing into an inner and an outer part, both special vertices $s$ and $t$ being in the outer part. The red edges of all vertices of $C$ emanate to one side of $C$ while the blue edges go to the other side. Therefore, it is impossible to have a monochromatic path from a vertex $v\in C$ to both special vertices. With property (3) of separating decompositions, it thus follows that there are no cycles, i.e., the equatorial line is a single path. The equatorial line $L$ has the two outer non-special vertices of $Q$ as extremities. It has degree 2 for each inner vertex, hence it traverses each inner vertex once. In addition, $L$ separates the blue and the red edges, hence it must pass through the interior of each inner face at least once. Since the $n$ non-special vertices of $Q$ delimit $n-1$ intervals on $L$ and since $Q$ has $n-1$ inner faces, $L$ can only pass through each inner face exactly once. To produce a 2-book embedding, extend the equatorial line in the outer face so that it visits also the two special vertices; then stretch the equatorial line as a straight horizontal line that has $s$ as its leftmost vertex and $t$ as its rightmost vertex, see Figure LABEL:fig:sep2bookWithEqLine. Observe that one page gathers the blue edges, the other page gathers the red edges, and the spine for the two pages is the equatorial line. The equatorial line will be useful again for proving a Hamiltonicity result in Section 5.2. \PsFigCap 22sep2bookWithEqLineA quadrangulation $Q$ with a separating decomposition $S$, and the 2-book embedding induced by the equatorial line of $S$. Definition 2.3. An alternating layout††margin: alternating layout of a tree $T$ with $n+1$ vertices is a non-crossing drawing of $T$ such that its vertices are mapped to different points on the non-negative $x$-axis and all edges are embedded in the halfplane above the $x$-axis (or all below). Moreover, for every vertex $v$ it holds that all its neighbors are on one side, either they are all left of $v$ or all right of $v$. In these cases we call the vertex $v$ respectively a right††margin: right or a left vertex††margin: left vertex of the alternating layout. A tree with an alternating layout is an alternating tree. ††margin: alternating tree As one can check in the example of Figure LABEL:fig:sep2bookWithEqLine, the red and the blue trees are both alternating. Indeed, by the rules given in Definition 2.2, white vertices are right in the red tree and left in the blue tree while black vertices are left in red and right in blue. We summarize: Proposition 2.1. The 2-book embedding induced by a separating decomposition yields simultaneous alternating layouts of the two trees such that each white vertex is left in the blue tree and right in the red tree, while each black vertex is right in the blue tree and left in the red tree. Note. A proof of Proposition 2.1 was given by Felsner, Huemer, Kappes and Orden [20]. These authors study what they call strong binary labelings of the angles of a quadrangulation. They show that these labelings are in bijection with 2-orientations and separating decompositions. 
In this context they find the 2-book embedding; their method consists in ranking each vertex $v$ on the spine of the 2-book embedding according to the number of faces in a specific region $R(v)$. The original source for a 2-book embedding of a quadrangulation is [15], by de Fraysseix, Ossona de Mendez and Pach. General planar graphs may require as many as 4 pages for a book embedding, Yannakakis [43]. Alternating trees in our sense were studied by Rote, Streinu and Santos [36] as non-crossing alternating trees. There it is pointed out that the name alternating tree is sometimes used to denote a tree with a numbering such that every vertex is a local extremum, e.g., [40, Exercise 5.41]. Non-crossing alternating trees were studied by Gelfand et al. [26] under the name of standard trees; there it is shown that these trees are a Catalan family. In [36] connections with rigidity theory and the geometry of the associahedron are established. Consider alternating layouts of rooted ordered trees with the property that the root is extreme, i.e., the root is either the leftmost or the rightmost of the vertices. In this setting an alternating layout is completely determined by the placement of the root (left/right) and the choice of a halfplane for the edges (above/below). We denote the four choices with symbols, e.g., $\swarrow$ denotes that the root is left and the halfplane below; the symbols $\nwarrow$, $\nearrow$ and $\searrow$ represent the other three possibilities. The unique $\swarrow$-alternating layout of $T$, is obtained by starting at the root and walking clockwise around $T$, thereby numbering the vertices with consecutive integers according to the following rules: The root is numbered $0$ and all vertices in the color class of the root receive a number at the first visit while the vertices in the other color class receive a number at the last visit. Figure LABEL:fig:alt-tree2 shows an example. Rules for the other types of alternating layouts are: $\nwarrow$-layout: walk counterclockwise, root class at first visit, other at last visit. $\nearrow$-layout: walk counterclockwise, root class at last visit, other at first visit. $\searrow$-layout: walk clockwise, root class at last visit, other at first visit. \PsFigCap 22alt-tree2A tree, the numbering and the $\swarrow$-alternating layout. The $\swarrow$-fingerprint††margin: $\swarrow$-fingerprint , denoted $\alpha_{\scriptscriptstyle\swarrow}(T)$, of a rooted ordered tree $T$, is a $0,1$ string which has a $1$ at position $i$, i.e., $\alpha_{i}=1$, if the $i$th vertex in the $\swarrow$-alternating layout of $T$ is a left vertex, otherwise, if the vertex is a right vertex $\alpha_{i}=0$. The $\swarrow$-fingerprint of the tree $T$ from Figure LABEL:fig:alt-tree2 is $\alpha_{\scriptscriptstyle\swarrow}(T)=1010001010000110$. Other types of fingerprints are defined by the same rule. For example the $\nearrow$-fingerprint of the tree in Figure LABEL:fig:alt-tree2 is $\alpha_{\scriptscriptstyle\nearrow}(T)=1001111010111010$. With the numbering from Figure LABEL:fig:alt-tree2 this corresponds to the vertex order $0,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1$. The first vertex is always a left vertex and the last a right vertex, therefore, a fingerprint has always a 1 as first entry and a 0 as last entry. A reduced fingerprint††margin: reduced fingerprint $\hat{\alpha}_{\scriptscriptstyle\swarrow}(T)$, of a tree $T$ is obtained by omitting the first and the last entry from the corresponding fingerprint. 
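As a small self-contained check (our illustration; it uses only the two fingerprint strings quoted above for the example tree), the following snippet verifies that reversing and then complementing the $\nearrow$-fingerprint recovers the $\swarrow$-fingerprint, which is exactly the relation $\alpha_{\scriptscriptstyle\swarrow}(T)=\overline{\rho(\alpha_{\scriptscriptstyle\nearrow}(T))}$ established in Lemma 2.2 below.

```python
# Fingerprints of the example tree quoted in the text above.
alpha_sw = "1010001010000110"   # the SW (swarrow) fingerprint
alpha_ne = "1001111010111010"   # the NE (nearrow) fingerprint

def rho(s: str) -> str:
    """Reverse the 0,1 string."""
    return s[::-1]

def complement(s: str) -> str:
    """Complement the 0,1 string entrywise."""
    return "".join("1" if c == "0" else "0" for c in s)

# Relation of Lemma 2.2: alpha_sw(T) equals the complement of the reversal of alpha_ne(T).
assert alpha_sw == complement(rho(alpha_ne))
print("Lemma 2.2 relation holds for the example tree.")
```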
For a $0,1$ string $s$ we define $\rho(s)$ to be the reverse string and $\overline{s}$ to be the complemented string. Example: if $s=11010$, then $\rho(s)=01011$, $\overline{s}=00101$, and $\overline{\rho(s)}=\rho(\overline{s})=10100$. Lemma 2.2. For every tree $T$ we have $\alpha_{\scriptscriptstyle\swarrow}(T)=\overline{\rho(\alpha_{% \scriptscriptstyle\nearrow}(T))}$ (and $\alpha_{\scriptscriptstyle\nwarrow}(T)=\overline{\rho(\alpha_{% \scriptscriptstyle\searrow}(T))}$). Proof. Take the $\nearrow$-alternating layout of $T$ and rotate it by $180^{\circ}$. This results in the $\swarrow$-alternating layout. Observe what happens to the fingerprint. Definition 2.4. A pair $(S,T)$ of rooted, oriented trees whose fingerprints satisfy $\hat{\alpha}_{\scriptscriptstyle\swarrow}(S)=\overline{\hat{\alpha}_{% \scriptscriptstyle\nearrow}(T)}$, or equivalently $\hat{\alpha}_{\scriptscriptstyle\nearrow}(S)=\rho(\hat{\alpha}_{% \scriptscriptstyle\nearrow}(T))$, is called a twin-alternating pair of trees††margin: twin-alternating pair of trees . Theorem 2.2. There is a bijection between twin-alternating pairs of trees $(S,T)$ on $n$ vertices and 2-orientations of quadrangulations on $n+2$ vertices. Proof. (The bijection is illustrated with an example in Figure LABEL:fig:small-ex.) Augment both rooted ordered trees $S$ and $T$ by a new vertex which is made the rightmost child of the root. Let $S^{+}$ and $T^{+}$ be the augmented trees. Note that $\hat{\alpha}_{\scriptscriptstyle\swarrow}(S^{+})=0+\hat{\alpha}_{% \scriptscriptstyle\swarrow}(S)$ and $\hat{\alpha}_{\scriptscriptstyle\nearrow}(T^{+})=\hat{\alpha}_{% \scriptscriptstyle\nearrow}(T)+1$. Since the first entry of a non-reduced fingerprint is always 1 and the last one is always 0 it follows that $\alpha_{\scriptscriptstyle\swarrow}(S^{+})+0=\overline{1+\alpha_{% \scriptscriptstyle\nearrow}(T^{+})}$. Consider the $\swarrow$-alternating layout of $S^{+}$ and move the vertices in this layout to the integers $0,..,n$. Similarly, the $\nearrow$-alternating layout of $T^{+}$ is placed such that the vertices correspond to the integers $1,..,n+1$. At every integer $0<i<n+1$ a vertex of $S^{+}$ and a vertex of $T^{+}$ meet. We identify them. As a consequence of the complemented fitting of the fingerprints, every non-special vertex is a left vertex in one of the layouts and a right vertex in the other. This has strong consequences: $\bullet$A pair $uv$ can be an edge in at most one of $S$ and $T$, otherwise $u$ would have a neighbor on its right in both $S$ and $T$, a contradiction. $\bullet$There is no triangle with edges from $S\cup T$. Suppose $u,v,w$ would be such a triangle. Two edges must be from the same tree, say from $S$. These cannot be the two edges incident to the middle vertex $v$. If they are incident to $w$ the vertex $u$ has neighbors to its right in both trees, contradiction. Hence, the graph with edges $S^{+}\cup T^{+}$ is simple, triangle-free and non-crossing. Since it has $n+2$ vertices and $2n$ edges, it must be a quadrangulation. The $2$-orientation is obtained by orienting both trees towards the root. \PsFigCap 22small-exA twin-alternating pair of trees $(S,T)$. The $\nearrow$-alternating layout of $T^{+}$ and the $\swarrow$-alternating layout of $S^{+}$ properly adjusted. The induced 2-orientation of a quadrangulation. The converse direction, from the 2-orientation of a quadrangulation on $n+2$ vertices to trees $(S,T)$ with appropriate fingerprints was already indicated. 
To recapitulate: A 2-orientation of $Q$ yields a separating decomposition (Theorem 2.1). In particular the edges are decomposed into two trees, the red tree $S^{+}$ and the blue tree $T^{+}$. The corresponding 2-book embedding (Proposition 2.1) induces a simultaneous alternating layout of the two trees with the property that every non-special vertex is left in one of the trees and right in the other (Proposition 2.1), i.e., $\alpha_{\scriptscriptstyle\swarrow}(S^{+})+0=\overline{1+\alpha_{% \scriptscriptstyle\nearrow}(T^{+})}$. Trees $S$ and $T$ are obtained by deleting the left child of the root in $S^{+}$ and the right child of the root in $T^{+}$; they are both leaves and correspond to the two non-special outer vertices of $Q$. Trees $S$ and $T$ satisfy $\hat{\alpha}_{\scriptscriptstyle\swarrow}(S)=\overline{\hat{\alpha}_{% \scriptscriptstyle\nearrow}(T)}$, i.e., $(S,T)$ is a twin-alternating pair of trees. 3 Alternating Trees and other Catalan Families A full binary tree††margin: full binary tree is a rooted ordered tree such that each inner vertex has exactly two children. The fingerprint††margin: fingerprint of a full binary tree $T$ is a $0,1$ string which has a $1$ at position $i$ if the $i$th leaf of $T$ is a left child, otherwise, if the leaf is a right child the entry is $0$. In Figure LABEL:fig:alt-bin the tree $T$ on the right side has $\alpha(T)=1011101011110010$. The reduced fingerprint††margin: reduced fingerprint $\hat{\alpha}(T)$ is obtained by omitting the first and the last entry from $\alpha(T)$. Note that the first entry is always 1 and the last one is always 0. Proposition 3.1. There is a bijection $T\to T^{\lambda}$ which takes an alternating tree $T$ with $n$ vertices to a full binary tree $T^{\lambda}$ with $n$ leaves such that $\hat{\alpha}_{\scriptscriptstyle\nearrow}(T)=\hat{\alpha}(T^{\lambda})$. Proof. The bijection makes a correspondence between edges of the alternating tree and inner vertices of the full binary tree, see Figure LABEL:fig:alt-bin. Embed $T$ with vertices on the integers from $0$ to $n$. With an edge $i,j$ of $T$ associate an inner vertex $x_{ij}$ for $T^{\lambda}$ which is to be placed at $(\frac{i+j}{2},\frac{j-i}{2})$. Draw line segments from the vertex $(i,0)$ to $x_{ij}$ and from $(j,0)$ to $x_{ij}$. Doing this for every edge of $T$ results in a drawing of the binary tree $T^{\lambda}$. \PsFigCap22alt-binAn $\nearrow$-alternating tree $T$ and the full binary tree $T^{\lambda}$. The converse is even simpler. Every inner node $x$ of the binary tree gives rise to an edge connecting the leftmost leaf below $x$ to the rightmost leaf below $x$. Note. Full binary trees with $n+1$ leaves are counted by the Catalan number††margin: Catalan number $C_{n}=\frac{1}{n+1}\binom{2n}{n}$. Catalan numbers are found in The On-Line Encyclopedia of Integer Sequences [39] as sequence A000108. From Proposition 3.1 it follows that alternating trees with $n+1$ vertices are another Catalan family, which in [26] was proved by constructing a bijection inductively. Stanley [40, Exercise 6.19] collected 66 Catalan families. Although the subject is well-studied, we include a particular proof showing that full binary trees are a Catalan family. Actually, we prove a more refined count related to Narayana numbers. The proof will be used later in the context of Baxter numbers. To start with, we associate another $0,1$ string with a full binary tree $T$. The bodyprint††margin: bodyprint  $\beta(T)$ of $T$ is obtained from a visit to the inner vertices of $T$ in in-order. 
The $i$th entry of $\beta$ is a $1$, i.e., $\beta_{i}=1$, if the $i$th inner vertex is a right child or it is the root. If the vertex is a left child, then $\beta_{i}=0$. Note that if the tree $T$ is drawn such that all leaves are on a horizontal line, then there is a one-to-one correspondence between inner vertices and the gaps between adjacent leaves (Gap between leaves $v_{i}$ and $v_{i+1}$ $\mapsto$ least common ancestor of $v_{i}$ and $v_{i+1}$. Inner vertex $x$ $\mapsto$ gap between rightmost leaf in left subtree below $x$ and leftmost leaf in right subtree below $x$). This correspondence maps the left-to-right order of gaps between leaves to the in-order of inner vertices. Since the root contributes a $1$, the last entry of the bodyprint of a tree is always $1$. Therefore, it makes sense to define the reduced bodyprint††margin: reduced bodyprint $\hat{\beta}(T)$ as $\beta(T)$ minus the last entry. Figure LABEL:fig:enc-trees shows an example. \PsFigCap82enc-treesA full binary tree with reduced bodyprint $\hat{\beta}$ and reduced fingerprint $\hat{\alpha}$. Lemma 3.1. Let $T$ be a full binary tree with $k$ left leaves and $n-k+1$ right leaves. The reduced fingerprint $\hat{\alpha}(T)$ and the reduced bodyprint $\hat{\beta}(T)$ both have length $n-1$. Moreover: (1) $\displaystyle\sum_{i=1}^{n-1}\hat{\alpha}_{i}=\sum_{i=1}^{n-1}\hat{\beta}_{i}=% k-1$             (2) $\displaystyle\sum_{i=1}^{j}\hat{\alpha}_{i}\geq\sum_{i=1}^{j}\hat{\beta}_{i}$ for all $j=1,\ldots,n-1$. Proof. Consider a drawing of $T$ where every edge has slope $1$ or $-1$, as in Figure LABEL:fig:enc-trees. The maximal segments of slope $1$ in this drawing define a matching $M$ between the $k$ left leaves, i.e., 1-entries of $\alpha(T)$, and inner vertices which are right children, including the root, i.e., 1-entries of $\beta(T)$. The left part of Figure LABEL:fig:enc-trees-the-1s indicates the correspondence. The reduction $\hat{\alpha}$ (resp. $\hat{\beta}$) has exactly one 1-entry less than $\alpha$ (resp. $\beta$). This proves (1). \PsFigCap 72enc-trees-the-1sIllustrations for the proof of Lemma 3.1. The pair $(v_{i},x_{j})$ is in $M$. For (2) let $v_{0},v_{1},\ldots,v_{n}$ be the set of leaves in left-to-right order and let $x_{1},\ldots,x_{n}$ be the in-order of inner vertices. Note that $v_{i}$ determines $\alpha_{i}$ and $x_{i}$ determines $\beta_{i}$. Let $(v_{i},x_{j})$ be a pair from the matching $M$ defined above, i.e., $\alpha_{i}=1$ and $\beta_{j}=1$. Since $v_{i}$ is the leftmost leaf below $x_{j}$ and the gap corresponding to $x_{j}$ starts at the rightmost vertex $v_{j-1}$ of the left subtree of $x_{j}$, we find that $i\leq j-1$. This gives a matching between the 1-entries of $\alpha$ and the 1-entries of $\beta$ with the property that the index of the 1-entry of $\alpha$ is always less than the index of the mate in $\beta$. To conclude the inequality for the reduced strings we have to address another detail: The mate of the root in $M$ is the leaf $v_{0}$, which is not represented in $\hat{\alpha}$, and there is a leaf whose mate in $M$ is the last inner vertex $x_{n}$, which is not represented in $\hat{\beta}$. Consider the ordered sequence $x_{j_{0}},x_{j_{1}},\ldots,x_{j_{s}}$ of all vertices on the rightmost branch of $T$, such that $x_{j_{0}}$ is the root $r$ and $x_{j_{s}}=x_{n}$. 
The right part of Figure LABEL:fig:enc-trees-the-1s may help to see that in $M$ we have the following pairs $(v_{0},x_{j_{0}}),(v_{j_{0}},x_{j_{1}}),\ldots(v_{j_{s-1}},x_{n})$; in particular $\alpha_{0}=\alpha_{j_{0}}=\ldots=\alpha_{j_{s-1}}=1$ and $\beta_{j_{0}}=\beta_{j_{1}}=\ldots=\beta_{n}=1$. Hence we can define a matching $M^{\prime}$ which is as $M$ except that $v_{0}$ and $x_{n}$ remain unmatched and the pairs $(v_{j_{i}},x_{j_{i}})$ with $0\leq i\leq s-1$ are matched. This matching $M^{\prime}$ between the 1-entries of $\hat{\alpha}$ and the 1-entries of $\hat{\beta}$ has the property that the index of the 1-entry of $\hat{\alpha}$ is always at most the index of the mate in $\hat{\beta}$. This proves (2). Definition 3.1. With $\sigma\in\binom{k+\ell}{k}$ we denote that $\sigma$ is a $0,1$ string of length $n=k+\ell$ with $k$ entries $1$ and $\ell$ entries $0$, i.e., $\sum_{i=1}^{n}\sigma_{i}=k$. For $\sigma,\tau\in\binom{k+\ell}{k}$ we define $\tau\geq_{{\sf dom}}\sigma$, i.e., $\tau$ dominates††margin: dominates $\sigma$, if $\sum_{i=1}^{j}\tau_{i}\geq\sum_{i=1}^{j}\sigma_{i}$ for all $j=1,\ldots,n$. Theorem 3.1. The mapping $T\leftrightarrow(\hat{\beta},\hat{\alpha})$ is a bijection between full binary trees with $k+1$ left leaves and $\ell+1$ right leaves and pairs $(\hat{\beta},\hat{\alpha})$ of $0,1$ strings in $\binom{k+\ell}{k}$ with $\hat{\alpha}\geq_{{\sf dom}}\hat{\beta}$. Proof. From Lemma 3.1 we know that reduced body- and fingerprint have the required properties. To show that the mapping $T\leftrightarrow(\hat{\beta},\hat{\alpha})$ is a bijection we use induction. First note that $\hat{\alpha}=0^{\ell}1^{k}$ implies $\hat{\beta}=\hat{\alpha}$, and that there are unique trees with these reduced finger- and bodyprints. If $\hat{\alpha}$ has a different structure, then there is an $i$ such that $\hat{\alpha}_{i-1}\hat{\alpha}_{i}=10$. In a hypothetical tree $T$ corresponding to $(\hat{\beta},\hat{\alpha})$, the pair $v_{i-1},v_{i}$ is a left leaf followed by a right leaf. The leaves $v_{i-1}$ and $v_{i}$ are children of the inner vertex $x_{i}$, the value $\hat{\beta}_{i}=1$ or $\hat{\beta}_{i}=0$ depends on whether $x_{i}$ is itself a left or a right child. Consider the tree $T^{*}$ obtained by pruning the two leaves $v_{i-1}$ and $v_{i}$. Note that if $\hat{\alpha}(T)=\hat{\alpha}^{\prime}\>\hat{\alpha}_{i-1}\>\hat{\alpha}_{i}\>% \hat{\alpha}^{\prime\prime}$ and $\hat{\beta}(T)=\hat{\beta}^{\prime}\>\hat{\beta}_{i}\>\hat{\beta}^{\prime\prime}$, then $\hat{\beta}(T^{*})=\hat{\beta}^{\prime}\>\hat{\beta}^{\prime\prime}$ and $\hat{\alpha}(T^{*})=\hat{\alpha}^{\prime}\>\delta\>\hat{\alpha}^{\prime\prime}$ where $\delta=1$ if $\hat{\beta}_{i}=0$ and $\delta=0$ if $\hat{\beta}_{i}=1$. Hence, we define $\hat{\alpha}^{*}=\hat{\alpha}^{\prime}\>\delta\>\hat{\alpha}^{\prime\prime}$ and $\hat{\beta}^{*}=\hat{\beta}^{\prime}\>\hat{\beta}^{\prime\prime}$. Depending on the value of $\delta=\overline{\hat{\beta}_{i}}$, this can be interpreted as either having removed the two entries $\hat{\beta}_{i}=1$ and $\hat{\alpha}_{i-1}=1$ or the two entries $\hat{\beta}_{i}=0$ and $\hat{\alpha}_{i}=0$ from $\hat{\alpha}$ and $\hat{\beta}$. It is easy to check that $\hat{\alpha}^{*}\geq_{{\sf dom}}\hat{\beta}^{*}$. By induction there is a unique tree $T^{*}$ with $n$ leaves such that $(\hat{\beta}(T^{*}),\hat{\alpha}(T^{*}))=(\hat{\beta}^{*},\hat{\alpha}^{*})$. 
Making the $i$th leaf of $T^{*}$ an inner vertex with two leaf children yields the unique tree $T$ with $(\hat{\beta}(T),\hat{\alpha}(T))=(\hat{\beta},\hat{\alpha})$. There is a natural correspondence between $0,1$ strings $\sigma\in\binom{k+\ell}{k}$ and upright lattice paths††margin: upright lattice paths  $P_{\sigma}$ from $(0,0)$ to $(\ell,k)$, which takes an entry $1$ from $\sigma$ to a step to the right, i.e., the addition of $(1,0)$ to the current position, and an entry $0$ from $\sigma$ to a step upwards, i.e., the addition of $(0,1)$ to the current position. This correspondence is the heart of a correspondence between pairs $(\sigma,\tau)\in\binom{k+\ell}{k}$ with $\tau\geq_{{\sf dom}}\sigma$, and pairs $(P_{\sigma},P_{\tau})$ of non-intersecting lattice paths, where $P_{\sigma}$ is from $(0,1)$ to $(k,\ell+1)$ and $P_{\tau}$ is from $(1,0)$ to $(k+1,\ell)$. This yields a cryptomorphic version of Theorem 3.1. Theorem 3.2. There is a bijection between full binary trees with $k+1$ left leaves and $\ell+1$ right leaves and pairs $(P_{\beta},P_{\alpha})$ of non-intersecting upright lattice paths, where $P_{\beta}$ is from $(0,1)$ to $(\ell,k+1)$ and $P_{\alpha}$ is from $(1,0)$ to $(\ell+1,k)$. The advantage of working with non-intersecting lattice paths is that now we can apply the Lemma of Gessel-Viennot [27]; see also [2]. Theorem 3.3. The number of full binary trees with $k+1$ left leaves and $\ell+1$ right leaves is $$\det\begin{pmatrix}{k+\ell\choose k}&{k+\ell\choose k-1}\\ {k+\ell\choose k+1}&{k+\ell\choose k}\end{pmatrix}=\frac{1}{k+\ell+1}{k+\ell+1% \choose k}{k+\ell+1\choose k+1}$$ This is the Narayana number††margin: Narayana number $N(k+\ell+1,k+1)$. From an elementary application of Vandermonde’s convolution, $\sum_{k=1}^{n}N(n,k)=\frac{1}{n}{2n\choose n-1}=C_{n}$. The following proposition summarizes our findings about Narayana families. Proposition 3.2. The Narayana number $N(k+\ell+1,k+1)$ counts $\bullet$alternating trees with $k+1$ left vertices and $\ell+1$ right vertices, $\bullet$full binary trees with $k+1$ left leaves and $\ell+1$ right leaves, $\bullet$pairs $(\sigma,\tau)$ of $0,1$ strings in $\binom{k+\ell}{k}$ with $\tau\geq_{{\sf dom}}\sigma$, $\bullet$pairs $(P_{1},P_{2})$ of non-intersecting upright lattice paths, where $P_{1}$ is from $(0,1)$ to $(k,\ell+1)$ and $P_{2}$ is from $(1,0)$ to $(k+1,\ell)$. 4 Twin Pairs of Trees and the Baxter Numbers After the Catalan digression we come back to twin pairs of trees. Definition 4.1. A pair $(A,B)$ of full binary trees whose fingerprints satisfy $\hat{\alpha}(A)=\rho(\hat{\alpha}(B))$ is called a twin-binary pair of trees††margin: twin-binary pair of trees . Theorem 4.1. There is a bijection between twin-alternating pairs of trees on $n$ vertices and twin-binary pairs of trees with $n$ leaves. Proof. Let $(A,B)$ be twin-binary trees. Apply the correspondence from Proposition 3.1 to both. This yields trees $S$ and $T$ such that $\hat{\alpha}_{\scriptscriptstyle\nearrow}(S)=\hat{\alpha}(A)$ and $\hat{\alpha}_{\scriptscriptstyle\nearrow}(T)=\hat{\alpha}(B)$ and $S^{\lambda}=A$ and $T^{\lambda}=B$. From $\hat{\alpha}(A)=\rho(\hat{\alpha}(B))$ we conclude $\hat{\alpha}_{\scriptscriptstyle\nearrow}(S)=\rho(\hat{\alpha}_{% \scriptscriptstyle\nearrow}(T))$ which is the defining property for twin-alternating trees. In the proof of Theorem 2.2 we have seen how a twin-alternating pair of trees can be extended and then glued together to yield a 2-book embedding of a quadrangulation; see also Figure LABEL:fig:small-ex. 
Doing a similar gluing for a twin-binary pair of trees, with both trees drawn as in the proof of Proposition 3.1, yields a particular rectangulation of the square. Figure LABEL:fig:rectangulation shows an example. \PsFigCap 17rectangulationA twin-binary pair of trees and the associated rectangulation. Definition 4.2. Let $X$ be a set of points in the plane and let $R$ be an axis-aligned rectangle which contains $X$ in its open interior. A rectangulation of $X$††margin: rectangulation of $X$ is a subdivision of $R$ into rectangles by non-crossing axis-parallel segments, such that every segment contains a point of $X$ and every point lies on a segment. We are mainly interested in rectangulations of diagonal sets, i.e., of the sets $X_{n-1}=\{(i,n-i):1\leq i\leq n-1\}$. In this case the enclosing rectangle $R$ can be chosen to be the square spanned by $(0,0)$ and $(n,n)$. Figure LABEL:fig:rectangulation shows a rectangulation of $X_{13}$. The following theorem is immediate from the definitions. Theorem 4.2. There is a bijection between twin-binary pairs of trees with $n$ leaves and rectangulations of $X_{n-2}$. Note. Hartman et al. [29] and later independently de Fraysseix et al. [15] prove that it is possible to assign a set of internally disjoint vertical and horizontal segments to the vertices of any planar bipartite graph $G$ such that two segments touch if, and only if, there is an edge between the corresponding vertices. A proof of this result can be given along the following lines: Extend $G$ by adding edges and vertices to a quadrangulation $Q$. Augment $Q$ with a 2-orientation and trace the mappings from 2-orientations via twin-alternating pairs of trees to a rectangulation of a diagonal point set. The horizontal and vertical segments through the points are a touching segment representation for $Q$. Deleting some and retracting the ends of some other segments yields a representation for $G$. A similar observation was made by Ackerman, Barequet and Pinter [1]. Let $(S,T)$ be a twin pair of binary trees with $k+1$ left and $\ell+1$ right leaves. The bijection from Theorem 3.2 maps $Y\in\{S,T\}$ to a pair $(P_{\beta}(Y),P_{\alpha}(Y))$ of non-intersecting upright lattice paths, where $P_{\beta}(Y)$ is from $(0,1)$ to $(\ell,k+1)$ and $P_{\alpha}(Y)$ is from $(1,0)$ to $(\ell+1,k)$. Since by definition $\hat{\alpha}(S)=\rho(\hat{\alpha}(T))$, a point reflection of $P_{\alpha}(T)$ at $(0,0)$ followed by a translation by $(\ell+2,k)$ maps $P_{\alpha}(T)$ to the path $P^{*}_{\alpha}(T)$ which satisfies $P^{*}_{\alpha}(T)=P_{\alpha}(S)$. The same geometric transformation maps $P_{\beta}(T)$ to $P^{*}_{\beta}(T)$ from $(2,-1)$ to $(\ell+2,k-1)$ and of course $(P^{*}_{\beta}(T),P^{*}_{\alpha}(T))$ is again a pair of non-intersecting upright lattice paths. Actually, $(P_{\beta}(S),P_{\alpha}(S),P^{*}_{\beta}(T))=(P_{\beta}(S),P^{*}_{\alpha}(T),% P^{*}_{\beta}(T))$ is a triple of non-intersecting upright lattice paths. Since the first two of these paths uniquely determine $S$ and the last two uniquely determine $T$ we obtain, via a translation of the three paths one unit up, the following theorem. \PsFigCap 42three-pathsA twin-binary pair of trees and its triple of non-intersecting lattice paths. Theorem 4.3. 
There is a bijection between twin pairs of full binary trees with $k+1$ left leaves and $\ell+1$ right leaves and triples $(P_{1},P_{2},P_{3})$ of non-intersecting upright lattice paths, where $P_{1}$ is from $(0,2)$ to $(k,\ell+2)$, $P_{2}$ is from $(1,1)$ to $(k+1,\ell+1)$, and $P_{3}$ is from $(2,0)$ to $(k+2,\ell)$. Again we can apply the Lemma of Gessel-Viennot. Theorem 4.4. The number of twin pairs of full binary trees with $k+1$ left leaves and $\ell+1$ right leaves is $$\det\begin{pmatrix}{k+\ell\choose k}&{k+\ell\choose k-1}&{k+\ell\choose k-2}\\ {k+\ell\choose k+1}&{k+\ell\choose k}&{k+\ell\choose k-1}\\ {k+\ell\choose k+2}&{k+\ell\choose k+1}&{k+\ell\choose k}\end{pmatrix}=2\;% \frac{(k+\ell)!\;(k+\ell+1)!\;(k+\ell+2)!}{k!\;(k+1)!\;(k+2)!\;\ell!\;(\ell+1)% !\;(\ell+2)!}=\Theta_{k,\ell}$$ The number $\Theta_{k,\ell}$ has some quite nice expressions in terms of binomial coefficients, e.g., $\Theta_{k,\ell}=\frac{2}{(k+1)^{2}\,(k+2)}{k+\ell\choose k}{k+\ell+1\choose k}% {k+\ell+2\choose k}$ or $\Theta_{k,\ell}=\frac{2}{(n+1)\,(n+2)^{2}}{k+\ell+2\choose k}{k+\ell+2\choose k% +1}{k+\ell+2\choose k+2}$, where $n=k+\ell$. The total number of twin binary trees with $n+2$ leaves is given by the Baxter number††margin: Baxter number $$B_{n+1}=\sum_{k=0}^{n}\Theta_{k,n-k},$$ whose initial values are $1,2,6,22,92,422,2074,10754$. The next proposition collects families that are, due to our bijections, enumerated by $\Theta$-numbers. Proposition 4.1. The number $\Theta_{k,\ell}$ counts $\bullet$triples $(P_{1},P_{2},P_{3})$ of non-intersecting upright lattice paths, where $P_{1}$ is from $(0,2)$ to $(\ell,k+2)$ and $P_{2}$ is from $(1,1)$ to $(\ell+1,k+1)$ and $P_{3}$ is from $(2,0)$ to $(\ell+2,k)$. $\bullet$twin pairs of binary trees with $k+1$ left leaves and $\ell+1$ right leaves, $\bullet$rectangulations of $X_{k+\ell}$ with $k$ horizontal and $\ell$ vertical segments, $\bullet$twin pairs of alternating trees with $k+1$ left vertices and $\ell+1$ right vertices, $\bullet$separating decompositions of quadrangulations with $k+2$ white and $\ell+2$ black vertices, $\bullet$2-orientations of quadrangulations with $k+2$ white and $\ell+2$ black vertices. Note. The concept of twin-binary pairs of trees is due to Dulucq and Guibert [18]. They also give a bijection between twin-binary pairs of trees and triples of non-intersecting lattice paths. The bijection also uses the fingerprint as the middle path; the other two are defined differently. In [19] they extend their work to include some more refined counts. A very good entry point for more information about Baxter numbers is The On-Line Encyclopedia of Integer Sequences [39, A001181]. Fusy, Schaeffer and Poulalhon [25] gave a direct bijection from separating decompositions to triples of non-intersecting paths in a grid. Their main application is the counting of bipolar orientations of rooted 2-connected maps. These results are included in Section 5.2. Ackerman, Barequet and Pinter [1] also have the result that the number of rectangulations of $X_{n}$ is the Baxter number $B_{n+1}$. Their proof is via a recurrence formula obtained by Chung et al. [12]. They also show that for a point set $X_{\pi}=\{(i,\pi(i)):1\leq i\leq n\}$ to have exactly $B_{n+1}$ rectangulations it is sufficient that $\pi$ is a Schröder permutation, i.e., a permutation avoiding the patterns $3-1-4-2$ and $2-4-1-3$. They conjecture that whenever $\pi$ is a permutation that is not Schröder, the number of rectangulations of $X_{\pi}$ is strictly larger than the Baxter number. 
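The determinant evaluations of Theorems 3.3 and 4.4, as well as the initial Baxter numbers quoted above, are easy to confirm numerically. The following Python sketch is ours and purely illustrative; it evaluates the formulas exactly as stated.

```python
# Sketch: numerical check of Theorems 3.3 and 4.4 and of the Baxter numbers.
from math import comb, factorial

def binom(n, k):
    # binomial coefficient with the convention binom(n, k) = 0 for k < 0 or k > n
    return comb(n, k) if 0 <= k <= n else 0

def narayana_det(k, l):
    # 2x2 Gessel-Viennot determinant of Theorem 3.3
    return binom(k + l, k) ** 2 - binom(k + l, k - 1) * binom(k + l, k + 1)

def narayana(n, k):
    # N(n, k) = (1/n) * C(n, k) * C(n, k - 1)
    return binom(n, k) * binom(n, k - 1) // n

def theta_det(k, l):
    # 3x3 Gessel-Viennot determinant of Theorem 4.4
    m = [[binom(k + l, k + i - j) for j in range(3)] for i in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def theta(k, l):
    # closed form for Theta_{k,l}
    num = 2 * factorial(k + l) * factorial(k + l + 1) * factorial(k + l + 2)
    den = (factorial(k) * factorial(k + 1) * factorial(k + 2)
           * factorial(l) * factorial(l + 1) * factorial(l + 2))
    return num // den

for k in range(6):
    for l in range(6):
        assert narayana_det(k, l) == narayana(k + l + 1, k + 1)
        assert theta_det(k, l) == theta(k, l)

# Baxter numbers B_{n+1} = sum_k Theta_{k, n-k}, for n = 0, ..., 7
baxter = [sum(theta(k, n - k) for k in range(n + 1)) for n in range(8)]
assert baxter == [1, 2, 6, 22, 92, 422, 2074, 10754]
```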
In contrast to the nice formulas for the number of 2-orientations of quadrangulations on $n$ vertices, very little is known about the number of 2-orientations of a fixed quadrangulation $Q$. In [22] it is shown that the maximal number of 2-orientations a quadrangulation on $n$ vertices can have is asymptotically between $1.47^{n}$ and $1.91^{n}$. To our knowledge, the computational complexity of the counting problem is open. 5 More Baxter Families In this section we deal with Baxter permutations and bipolar orientations. In both families we identify objects counted by $\Theta$-numbers and less refined families counted by Baxter numbers. 5.1 Baxter Permutations Definition 5.1. The max-tree††margin: max-tree $\hbox{\sf Max}(\pi)$ of a permutation $\pi$ is recursively defined as the binary tree with root labeled $z$, left subtree $\hbox{\sf Max}(\pi_{\text{left}})$ and right subtree $\hbox{\sf Max}(\pi_{\text{right}})$ where $z$ is the maximum entry of $\pi$ and in one-line notation $\pi=\pi_{\text{left}},z,\pi_{\text{right}}$. The recursion ends with unlabeled leaf-nodes corresponding to $\hbox{\sf Max}(\emptyset)$. The max-tree of a permutation is a full binary tree. The $i$th leaf $v_{i}$ of $\hbox{\sf Max}(\pi)$ from the left corresponds to the adjacent pair $(\pi_{i-1},\pi_{i})$ in the permutation $\pi$. Leaf $v_{i}$ is a left leaf if, and only if, $(\pi_{i-1},\pi_{i})$ is a descent, i.e., if $\pi_{i-1}>\pi_{i}$. The min-tree††margin: min-tree $\hbox{\sf Min}(\pi)$ of a permutation $\pi$ is defined dually, i.e., as the binary tree with root labeled $a$, left subtree $\hbox{\sf Min}(\pi_{\text{left}})$ and right subtree $\hbox{\sf Min}(\pi_{\text{right}})$ where $a$ is the minimum entry of $\pi=\pi_{\text{left}},a,\pi_{\text{right}}$. Also let $\hbox{\sf Min}(\emptyset)$ be a leaf-node. The $i$th leaf $y_{i}$ of $\hbox{\sf Min}(\pi)$ from the left is a left leaf if, and only if, $(\pi_{i-1},\pi_{i})$ is a rise, i.e., if $\pi_{i-1}<\pi_{i}$. With these definitions and observations, see also Figure LABEL:fig:min-max-tree, we obtain: Proposition 5.1. For a permutation $\pi$ of $[n-1]$ the pair $(\hbox{\sf Max}(\pi),\hbox{\sf Min}(\rho(\pi)))$ is a twin binary pair of trees. \PsFigCap 22min-max-treeThe trees $\hbox{\sf Max}(\pi)$ and $\hbox{\sf Min}(\rho(\pi))$ associated with $\pi=1,7,4,6,3,2,5$. This mapping from permutations to twin binary pairs of trees is not a bijection. Indeed, the permutation $\pi^{\prime}=1,7,5,6,3,2,4$ also maps to the pair of trees shown in Figure LABEL:fig:min-max-tree. Definition 5.2. A Baxter permutation††margin: Baxter permutation is a permutation which avoids the patterns $2$–$41$–$3$ and $3$–$14$–$2$. That is, $\pi$ is Baxter if there are no indices $i<j,j+1<k$ with $\pi_{j+1}<\pi_{i}<\pi_{k}<\pi_{j}$ nor with $\pi_{j+1}>\pi_{i}>\pi_{k}>\pi_{j}$. Theorem 5.1. There is a bijection between twin binary trees with $n$ leaves and Baxter permutations of $[n-1]$. Proof. From Proposition 5.1 we know that $(\hbox{\sf Max}(\pi),\hbox{\sf Min}(\rho(\pi)))$ is a twin binary pair of trees; this remains true if we restrict $\pi$ to be Baxter. For the converse let $(S,T)$ be twin binary trees with $n$ leaves. With this pair associate a rectangulation of $X_{n-2}$. Tilt the rectangulation to get the diagonal points onto the $x$-axis. Each rectangle of the rectangulation contains a highest corner which corresponds to an inner vertex of $S$ and a lowest corner corresponding to an inner vertex of $T$. 
We will refer to these corners as the north-corner††margin: north-corner and the south-corner††margin: south-corner , respectively. The idea is to associate a number with every rectangle; the permutation $\pi$ corresponding to $(S,T)$ is then read off from the order of intersection of the rectangles with the $x$-axis. Writing the numbers of the rectangles at their north- and south-corners yields $(S,T)=(\hbox{\sf Max}(\pi),\hbox{\sf Min}(\rho(\pi)))$. The algorithm associates the numbers with rectangles in decreasing order. Number $n-1$ is associated with the rectangle with the highest north-corner. After having associated $k$ with some rectangle $R_{k}$, the union of unlabeled rectangles can be seen as a series of pyramids over the $x$-axis; see Figure LABEL:fig:pyramids. \PsFigCap15pyramidsA series of pyramids after labeling rectangle $k$. The label $k-1$ has to correspond to one of the rectangles which have their north-corner on the tip of one of the pyramids (this is because $S$ will be the max-tree of $\pi$). The algorithm will choose the next pyramid to the left or the next pyramid to the right of the interval on the $x$-axis which belongs to $R_{k}$. The decision between the two depends on the south-corner of $R_{k}$. If the south-corner is a left child in tree $T$, then the pyramid to the left is chosen; otherwise, if the south-corner is a right child, then the pyramid to the right is chosen. \PsFigCap 27square-labelingGenerating a Baxter permutation from a rectangulation. The final state shows the permutation with its max- and min-trees. An example of the execution of this algorithm is given in Figure LABEL:fig:square-labeling. The proof that this is a bijection is done with three claims. Claim A.  The permutation constructed by the algorithm is Baxter. Proof of the claim. Let $a<b<c<d$; we want to show that the algorithm will not produce the pattern $b$–$da$–$c$. Think of the status after labeling rectangle $R(c)$. To have a chance of producing the pattern, $d$ has to be left of $c$ and the slot immediately to the right of $d$ must not yet be used, i.e., it belongs to a pyramid $P$. From the labeling rule of the algorithm it follows that the rectangle covering the leftmost slot of $P$ has to be labeled before any rectangle left of $P$ can be labeled. This shows that the pattern is impossible. The case of the other pattern is symmetric. $\triangle$ Claim B.  The border between labeled and unlabeled rectangles at any stage of the algorithm is a zig-zag (the definition of zig-zag should be evident from Figure LABEL:fig:pyramids). \PsFigCap 13no-zigzagA violation of the zig-zag property. Proof of the claim. Suppose not. Then there is a first rectangle $R$ whose labeling violates the property; Figure LABEL:fig:no-zigzag shows the situation up to a reflection interchanging left and right. Let $y$ be the label of $R$ and let $z$ be the label of the rectangle whose south-corner is the deepest point of the valley whose shape was destroyed by $R(y)$. The rule of the algorithm implies that the rectangle labeled after $z$ was left of $R(z)$. This rectangle $R(z-1)$ has to intersect the $x$-axis somewhere left of pyramid $P$. From the labeling rule of the algorithm it follows that the rectangle covering the rightmost slot of $P$ has to be labeled before any rectangle right of $P$ can be labeled. Rectangle $R$, however, is right of $P$, a contradiction. $\triangle$ Claim B implies that in the lower tree the labels of south-corners of rectangles are decreasing along every path from a leaf to the root. 
This is a property characterizing Min-trees; hence, the labels of the rectangles yield $\hbox{\sf Min}(\rho(\pi))$. The fact that the labeling of the north-corners of rectangles yields $\hbox{\sf Max}(\pi)$ is more immediate from the algorithm. Claim C.  If $\sigma$ is a permutation with $(\hbox{\sf Max}(\sigma),\hbox{\sf Min}(\rho(\sigma)))=(S,T)$ and $\sigma$ is not the result of applying the algorithm to the rectangulation corresponding to $(S,T)$, then $\sigma$ is not Baxter. Proof of the claim. Compare the one-line notation of $\sigma$ and $\pi$, where $\pi$ is the result of applying the algorithm to the rectangulation corresponding to $(S,T)$. Consider the largest value $k$ which is not at the same position in $\sigma$ and $\pi$. Clearly $k\neq n-1$, because $\sigma$ and $\pi$ have the same max-tree. Hence, if we think of the algorithm producing $\pi$ in the state when $k+1$ is placed, then the placement of $k$ in $\sigma$ fails to obey the rules of the algorithm. Either $k$ is placed so as to have its north-corner in a pyramid on the wrong side, or the side is respected but the pyramid containing the north-corner of $k$ is not next to $R(k+1)$ on this side. We indicate how to find a forbidden pattern in each of the two cases. Wrong side. Suppose the south-corner $p$ of $R(k+1)$ is a left child in $T$ and $k$ is placed to the right of $k+1$. Let $a$ be the element in $\sigma$ whose rectangle has its east-corner at $p$ and let $q$ be the first right child on the path from $p$ to the root of the min-tree, i.e., of $T$. Let $b$ be the element in $\sigma$ whose rectangle has its west-corner at $q$. From the min-tree property we infer that $b<a<k$. Since $k+1$ and $b$ are neighbors in $\sigma$, the elements $a,k+1,b,k$ form a forbidden $3$–$14$–$2$ pattern. The other case, where the south-corner $p$ of $R(k+1)$ is a right child, is symmetric. In this case there is a forbidden pattern $2$–$41$–$3$. Wrong pyramid. Suppose the south-corner $p$ of $R(k+1)$ is a left child and $k$ is placed to a pyramid left of $k+1$ but not to the first one. Let $r>k+1$ be some element separating the first from the second pyramid. Let $a$ be the element in $\sigma$ whose rectangle has its east-corner at $p$ and note that $a<k$. Between $r$ and $a$ there is an adjacent pair $r^{\prime},a^{\prime}$ with $a^{\prime}<k$ and $k+1<r^{\prime}$. Hence, $k,r^{\prime},a^{\prime},k+1$ is a forbidden $2$–$41$–$3$ pattern. Again, the second case is symmetric. This completes the proof of the claim. $\triangle$ Claim A says that every twin binary pair $(S,T)$ of trees is mapped by the algorithm to a Baxter permutation $\pi$. As a consequence of Claim B we noted that $(S,T)=(\hbox{\sf Max}(\pi),\hbox{\sf Min}(\rho(\pi)))$. Claim C is the injectivity; hence, the mapping is a bijection. From Proposition 4.1 and the observation about the correspondence of left and right leaves in the max-tree of a permutation with descents and rises we obtain: Proposition 5.2. The number $\Theta_{k,\ell}$ counts $\bullet$twin pairs of binary trees with $k+1$ left leaves and $\ell+1$ right leaves, $\bullet$Baxter permutations of $k+\ell+1$ with $k$ descents and $\ell$ rises. Note. Baxter numbers first appeared in the context of counting Baxter permutations. Chung, Graham, Hoggatt and Kleiman [12] found some interesting recurrences and gave a proof based on generating functions. Mallows [33] found the refined count of Baxter permutations by rises (Proposition 5.2). The bijection of Theorem 5.1 is essentially due to Dulucq and Guibert [18, 19]. 
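Definition 5.2 can also be tested directly by brute force. The following Python sketch (ours, not taken from [18, 19]) counts Baxter permutations of $[n]$ for small $n$ and recovers the initial Baxter numbers quoted in Section 4, in accordance with Theorem 5.1.

```python
# Sketch: brute-force count of Baxter permutations per Definition 5.2.
from itertools import permutations

def is_baxter(p):
    """p is a permutation in one-line notation (tuple of values)."""
    n = len(p)
    for j in range(n - 1):                      # (p[j], p[j+1]) is the adjacent pair
        for i in range(j):
            for k in range(j + 2, n):
                if p[j + 1] < p[i] < p[k] < p[j]:   # pattern 2-41-3
                    return False
                if p[j + 1] > p[i] > p[k] > p[j]:   # pattern 3-14-2
                    return False
    return True

counts = [sum(is_baxter(p) for p in permutations(range(1, n + 1)))
          for n in range(1, 7)]
print(counts)   # [1, 2, 6, 22, 92, 422], the Baxter numbers B_1, ..., B_6
```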
Dulucq and Guibert’s description and proof, however, do not use geometry. They also prove Proposition 5.2 and some even more refined counts, e.g., the number of Baxter permutations of $[n]$ with $\ell$ rises and $s$ left-to-right maxima and $t$ right-to-left maxima. A permutation $(a_{1},a_{2},\ldots,a_{n})$ is alternating††margin: alternating if $a_{1}<a_{2}>a_{3}<a_{4}>\ldots$, i.e., each consecutive pair $a_{2i-1},a_{2i}$ is a rise and each pair $a_{2i},a_{2i+1}$ a descent. Alternating permutations are characterized by the property that the reduced fingerprints of their Min- and Max-trees are alternating, i.e., of the form $\ldots 0,1,0,1,0,1,\ldots$ and in addition, to ensure that the first pair is a rise, the first entry of the reduced fingerprint of the Max-tree is a 0. Due to this characterization we obtain the following specialization of Theorem 5.1: Lemma 5.1. Twin pairs of binary trees with an alternating reduced fingerprint starting with 0 and alternating Baxter permutations are in bijection. Let $T$ be a binary tree with $n$ leaves and with an alternating reduced fingerprint starting with a 0. The leaves of $T$ come in pairs from left to right so that the leaves from each pair are attached to the same interior node. Pruning the leaves we obtain a tree $T^{\prime}$ with $n-\lfloor\frac{n}{2}\rfloor$ leaves. From $T^{\prime}$ we come back to $T$ by attaching a new pair of leaves to each of the first $\lfloor\frac{n}{2}\rfloor$ leaves of $T^{\prime}$. Using this kind of bijection we obtain two bijections (see Figure LABEL:fig:alt-baxt): $\bullet$a bijection between alternating Baxter permutations of $[2k-1]$ and pairs of binary trees with $k$ leaves, and $\bullet$a bijection between alternating Baxter permutations of $[2k]$ and pairs of binary trees with $k$ and $k+1$ leaves. Theorem 5.2. The number of alternating Baxter permutations on $[n-1]$ is $C_{k-1}C_{k}$ if $n=2k$ and $C_{k-1}C_{k-1}$ if $n=2k-1$. \PsFigCap 33alt-baxtAlternating Baxter permutations and pairs of trees. Note. Theorem 5.2 was obtained by Cori et al. [13]. It was reproved by Dulucq and Guibert [18] as a specialization of their bijection between Baxter permutations and twin pairs of binary trees. In [28] it is shown that alternating Baxter permutations with the property that their inverse is again alternating Baxter are counted by the Catalan numbers. 5.2 Plane Bipolar Orientations A graph $G$ is said to be rooted if one of its edges is distinguished and oriented. The origin and the end of the root-edge are denoted $s$ and $t$. If $G$ is a plane graph, the root-edge is always assumed to be incident to the outer face, with the outer face on its left. Definition 5.3. A bipolar orientation††margin: bipolar orientation of a rooted graph $G$ is an acyclic orientation of $G$ such that the unique source (i.e., vertex with only outgoing edges) is $s$ and the unique sink (i.e., vertex with only ingoing edges) is $t$. A plane bipolar orientation is a bipolar orientation on a rooted plane graph (multiple edges are allowed). It is well-known that the rooted graphs admitting a bipolar orientation are exactly 2-connected graphs, i.e., graphs with no separating vertex. Note. Bipolar orientations have proved to be insightful in solving many algorithmic problems such as planar graph embedding [31, 11] and geometric representations of graphs in various flavors (visibility [41], floor planning [35, 30], straight-line drawing [42, 23]). 
They also constitute a beautiful combinatorial structure; the thesis of Ossona de Mendez is devoted to studying their numerous properties and applications [17]; see also [16] for a detailed survey. Let $G$ be a rooted plane graph; the angular map††margin: angular map of $G$ is the graph $Q$ with vertex set consisting of vertices and faces of $G$, and edges corresponding to incidences between a vertex and a face. The special vertices $s,t$ of $Q$ are the extremities (origin $s$ and end $t$) of the root-edge of $G$. The angular map $Q$ of $G$ inherits a plane embedding from $G$. The unique bipartition of $Q$ has the vertices of $G$ in one color class and the faces of $G$ in the other. We assume that vertices of $G$ are black and faces of $G$ are white. All the faces of $Q$ are quadrangles, which correspond to the edges of $G$. Moreover, since $G$ is 2-connected, $Q$ has no double edges, so $Q$ is a quadrangulation. \PsFigCap 17bicolorFrom a rooted map endowed with a bipolar orientation to a separating decomposition on the angular map. If $G$ is endowed with a bipolar orientation $B$, the angular map can be enriched in order to transfer the orientation onto $Q$. Actually, we will define a bijection between bipolar orientations of $G$ and separating decompositions of $Q$. The construction, based on two facts about bipolar orientations of rooted plane graphs, is illustrated in Figure LABEL:fig:bicolor. Fact V.  Every vertex $v\neq s,t$ has exactly two adjacent faces (angles) where the orientation of the edges differ. Fact F.  Every face $f$ has exactly two vertices (angles) where the orientation of the edges coincide. Facts V and F specify two distinguished edges in the angular map for every non-special vertex and every face. Since every edge of $Q$ is distinguished either for a vertex or for a face this yields a 2-orientation. Figure LABEL:fig:bic-detail indicates how to color this 2-orientation to get a separating decomposition on $Q$. From a separating decomposition on $Q$, the unique bipolar orientation on $G$ inducing $Q$ is easily recovered. (If $Q$ has white vertices of degree 2, then there are multi-edges on $G$). To summarize: \PsFigCap 16bic-detailThe transformation for a vertex and a face of a rooted map. Proposition 5.3. Plane bipolar orientations with $\ell+2$ vertices and $k+2$ faces are in bijection with separating decompositions of quadrangulations with $\ell+2$ black vertices and $k+2$ white vertices. Consequently, the number $\Theta_{k,\ell}$ counts: $\bullet$separating decompositions of quadrangulations with $k+2$ white and $\ell+2$ black vertices, $\bullet$Bipolar orientations of rooted plane graphs with $k+2$ faces and $\ell+2$ vertices. In Section 7 we will use this and some previous bijections to give an independent proof for a beautiful formula of Bonichon [7] for the number of Schnyder woods on triangulations with $n$ vertices. Note. The two facts V and F have been rediscovered frequently, they can be found, e.g., in [16, 35, 41]. Actually, plane bipolar orientations can be defined via properties V and F. The bijection of Proposition 5.3 is a direct extension of  [16, Theo 5.3]. In 2001, R. Baxter [5, Eq 5.3] guessed that plane bipolar orientations are counted by the $\Theta$-numbers. His verification is based on algebraic manipulations on generating functions of plane graphs weighted by their Tutte polynomials. A simpler proof for this fact was obtained by Fusy et al. 
[25] via a direct bijection from separating decompositions to triples of non-intersecting lattice paths. Their bijection presents significant differences from the one presented in this article, even if the classes in correspondence are the same. The main difference is that they do not treat the blue tree and the red tree of a separating decomposition in a symmetric way, as we do here, and their correspondence is less geometric. (In their bijection, the blue tree is encoded as a refined Dyck word, while the red edges are encoded as the sequence of degrees in red of the white vertices.) As shown in [16], the number of bipolar orientations of a fixed rooted graph $G$ is equal to twice the coefficient $[x]T_{G}(x,y)$ in the Tutte polynomial of $G$. This coefficient is called Crapo’s $\beta$ invariant and it is #P-hard to compute [3]. To our knowledge, the computational complexity of the counting problem restricted to rooted plane graphs is open. (By the angular map bijection, it is clearly equivalent to computing the number of 2-orientations of a fixed quadrangulation.) 5.3 Digression: Duality, Completion Graph, and Hamiltonicity There exists a well-known duality mapping††margin: duality mapping for plane graphs. The dual $G^{*}$ of a plane graph $G$ has its vertices corresponding to the faces of $G$, and has its edges corresponding to the adjacencies of the faces of $G$ (two faces are adjacent if they share an edge). Precisely, each edge $e$ of $G$ gives rise to an edge $e^{*}$ of $G^{*}$ that connects the vertices of $G^{*}$ corresponding to the faces of $G$ on each side of $e$. Let us mention here, even if we do not make use of this fact, that duality can be enriched to take account of bipolar orientations [16]; if $G$ is endowed with a bipolar orientation, each (oriented) non-root edge $e$ of $G$ gives rise to an oriented edge $e^{*}$ of $G^{*}$ that goes from the face on the left to the face on the right of $e$ (for the root edge the opposite rule has to be applied). \PsFigCap 19hamiltonianCycTop, from left to right: A 2-connected plane graph, its quadrangulation, and its completion graph. Bottom, from left to right: the same 2-connected plane graph rooted at an edge and endowed with a bipolar orientation, the quadrangulation endowed with the corresponding separating decomposition (the equatorial line of the 2-book embedding is drawn —in dashed line— by bisecting all bicolored angles), and the special completion graph endowed with the corresponding Hamiltonian cycle. The completion graph††margin: completion graph $\widetilde{G}$ of $G$ is the plane graph obtained by superimposing $G$ and its dual $G^{*}$, see Figure LABEL:fig:hamiltonianCyc. Vertices of $\widetilde{G}$ are of 3 types: primal vertices††margin: primal vertices are the vertices of $G$, dual vertices††margin: dual vertices are the vertices of $G^{*}$, and edge-vertices††margin: edge-vertices are the vertices at the intersection of an edge $e\in G$ with its dual edge $e^{*}$ (hence edge-vertices have degree 4 in $\widetilde{G}$). 
Observe in Figure LABEL:fig:hamiltonianCyc that the completion graph of $G$ and the quadrangulation $Q$ of $G$ differ only in that the contour of each face $f$ of $Q$ is replaced by a 4-star, the extremities of the 4-star being the 4 vertices incident to $f$ and the center of the 4-star being the edge-vertex of $\widetilde{G}$ associated with $f$ (each edge of $G$ corresponds both to a face of $Q$ and to an edge-vertex of $\widetilde{G}$, which yields a correspondence between faces of $Q$ and edge-vertices of $\widetilde{G}$). If $G$ is rooted, the origin $s$ and end $t$ of the root-edge are the special vertices††margin: special vertices of $G$; the special completion graph††margin: special completion graph of $G$ is the plane graph obtained from $\widetilde{G}$ by removing $s$ and $t$ as well as their incident edges. Proposition 5.4. The special completion graph of a rooted 2-connected plane graph is Hamiltonian. Proof. Given a rooted 2-connected plane graph $G$, we first endow it with a bipolar orientation, and consider the quadrangulation $Q$ of $G$ endowed with the corresponding separating decomposition $S$. As proved in Lemma 2.1, the equatorial line of $S$ is a simple path that has the two outer nonspecial vertices as extremities and passes once through each inner vertex and once through the interior of each inner face of $Q$. By the above discussion on the correspondence between $Q$ and $\widetilde{G}$, the path is easily deformed to a simple path on the graph $\widetilde{G}$ visiting each vertex of $\widetilde{G}$ except the two special ones exactly once; see Figure LABEL:fig:hamiltonianCyc. In addition, the path does not use the 4-star of $\widetilde{G}$ corresponding to the outer face of $Q$. Completing the path with the two edges of the outer 4-star incident to white vertices, we obtain a Hamiltonian cycle of the special completion graph. 6 Symmetries As we explain in this section, the bijections we have presented have the nice property that they commute with the half-turn rotation, which makes it possible to count symmetric combinatorial structures. The first structures we have encountered are 2-orientations. Given a 2-orientation $O$, exchanging the two special vertices $\{s,t\}$ of $O$ clearly yields another 2-orientation, which we call the pole-inverted††margin: pole-inverted 2-orientation of $O$ and denote by $\iota(O)$. A 2-orientation is called pole-symmetric††margin: pole-symmetric if $O$ and $\iota(O)$ are isomorphic. Considering the associated separating decomposition, the blue tree of $O$ is the red tree of $\iota(O)$ and vice versa. Accordingly, a 2-orientation is pole-symmetric if, and only if, the blue tree and the red tree are isomorphic as rooted trees, in which case the separating decomposition is called pole-symmetric as well. Such a symmetry translates to half-turn rotation symmetries on the associated embeddings. Indeed, as the two trees composing the separating decomposition are isomorphic, so are their alternating embeddings and so are the two binary trees that compose the associated twin pair of full binary trees, in which case the twin pair is called symmetric. Hence, the 2-book embedding and rectangulation associated with the separating decomposition are stable under the half-turn rotation that exchanges the two special vertices. Such 2-book embeddings and rectangulations are called pole-symmetric as well. 
Considering Baxter permutations, the min-tree (resp., max-tree) of a Baxter permutation $\pi$ is the max-tree (resp., min-tree) of the associated Baxter permutation $\overline{\rho(\pi)}$, i.e., the permutation whose 0-1 matrix is the 0-1 matrix of $\pi$ after half-turn rotation. Baxter permutations for which $\pi=\overline{\rho(\pi)}$ are said to be symmetric. By definition of the bijective correspondence of Theorem 5.1, a Baxter permutation is symmetric if, and only if, the associated twin pair of full binary trees is symmetric. \PsFigCap 28symmetriesFigA pole-symmetric separating decomposition and the corresponding symmetric combinatorial structures: 2-book embedding, twin pair of full binary trees, plane bipolar orientation, Baxter permutation, triple of paths. Next we turn to the encoding by a triple of paths. Recall that, in a twin pair $(T,T^{\prime})$ of full binary trees, the reduced fingerprints satisfy the relation $\hat{\alpha}(T)=\rho(\hat{\alpha}(T^{\prime}))$. Hence, a symmetric twin pair $(T,T)$ is characterized by the property that the reduced fingerprint of $T$ satisfies $\hat{\alpha}(T)=\rho(\hat{\alpha}(T))$, i.e., $\hat{\alpha}$ is a palindrome. Equivalently, if $T$ has $k+1$ left leaves and $\ell+1$ right leaves, the upright lattice path $P_{2}:=P_{\alpha}(T)$, as defined in Section 4, is stable under the point-reflection $\pi_{S}$ at $S:=(k/2+1,\ell/2+1)$. The other two paths in the triple $(P_{1},P_{2},P_{3})$ of non-intersecting lattice paths correspond to two copies of the bodyprint of $T$ read respectively from $(0,2)$ to $(k,\ell+2)$ for $P_{1}$ and from $(k+2,\ell)$ to $(2,0)$ for $P_{3}$. Therefore the whole triple $(P_{1},P_{2},P_{3})$ is stable under the point reflection $\pi_{S}$. Such a triple of paths is called symmetric. Lemma 6.1. Let $\Theta_{k,\ell}^{\circlearrowleft}$ be the number of symmetric non-intersecting triples of upright lattice paths $(P_{1},P_{2},P_{3})$ going respectively from $(0,2)$, $(1,1)$, $(2,0)$ to $(k,\ell+2)$, $(k+1,\ell+1)$, $(k+2,\ell)$. (i) If $k$ and $\ell$ are odd, then $\Theta_{k,\ell}^{\circlearrowleft}=0$. (ii) If $k$ and $\ell$ are even, $k=2\kappa$, $\ell=2\lambda$, then $$\Theta_{k,\ell}^{\circlearrowleft}=\sum_{r\geq 1}\frac{2r^{3}}{(\kappa+\lambda% +1)(\kappa+\lambda+2)^{2}}{\kappa+\lambda+2\choose\kappa+1}{\kappa+\lambda+2% \choose\kappa-r+1}{\kappa+\lambda+2\choose\kappa+r+1}.$$ (iii) If $k$ is odd and $\ell$ is even, $k=2\kappa+1$, $\ell=2\lambda$, then $$\Theta_{k,\ell}^{\circlearrowleft}=\sum_{r\geq 1}\frac{2r^{3}\!+\!(\lambda\!-% \!r\!+\!1)r(r\!+\!1)(2r\!+\!1)}{(\kappa+\lambda+1)(\kappa+\lambda+2)^{2}}{% \kappa+\lambda+2\choose\kappa+1}{\kappa+\lambda+2\choose\kappa-r+1}{\kappa+% \lambda+2\choose\kappa+r+1}.$$ (iv) If $k$ is even and $\ell$ is odd, $k=2\kappa$, $\ell=2\lambda+1$, then $$\Theta_{k,\ell}^{\circlearrowleft}=\sum_{r\geq 1}\frac{2r^{3}\!+\!(\kappa\!-\!% r\!+\!1)r(r\!+\!1)(2r\!+\!1)}{(\kappa+\lambda+1)(\kappa+\lambda+2)^{2}}{\kappa% +\lambda+2\choose\kappa+1}{\kappa+\lambda+2\choose\kappa-r+1}{\kappa+\lambda+2% \choose\kappa+r+1}.$$ Proof. By definition, $(P_{1},P_{2},P_{3})$ is stable under the point-reflection $\pi_{S}$ at $S:=(k/2+1,\ell/2+1)$. In particular, $P_{2}$ is stable under $\pi_{S}$, so that $P_{2}$ has to pass through $S$. This can only occur if $S$ has an integer coordinate, i.e., if $k/2$ or $\ell/2$ is an integer. Therefore $\Theta_{k,\ell}^{\circlearrowleft}=0$ if both $k$ and $\ell$ are odd. 
If $k$ and $\ell$ are even, $k=2\kappa$ and $\ell=2\lambda$, the half-turn symmetry ensures that $P_{1},P_{2},P_{3}$ is completely encoded upon keeping the part $P_{1}^{\prime},P_{2}^{\prime},P_{3}^{\prime}$ of the paths that lie in the half-plane $\{x+y\leq x_{S}+y_{S}\}$, i.e., the half-plane $\{x+y\leq\kappa+\lambda+2\}$. The conditions on $(P_{1},P_{2},P_{3})$ translate to the following conditions on the reduced triple: $(P_{1}^{\prime},P_{2}^{\prime},P_{3}^{\prime})$ is non-intersecting, has same starting points as $(P_{1},P_{2},P_{3})$, the endpoint of $P_{2}^{\prime}$ is $S$, and the endpoints of $P_{1}^{\prime}$ and $P_{3}^{\prime}$ are equidistant from $S$, i.e., there exists an integer $r\geq 1$ such that $P_{1}^{\prime}$ ends at $(\kappa+1-r,\lambda+1+r)$ and $P_{3}^{\prime}$ ends at $(\kappa+1+r,\lambda+1-r)$. Hence, up to fixing $r\geq 1$, $(P_{1}^{\prime},P_{2}^{\prime},P_{3}^{\prime})$ form a non-intersecting triple with explicit fixed endpoints, so that the number of such triples can be expressed using Gessel-Viennot determinant formula. The expression for $\Theta_{k,\ell}^{\circlearrowleft}$ follows. If $k$ is odd and $\ell$ is even, $k=2\kappa+1$ and $\ell=2\lambda$, the triple $(P_{1},P_{2},P_{3})$ is again completely encoded by keeping the part $(P_{1}^{\prime},P_{2}^{\prime},P_{3}^{\prime})$ of the paths that lie in $\{x+y\leq x_{S}+y_{S}\}$, i.e., the half-plane $\{x+y\leq\kappa+\lambda+5/2\}$. The difference with the case where $k$ and $\ell$ are even is that $P_{1}^{\prime},P_{2}^{\prime},P_{3}^{\prime}$ are not standard lattice paths, as they end with a step of length $1/2$. Similarly as before, the conditions on $(P_{1},P_{2},P_{3})$ are equivalent to the properties that $(P_{1}^{\prime},P_{2}^{\prime},P_{3}^{\prime})$ are non-intersecting, have the same starting points as $(P_{1},P_{2},P_{3})$, $P_{2}^{\prime}$ ends at $S$, and $P_{1}^{\prime},P_{3}^{\prime}$ end at points that are equidistant from $S$ on the line $\{x+y=x_{S}+y_{S}\}$ and have one integer coordinate, i.e., there exists an integer $m\geq 2$ such that $P_{1}^{\prime}$ ends at $(x_{S}-m/2,y_{S}+m/2)$ and $P_{3}^{\prime}$ ends at $(x_{S}+m/2,y_{S}-m/2)$. Notice that, upon discarding the last step, the system $(P_{1}^{\prime},P_{2}^{\prime},P_{3}^{\prime})$ is equivalent to a triple of non-intersecting upright lattice paths $(\overline{P_{1}^{\prime}},\overline{P_{2}^{\prime}},\overline{P_{3}^{\prime}})$ with starting points $(0,2)$, $(1,1)$, $(2,0)$, and endpoints that are either of the form $(\kappa+1-r,\lambda+1+r)$, $(\kappa+1,\lambda+1)$, $(\kappa+1+r,\lambda+1-r)$ if $m$ is even, $m=2r$, or are of the form $(\kappa+1-r,\lambda+1+r)$, $(\kappa+1,\lambda+1)$, $(\kappa+2+r,\lambda-r)$ if $m$ is odd, $m=2r+1$. In each case, the number of triples has an explicit form from the formula of Gessel-Viennot. The expression of $\Theta_{k,\ell}^{\circlearrowleft}$ follows. Finally, notice that the set of symmetric non-intersecting triples is stable under swapping $x$-coordinates and $y$-coordinates, yielding the relation $\Theta_{k,\ell}^{\circlearrowleft}=\Theta_{\ell,k}^{\circlearrowleft}$. Thus the formula for $\Theta_{k,\ell}^{\circlearrowleft}$ when $k$ is even and $\ell$ is odd simply follows from the formula obtained when $k$ is odd and $\ell$ is even. 
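For small $k$ and $\ell$ the symmetric triples of Lemma 6.1 can also be enumerated directly from the definition. The following brute-force Python sketch (ours, purely illustrative) counts the non-intersecting triples that are stable under the point reflection through $S=(k/2+1,\ell/2+1)$; its output can be compared with the formulas above (for instance, it returns $1$ for $k=\ell=0$ and $0$ whenever $k$ and $\ell$ are both odd, in line with part (i)).

```python
# Sketch: direct enumeration of the symmetric triples of Lemma 6.1.
from itertools import combinations

def paths(start, k, l):
    """All upright lattice paths with k right-steps and l up-steps,
    each returned as a frozenset of visited points."""
    x0, y0 = start
    out = []
    for rights in combinations(range(k + l), k):
        pts, x, y = [(x0, y0)], x0, y0
        for i in range(k + l):
            if i in rights:
                x += 1
            else:
                y += 1
            pts.append((x, y))
        out.append(frozenset(pts))
    return out

def symmetric_triples(k, l):
    """Triples (P1, P2, P3) from (0,2), (1,1), (2,0) to (k,l+2), (k+1,l+1),
    (k+2,l), pairwise disjoint and stable under the point reflection
    through S = (k/2 + 1, l/2 + 1); note that 2S = (k + 2, l + 2)."""
    refl = lambda pts: frozenset((k + 2 - x, l + 2 - y) for (x, y) in pts)
    count = 0
    for p2 in paths((1, 1), k, l):
        if refl(p2) != p2:            # P2 must be mapped to itself
            continue
        for p1 in paths((0, 2), k, l):
            p3 = refl(p1)             # P3 is forced to be the mirror image of P1
            if p1.isdisjoint(p2) and p2.isdisjoint(p3) and p1.isdisjoint(p3):
                count += 1
    return count

for k in range(4):
    for l in range(4):
        print(k, l, symmetric_triples(k, l))
```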
Considering bipolar orientations, the effect of the half-turn symmetry of a separating decomposition on the associated plane bipolar orientation is clearly that the orientation is unchanged when the poles are exchanged, the directions of all edges are reversed, and the root-edge is flipped to the other side of the outer face (in fact it is more convenient to forget about the root-edge here). Such bipolar orientations are called pole-symmetric. The whole discussion on symmetric structures is summarized in the following proposition and illustrated in Figure LABEL:fig:symmetriesFig. Proposition 6.1. The number $\Theta_{k,\ell}^{\circlearrowleft}$ counts • pole-symmetric 2-orientations with $k+1$ white vertices and $\ell+1$ black vertices, • pole-symmetric separating decompositions and 2-book embeddings with $k+1$ white vertices and $\ell+1$ black vertices, • symmetric twin pairs of full binary trees with $k+1$ left leaves and $\ell+1$ right leaves, • pole-symmetric rectangulations of $X_{n}$ with $k$ horizontal and $\ell$ vertical segments • symmetric Baxter permutations of $k+\ell+1$ with $k$ descents and $\ell$ rises, • pole-symmetric plane bipolar orientations with $k$ inner faces and $\ell$ non-pole vertices. 7 Schnyder Families Baxter numbers count 2-orientations on quadrangulations and several other structures. We now turn to a family of structures which are equinumerous with 3-orientations of plane triangulations. Consider a plane triangulation $T$, i.e., a maximal plane graph, with $n$ vertices and three special vertices $a_{1},a_{2},a_{3}$ in clockwise order around the outer face. Definition 7.1. An orientation of the inner edges of $T$ is a 3-orientation††margin: 3-orientation if every inner vertex has outdegree three. From the count of edges it follows that the special vertices $a_{i}$ are sinks in every 3-orientation. Definition 7.2. An orientation and coloring of the inner edges of $T$ with colors red, green and blue is a Schnyder wood††margin: Schnyder wood if: (1)All edges incident to $a_{1}$ are red, all edges incident to $a_{2}$ are green and all edges incident to $a_{3}$ are blue. (2)Every inner vertex $v$ has three outgoing edges colored red, green and blue in clockwise order. All the incoming edges in an interval between two outgoing edges are colored with the third color, see Figure LABEL:fig:vertex-3-cond. \PsFigCap 30vertex-3-condSchnyder’s edge coloring rule. Theorem 7.1. Let $T$ be a plane triangulation with outer vertices $a_{1},a_{2},a_{3}$. Schnyder woods and 3-orientations of $T$ are in bijection. The proof is very similar to the proof of Theorem 2.1. Given an edge $e$ which is incoming at $v$, we can classify the outgoing edges at $v$ as left, straight and right. Define the straight-path of an edge as the path which always takes the straight outgoing edge. A count and Euler’s formula shows that every straight-path ends in a special vertex. The special vertex where a straight-path ends determines the color of all the edges along the path. It can also be shown that two straight-paths starting at a vertex do not rejoin. This implies that the coloring of the orientation is a Schnyder wood. From this proof it follows that the local properties (1) and (2) of Schnyder woods imply: (3)The edges of each color form a tree rooted at a special vertex and spanning all the inner vertices. Recall that in the case of separating decompositions we also found the tree decomposition being implied by local conditions (c.f. item (3) after the proof of Theorem 2.1). Note. 
Schnyder woods were introduced by Schnyder in [37] and [38]. They have numerous applications in the context of graph drawing, e.g., [4, 8, 32], dimension theory for orders, graphs and polytopes, e.g., [37, 9, 21], enumeration and encoding of planar structures, e.g., [34, 24]. The connection with 3-orientations was found by de Fraysseix and Ossona de Mendez [14]. The aim of this section is to prove the following theorem of Bonichon. Theorem 7.2. The total number of Schnyder woods on triangulations with $n+3$ vertices is $$V_{n}=C_{n+2}\,C_{n}-C_{n+1}^{2}=\frac{6\,(2n)!\;(2n+2)!}{n!\;(n+1)!\;(n+2)!\;% (n+3)!}$$ where $C_{n}$ is the Catalan number. Before going into details we outline the proof. We first show a bijection between Schnyder woods and a special class of bipolar orientations of plane graphs. We trace these bipolar orientations through the bijection with separating decompositions, twin pairs of trees and triples of non-intersecting paths. Two of the three paths turn out to be equal and the remaining pair is a non-crossing pair of Dyck paths. This implies the formula. Note. The original proof, Bonichon [7], and a more recent simplified version, Bernardi and Bonichon [6], are also based on a bijection between Schnyder woods and pairs of non-crossing Dyck paths. In [6] the authors also enumerate special classes of Schnyder woods. Little is known about the number of Schnyder woods of a fixed triangulation. In [22] it is shown that the maximal number of Schnyder woods a triangulation on $n$ vertices can have is asymptotically between $2.37^{n}$ and $3.56^{n}$. As with 2-orientations, the computational complexity of the counting problem is unknown. Proposition 7.1. There is a bijection between Schnyder woods on triangulations with $n+3$ vertices and bipolar orientations of maps with $n+2$ vertices and the special property: ($\star$) The right side of every bounded face is of length two. Proof. Let $T$ be a triangulation with a Schnyder wood $S$. With $(T,S)$ we associate a pair $(M,B)$, where $M$ is a subgraph of $T$ and $B$ a bipolar orientation on $M$. The construction is in two steps. First we delete the edges of the green tree in $S$ and the special vertex of that tree, i.e., $a_{2}$, as well as the two outer edges incident to $a_{2}$, from the graph; the resulting graph is $M$. Then we reverse the orientation of all blue edges and orient the edge $\{a_{3},a_{1}\}$ from $a_{3}$ to $a_{1}$; this is the orientation $B$. Figure LABEL:fig:schnyder-bip shows an example. \PsFigCap 20schnyder-bipA Schnyder wood and the corresponding bipolar orientation. The orientation $B$ has $a_{3}$ as unique source and $a_{1}$ as unique sink. To show that it is bipolar we verify properties V and F. Property V requires that at a vertex $v\neq s,t$ the edges partition into nonempty intervals of incoming and outgoing edges; this is immediate from the edge coloring rule (2) and the construction of $B$. For Property F consider a bounded face $f$ of $M$. Suppose that $f$ is of degree $>3$; then there were some green edges triangulating the interior of $f$. The coloring rule for the vertices on the boundary of $f$ implies that these green edges form a fan, as indicated in Figure LABEL:fig:green-edges. From the green edges we recover, again with the coloring rule, the orientation of the boundary edges of $f$ in $B$: the neighbors of the tip vertex of the green edges are the unique source and sink of $f$. This also implies that the right side of $f$ is of length two, i.e., $(\star)$. 
If $f$ is a triangle, then two of its edges are of the same color, say red. The coloring rule implies that these two edges point to their common vertex, whence the triangle has unique source and sink. Since the transitive vertex of $f$ has a green outgoing edge in $S$, it is on the right side and $(\star)$ also holds for $f$. \PsFigCap 17green-edgesFrom a generic face in $S$ to $B$ and back. For the converse mapping, consider a pair $(M,B)$ such that $(\star)$ holds. Every vertex $v\neq s,t$ has a unique face where it belongs to the right side. This allows us to identify the red and the blue outgoing edges of $v$. Property $(\star)$ warrants that there is no conflict. The green edges are the edges triangulating faces of larger degree together with edges reaching from the right border to the additional outer vertex $a_{2}$. This yields a unique Schnyder wood on a triangulation. Given a plane bipolar orientation $(M,B)$ with $n+2$ vertices and the $(\star)$ property, we apply the bijection from Proposition 5.3 to obtain a quadrangulation $Q$ with a separating decomposition. Property $(\star)$ is equivalent to ($\star^{\prime}$)Every white vertex (except the rightmost one) has a unique incoming edge in the blue tree. In particular it follows that there is a matching between vertices $v\neq s,t$ and bounded faces of $M$, hence, in $Q$ there are $n+2$ black and $n+1$ white vertices. The separating decomposition of $Q$ yields twin-alternating trees with $n+1$ black and $n$ white vertices (Theorem 2.2). From the twin-alternating pair we get to a twin binary pair of trees with $n+1$ black and $n$ white vertices (Theorem 4.1). This pair of trees yields a triple of non-intersecting paths (Theorem 4.3). Figure LABEL:fig:schnyder-to-paths shows an example of the sequence of transformations. \PsFigCap 12schnyder-to-pathsFrom a Schnyder wood to three strings. From ($\star^{\prime}$) we get some crucial properties of the fingerprint and the bodyprints of the blue tree $T^{b}$ and the red tree $T^{r}$. Fact 1.  If we add a leading 1 to the reduced fingerprint $\hat{\alpha}$, then we obtain a Dyck word; in symbols $(01)^{n}\leq_{{\sf dom}}1+\hat{\alpha}$. Proof. It is better to think of $1+\hat{\alpha}$ as the fingerprint $\alpha^{b}$ of the blue tree after removal of the last 0. Property ($\star^{\prime}$) implies that there is a matching between all 1’s and all but the last 0’s in the $\alpha^{b}$, such that each 1 is matched to a 0 further to the right. $\triangle$ Fact 2.  The fingerprint uniquely determines the bodyprint of the blue tree, precisely $\overline{\beta^{b}}=1+\hat{\alpha}$. Proof. From ($\star^{\prime}$) it follows that $\alpha^{b}_{i}=1$ implies $\beta^{b}_{i+1}=0$. Since $\alpha^{b}$ has $n$ entries 1 and $\beta^{b}$ that same number of 0’s, it follows that $\beta^{b}$ is determined by $\alpha^{b}$. $\triangle$ Let $\alpha^{*}=1+\hat{\alpha}$ and $\beta^{*}=1+\hat{\beta}^{r}$; then $(01)^{n}\leq_{{\sf dom}}\alpha^{*}\leq_{{\sf dom}}\beta^{*}$. We omit the proof that actually every pair $(\alpha^{*},\beta^{*})$ of 0,1 strings from $\binom{2n}{n}$ with these properties comes from a unique Schnyder wood on a triangulation with $n+3$ vertices. Translating the resulting bijection with strings into the language of paths we obtain: Theorem 7.3. 
There is a bijection between Schnyder woods on triangulations with $n+3$ vertices and pairs $(P_{1},P_{2})$ of non-intersecting upright lattice paths, where $P_{1}$ is from $(0,0)$ to $(n,n)$, $P_{2}$ is from $(1,-1)$ to $(n+1,n-1)$, and the paths stay weakly below the diagonal, i.e., they avoid all points $(x,y)$ with $y>x$. For the actual counting of Schnyder woods we again apply the lemma of Gessel and Viennot. The entry $A_{i,j}$ in the matrix is the number of paths from the start of $P_{i}$ to the end of $P_{j}$ staying weakly below the diagonal. The reflection principle of D. André allows us to write these numbers as differences of binomials. Proposition 7.2. The number of Schnyder woods on triangulations with $n+3$ vertices is $$\det\begin{pmatrix}{2n\choose n}-{2n\choose n-1}&&{2n\choose n+1}-{2n\choose n% -2}\\ {2n\choose n-1}-{2n\choose n-2}&&{2n\choose n}-{2n\choose n-3}\end{pmatrix}=% \frac{6\,(2n)!\;(2n+2)!}{n!\;(n+1)!\;(n+2)!\;(n+3)!}$$ Acknowledgements. Mireille Bousquet-Mélou and Nicolas Bonichon are greatly thanked for fruitful discussions. References [1] E. Ackerman, G. Barequet, and R. Pinter, On the number of rectangulations of a planar point set, J. Combin. Theory Ser. A, 113 (2006), pp. 1072–1091. [2] M. Aigner, A Course in Enumeration, vol. 238 of Graduate Texts in Mathematics, Springer-Verlag, 2007. [3] J. D. Annan, The complexity of the coefficients of the Tutte polynomial, Discr. Appl. Math., 57 (1995), pp. 93–103. [4] I. Bárány and G. Rote, Strictly convex drawings of planar graphs, Documenta Math., 11 (2006), pp. 369–391. [5] R. J. Baxter, Dichromatic polynomials and Potts models summed over rooted maps, Annals of Combinatorics, 5 (2001), p. 17. [6] O. Bernardi and N. Bonichon, Catalan intervals and realizers of triangulations. arXiv.org:0704.3731. [7] N. Bonichon, A bijection between realizers of maximal plane graphs and pairs of non-crossing Dyck paths, Discrete Mathematics, 298 (2005), pp. 104–114. [8] N. Bonichon, S. Felsner, and M. Mosbah, Convex drawings of 3-connected planar graphs, Algorithmica, 47 (2007), pp. 399–420. [9] G. Brightwell and W. T. Trotter, The order dimension of convex polytopes, SIAM J. Discrete Math., 6 (1993), pp. 230–245. [10] T. Brylawski and J. Oxley, The Tutte polynomial and its applications, in Matroid Applications, Cambr. Univ. Press, 1992, pp. 123–225. [11] N. Chiba, T. Nishizeki, S. Abe, and T. Ozawa, A linear algorithm for embedding planar graphs using PQ-trees, J. Comput. Syst. Sci., 30(1) (1985), pp. 54–76. [12] F. Chung, R. Graham, V. Hoggatt, and M. Kleiman, The number of Baxter permutations, J. Comb. Theory, Ser. A, 24 (1978), pp. 382–394. [13] R. Cori, S. Dulucq, and G. Viennot, Shuffles of pharentesis systems and Baxter permutations, J. Comb. Theory, Ser. A, 43 (1986), pp. 1–22. [14] H. de Fraysseix and P. O. de Mendez, On topological aspects of orientation, Discr. Math., 229 (2001), pp. 57–72. [15] H. de Fraysseix, P. O. de Mendez, and J. Pach, A left-first search algorithm for planar graphs, Discrete Computational Geometry, 13 (1995), pp. 459–468. [16] H. de Fraysseix, P. Ossona de Mendez, and P. Rosenstiehl, Bipolar orientations revisited, Discrete Appl. Math., 56 (1995), pp. 157–179. [17] P. O. de Mendez, Orientations bipolaires, PhD thesis, École des Hautes Études en Sciences Sociales, Paris, 1994. [18] S. Dulucq and O. Guibert, Stack words, standard tableaux and Baxter permutations, Discr. Math., 157 (1996), pp. 91–106. [19] S. Dulucq and O. Guibert, Baxter permutations, Discr. Math., 180 (1998), pp. 143–156. 
[20] S. Felsner, C. Huemer, S. Kappes, and D. Orden, Binary labelings for plane quadrangulations and their relatives. arXiv:math.CO/0612021, 2007. [21] S. Felsner and S. Kappes, Orthogonal surfaces, Order, (2008). DOI:10.1007/s11083-007-9075-z. [22] S. Felsner and F. Zickfeld, On the number of planar orientations with prescribed degrees. arXiv:math.CO/0701771. [23] E. Fusy, Straight-line drawing of quadrangulations, in Proceedings of Graph Drawing ’06, vol. 4372 of LNCS, 2007, pp. 234–239. [24] E. Fusy, D. Poulalhon, and G.Schaeffer, Dissection and trees, with applications to optimal mesh encoding and random sampling, in Proc. 16th ACM-SIAM Symp. Discr. Algo., 2005, pp. 690 – 699. [25] E. Fusy, D. Poulalhon, and G. Schaeffer, Bijective counting of bipolar orientations, Electr. Notes in Discr. Math., (2007), pp. 283–287. [26] I. Gelfand, M. Graev, and A. Postnikov, Combinatorics of hypergeometric functions associated with positive roots, in The Arnold-Gelfand Mathematical Seminars: Geometry and Singularity Theory, V. I. e. a. Arnold, ed., Birkhäuser, 1997, pp. 205–221. [27] I. Gessel and G. Viennot, Binomial determinants, paths, and hook length formulae, Adv. Math., 58 (1985), pp. 300–321. [28] O. Guibert and S. Linusson, Doubly alternating Baxter permutations are Catalan, Discr. Math., 217 (2000), pp. 157–166. [29] I. B.-H. Hartman, I. Newman, and R. Ziv, On grid intersection graphs, Discr. Math., 87 (1991), pp. 41–52. [30] G. Kant and X. He, Regular edge labeling of 4-connected plane graphs and its applications in graph drawing problems, Theor. Comput. Sci., 172 (1997), pp. 175–193. [31] A. Lempel, S. Even, and I. Cederbaum, An algorithm for planarity testing of graphs, in Theory of Graphs, Int. Symp (New York), 1967, pp. 215–232. [32] C. Lin, H. Lu, and I.-F. Sun, Improved compact visibility representation of planar graphs via Schnyder’s realizer, SIAM J. Discrete Math., 18 (2004), pp. 19–29. [33] C. Mallows, Baxter permutations rise again, J. Comb. Theory, Ser. A, 27 (1979), pp. 394–396. [34] D. Poulalhon and G. Schaeffer, Optimal coding and sampling of triangulations, in Proceedings ICALP ’03, vol. 2719 of Lecture Notes Comput. Sci., Springer-Verlag, 2003, pp. 1080–1094. [35] P. Rosenstiehl and R. E. Tarjan, Rectilinear planar layouts and bipolar orientations of planar graphs, Discrete Comput. Geom., 1(4) (1986), pp. 343–353. [36] G. Rote, I. Streinu, and F. Santos, Expansive motions and the polytope of pointed pseudo-triangulations, in Discrete and Computational Geometry, The Goodman and Pollack Festschrift, vol. 25 of Algorithms and Combinatorics, Springer, 2003, pp. 699–736. [37] W. Schnyder, Planar graphs and poset dimension, Order, 5 (1989), pp. 323–343. [38] W. Schnyder, Embedding planar graphs on the grid, in Proc. 1st ACM-SIAM Symp. Discr. Algo., 1990, pp. 138–148. [39] N. J. A. Sloane, The on-line encyclopedia of integer sequences. http://www.research.att.com/~njas/sequences. [40] R. P. Stanley, Enumerative Combinatorics, vol. 2, Cambridge Univ. Press, 1999. [41] R. Tamassia and I. G. Tollis, A unified approach to visibility representations of planar graphs, Discrete Comput. Geom., 1(4) (1986), pp. 321–341. [42] R. Tamassia and I. G. Tollis, Planar grid embedding in linear time, IEEE Trans. on Circuits and Systems, CAS-36(9) (1989), pp. 1230–1234. [43] M. Yannakakis, Embedding planar graphs in four pages, J. Comput. System Sci., 38 (1986), pp. 36–67.
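The counting formulas above are easy to verify numerically for small $n$. The following short Python snippet, a sanity check of Theorem 7.2 and Proposition 7.2 rather than part of the proofs, evaluates $V_{n}$ via the Catalan expression, via the Gessel–Viennot determinant with reflection-principle entries, and via the closed product formula, and confirms that the three agree.

from math import comb, factorial

def binom(m, j):
    # Binomial coefficient with the convention binom(m, j) = 0 for j < 0 or j > m.
    return comb(m, j) if 0 <= j <= m else 0

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def v_catalan(n):
    # Theorem 7.2: V_n = C_{n+2} C_n - C_{n+1}^2
    return catalan(n + 2) * catalan(n) - catalan(n + 1) ** 2

def v_determinant(n):
    # Proposition 7.2: 2x2 Gessel-Viennot determinant (reflection-principle entries).
    a11 = binom(2 * n, n) - binom(2 * n, n - 1)
    a12 = binom(2 * n, n + 1) - binom(2 * n, n - 2)
    a21 = binom(2 * n, n - 1) - binom(2 * n, n - 2)
    a22 = binom(2 * n, n) - binom(2 * n, n - 3)
    return a11 * a22 - a12 * a21

def v_closed(n):
    # 6 (2n)! (2n+2)! / (n! (n+1)! (n+2)! (n+3)!)
    return (6 * factorial(2 * n) * factorial(2 * n + 2)
            // (factorial(n) * factorial(n + 1) * factorial(n + 2) * factorial(n + 3)))

for n in range(1, 10):
    assert v_catalan(n) == v_determinant(n) == v_closed(n)
print([v_closed(n) for n in range(1, 6)])  # [1, 3, 14, 84, 594]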
Distributed Kernel Principal Component Analysis Maria-Florina Balcan,   Yingyu Liang,  Le Song,  David Woodruff,  Bo Xie School of Computer Science, Carnegie Mellon University. Email: ninamf@cs.cmu.eduDepartment of Computer Science, Princeton University. Email: yingyul@cs.princeton.eduCollege of Computing, Georgia Institute of Technology. Email:lsong@cc.gatech.eduAlmaden Research Center, IBM Research. Email: dpwoodru@us.ibm.comCollege of Computing, Georgia Institute of Technology. Email:bo.xie@gatech.edu Abstract Kernel Principal Component Analysis (KPCA) is a key technique in machine learning for extracting the nonlinear structure of data and pre-processing it for downstream learning algorithms. We study the distributed setting in which there are multiple servers, each holding a set of points, who wish to compute the principal components of the union of their pointsets. Our main result is a communication and computationally efficient algorithm that takes as input arbitrary data points and computes a set of global principal components with relative-error approximation for polynomial kernels. While recent work shows how to do PCA in a distributed setting, the kernel setting is significantly more challenging. Although the “kernel trick” is useful for efficient computation, it is unclear how to use it to reduce communication. The main problem with previous work is that it achieves communication proportional to the dimension of the data points, which if implemented straightforwardly in the kernel setting would give communication either proportional to the dimension of the feature space, or to the number of examples, both of which could be very large. We instead take a roundabout approach, using a careful combination of oblivious subspace embeddings for kernels, oblivious leverage score approximation, adaptive sampling, and sketching for low rank approximation to achieve our result. We also show that our algorithm enjoys strong performance on large scale datasets. 1 Introduction Principal Component Analysis (PCA) is a widely used tool for dimensionality reduction and data pre-processing. It consists of finding the most relevant lower-dimension subspace of the data in the sense that the projection should capture as much of the variance of the original data as possible. Kernel Principal Component Analysis (KPCA) is the extension of this method to data mapped to the kernel feature space (Schölkopf et al., 1997). It allows one to exploit the nonlinear structure of the data, since the vectors in the kernel feature space correspond to nonlinear functions in the original space. This is crucial for complex data such as text and image data, and thus KPCA finds its application for many learning problems. However, such data sets are often large scale, and KPCA is known to be computationally expensive and not easily scalable. Since large scale data is often partitioned across multiple servers, there is an increasing interest in solving learning problems in the distributed model. Kernel methods in this setting face an additional subtle difficulty. The algorithm should use the kernel trick to avoid going to the kernel feature space explicitly, so intermediate results are often represented by a function (e.g., a weighted combination) of the feature mapping of some data points. Communicating such intermediate results requires communicating all the data points they depend on. To lower the communication cost, the intermediate results should only depend on a small number of data points. 
A distributed algorithm then needs to be carefully designed to meet this constraint. In this paper, we propose a distributed algorithm to compute KPCA for the polynomial kernel. For $n$ data points in $\mathbb{R}^{d}$ arbitrarily partitioned over $s$ servers, the algorithm computes a rank-$k$ subspace in the kernel feature space, which is represented by $O(k/\epsilon)$ original points and achieves a $(1+\epsilon)$ relative-error approximation compared to the best rank-$k$ subspace. The total communication of the algorithm is $\tilde{O}(sdk/\epsilon+s\mathrm{poly}(k/\epsilon))$, which nearly matches $O(sdk/\epsilon)$, the communication of the state-of-the-art algorithms for linear PCA. As far as we know, this is the first algorithm that can achieve relative-error approximation with these communication bounds. It also leads to distributed algorithms for some other kernel methods: the data can then be projected onto the subspace found and processed by downstream non-linear applications. For example, our algorithm combined with any $\alpha$-approximation distributed $k$-means algorithm leads to a $(1+\epsilon)\alpha$-approximation distributed kernel $k$-means clustering algorithm. An intermediate result in the development of our algorithm that may be of independent interest is a distributed algorithm for approximating statistical leverage scores for polynomial kernels. This is a subroutine in our distributed KPCA algorithm, but it can also be useful for other problems where leverage scores are related. Finally, our method can be viewed as a general framework for distributed kernel PCA. It can be applied to other kernels as long as subspace embedding for the kernel is available; see Section 2.2 for more details. Experimental results on large scale real world data show that our algorithm can be run on large datasets for which non-distributed algorithms can be prohibitive, and can achieve significantly better error than the baseline with the same communication budget, with running time about twice that of basic approaches on datasets of size smaller by orders of magnitude. 1.1 Overview of Main Results Suppose there are $s$ servers that are connected to a central processor. Server $i\in[s]$ has a local data set $A^{i}\in\mathbb{R}^{d\times n_{i}}$, and the global data set $A\in\mathbb{R}^{d\times n}$ is the union of the local data ($n=\sum_{i=1}^{s}n_{i}$). Consider a polynomial kernel $k(x,x^{\prime})=(\left\langle x,x^{\prime}\right\rangle)^{q}$, with feature mapping $\phi(x)\in\mathcal{H}=\mathbb{R}^{d^{q}}$ such that $k(x,x^{\prime})=\left\langle\phi(x),\phi(x^{\prime})\right\rangle$ . Let $\phi(A)\in\mathcal{H}^{n}$ denote the matrix obtained by applying $\phi$ on each column of $A$ and concatenating the resulting vectors. The goal of distributed KPCA is to find and send to each server a subspace of dimension $k$ that approximates $\phi(A)$. Thus it is also called (kernel) low rank approximation, and the subspace is called a low rank approximate subspace. Definition 1 A subspace $L\in\mathcal{H}^{k}$ is a rank-$k$ $(1+\epsilon)$-approximate subspace for $\phi(A)$ if $L^{\top}L=I_{k}$ and $$\left\|\phi(A)-LL^{\top}\phi(A)\right\|_{F}\leq(1+\epsilon)\left\|\phi(A)-% \left[\phi(A)\right]_{k}\right\|_{F}$$ where $\left[\phi(A)\right]_{k}$ is the best rank-$k$ approximation to $\phi(A)$. Our main result is a randomized algorithm for distributed KPCA. 
It takes as input the local datasets, a rank $k$, and an error parameter $\epsilon$, and outputs a subspace that with constant probability is a relative-error approximation to the optimum. Roughly speaking, it first computes a set of weights measuring the importance of the data points for the task, and then samples according to the weights a small subset of points whose span is guaranteed to contain a good subspace. Finally, it computes such a good solution within the span of the sampled points. The three steps (weighting, sampling, and computing a solution) are all carefully designed by judiciously using (variants of) several techniques, to achieve the following bounds on accuracy, communication and computation:

Theorem 2 For polynomial kernels, there exists a randomized algorithm (Algorithm 1) that produces a subspace $L$ such that with probability at least $0.9$, $L$ is a rank-$k$ $(1+\epsilon)$-approximate subspace for $\phi(A)$. The total communication cost required is $\tilde{O}(\frac{sdk}{\epsilon}+\frac{sk^{2}}{\epsilon^{3}})$.

The constant success probability can be boosted up to any high probability $1-\delta$ by repetition, which adds only an extra $O(\log\frac{1}{\delta})$ term to the communication and computation. The output subspace $L$ is represented by $O(k/\epsilon)$ sampled points $Y$ from $A$ (i.e., $L=\phi(Y)C$ for some coefficient matrix $C\in\mathbb{R}^{|Y|\times k}$), so $L$ can be easily communicated, and the projection of any data point onto $L$ can be easily computed by the kernel trick. The communication cost has linear dependence on the dimension and the number of servers, and has no dependence on the number of data points (the $\tilde{O}$ only hides a factor of $\log k$). When $d>\frac{k}{\epsilon^{2}}$ and $k<e^{1/\epsilon}$, which is typical in big data scenarios, the communication of our algorithm is $O(sdk/\epsilon)$, which matches the $O(sdk/\epsilon)$ cost of the state-of-the-art distributed linear PCA algorithms (Balcan et al., 2014; Kannan et al., 2014).

An immediate application is to distributed kernel $k$-means, which can be done by first computing KPCA to rank $k/\epsilon$ and then running $k$-means on the data projected onto the subspace found by KPCA (e.g., (Dhillon et al., 2004; 2005)). Since there exist efficient distributed $k$-means algorithms (Balcan et al., 2014), our result leads to efficient algorithms for the polynomial kernel case.

Corollary 3 Given a distributed $\alpha$-approximation $k$-means algorithm as a subroutine, there exists a randomized algorithm for polynomial kernels that with probability at least $0.9$ produces a $(1+\epsilon)\alpha$-approximate solution for spectral clustering of $\phi(A)$. The total communication cost required is $O(sdk/\epsilon^{2}+s\text{poly}(k/\epsilon))$ plus that of the subroutine on $O(k/\epsilon)$-dimensional data.

Our distributed KPCA algorithm amounts to sampling a small subset $Y$ from the original data set $A$ according to the importance of the points, such that the span of $\phi(Y)$ contains a good approximate subspace. The weights that measure this importance are the statistical leverage scores, which play a key role in many modern randomized algorithms for linear algebra and matrix computation. Computing leverage scores needs to be done with care, as naïve approaches lead to high communication and computational cost. We observe that for rank-$k$ approximation, it suffices to compute the generalized leverage scores with respect to rank $k$.
So we can use the subspace embedding technique, which summarizes the data and preserves the scores, and adopt a fast non-distributed algorithm for computing the scores. Since computing leverage scores for linear or polynomial kernels in the distributed setting is a problem of independent interest, we summarize our result as follows. Note that leverage scores are typically used for sampling (e.g., in our algorithm), in which case a constant-factor approximation suffices and the communication is as low as $O(sk^{2})$.

Theorem 4 Suppose $\ell^{ij}$ is the statistical leverage score of the $j$-th column of $A^{i}$ in $A$ with respect to rank $k$ (see Definition 8). For polynomial kernels, there exists a randomized algorithm (Algorithm 2 in Section 3) that returns values $\tilde{\ell}^{ij}$, such that with probability at least $0.9$, $|\ell^{ij}-\tilde{\ell}^{ij}|\leq\epsilon\ell^{ij}$ for all $i\in[s],j\in[n_{i}]$. The total communication cost required is $O(sk^{2}/\epsilon^{4})$.

1.2 Overview of Main Techniques

As mentioned above, our algorithm consists of three steps: weighting points, sampling a subset, and computing a good solution from the subset. All the steps employ (variants of) randomized techniques in a careful way to achieve the desired bounds. In the first step of computing leverage scores, our key technique is a subspace embedding, which projects the feature mapping of the data to a low-dimensional space while preserving the scores with respect to rank $k$, and thus allows for computing them with low communication depending on $k$ rather than on the dimension $d^{q}$. We use a communication-efficient variant of the subspace embedding for polynomial kernels in (Avron et al., 2014): first perform their embedding, and then apply another embedding using i.i.d. Gaussians or the fast Hadamard transform to further reduce the dimension. This way we can achieve both low communication and fast computation.

After computing leverage scores, one can simply sample according to them. However, this would give a $(1+\epsilon)$-approximation with a rank-$O(k/\epsilon)$ subspace, not a rank-$k$ one. Fortunately, there exists an adaptive sampling approach that can start with an $O(1)$-approximation of rank $O(k)$ and produce $O(k/\epsilon)$ samples containing a rank-$k$ $(1+\epsilon)$-approximate subspace. So we first sample a subset of $\tilde{O}(k)$ points that achieves a constant-factor approximation, and then perform adaptive sampling to pick $O(k/\epsilon)$ points $Y$ with the desired error bound.

The last step is then to find such a subspace within the span of $\phi(Y)$. Intuitively, one can just find the best rank-$k$ approximation for the projection of the data on the span of $\phi(Y)$. This naive approach has a high cost depending on the number of points, but it can be improved by the following observation: it suffices to do so on a sketch of the projected data, which essentially reduces the number of points needed. We review and provide the details of these building blocks needed for our algorithm in Section 2.

1.3 Related Work

There has been a surge of work on distributed machine learning, e.g., (Balcan et al., 2012; Zhang et al., 2012; Kannan et al., 2014; Balcan et al., 2014). The most closely related to ours are (Kannan et al., 2014; Balcan et al., 2014), which give efficient algorithms for linear PCA. Subspace embeddings are also the key element in the algorithm of the first paper. The second paper performs global PCA on top of local PCA solutions, but also uses subspace embeddings to obtain speedups.
However, these algorithms cannot be directly adapted to the polynomial kernel case. The subspace embedding approach in (Kannan et al., 2014) will need to be performed on the explicit feature mapping, which is costly; the approach in (Balcan et al., 2014) requires sending the local PCA solutions, which need to be represented by the explicit feature mapping or by the set of all the local points, neither of which is practical. To the best of our knowledge, our algorithm is the first distributed kernel PCA algorithm with provable relative error and non-trivial bounds on communication and computation. The key technique, that of a subspace embedding, has been extensively studied in recent years (Sarlós, 2006; Achlioptas, 2003; Arriaga and Vempala, 1999; Ailon and Chazelle, 2009; Clarkson and Woodruff, 2013). The recent fast sparse subspace embeddings (Clarkson and Woodruff, 2013) and its optimizations (Meng and Mahoney, 2013; Nelson and Nguyên, 2013) are particularly suitable for large scale sparse datasets, since their running time is linear in the number of non-zero entries in the data matrix. They also preserve the sparsity of the input data. The work of (Avron et al., 2014) shows that a fast computational approach, TensorSketch, is indeed a subspace embedding for the polynomial kernel, which is a key tool used by our algorithm. Another element is leverage score sampling, which has a long history in data analysis for regression diagnostics and recently has been successfully used in the development of randomized matrix algorithms. See (Woodruff, 2014) for a detailed discussion. A prior work of Boutsidis et al. (Boutsidis et al., 2015) gives the first distributed protocol for column subset selection. Our work also selects columns, but our main advance over (Boutsidis et al., 2015) is that our protocol is the first to work for low rank approximation in the kernel space rather than the original space. Our result involves several key differences over (Boutsidis et al., 2015). For instance, the polynomial kernel can only be sketched on one side, so we first sketch that side, obtaining $S\cdot\phi(A)$, but then we can only hope to obtain leverage scores for $S\cdot\phi(A)$, which we do by a new distributed protocol. This turns out to be sufficient for us because we know $S\cdot\phi(A)$ contains a rank-$k$ $O(1)$-approximation to $\phi(A)$ if $S$ has $\Theta(k)$ rows, and therefore if we sample $O(k\log k)$ columns of $\phi(A)$ proportional to the leverage scores of $S\cdot\phi(A)$ then the projection of $\phi(A)$ onto these columns has cost at most a constant factor times optimal. Another difference is that in all intermediate steps in our protocol we can only afford to communicate columns in the original space, that is, both the dimension $d^{q}$ and the number $n$ of examples are too large, whereas all prior protocols depend linearly on at least one of these quantities. This forces tasks such as adaptive sampling and computing an approximate subspace inside the columns found, that have been used in algorithms such as (Boutsidis and Woodruff, 2014), to all be done implicitly, and we use the kernel trick several times to do so. 2 Review of Techniques 2.1 Notations In the distributed setting, there are $s$ servers that are connected to a central processor. Server $i\in[s]$ has a local data set $A^{i}\in\mathbb{R}^{d\times n_{i}}$, and the global data set $A\in\mathbb{R}^{d\times n}$ is the concatenation of the local data ($n=\sum_{i=1}^{s}n_{i}$). 
For any matrix $M\in\mathbb{R}^{d\times n}$ and any $i\in[d],j\in[n]$, let $M_{i:}$ denote the $i$-th row of $M$ and $M_{:j}$ denote the $j$-th column of $M$. Let $\left\|M\right\|_{F}$ denote its Frobenius norm, and $\left\|M\right\|_{2}$ its spectral norm. Let $I_{d}$ denote the $d\times d$ identity matrix. Let the rank of $M\in\mathbb{R}^{d\times n}$ be $r\leq\min\left\{n,d\right\}$, and denote the SVD of $M$ as $M=U\Sigma V^{\top}$, where $U\in\mathbb{R}^{d\times r},\Sigma\in\mathbb{R}^{r\times r}$, and $V\in\mathbb{R}^{n\times r}$. For a kernel $k(x,x^{\prime})=(\left\langle x,x^{\prime}\right\rangle)^{q}$, let $\phi(\cdot)\in\mathcal{H}$ denote the corresponding feature mapping. Let $M\in\mathcal{H}^{t}$ denote a matrix in $\mathbb{R}^{d^{q}\times t}$ where each column is regarded as an element in $\mathcal{H}$, and let $\left\|M\right\|_{\mathcal{H}}^{2}=\mathop{\mathrm{tr}}(M^{\top}M)$. For polynomial kernels, $\mathcal{H}=\mathbb{R}^{d^{q}}$, $\left\langle\cdot,\cdot\right\rangle_{\mathcal{H}}$ is the regular dot product, and $\left\|M\right\|_{\mathcal{H}}$ is equivalent to $\left\|M\right\|_{F}$.

2.2 Subspace Embeddings

Subspace embeddings are a very useful technique that can reduce computational and space costs by embedding data points into a lower dimension while preserving the properties of interest. Formally,

Definition 5 An $\epsilon$-subspace embedding of $M\in\mathbb{R}^{m\times n}$ is a matrix $S\in\mathbb{R}^{t\times m}$ such that for any vector $x\in\mathbb{R}^{n}$, $$\left\|SMx\right\|_{2}=(1\pm\epsilon)\left\|Mx\right\|_{2}.$$

$Mx$ is in the column space of $M$ and $SMx$ is its embedding, so the definition means that the norm of any vector in the column space of $M$ is approximately preserved. Subspace embeddings can also be applied on the right-hand side, i.e., $S\in\mathbb{R}^{n\times t}$ and $\left\|x^{\top}MS\right\|_{2}=(1\pm\epsilon)\left\|x^{\top}M\right\|_{2}$ for any $x\in\mathbb{R}^{m}$. Our algorithm repeatedly makes use of subspace embeddings. In particular, the embedding we use is the concatenation of the following known subspace embeddings: CountSketch and i.i.d. Gaussians (or the concatenation of CountSketch, fast Hadamard and i.i.d. Gaussians). Due to space limitations, we do not present the details, which can be found in (Woodruff, 2014); we only need the following fact.

Lemma 6 For $M\in\mathbb{R}^{m\times n}$, there exist subspace embeddings $S\in\mathbb{R}^{t\times m}$ with $t=O(n/\epsilon^{2})$. Furthermore, $SM$ can be successfully computed in time $\tilde{O}(\mathrm{nnz}(M))$ with probability at least $1-\delta$, where $\mathrm{nnz}(M)$ is the number of non-zero entries in $M$.

Kernel subspace embeddings. Subspace embeddings can also be generalized to the feature mapping of kernels, simply by setting $M=\phi(A)$, $S\in\mathcal{H}^{t}$ and using the corresponding inner product in the definition. If the kernel subspace embedding suffices for solving the problem under consideration, then one only needs to deal with the data $S\phi(A)$ in a much lower dimension rather than going to the feature-mapping space. This is especially interesting for distributed kernel methods, since naively using the feature mapping or the kernel trick in this setting would lead to a high communication cost.
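To make the kernel subspace embedding concrete for the polynomial kernel, the following NumPy sketch implements a basic TensorSketch in the spirit of (Avron et al., 2014): each point is hit by $q$ independent CountSketches, and their FFTs are multiplied entrywise, so that inner products of the sketched columns approximate $(\left\langle x,x^{\prime}\right\rangle)^{q}$ without ever forming the $d^{q}$-dimensional feature vectors. This is a minimal single-machine illustration with arbitrary sizes and seed, not the exact embedding used in the algorithm, which further composes it with a Gaussian or fast Hadamard stage as in Lemma 7 below.

import numpy as np

def tensorsketch(A, q, m, rng):
    # Sketch phi(A) for k(x, y) = <x, y>^q down to m dimensions.
    # A is d x n (columns are data points); returns an m x n matrix whose
    # column inner products approximate the polynomial kernel values.
    d, n = A.shape
    h = rng.integers(0, m, size=(q, d))            # q independent CountSketch hash functions
    s = rng.choice([-1.0, 1.0], size=(q, d))       # and sign functions
    prod = np.ones((m, n), dtype=complex)
    for j in range(q):
        C = np.zeros((m, n))
        np.add.at(C, h[j], s[j][:, None] * A)      # CountSketch of every column of A
        prod *= np.fft.fft(C, axis=0)
    return np.real(np.fft.ifft(prod, axis=0))      # FFT-based convolution of the q sketches

rng = np.random.default_rng(0)
d, n, q, m = 20, 50, 2, 512
A = rng.standard_normal((d, n))
SA = tensorsketch(A, q, m, rng)
K_exact = (A.T @ A) ** q                           # kernel matrix via the kernel trick
K_approx = SA.T @ SA                               # kernel matrix from the sketch
print(np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact))  # shrinks as m grows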
For our purpose, we will need the embedding matrix $S$ to have the following properties: P1 (Subspace Embedding): For any $V\in\mathcal{H}^{k}$ with orthonormal elements (that is, $V^{\top}V=I_{k}$), for all $x\in\mathbb{R}^{k},\left\|(SV)x\right\|_{2}=(1\pm\epsilon_{0})\left\|Vx\right% \|_{\mathcal{H}}$, where $\epsilon_{0}$ is a sufficiently small constant. P2 (Approximate Product): For any $M\in\mathcal{H}^{t_{1}},N\in\mathcal{H}^{t_{2}}$, $\left\|M^{\top}S^{\top}SN-M^{\top}N\right\|_{F}\leq\sqrt{\frac{\epsilon}{k}}% \left\|M\right\|_{\mathcal{H}}\left\|N\right\|_{\mathcal{H}}$. The following lemma from (Avron et al., 2014) shows the implication of the two properties. Fact 1 Suppose $S$ satisfies P1 and P2, and suppose $E=S\phi(A)$ has SVD $E=U\Sigma V^{\top}$. Then we have $$\left\|\left[\phi(A)V\right]_{k}V^{\top}-\phi(A)\right\|_{\mathcal{H}}\leq(1+% \epsilon)\left\|\phi(A)-[\phi(A)]_{k}\right\|_{\mathcal{H}}.$$ Since $\phi(A)V$ is the projection of $\phi(A)$ onto the row space of $S\phi(A)$ and $\left[\phi(A)V\right]_{k}$ is the best rank-$k$ approximation there, Lemma 1 means that the row space of $S\phi(A)$ contains a good approximation to $\phi(A)$, up to rank-$k$ approximation error. As discussed below, $E$ can be used to compute the generalized leverage for $\phi(A)$, which then suffices for rank-$k$ approximation. For polynomial kernels, there exists an efficient algorithm TensorSketch to compute the embedding. However, the embedding dimension has a quadratic dependence on the rank $k$, which will increase the communication. Fortunately, as in Lemma 6, subspace embedding can be concatenated, so we can further apply another known subspace embedding such as i.i.d. Gaussians or fast Hadamard which, though not fast for feature mapping, is fast for the already embedded data and has lower dimension. In this way, we can enjoy the benefits of both approaches. Lemma 7 For the polynomial kernel $k(x,x^{\prime})=(\left\langle x,x^{\prime}\right\rangle)^{q}$, there exists a subspace embedding matrix $S\in\mathbb{R}^{t\times d^{q}}$ that satisfies P1 and P2 with $t=O(k/\epsilon)$. Furthermore, such $S\phi(A)$ can be successfully computed with probability at least $1-\delta$ in time $\tilde{O}\left(\left(\frac{k}{\epsilon}+q\right)nnz(A)+\frac{q3^{q}k^{2}}{% \epsilon^{2}\delta}\right)$. Proof  First use TensorSketch (Avron et al., 2014) to bring the dimension down to $O(3^{q}k^{2}+k/\epsilon)$. Then, we can use an i.i.d. Gaussian matrix, which reduces it to $t=O(k/\epsilon)$; or we can first use fast Hadamard transformation to bring it down to $O(k\mathrm{polylog}(k)/\epsilon)$, then multiply again by i.i.d Gaussians to bring down to $O(k/\epsilon)$. P1 follows immediately from the definition, so we only need to check the matrix product. Let $S=\Omega T$ where $T$ is the TensorSketch matrix and $\Omega$ is an i.i.d. Gaussian matrix. Since they are both subspace embedding matrices, then for any $M$ and $N$, $$\displaystyle\left\|M^{\top}S^{\top}SN-M^{\top}T^{\top}TN\right\|_{F}$$ $$\displaystyle\leq\sqrt{\frac{\epsilon}{k}}\left\|MT\right\|_{F}\left\|NT\right% \|_{F},$$ $$\displaystyle\left\|M^{\top}T^{\top}TTN-M^{\top}N\right\|_{F}$$ $$\displaystyle\leq\sqrt{\frac{\epsilon}{k}}\left\|M\right\|_{\mathcal{H}}\left% \|N\right\|_{\mathcal{H}}.$$ By the subspace embedding property, $\left\|MT\right\|_{F}=(1\pm\epsilon_{0})\left\|M\right\|_{\mathcal{H}}$ and $\left\|NT\right\|_{F}=(1\pm\epsilon_{0})\left\|N\right\|_{\mathcal{H}}$ for some small constant $\epsilon_{0}$. 
Combining all these bounds and choosing proper $\epsilon$, we know that $S=\Omega T$ satisfies P2.   2.3 Leverage Score Sampling The statistical leverage scores measure the nonuniform structure that is critical for importance sampling in fast randomized algorithms for many problems; see (Mahoney, 2011) for more discussion. Sampling according to leverage scores fits distributed Kernel PCA, since it leads to small sample size while providing a good approximation of the original data. In fact, for our purpose, it suffices to consider a generalized notion of leverage scores with respect to rank $k$. To provide more details, we begin with the definition. Definition 8 For $E\in\mathbb{R}^{t\times n}$ with SVD $E=U\Sigma V^{\top}$, the leverage score $\ell^{i}$ for its $i$-th column is the squared $\ell_{2}$ norm of the $i$-th column of $V^{\top}$. Formally, $\ell^{i}=\left\|V_{(i)}\right\|_{2}^{2}.$ If $E$ can approximate the row space of $M$ up to $\epsilon$, i.e., $\left\|XE-M\right\|_{F}\leq(1+\epsilon)\left\|M-[M]_{k}\right\|_{F}$, then $\left\{\ell^{i}\right\}$ are also called the leverage scores for $M$ with respect to rank $k$.111This generalizes the definition in (Drineas et al., 2012) by allowing $E$ to have rank larger than $k$. A key property of leverage scores is that a small set of columns sampled according to the leverage scores will span a subspace that is close to the subspace spanned by all the columns, e.g., Thoerem 5 in (Drineas et al., 2008): Fact 2 Suppose $E\in\mathbb{R}^{t\times n}$ has rank at most $r$, $M\in\mathbb{R}^{m\times n}$ and $\epsilon\in(0,1]$. Let $\ell^{i}$ be the leverage scores of the $i$-th column in $E$. Define sampling probabilities $p_{i}$ such that for some $\beta\in(0,1]$, $p_{i}\geq\frac{\beta\ell^{i}}{\sum_{i}\ell^{i}},$ for all $i\in[n]$. Sample $M^{(i)}$ with probability $\min\left\{1,O(\frac{r\log r}{\epsilon\beta})p_{i}\right\}$. Define an $n\times n$ diagonal sampling matrix $T$ for which $T_{i,i}=1/\sqrt{p_{i}}$ if $M^{(i)}$ is sampled, and $T_{i,i}=0$ otherwise. Then with probability at least $.9$, $$\left\|M-MT(ET)^{\dagger}E\right\|_{F}\leq(1+\epsilon)\min_{X}\left\|M-XE% \right\|_{F}.$$ Note that the theorem is stated in the general form: the leverage scores of the columns of $E$ are used for sampling the columns of $M$. A special case is when the rowspace of $E$ approximates the row space of $M$, i.e., there exists $X$ such that $\left\|M-XE\right\|_{F}$ is small. In this case, the sampling error is guaranteed to be small, and at the same time $\ell^{i}$ are just the leverage scores for $M$ with respect to rank $k$ by Definition 8. Therefore, Fact 2 means sampling columns of $M$ according to its generalized scores leads to small error for rank-$k$ approximation. In other words, our task reduces to computing the generalized scores for $M$, which further reduces to finding a matrix $E$ that approximates the row space of $M$. In general, computing leverage scores is non-trivial: naive approaches require SVD which is expensive. Even ignoring computation cost, naive SVD is prohibitive in the distributed kernel method setting due to its high communication cost. However, computing the generalized scores with respect to rank $k$ could be much more efficient, since the intrinsic dimension now becomes $k$ rather than the ambient dimension (the number of points or the dimension of the feature space). 
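As a concrete illustration of Definition 8 and the sampling rule of Fact 2, the following minimal NumPy sketch computes the column leverage scores of a small matrix $E$ from its SVD and turns them into a sampling distribution; it is a toy, single-machine illustration with arbitrary sizes, not Algorithm 2.

import numpy as np

def column_leverage_scores(E):
    # Definition 8: the score of column j is the squared norm of the j-th column of V^T.
    U, sigma, Vt = np.linalg.svd(E, full_matrices=False)
    r = int(np.sum(sigma > 1e-12 * sigma.max()))   # numerical rank of E
    return np.sum(Vt[:r, :] ** 2, axis=0)

rng = np.random.default_rng(1)
E = rng.standard_normal((5, 200))                  # a small t x n surrogate, t << n
scores = column_leverage_scores(E)
print(scores.sum())                                # the scores sum to rank(E), here 5
p = scores / scores.sum()                          # sampling probabilities (beta = 1 in Fact 2)
sampled_columns = rng.choice(E.shape[1], size=30, replace=False, p=p)

Fact 2 additionally oversamples by a factor $O(\frac{r\log r}{\epsilon\beta})$ and keeps the rescaling matrix $T$; those details are omitted in this toy sketch.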
The only difficulty left is to efficiently find a smaller matrix that approximates the row space of the original data, which can be readily solved by subspace embeddings in the case of polynomial kernels (Fact 1). Based on this, we propose a distributed algorithm (Algorithm 2) that approximates the leverage scores.

2.4 Adaptive Sampling

Leverage score sampling for $M=\phi(A)$ can be used to get a set of points $P$ such that the span of $\phi(P)$ contains a rank-$O(k/\epsilon)$ $(1+\epsilon)$-approximation to $\phi(A)$. This is still not the desired guarantee. Therefore, we first sample a constant-factor approximation and then resort to the adaptive sampling algorithm of (Deshpande and Vempala, 2006; Boutsidis and Woodruff, 2014), which reduces the bound to a linear dependence. The algorithm computes the residual errors of the points with respect to the constant-factor approximation and then samples $O(k/\epsilon)$ points accordingly, whose span is guaranteed to contain a $(1+\epsilon)$-approximation. Formally,

Fact 3 Suppose the span of $\phi(P)$ contains a constant-factor rank-$k$ approximation for $\phi(A)$, and let $r^{i}$ be the squared distance from $\phi(A_{:i})$ to its projection on the subspace spanned by $\phi(P)$. Sample a set of $O(k/\epsilon)$ points $Y$ from $A$ according to $r^{i}$. Then the span of $\phi(Y)$ contains a $(1+\epsilon)$-approximation for $\phi(A)$.

3 Algorithm

The distributed kernel PCA algorithm, described in Algorithm 1, consists of three steps. First, the algorithm computes a constant-factor approximation of the generalized leverage scores for $\phi(A)$ with respect to rank $k$. Note that the scores are used for later sampling, and an $\alpha$-approximation leads to an extra $O(\alpha)$ factor in the sample size, so a constant-factor approximation is sufficient. Algorithm 2 describes how to compute an $\alpha$-approximation. Since we only need the generalized scores with respect to rank $k$, we can apply a subspace embedding to dimension $t=O(k/\alpha)$, and then compute the scores of the embedded data $E=\left[E^{1},\dots,E^{s}\right]$. Again, we only need to approximate the scores of $E$, so we can apply another embedding to reduce the number of data points. Let $ET=\left[E^{1}T^{1},\dots,E^{s}T^{s}\right]$ denote the embedded data, and compute the QR factorization $(ET)^{\top}=UZ$. Now the rows of $U^{\top}=\left(Z^{\top}\right)^{-1}ET$ form a basis for the row space of $ET$. Then one can think of $U^{\top}T^{\dagger}=\left(Z^{\top}\right)^{-1}E$ as the corresponding basis representation of $E$, so we can simply compute the norms of the columns of $\left(Z^{\top}\right)^{-1}E$.

In the second step, the algorithm samples a subset of points $Y$ that acts as a proxy for the original data. We first sample a set $P$ of $O(k\log k)$ points according to the scores, whose span then contains a constant-factor approximate subspace. Then adaptively sampling $O(k/\epsilon)$ points according to the squared distances from the data to their projections on the span of $\phi(P)$ yields the desired $Y$.

Finally, the algorithm computes a subspace $L$ from the span of $\phi(Y)$. We first project the data on the span to get the low-dimensional data $\Pi$. Now it suffices to compute the best rank-$k$ approximation for $\Pi$. Again, since we only need an approximation, we can apply an embedding to reduce the number of data points and get $\Pi T=\left[\Pi^{1}T^{1},\dots,\Pi^{s}T^{s}\right]$. Then we compute the best rank-$k$ approximation $W$ for $\Pi T$, which is then a good approximation for $\Pi$ and thus for $\phi(A)$. The algorithm then returns $L$, the representation of $W$ in the coordinate system of $\phi(A)$.
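The three steps just described can be mimicked on a single machine. The sketch below is an illustration only: it uses the explicit degree-2 feature map (feasible only for tiny $d$), plain Gaussian sketches in place of the communication-efficient embeddings, and illustrative sample sizes without the constants and logarithmic factors required by the analysis.

import numpy as np

rng = np.random.default_rng(2)

def explicit_phi(A):
    # Degree-2 polynomial feature map: phi(x) = x (tensor) x, flattened.
    d, n = A.shape
    return np.einsum('in,jn->ijn', A, A).reshape(d * d, n)

def column_leverage_scores(E):
    _, sigma, Vt = np.linalg.svd(E, full_matrices=False)
    r = int(np.sum(sigma > 1e-12 * sigma.max()))
    return np.sum(Vt[:r, :] ** 2, axis=0)

d, n, k, eps = 8, 400, 5, 0.5
A = rng.standard_normal((d, n))
PhiA = explicit_phi(A)                              # stands in for phi(A); tiny d only

# Step 1: sketch phi(A) and compute (generalized) leverage scores of its columns.
S = rng.standard_normal((4 * k, PhiA.shape[0])) / np.sqrt(4 * k)
p = column_leverage_scores(S @ PhiA)
p = p / p.sum()

# Step 2: leverage-score sample a set P, then adaptively sample by squared residuals (Fact 3).
P = rng.choice(n, size=4 * k, replace=False, p=p)
QP, _ = np.linalg.qr(PhiA[:, P])
res = np.sum((PhiA - QP @ (QP.T @ PhiA)) ** 2, axis=0)
Y = np.union1d(P, rng.choice(n, size=int(np.ceil(k / eps)), replace=False, p=res / res.sum()))

# Step 3: best rank-k approximation inside span(phi(Y)).
QY, _ = np.linalg.qr(PhiA[:, Y])
Uk, _, _ = np.linalg.svd(QY.T @ PhiA, full_matrices=False)
L = QY @ Uk[:, :k]                                  # rank-k subspace with L^T L = I_k

err = np.linalg.norm(PhiA - L @ (L.T @ PhiA))
U, sigma, Vt = np.linalg.svd(PhiA, full_matrices=False)
opt = np.linalg.norm(PhiA - U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :])
print(err / opt)                                    # ratio to the optimal rank-k error

In the distributed algorithm the same steps are carried out with kernel evaluations and sketches only, so that no server ever materializes $\phi(A)$.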
A few details need to be specified. In computing the final solution from $Y$, we need to compute the projection of $\phi(A)$ onto $\phi(Y)$. This can be done by using kernel trick and implicit Gram-Schmidt. Note that $\Pi^{i}=Q^{\top}\phi(A^{i})$ where $Q$ is the basis for $\phi(P)$. Suppose $\phi(Y)$ has QR-factorization $\phi(Y)=QR$. Then $Q=\phi(Y)R^{-1}$ and $Q^{\top}\phi(A)=(R^{-1})^{\top}\phi(Y)^{\top}\phi(A)$ where $\phi(Y)^{\top}\phi(A)$ is just the kernel value between points in $Y$ and points in $A$. For $R$, we have $Q^{\top}Q=(R^{-1})^{\top}\phi(Y)^{\top}\phi(Y)R^{-1}=I$, so $R^{\top}R=\phi(Y)^{\top}\phi(Y)$ and thus $R$ can be computed by factorizing the kernel matrix on $Y$. Similarly, to compute the distance for adaptive sampling, we first need to compute the projection of $\phi(A)$ onto $\phi(P)$, which can be done in the same way. Then the square distance from the original data to the projection can be computed by subtracting the square norm of the projection from the square norm of the original data. 4 Analysis 4.1 Approximate Leverage Scores: Proof of Theorem 4 First, by Fact 1 we know that the row space of the embedding matrix $E=S\phi(A)$ contains enough information about the range of $\phi(A)$. Then the scores for $E$ are the generalized scores for $\phi(A)$. Note that Algorithm 2 can be viewed as applying an embedding $T=\mathop{\mathrm{diag}}\left(T^{1},\dots,T^{s}\right)$ on $E$ to approximate the scores while saving the costs. Such a scheme has been analyze in (Drineas et al., 2012). Let $\ell^{ij}$ be the leverage score of $(E^{i})_{:j}$ in $E$. Then the analysis in (Drineas et al., 2012) essentially leads to: Fact 4 If $T$ is an $\epsilon_{0}$-subspace embedding, then we have $\tilde{\ell}^{ij}=(1\pm\epsilon_{0})\ell^{ij}$. It now suffices to show that $T$ is an $O(\epsilon)$-subspace embedding. When $p=O(t/\epsilon)$, each $T^{i}$ is an $O(\epsilon)$-subspace embedding matrix, and thus by simple calculation $T$ is $O(\epsilon)$-subspace embedding. Then we approximate the leverage scores up to a factor of $\epsilon$. 4.2 Leverage Score Sampling Now, we are ready to show the following key lemma. Lemma 9 In Algorithm 1, the span of $\phi(Y)$ contains a $(1+\epsilon)$-approximation rank-$k$ subspace for $\phi(A)$. Proof  By Fact 3, it suffices to prove that the span of $\phi(P)$ contains a constant approximation. Suppose $E$ has SVD $E=U\Sigma V^{\top}$. By Fact 4, $\tilde{\ell}^{ij}\in[1/2,3/2]{\ell}^{ij}$ where ${\ell}^{ij}$ is the true leverage score for the column $E^{i}_{:j}$ in $E$. Then sampling according to $\tilde{\ell}^{ij}$ means the sampling probabilities satisfy the condition in Fact 2 with some constant $\beta$, so $$\displaystyle\left\|\phi(A)-\phi(A)T(ET)^{\dagger}E\right\|_{\mathcal{H}}\leq O% (1)\min_{X\in\mathcal{H}^{t}}\left\|\phi(A)-XE\right\|_{\mathcal{H}}.$$ Now let $X=\left[V\phi(A)\right]_{k}\Sigma^{-1}U^{\top}$, so that $$\displaystyle\left\|XE-\phi(A)\right\|_{\mathcal{H}}$$ $$\displaystyle=\left\|\left[\phi(A)V\right]_{k}V^{\top}-\phi(A)\right\|_{% \mathcal{H}}$$ $$\displaystyle\leq O(1)\left\|\phi(A)-[\phi(A)]_{k}\right\|_{\mathcal{H}},$$ where the last step follows from Fact 1 and our choice of $t$. This leads to $$\left\|\phi(A)-\phi(A)T(ET)^{\dagger}E\right\|_{\mathcal{H}}\leq O(1)\left\|% \phi(A)-\left[\phi(A)\right]_{k}\right\|_{\mathcal{H}}.$$ Note that $T$ is the sample matrix to get $P$ from $A$, so $\phi(A)T$ can be written as $\phi(P)T^{\prime}$ for some $T^{\prime}$. 
Then $C=T^{\prime}(ET)^{\dagger}E$ satisfies
$$\left\|\phi(A)-\phi(P)C\right\|_{\mathcal{H}}\leq O(1)\left\|\phi(A)-\left[\phi(A)\right]_{k}\right\|_{\mathcal{H}},$$
which completes the proof.

4.3 Compute Approximate Subspace

Lemma 10 In Algorithm 3, if the span of $\phi(Y)$ contains a $(1+\epsilon)$-approximate subspace, then
$$\left\|LL^{\top}\phi(A)-\phi(A)\right\|_{\mathcal{H}}\leq(1+\epsilon)^{2}\left\|\phi(A)-\left[\phi(A)\right]_{k}\right\|_{\mathcal{H}}.$$

Proof  For our choice of $w$, $T^{i}$ is an $\epsilon$-subspace embedding matrix for $\Pi^{i}$. Then their concatenation $B$ is an $\epsilon$-subspace embedding for $\Pi$, the concatenation of the $\Pi^{i}$. This can be shown by an argument similar to that for Fact 4. Then we can apply the argument of Lemma 5 in (Avron et al., 2014) (also implicit in Theorem 1.5 in (Kannan et al., 2014)).

Our main result, Theorem 2, now follows by combining Lemma 9, Fact 3, and Lemma 10.

5 Experiments

We demonstrate the effectiveness of our algorithm in three tasks: 1) kernel PCA, 2) kernel PCA followed by distributed $k$-means clustering, and 3) kernel PCA followed by regression. Since, to the best of our knowledge, there is no existing distributed kernel PCA algorithm, and since the key of our algorithm is to adaptively select a small subset of meaningful points, we compare with the baseline that uniformly samples data from the dataset and then performs Algorithm 3 to extract kernel principal components. For regression, in addition to uniform sampling, we also compare with another baseline (called sketching-PCA) that first applies TensorSketch to reduce the data into a lower-dimensional space, then performs distributed PCA (not kernel PCA), and finally runs linear regression on the top principal components.

Methodology The data is partitioned across the nodes according to a power-law distribution with exponent $2$. Depending on the size of the dataset, the number of nodes used ranges from $5$ to $200$. We run the algorithms under the same communication budget and compare their errors. The evaluation criteria are the low-rank approximation error for the distributed KPCA task and the $k$-means cost for the clustering task. For regression, we compare the normalized $\ell_{2}$ error: if the ground-truth target is $y$ and the prediction is $y_{p}$, we report $\left\|y-y_{p}\right\|/\left\|y\right\|$. Each algorithm is run $5$ times; the mean and the standard deviation are then plotted.

Datasets We use: 1) 20 newsgroup ($61118$ points of dimension $11269$); 2) CT-slice ($384\times 53500$); 3) Year-Prediction MSD ($90\times 463715$); 4) MNIST8M ($784\times 8000000$); 5) HIGGS ($28\times 11000000$); 6) SUSY ($18\times 3000000$); 7) BoW PubMed ($141043\times 8200000$); 8) Protein Structure ($9\times 45730$). Distributed KPCA and clustering are carried out on 1)-7), and regression is carried out on 2), 3) and 8). Most datasets are from the UCI Repository (Bache and Lichman, 2013; Baldi et al., 2014).

Results Figure 1 shows the low-rank approximation error in the distributed kernel PCA task. It can be observed that, using the same amount of communication, our algorithm outperforms the baseline. Uniform sampling uses about $5$ times more communication to achieve a certain small error, or simply cannot achieve the same error as our algorithm does. Figure 2 shows the running time of the algorithms. The time of our algorithm is about twice that of uniform sampling.
The additional time spent is on approximating leverage scores and calculating the residuals for adaptive sampling. With this additional computational cost, our algorithm is able to achieve smaller errors given the same communication budget. Note that, in practice, communication overhead is usually much larger than computation time. Therefore, the additional computational effort is justified by the reduction in communication overhead. Also note that uniform sampling is basically reducing big data to small data. If the problem at hand only requires small data (e.g., the leverage scores are quite uniform), or if one is satisfied with the moderate performance that can be achieved with small data (obtained by uniform sampling), then uniform sampling should be used. But if one wants to take advantage of big data, then our approach is better, since it can handle much larger data using only about twice the computation cost and similar or even less communication.

Figure 3 shows the error in the $k$-means clustering task and Figure 4 shows the running time. Results similar to those in the distributed kernel PCA task are observed. For example, on HIGGS the error of uniform sampling is almost twice that of ours, while the running time is almost the same.

Figure 5 shows the error in the regression task and Figure 6 shows the running time. In this task, we vary the number of principal components extracted for linear regression and compare the normalized regression error. We can see that the sketching-PCA method incurs higher errors on all datasets, and requires more time on some datasets. Uniform sampling and our method have similar performance in the regression task. It is likely that the top few principal components do not contain enough relevant information for the target variables. In addition, our theoretical bound is on the low-rank approximation error, and it may not be linearly correlated with the regression error.

Finally, in Figure 7 we plot the leverage scores of the datasets. We can see that some datasets, such as MNIST8M and CT-Slice, have more uniform leverage scores than others, such as 20 newsgroup and HIGGS. Comparing with the performance results, we see that the uniformity of the leverage scores is correlated with the performance gap between our algorithm and uniform sampling. For large datasets with non-uniform leverage scores, our proposed algorithm has more significant advantages over the baseline.

6 Conclusion

This paper proposes a distributed algorithm for Kernel Principal Component Analysis, and provides theoretical bounds and empirical support for the polynomial kernel case. In this case, the algorithm computes a relative-error approximation compared to the best rank-$k$ subspace, using communication that nearly matches that of the state-of-the-art algorithms for distributed linear PCA. This is the first distributed algorithm that can achieve such provable approximation and communication bounds. The experimental results show that our algorithm can achieve better performance than the baseline using the same communication budget.

References

Achlioptas (2003) Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 2003.
Ailon and Chazelle (2009) Nir Ailon and Bernard Chazelle. The fast Johnson-Lindenstrauss transform and approximate nearest neighbors. SIAM Journal on Computing, 2009.
Arriaga and Vempala (1999) Rosa I Arriaga and Santosh Vempala. An algorithmic theory of learning: Robust concepts and random projection.
In Proceddings of the Annual Symposium on Foundations of Computer Science, 1999. Avron et al. (2014) Haim Avron, Huy Nguyen, and David Woodruff. Subspace embeddings for the polynomial kernel. In Advances in Neural Information Processing Systems, pages 2258–2266, 2014. Bache and Lichman (2013) K. Bache and M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml. Balcan et al. (2012) Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. Proceedings of the conference on learning theory, 2012. Balcan et al. (2014) Maria-Florina Balcan, Vandana Kanchanapally, Yingyu Liang, and David Woodruff. Improved distributed principal component analysis. In Z. Ghahramani, M. Welling, C. Cortes, N.d. Lawrence, and K.q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3113–3121. Curran Associates, Inc., 2014. Baldi et al. (2014) Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 2014. Boutsidis and Woodruff (2014) Christos Boutsidis and David P. Woodruff. Optimal CUR matrix decompositions. In Proceddings of the Annual Symposium on the Theory of Computing, pages 353–362, 2014. Boutsidis et al. (2015) Christos Boutsidis, Maxim Sviridenko, and David P. Woodruff. Optimal distributed principal component analysis, 2015. Clarkson and Woodruff (2013) Kenneth L Clarkson and David P Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2013. Deshpande and Vempala (2006) Amit Deshpande and Santosh Vempala. Adaptive sampling and fast low-rank matrix approximation. Algorithms and Techniques in Approximation, Randomization, and Combinatorial Optimization, pages 292–303, 2006. Dhillon et al. (2005) Inderjit Dhillon, Yuqiang Guan, and Brian Kulis. A unified view of kernel k-means, spectral clustering and graph cuts. 2005. Dhillon et al. (2004) Inderjit S Dhillon, Yuqiang Guan, and Brian Kulis. Kernel k-means: spectral clustering and normalized cuts. In Proceedings of the tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 551–556. ACM, 2004. Drineas et al. (2008) Petros Drineas, Michael W Mahoney, and S Muthukrishnan. Relative-error cur matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, 2008. Drineas et al. (2012) Petros Drineas, Malik Magdon-Ismail, Michael Mahoney, and David Woodruff. Fast approximation of matrix coherence and statistical leverage. The Journal of Machine Learning Research, 13(1):3475–3506, 2012. Kannan et al. (2014) Ravindran Kannan, Santosh Vempala, and David Woodruff. Principal component analysis and higher correlations for distributed data. In Proceedings of The 27th Conference on Learning Theory, pages 1040–1057, 2014. Mahoney (2011) Michael W Mahoney. Randomized algorithms for matrices and data. Foundations and Trends® in Machine Learning, 3(2):123–224, 2011. Meng and Mahoney (2013) Xiangrui Meng and Michael W Mahoney. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In Proceedings of the Annual ACM symposium on Symposium on Theory of Computing, 2013. Nelson and Nguyên (2013) Jelani Nelson and Huy L Nguyên. Osnap: Faster numerical linear algebra algorithms via sparser subspace embeddings. In IEEE Annual Symposium on Foundations of Computer Science, 2013. 
Sarlós (2006) Tamás Sarlós. Improved approximation algorithms for large matrices via random projections. In IEEE Symposium on Foundations of Computer Science, 2006. Schölkopf et al. (1997) Bernhard Schölkopf, Alex J Smola, and Klaus-Robert Müller. Kernel principal component analysis. In Proceedings of the 7th International Conference on Artificial Neural Networks. Springer-Verlag, 1997. Woodruff (2014) David P Woodruff. Sketching as a tool for numerical linear algebra. Theoretical Computer Science, 10(1-2):1–157, 2014. Zhang et al. (2012) Yuchen Zhang, Martin J Wainwright, and John C Duchi. Communication-efficient algorithms for statistical optimization. In Advances in Neural Information Processing Systems, pages 1502–1510, 2012.
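A computational detail of the preceding paper worth spelling out (cf. its Section 3) is how $\phi(A)$ is projected onto the span of $\phi(Y)$ using only kernel evaluations. The following NumPy sketch is a minimal single-machine illustration: a Cholesky factor of the kernel matrix on $Y$ plays the role of the factor $R$ with $R^{\top}R=\phi(Y)^{\top}\phi(Y)$, and a small ridge term is added purely for numerical stability; both choices are ours for illustration, not prescribed by the paper.

import numpy as np

def poly_kernel(X, Z, q=2):
    return (X.T @ Z) ** q

def project_via_kernel_trick(A, Y_idx, q=2):
    # Return Pi = Q^T phi(A) and the squared residual of every column,
    # where phi(Y) = Q R, using kernel evaluations only (implicit Gram-Schmidt).
    Y = A[:, Y_idx]
    K_YY = poly_kernel(Y, Y, q)
    K_YA = poly_kernel(Y, A, q)
    R = np.linalg.cholesky(K_YY + 1e-10 * np.eye(len(Y_idx))).T   # R^T R = K_YY
    Pi = np.linalg.solve(R.T, K_YA)                               # (R^{-1})^T K_YA = Q^T phi(A)
    diag_K = np.sum(A ** 2, axis=0) ** q                          # k(a_i, a_i)
    residuals = diag_K - np.sum(Pi ** 2, axis=0)                  # inputs to adaptive sampling
    return Pi, residuals

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 200))
Pi, res = project_via_kernel_trick(A, np.arange(15))
print(Pi.shape, res.min())        # residuals are nonnegative up to round-off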
Cyclotron radiation and emission in graphene

Takahiro Morimoto Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033, Japan    Yasuhiro Hatsugai Institute of Physics, University of Tsukuba, Tsukuba, 305-8571, Japan Department of Applied Physics, University of Tokyo, Hongo, Tokyo 113-8656, Japan    Hideo Aoki Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033, Japan (December 3, 2020)

Abstract

Peculiarities in the cyclotron radiation and emission in graphene are theoretically examined in terms of the optical conductivity and relaxation rates, to propose that graphene in magnetic fields can be a candidate for realizing the Landau-level laser proposed decades ago [H. Aoki, Appl. Phys. Lett. 48, 559 (1986)].

pacs: 71.70.Di, 76.40.+b

Introduction — There has been an increasing fascination with the physics of graphene, a monolayer of graphite, as kicked off by the experimental discovery of an anomalous quantum Hall effect (QHE).Nov05 ; Zhang et al. (2005) The fascination comes from a condensed-matter realization of the massless Dirac-particle dispersion at low energy scales on the honeycomb lattice,McClure (1956); Zheng and Ando (2002); Gusynin and Sharapov (2005); Nov05 ; Peres et al. (2006) which is behind all the peculiar properties of graphene. In magnetic fields this appears as unusual Landau levels, where (i) the Landau levels ($=\sqrt{n}\hbar\omega_{c}$, $n$: Landau index) are unevenly spaced, (ii) the cyclotron frequency $\omega_{c}=(2e/c\hbar)v_{F}\sqrt{B}$ is proportional to $\sqrt{B}$ rather than to $B$, and (iii) there is an extra Landau level right at the massless Dirac point ($E=0$), which is outside Onsager's semiclassical quantization.Onsager (1952) While various transport measurements, as exemplified by the quantum Hall effect, have been performed extensively, optical properties have also been measured. For example, Sadowski et al. have performed Landau-level spectroscopy on a large graphene sample. Inter-Landau-level transitions are observed at multiple energies, which is due to a peculiar optical selection rule ($|n|\leftrightarrow|n|+1$ as opposed to the usual $n\leftrightarrow n+1$) as well as to the uneven Landau levels.

Now, if we look at QHE physics, cyclotron emission from QHE systems out of equilibrium has been one important phenomenon. Experimentally, this typically appears as a strong cyclotron emission from the “hot spot”, a singular point in a Hall-bar sample where the convergence of electric lines of force puts the electrons out of equilibrium.Ikushima et al. (2004) Theoretically, one of the present authors proposed a “Landau-level laser” for non-equilibrium QHE systems.Aoki (1986) The basic idea is simple enough: we can exploit the unusual coalescence of the energy spectrum into a series of line spectra (Landau levels) to realize a laser from spontaneous emission if we can create a population inversion, where the photon energy (= cyclotron energy in this case) is tunable and falls in the terahertz region for $B\sim 10$ T. However, the most difficult part is the population inversion, since if we, e.g., optically pump the system, the excitation would go up the ladder of equidistant Landau levels indefinitely. This has motivated us to pose a question (Fig. 1): will the graphene Landau levels, whose uneven spacing is among their peculiarities, favor the realization of such a population inversion? In this Letter we show that this is indeed the case, by actually calculating the optical conductivity as well as the relaxation processes.
The message here is that graphene is a candidate for the Landau-level laser. Optical conductivity in graphene — Low-energy physics around the Fermi energy in graphene is described by the massless Dirac Hamiltonian,Zheng and Ando (2002) $$H_{0}=v_{F}\begin{pmatrix}0&\pi^{-}&0&0\\ \pi^{+}&0&0&0\\ 0&0&0&\pi^{+}\\ 0&0&\pi^{-}&0\\ \end{pmatrix},$$ (1) where $v_{F}$ is the velocity at $E_{F}$, $\pi^{\pm}\equiv\pi_{x}\pm i\pi_{y}$, $\mbox{\boldmath$\pi$}={\bf p}+e{\bf A}$, ${\bf A}$ the vector potential representing a uniform magnetic field ${\bf B}={\rm rot}{\bf A}$, and the $4\times 4$ matrix is spanned by the chirality and (K, K’) Fermi points. In magnetic fields the energy spectrum is quantized into Landau levels, $$\displaystyle\varepsilon_{n}={\rm sgn}(n)\sqrt{|n|}\,\hbar\omega_{c},$$ (2) $$\displaystyle\omega_{c}=\frac{\sqrt{2}}{\ell}v_{F}=v_{F}\sqrt{\frac{2eB}{\hbar}},$$ (3) for a clean system, where $n=0,\pm 1,...$ is the Landau index, and $\ell=\sqrt{\hbar/eB}$ the magnetic length. Here we consider realistic systems with disorder, treated within the self-consistent Born approximation (SCBA) introduced by Ando,Ando (1975); Zheng and Ando (2002) to calculate the optical conductivity. The optical conductivity is given by $$\begin{split}\displaystyle\sigma_{\alpha\beta}&\displaystyle(\omega)=\frac{e^{2}\hbar}{i\pi}\int d\varepsilon\frac{f(\varepsilon)}{\hbar\omega}\\ &\displaystyle\times\left[\mbox{Tr}\left(j_{\alpha}{\rm Im}G(\varepsilon)j_{\beta}(G^{+}(\varepsilon+\hbar\omega)-G^{+}(\varepsilon))\right)\right.\\ &\displaystyle\hskip 10.0pt\left.-\mbox{Tr}\left(j_{\alpha}(G^{-}(\varepsilon)-G^{-}(\varepsilon-\hbar\omega))j_{\beta}{\rm Im}G(\varepsilon)\right)\right],\end{split}$$ (4) where $\alpha,\beta=x,y$, $f(\varepsilon)$ the Fermi distribution, and $G^{\pm}=G(\epsilon\pm i\delta)$. For the Green’s function $G$, with the self-energy $\Sigma_{n}(\varepsilon)=\Gamma\sum_{n^{\prime}}[\varepsilon-\varepsilon_{n^{\prime}}-\Sigma_{n^{\prime}}(\varepsilon)]^{-1}$ in the SCBA, the Landau level broadening is given by $\Gamma=n_{0}V_{0}^{2}$ if we assume for simplicity a short-ranged random potential, $V=\sum_{i}V_{0}\delta({\bf r}-{\bf r}_{i})$. The light absorption rate is then related to the imaginary part of the dielectric function, $\varepsilon(\omega)=1+i\sigma_{xx}(\omega)/\varepsilon_{0}\omega,$ so we can look at $\mbox{Re}\,\sigma_{xx}(\omega)$. In order to discuss the optical conductivity in graphene we need the current matrix elements across Landau levels. The eigenfunctions of the Hamiltonian (1) dictate an unusual selection rule, $|n|-|n^{\prime}|=\pm 1$ in place of the ordinary $n-n^{\prime}=\pm 1$, withAndo (1975) $$\displaystyle j_{x}^{n,n^{\prime}}=v_{F}C_{n}C_{n^{\prime}}\left[{\rm sgn}(n)\delta_{|n|-1,|n^{\prime}|}+{\rm sgn}(n^{\prime})\delta_{|n|+1,|n^{\prime}|}\right],$$ $$\displaystyle j_{y}^{n,n^{\prime}}=iv_{F}C_{n}C_{n^{\prime}}\left[{\rm sgn}(n)\delta_{|n|-1,|n^{\prime}|}-{\rm sgn}(n^{\prime})\delta_{|n|+1,|n^{\prime}|}\right],$$ (5) where $C_{n}=1$ ($n=0$) or $1/\sqrt{2}$ (otherwise). We have numerically obtained the Green’s function and optical conductivity. While in usual cases the broadened Landau levels are uniformly merged or separated as $\Gamma$ is varied, there is a striking difference for graphene, where the Landau levels ($\propto\sqrt{|n|}$) are unevenly spaced, so that the broadened Landau levels overlap to a lesser extent as we go to the central one ($n\rightarrow 0$), as typically depicted in Fig. 2. 
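To make the uneven level spacing concrete, the following short script (a minimal illustrative sketch, not part of the original analysis; B = 10 T is chosen only because fields of this order are mentioned in the introduction) evaluates the clean-limit Landau energies of Eq. (2) and the gaps between adjacent levels. The gaps fall off as sqrt(n+1)-sqrt(n), so a fixed broadening Gamma merges the high-lying levels first while the n=0 level remains isolated, which is the situation depicted in Fig. 2.

import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
v_F = 1.06e6             # m/s, graphene Fermi velocity quoted later in the text
B = 10.0                 # T (illustrative value)
meV = 1e-3 * e

# Eq. (3): graphene cyclotron frequency; Eq. (2): eps_n = sgn(n) sqrt(|n|) hbar omega_c
omega_c = v_F * np.sqrt(2 * e * B / hbar)
n = np.arange(0, 8)
eps = np.sqrt(n) * hbar * omega_c / meV        # electron-side levels, in meV
gaps = np.diff(eps)

print(f"hbar*omega_c = {hbar * omega_c / meV:.1f} meV at B = {B} T")
for i, g in enumerate(gaps):
    print(f"gap between n = {i} and n = {i + 1}: {g:.1f} meV")
# The n=0 -> n=1 gap equals hbar*omega_c, while the higher gaps shrink,
# so the broadened high-lying levels merge into a continuum first.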
Indeed, for an intermediate value of $\Gamma/\omega_{c}$ only the $n=0$ Landau level stands alone while the other levels form a continuous spectrum. We now look at the optical conductivity in Fig. 3 for the Fermi energy at $\varepsilon_{F}=0$ (the energy of the Dirac point), where each resonance peak can be assigned to an allowed transition obeying the selection rule in Eq. (5). The largest peak around $\omega/\omega_{c}=1$ corresponds to the transition between $n=0\leftrightarrow\pm 1$, while the peaks at higher frequencies come from transitions across the Fermi energy, $-n\leftrightarrow n\pm 1$. If we turn to the temperature dependence in the figure, we immediately notice a peculiar phenomenon: there is a peak, in the region $\omega/\omega_{c}<1$, that grows, rather than decays, for higher $T$. We can identify this as coming from the unusual Landau levels in graphene: as $T$ is raised and the Fermi distribution function becomes longer-tailed, higher Landau levels begin to be occupied, which enables the transitions among higher Landau levels, $n\leftrightarrow n\pm 1$, to take place. While this would not cause new lines to appear for equidistant Landau levels, it does so for the unequally spaced Landau levels ($\propto\sqrt{|n|}$), producing transitions with $\omega/\omega_{c}<1$. So we can identify this property as one hallmark of the “massless Dirac” dispersion. Previously, the optical conductivity was obtained by Gusynin et al.,Gusynin et al. (2007); Gusynin and Sharapov (2006) who derived an analytical expression for the optical conductivity, but the self-energy from the disorder was set to a constant, whereas we have calculated the self-energy self-consistently within the SCBA. Sadowski et al.Sadowski et al. (2006) also presented a similar expression for the conductivity with a constant self-energy. The present result qualitatively agrees with these, but the new findings here are, first, the full dependence on $k_{B}T/\hbar\omega_{c}$, including the growth of the low-frequency peaks with increasing temperature. Secondly, we point out that the situation as depicted in Fig. 2 should be interesting for the cyclotron resonance and emission in non-equilibrium situations induced by, e.g., optical pumping with laser beams. Namely, the electrons excited to higher energies will relax down to the $n=1$ level across the continuum spectrum, so that the population inversion across $n=0$ and $n\geq 1$ should be easier to realize. Relaxation processes — To quantify this idea, we have to consider the relaxation processes which should control the population inversion. For the ordinary quantum Hall systems the relaxation processes have been extensively discussed. Specifically, Chaubet et al.Chaubet et al. (1995); Chaubet and Geniet (1998) discussed dissipation mechanisms, where spontaneous photon radiation and coupling with phonons are examined on the basis of Fermi’s golden rule. Other dissipation processes such as electron-electron scattering or impurity scattering, which conserve the total energy, do not contribute to inter-Landau level processes in the absence of external electric fields (while Chaubet et al. have focused on effects of finite electric fields in the QHE breakdown where inter-Landau level processes are involved). So we extend the discussion by Chaubet et al. to relaxation processes in graphene. 
We first estimate the efficiency of the photon emission with Fermi’s golden rule: $$W_{i\to f}=\frac{2\pi}{\hbar}|\langle i|H_{\rm int}|f\rangle|^{2}\delta(\epsilon_{f}-\epsilon_{i}).$$ Here $|i\rangle$ ($\epsilon_{i}$) is the wavefunction (energy) of the initial state, while $f$ stands for the final states, and $H_{\rm int}$ is the electric dipole interaction between the electromagnetic field and the electrons. When the wavelength of light is much larger than the cyclotron radius, as is usually the case, we have $$\displaystyle W_{i\rightarrow f}$$ $$\displaystyle=$$ $$\displaystyle\frac{2\pi}{\hbar}\frac{V}{\pi^{2}c^{3}}\int\omega^{\prime 2}d\omega^{\prime}\frac{e^{2}\hbar}{2\epsilon_{0}V\omega^{\prime}}|\langle i|v|f\rangle|^{2}$$ (6) $$\displaystyle\times$$ $$\displaystyle\delta(\hbar\omega^{\prime}+\epsilon_{f}-\epsilon_{i})=4\alpha\left(\frac{|\langle i|v|f\rangle|}{c}\right)^{2}\omega,$$ where $c$ is the velocity of light, $\alpha=e^{2}/(4\pi\epsilon_{0}\hbar c)$ the fine-structure constant, and we put $\hbar\omega=\epsilon_{i}-\epsilon_{f}$ to be the cyclotron energy $\hbar\omega_{c}$. A peculiarity of graphene appears in the current matrix element (Eq. (5)), for which the rate of spontaneous emission, with $|\langle n|v|n+1\rangle|=C_{n}C_{n+1}v_{F}$ for graphene plugged in, reads $$W_{n+1\to n}^{\rm graphene}=\begin{cases}2\alpha\left(\frac{v_{F}}{c}\right)^{2}\omega&(n=0),\\ \alpha\left(\frac{v_{F}}{c}\right)^{2}\omega&(n\neq 0).\end{cases}$$ (7) This expression, another key result here, shows that the spontaneous emission rate depends linearly on the cyclotron energy and quadratically on the Fermi velocity. This is in sharp contrast with ordinary QHE systems such as the two-dimensional electron gas (2DEG) realized at, e.g., GaAs/AlGaAs interfaces. In that case the velocity matrix element $|\langle n|v|n+1\rangle|^{2}=(n+1)\hbar\omega/2m^{*}$ should be plugged into Eq. (6), which yields $$W_{n+1\to n}^{\rm GaAs}=2(n+1)\alpha\frac{\hbar}{m^{*}c^{2}}\omega^{2}.$$ (8) This reveals a dramatic difference between graphene and the usual 2DEG, where the emission rate in the latter is proportional to the square of the cyclotron energy. We can quantify the difference: the cyclotron energies are $$\hbar\omega=\begin{cases}\hbar eB/m^{*}\sim 1.7\,{\rm meV}&{\rm(GaAs)},\\ v_{F}\sqrt{2\hbar eB}\simeq 37\,{\rm meV}&{\rm(graphene)},\end{cases}$$ for $B=1$ T, where we have adopted the value of the graphene Fermi velocity $v_{F}=1.06\times 10^{6}$ m/s,Sadowski et al. (2006) and the GaAs effective mass $m^{*}\simeq 0.067m_{e}$. Hence the cyclotron energy in graphene is more than an order of magnitude larger, and it scales as $\sqrt{B}$ reflecting the Dirac dispersion, while the energy is usually proportional to $B$. If we plug these into Eqs. (7), (8), we end up with $$W_{i\to f}\begin{cases}\propto B^{2}\simeq 6\times 10^{4}\,({\rm s}^{-1})&{\rm(GaAs)},\\ \propto\sqrt{B}\simeq 1\times 10^{7}\,({\rm s}^{-1})&{\rm(graphene)},\end{cases}$$ where the first entry in each line indicates the $B$-dependence, while the second gives a numerical value for $B=1$ T. A conspicuous difference, $\propto B^{2}$ in the former and $\propto\sqrt{B}$ in the latter, should sharply affect the behavior. Thus the spontaneous photon emission rate is enhanced by orders of magnitude in graphene in moderate magnetic fields (as in the numbers quoted above for $B=1$ T). This indicates that the present system is indeed favorable for a realization of the envisaged Landau level laser. 
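As a numerical cross-check of the estimates above (a minimal sketch using the constants quoted in the text; the Landau index is fixed to n = 0 and only the graphene rate of Eq. (7) is evaluated), the following script computes the two cyclotron energies at B = 1 T and the graphene spontaneous-emission rate:

import numpy as np

hbar = 1.054571817e-34    # J s
e = 1.602176634e-19       # C
m_e = 9.1093837e-31       # kg
c = 2.99792458e8          # m/s
eps0 = 8.8541878128e-12   # F/m
alpha = e**2 / (4 * np.pi * eps0 * hbar * c)   # fine-structure constant
meV = 1e-3 * e

B = 1.0                   # T
v_F = 1.06e6              # m/s, graphene Fermi velocity (value used in the text)
m_star = 0.067 * m_e      # GaAs effective mass (value used in the text)

# Cyclotron energies at B = 1 T
E_GaAs = hbar * e * B / m_star                 # ordinary 2DEG, linear in B
E_graphene = v_F * np.sqrt(2 * hbar * e * B)   # graphene, scales as sqrt(B)
print(f"GaAs:     hbar*omega = {E_GaAs / meV:.2f} meV")
print(f"graphene: hbar*omega = {E_graphene / meV:.1f} meV")

# Eq. (7) with n = 0: W = 2 alpha (v_F/c)^2 omega, of order 1e7 s^-1 at 1 T
omega = E_graphene / hbar
W_graphene = 2 * alpha * (v_F / c)**2 * omega
print(f"graphene spontaneous-emission rate (n = 0): {W_graphene:.1e} 1/s")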
Now, the dissipation process which competes with the photon emission is the phonon emission process, which has been discussed for the conventional QHE systems, especially in the context of the breakdown of the quantum Hall effectChaubet et al. (1995). The phonon emission rate is also obtained from Fermi’s golden rule if we replace the electron-light interaction with the electron-phonon interaction. If we first consider acoustic phonons, the dissipation rate is proportional to the extent of the overlap between initial and final wavefunctions in both usual and graphene QHE systems, which yields a factor $e^{-(q\ell)^{2}}$ with $q$ the phonon wavenumber and $\ell$ the magnetic length. In usual QHE systems the cyclotron energy is $\sim 1$ meV and the magnetic length $\ell=\sqrt{\hbar/eB}\sim 30$ nm for $B=$1 T, while the acoustic phonon wavenumber is $\sim 1$Å${}^{-1}$, so that the overlap factor is exponentially small. The situation is similar in graphene, since the magnetic length $\ell=\sqrt{\hbar/eB}$ is the same. So the acoustic phonon emission should be negligible in graphene as well in weak electric fields. When the applied laser electric field is so intense ($\sim 1$ kV/cm) that the Landau levels are distorted and the overlap factor grows, the phonon emission may begin to compete with the photon emission. Are there any other factors that distinguish graphene from 2DEGs? In this context we can note that Chaubet et al. have further pointed out the following. In an electron system confined to 2D a wavefunction has a finite tail in the direction normal to the plane, and the phonon emission is enhanced through the coupling of the tail of the wavefunction to perpendicular phonon modes which propagate normal to the 2D system in the substrateChaubet and Geniet (1998). In this way the phonon emission can compete with the spontaneous emission in usual QHE systems. By contrast, a graphene sheet is an atomic monolayer, and there is only a loose coupling with the substrate. We can also consider acoustic phonons coupled with impurity scattering, which may compensate the momentum transfer $q$ of the phonons, and hence the overlap factor $e^{-(q\ell)^{2}}$.comment2 To be precise, graphene itself should have phonon modes that include out-of-plane modes, and their effect is an interesting future problem. As for optical phonons, their energies are known to be higher than 100 meV at wavenumber $q=0$ in graphenePiscanec et al. (2004), so that optical phonons do not contribute to the dissipation for $B\sim$ a few tesla with $\hbar\omega\sim$ 40 meV. Overall, we conclude that the dissipation due to acoustic phonons will be small in graphene in the weak electric-field regime. When the pumping laser intensity is not so strong as to invalidate the present treatment but strong enough for a population inversion, the present reasoning should apply, and we can expect efficient cyclotron emission from graphene. An entirely different but interesting issue is the problem of Anderson localization arising from disorder. While this is outside the scope of the present work, we can expect, as inferred from the observation of the QHE, that delocalized states with diverging localization length are present at the center of each Landau level; their detailed behavior is an interesting future problem. 
The situation should also depend on whether the disorder is short-range or long-range, but, in ordinary QHE systems, a sum rule guarantees that the total intensity of the cyclotron resonance remains intact.Aoki (1986) As for the “ripples”, suggested to exist in actual graphene samplesMeyer et al. (2007), the $n=0$ Landau level remains sharp (which is topologically protected since the slowly varying potential does not destroy the chiral symmetryHatsugai et al. (2006)), while the other levels become broadened,Giesbers et al. (2007) and this favors the situation proposed in the present paper.comment1 Summary — To summarize, we have discussed the radiation from the graphene QHE system. We conclude that the unusual uneven Landau levels, the unusual cyclotron energy, and the unusual transition selection rules all work favorably toward the population inversion envisaged for the Landau level laser. An estimate of the photon emission rate shows that the emission is orders of magnitude more efficient than in the ordinary QHE system, while the competing phonon emission rate is not large enough to mar the photon emission. Important future problems include the examination of the actual lasing processes including the cavity properties, the coupling of electrons to the out-of-plane phonon modes, etc. We wish to thank Andre Geim for illuminating discussions. This work has been supported in part by Grants-in-Aid for Scientific Research on Priority Areas from MEXT, “Physics of new quantum phases in superclean materials” (Grant No.18043007) for YH, and “Anomalous quantum materials” (No.16076203) for HA. References (1) K. S. Novoselov et al., Nature 438, 197 (2005); Nature Physics 2, 177 (2006). Zhang et al. (2005) Y. Zhang, Y. W. Tan, H. L. Stormer, and P. Kim, Nature 438, 201 (2005). McClure (1956) J. McClure, Phys. Rev. 104, 666 (1956). Zheng and Ando (2002) Y. Zheng and T. Ando, Phys. Rev. B 65, 245420 (2002). Gusynin and Sharapov (2005) V. P. Gusynin and S. G. Sharapov, Phys. Rev. Lett. 95, 146801 (2005). Peres et al. (2006) N. M. R. Peres, F. Guinea, and A. H. Castro Neto, Phys. Rev. B 73, 125411 (2006). Onsager (1952) L. Onsager, Phil. Mag. 43, 1006 (1952). Ikushima et al. (2004) K. Ikushima, H. Sakuma, S. Komiyama, and K. Hirakawa, Phys. Rev. Lett. 93, 146804 (2004). Aoki (1986) H. Aoki, Appl. Phys. Lett. 48, 559 (1986). Ando (1975) T. Ando, J. Phys. Soc. Japan 38, 989 (1975). Gusynin et al. (2007) V. P. Gusynin, S. G. Sharapov, and J. P. Carbotte, Phys. Rev. Lett. 98, 157402 (2007). Gusynin and Sharapov (2006) V. P. Gusynin and S. G. Sharapov, Phys. Rev. B 73, 245411 (2006). Sadowski et al. (2006) M. L. Sadowski, G. Martinez, M. Potemski, C. Berger, and W. A. de Heer, Phys. Rev. Lett. 97, 266405 (2006). Chaubet et al. (1995) C. Chaubet, A. Raymond, and D. Dur, Phys. Rev. B 52, 11178 (1995). Chaubet and Geniet (1998) C. Chaubet and F. Geniet, Phys. Rev. B 58, 13015 (1998). Piscanec et al. (2004) S. Piscanec, M. Lazzeri, F. Mauri, A. C. Ferrari, and J. Robertson, Phys. Rev. Lett. 93, 185503 (2004). Meyer et al. (2007) J. Meyer, A. Geim, M. Katsnelson, K. Novoselov, T. Booth, and S. Roth, Nature 446, 60 (2007). Hatsugai et al. (2006) Y. Hatsugai, T. Fukui, and H. Aoki, Phys. Rev. B 74, 205414 (2006). Giesbers et al. (2007) A. Giesbers, U. Zeitler, M. Katsnelson, L. Ponomarenko, T. Ghulam, and J. Maan, eprint arXiv:0706.2822 (2007). Niimi et al. (2006) Y. Niimi, H. Kambara, T. Matsui, D. Yoshioka, and H. Fukuyama, Phys. Rev. Lett. 97, 236804 (2006). (21) Localized states around point defects were detected recently with STM in $B$ Niimi et al. 
(2006), and their radius was found to be comparable with the magnetic length ($\sim 30$ nm). (22) We can also note that the $n=0$ Landau level in graphene has an exactly $E=0$ edge mode, of a topological origin, right at the centerHatsugai et al. (2006), which may also affect the photon absorption/emission.
$\,$ Preprint no. NJU-INP 009/19 Spectrum of fully-heavy tetraquarks from a diquark+antidiquark perspective M. A. Bedolla marco.bedolla@unach.mx J. Ferretti jacopo.j.ferretti@jyu.fi C. D. Roberts cdroberts@nju.edu.cn E. Santopinto Elena.Santopinto@ge.infn.it CONACyT-Mesoamerican Centre for Theoretical Physics, Universidad Autónoma de Chiapas, Carretera Zapata Km. 4, Real del Bosque (Terán), Tuxtla Gutiérrez 29040, Chiapas, México Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Genova, Via Dodecaneso 33, 16146 Genova, Italy Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Edificio C-3, Ciudad Universitaria, Morelia, Michoacán 58040, México Center for Theoretical Physics, Sloane Physics Laboratory, Yale University, New Haven, Connecticut 06520-8120, USA Department of Physics, University of Jyväskylä, P.O. Box 35 (YFL), 40014 Jyväskylä, Finland School of Physics, Nanjing University, Nanjing, Jiangsu 210093, China Institute for Nonperturbative Physics, Nanjing University, Nanjing, Jiangsu 210093, China Abstract Using a relativized quark model Hamiltonian, we explore the possibility that fully-heavy tetraquarks can be formed as bound-states of elementary colour-antitriplet diquarks and colour-triplet antidiquarks. Regarding ground-states in the $J^{PC}=0^{++}$ channel, the analysis reveals that narrow resonance-like structures exist near the lowest meson+meson thresholds in the following systems: $bs\bar{b}\bar{s}$, $bb\bar{n}\bar{n}$ ($n=u,d$), $bb\bar{s}\bar{s}$, $cc\bar{c}\bar{c}$, $bb\bar{b}\bar{b}$, $bc\bar{b}\bar{c}$, $bb\bar{c}\bar{c}$. We also compute extensive spectra for the fully-heavy quark flavour combinations. A reliable reaction model must be developed before a clear structural picture of any such states can be formed. keywords: tetraquarks (exotic mesons), heavy quarks, spectrum, diquarks, constituent quark model Date: 02 November 2019 ††journal: Physics Letters B 1 Introduction Until the current millennium, the spectrum of known hadrons was limited to systems that fit simply into the patterns typical of constituent-quark models GellMann:1964nj ; Zweig:1981pd , i.e. quark-antiquark $(q\bar{q})$ mesons and three-quark $(qqq)$ baryons. Notwithstanding this, Refs. GellMann:1964nj ; Zweig:1981pd also raised the possibility of complicated hadrons, e.g. $qq\bar{q}\bar{q}$ and $q\bar{q}qqq$. Today, a large amount of data, obtained at both $e^{+}e^{-}$ and hadron colliders, has provided evidence for the possible existence of such exotic hadrons. The first exotic discovered was the electric-charge neutral $X(3872)$, now named $\chi_{\rm c1}(3872)$ Tanabashi:2018oca . Potentially a $Qq\bar{Q}\bar{q}$ system, where $Q$ denotes a heavy quark, it was seen in the decay $B^{\pm}\rightarrow K^{\pm}X$ $(X\rightarrow J/\psi\pi^{+}\pi^{-})$ by the Belle Collaboration Choi:2003ue . Regarding $Q\bar{Q}qqq$ systems, states identified as pentaquarks – $P_{\rm c}(4312)$, $P_{\rm c}(4440)$, $P_{\rm c}(4457)$ – have recently been reported by the LHCb Collaboration in studies of the decay $\Lambda_{b}^{0}\to J/\Psi K^{-}p$ Aaij:2015tga ; Capriotti:2019xbr . More recently, data supporting discovery of a double-charm baryon Aaij:2017ueg : $\Xi_{cc}^{++}(3621)$, has focused further attention on the prospects for heavy-quark systems to reveal novel features of the Standard Model. Notably, with the advent of QCD, other possibilities appeared. 
As a non-Abelian quantum gauge field theory, in which eight self-interacting gauge bosons (gluons) mediate the interactions between current quarks, QCD can conceivably support systems with valence glue; namely, hybrid hadrons and glueballs. Many theoretical analyses indicate the existence of such bound states Chen:2005mg ; Dudek:2010wm ; Dudek:2011bn ; Meyer:2015eta ; Richard:2016eis ; Xu:2019sns ; and empirical evidence is also emerging Rodas:2018owy . Given these discoveries, experiment and theory related to hadron spectroscopy are very active areas Richard:2016eis ; Lebed:2016hpi ; Esposito:2016noz ; Ali:2017jda ; Olsen:2017bmm ; Liu:2019zoy . Herein we focus on those exotic systems which may be considered tetraquarks, viz. systems with meson-like quantum numbers that can be built using two valence quarks and two valence antiquarks. There are many candidates in addition to the $X(3872)$ Choi:2003ue , e.g. $Z_{\rm c}(3900)$ Ablikim:2013mio ; Liu:2013dau , $Z_{\rm c}(4025)$ Ablikim:2013emm ; Ablikim:2013wzq , $Z_{\rm b}(10610)$, $Z_{\rm b}(10650)$ Belle:2011aa . Some tetraquark candidates cannot be described using typical constituent quark models Godfrey:1985xj ; Eichten:1978tg ; Pennington:2007xr ; Ortega:2009hj ; Ortega:2012rs ; Ferretti:2012zz ; Ferretti:2013faa ; Ferretti:2013vua ; Ferretti:2014xqa ; Ferretti:2018tco because they carry electric charge; hence, cannot simply be $c\bar{c}$ systems. Consequently, they are good candidates for: hidden-charm/bottom tetraquarks Jaffe:1976ig ; Weinstein:1983gd ; Brink:1998as ; Maiani:2004vq ; Barnea:2006sd ; Santopinto:2006my ; Deng:2014gqa ; Zhao:2014qva ; Bicudo:2015vta ; Lu:2016cwr ; Eichten:2017ffp ; Karliner:2017qjm ; Anwar:2017toa ; Hughes:2017xie ; Anwar:2018sol ; Wu:2018xdi ; Wang:2018qpe ; Chen:2019dvd ; Wang:2019rdo , or molecular systems constituted from a pair of charm/bottom mesons Weinstein:1990gu ; Manohar:1992nd ; Tornqvist:1993ng ; Swanson:2003tb ; Hanhart:2007yq ; Thomas:2008ja ; or hadro-charmonia Dubynskiy:2008mq ; Panteleeva:2018ijz ; Ferretti:2018kzy ; Anwar:2018bpu . Owing to the large masses of the valence degrees of freedom, the possible existence of fully-heavy $QQ\bar{Q}\bar{Q}$ bound-states ($Q=c,b$) and similar, mixed systems ($c\bar{c}b\bar{b}$, $\bar{c}\bar{c}bb\sim cc\bar{b}\bar{b}$) can reasonably be explored using nonrelativistic tools for QCD phenomenology and theory. Here, in contrast to systems involving light-quarks, for which both light-meson and gluon exchange may play a role in tetraquark formation, binding in fully-heavy systems is very probably dominated by gluon-exchange forces because the typical gluon mass-scale ($m_{g}\sim 0.5\,$GeV Aguilar:2015bud ) is much lighter than that of any necessarily-heavy meson that could be exchanged between two subsystems within the tetraquark composite. It is thus natural to suppose that the favoured structural configuration for a fully-heavy tetraquark bound-state is diquark$+$antidiquark. It has been argued Eichten:2017ual that if stable $cc\bar{c}\bar{c}$ and/or $bb\bar{b}\bar{b}$ tetraquarks exist, they should be observable at the Large Hadron Collider (LHC). However, the only search to date, focusing on the $\Upsilon(1S)\,\mu^{+}\mu^{-}$ invariant-mass distribution obtained from high-energy $pp$ collisions, was unsuccessful Aaij:2018zrb ; possibly because the width of a $bb\bar{b}\bar{b}$ state is too small Esposito:2018cwh . 
Experimental searches continue, motivated by theoretical analyses which predict the existence of a $bb\bar{b}\bar{b}$ bound-state with mass near the $\eta_{\rm b}\eta_{\rm b}$ ($\Upsilon\Upsilon$) threshold, e.g. Refs. Berezhnoy:2011xn ; Chen:2016jxd ; Karliner:2016zzc ; Wu:2016vtq ; Wang:2017jtz ; Anwar:2017toa ; Liu:2019zuc . Plainly, if a stable $bb\bar{b}\bar{b}$ ground-state exists, one may expect at least a few radial and orbital excitations, i.e. a spectrum of $bb\bar{b}\bar{b}$ excited states. With these motivations, we compute the spectra of $cc\bar{c}\bar{c}$, $c\bar{c}b\bar{b}$, $\bar{c}\bar{c}bb\sim cc\bar{b}\bar{b}$, $bb\bar{b}\bar{b}$ tetraquarks from the diquark$+$antidiquark perspective, using a potential model characterised by linear confinement and one-gluon exchange. The Hamiltonian eigenvalue problem is solved by means of a numerical variational method based on harmonic oscillator trial wave functions, employed elsewhere for calculations of meson and baryon spectra Santopinto:2004hw ; Ferretti:2011zz ; Santopinto:2014opa ; Anwar:2017toa ; Anwar:2018sol . Using the same approach and assuming isospin symmetry, we also calculate the ground-state masses of similarly viewed $bq\bar{b}\bar{q}$, $bb\bar{q}\bar{q}$ systems ($q=u,s$). In these cases, the justification for a diquark+antidiquark picture is weaker, but comparison of the computed masses with those of accessible colour-singlet final states can still provide hints about the possible stability of such systems. 2 Relativized Diquark Model We assume that the putative tetraquark states are colour-antitriplet ($\bar{3}_{c}$) diquark + colour-triplet ($3_{c}$) antidiquark ($D\bar{D}$) bound-states. Furthermore, the constituent $D$, $\bar{D}$ are each treated as being inert against internal spatial excitations Anselmino:1992vg ; Santopinto:2004hw ; Ferretti:2011zz ; Santopinto:2014opa . This should be a fair approximation for fully-heavy systems owing to the suppression of quark exchange between the diquark subclusters in this case Yin:2019bxe . Consequently, dynamics within the $D\bar{D}$ system can be described by a single relative coordinate $\bf{r}_{\rm rel}$, with conjugate momentum ${\bf q}_{\rm rel}$. 
To describe the internal dynamics of a $D_{a}\bar{D}_{b}$ system, we choose the Hamiltonian constrained elsewhere for kindred bound-states Anwar:2017toa ; Anwar:2018sol : $$\displaystyle\mathcal{H}^{\rm REL}$$ $$\displaystyle=T+V({\bf r}_{\rm rel})\,,$$ (1a) $$\displaystyle T$$ $$\displaystyle=\sqrt{{\mathbf{q}}_{\rm rel}^{2}+m_{D_{a}}^{2}}+\sqrt{{\mathbf{q% }}_{\rm rel}^{2}+m_{\bar{D}_{b}}^{2}},$$ (1b) with the interaction being the sum of a linear-confinement term and a one-gluon exchange (OGE) potential: Celmaster:1977vh ; Godfrey:1985xj ; Capstick:1986bm ; Anwar:2018sol , $$\begin{array}[]{rcl}V(r_{\rm rel})&=&\beta r_{\rm rel}+G(r_{\rm rel})+\frac{2{% \bf S}_{D_{a}}\cdot{\bf S}_{\bar{D}_{b}}}{3m_{D_{a}}m_{{\bar{D}}_{b}}}\mbox{ }% \nabla^{2}G(r_{\rm rel})\\ &&-\frac{1}{3m_{D_{a}}m_{\bar{D}_{b}}}\left(3{\bf S}_{D_{a}}\cdot\hat{r}_{\rm rel% }\mbox{ }{\bf S}_{\bar{D}_{b}}\cdot\hat{r}_{\rm rel}-{\bf S}_{D_{a}}\cdot{\bf S% }_{\bar{D}_{b}}\right)\\ &&\times\left(\frac{\partial^{2}}{\partial r_{\rm rel}^{2}}-\frac{1}{r_{\rm rel% }}\frac{\partial}{\partial r_{\rm rel}}\right)G(r_{\rm rel})+\Delta E\mbox{ },% \end{array}$$ (2) where the Coulomb-like piece is Godfrey:1985xj ; Capstick:1986bm $$G(r_{\rm rel})=-\frac{4\alpha_{\rm s}(r_{\rm rel})}{3r_{\rm rel}}=-\sum_{k}% \frac{4\alpha_{k}}{3r_{\rm rel}}\mbox{ Erf}(\tau_{D_{a}\bar{D}_{b}\,k}r_{\rm rel% })\mbox{ }.$$ (3) Here, Erf is the error function and Godfrey:1985xj ; Capstick:1986bm : $$\displaystyle\tau_{D_{a}\bar{D}_{b}\,k}$$ $$\displaystyle=\frac{\gamma_{k}\sigma_{D_{a}\bar{D}_{b}}}{\sqrt{\sigma_{D_{a}% \bar{D}_{b}}^{2}+\gamma_{k}^{2}}}\mbox{ },$$ (4a) $$\displaystyle\sigma_{D_{a}\bar{D}_{b}}$$ $$\displaystyle=\sqrt{\frac{1}{2}\sigma_{0}^{2}\left[1+\left(\frac{4m_{D_{a}}m_{% \bar{D}_{b}}}{(m_{D_{a}}+m_{\bar{D}_{b}})^{2}}\right)^{4}\right]+s^{2}\left(% \frac{2m_{D_{a}}m_{\bar{D}_{b}}}{m_{D_{a}}+m_{\bar{D}_{b}}}\right)^{2}}\mbox{ }.$$ (4b) The parameters defining our Hamiltonian are listed in Table 1. The strength of the linear confining interaction, $\beta$, and the value of the constant, $\Delta E$, in Eq. (2) are taken from (Anwar:2018sol, , Table I); and in Eqs. (3), (4), the values of the parameters $\alpha_{k}$ and $\gamma_{k}$ ($k=1,2,3$), $\sigma_{0}$ and $s$ are drawn from Refs. Godfrey:1985xj ; Capstick:1986bm . This leaves the diquark masses; and they are all determined by using a Hamiltonian like that in Eq. (1) to solve for the mass of the given $(q_{1}q_{2})^{\rm sc,ax}$ system, $\{q_{1},q_{2}=n,s,c,b\}$, $n=u=d$, using the same constituent-quark masses employed for mesons (in GeV) Godfrey:1985xj : $M_{n}=0.22$, $M_{s}=0.419$, $M_{c}=1.628$, $M_{b}=4.977$. Hence, the results we subsequently report are parameter-free predictions. 3 Results and Discussion 3.1 $bb\bar{q}\bar{q}$ and $bq\bar{b}\bar{q}$ ground-state masses As an exploratory exercise, we first compute the masses of $J=0^{++}$ heavy-light tetraquarks – $bq\bar{b}\bar{q}$, $bb\bar{q}\bar{q}$ systems ($q=n,s$) – and compare the results with the closest meson+meson thresholds in order to obtain an indication of the possible stability of each such system. Using the Hamiltonian specified by Eq. 
(1) and the parameters in Table 1, we obtain the following ground-state masses: $$M_{bn\bar{b}\bar{n}}^{\rm gs}=\left\{\begin{array}[]{rl}10.29\,(0.08)\mbox{ % GeV }&(\mbox{sc-sc configuration})\\ 10.12\,(0.11)\mbox{ GeV }&(\mbox{av-av configuration})\end{array}\right.$$ (5) and $$M_{bs\bar{b}\bar{s}}^{\rm gs}=\left\{\begin{array}[]{rl}10.52\,(08)\mbox{ GeV % }&(\mbox{sc-sc configuration})\\ 10.35\,(10)\mbox{ GeV }&(\mbox{av-av configuration})\end{array}\right.\mbox{ },$$ (6) where the energies of the two possible $D\bar{D}$ configurations are both shown, viz. scalar-scalar and axial-vector–axial-vector. Evidently, when combining $\bar{3}_{c}$ and $3_{c}$ constituents, the OGE colour-hyperfine interaction favours a lighter av-av combination. This is because the spin-spin interaction in Eq. (2) is attractive. (The nature of our uncertainty estimate is discussed in A.) To gauge the possibility of stability for these systems we compare our calculated masses with prospective two-meson thresholds.111Experimental masses are used here because quark models are typically not appropriate for QCD’s Nambu-Goldstone bosons, especially the $\eta$-$\eta^{\prime}$ sector. The lightest available final states are Tanabashi:2018oca : $$\displaystyle M(\eta_{\rm b}+\eta)$$ $$\displaystyle=\phantom{1}9947\,{\rm MeV}$$ (7a) $$\displaystyle M(\eta_{\rm b}+\eta^{\prime})$$ $$\displaystyle=10357\,{\rm MeV},$$ (7b) $$\displaystyle M(\Upsilon+\phi)$$ $$\displaystyle=10480\,{\rm MeV}.$$ (7c) Hence, plausibly, $bs\bar{b}\bar{s}$ tetraquark configurations may be stable. Turning now to $bb\bar{q}\bar{q}$ systems, we obtain $$\displaystyle M_{bb\bar{n}\bar{n}}^{\rm gs}$$ $$\displaystyle=10.31\,(17)\,\mbox{GeV},$$ (8a) $$\displaystyle M_{bb\bar{s}\bar{s}}^{\rm gs}$$ $$\displaystyle=10.53\,(16)\,\mbox{GeV}.$$ (8b) (Owing to Pauli statistics, only av-av configurations are allowed in these cases.) Eqs. (8) can be compared with the lightest possible final states: empirically Tanabashi:2018oca $$\displaystyle M(B+\bar{B})^{\rm ex}$$ $$\displaystyle=10.60\,{\rm GeV},$$ (9a) $$\displaystyle M(B_{s}+\bar{B}_{s})^{\rm ex}$$ $$\displaystyle=10.73\,{\rm GeV};$$ (9b) and using our Hamiltonian $$\displaystyle M(B+\bar{B})^{\rm th}$$ $$\displaystyle=10.17\,(45)\,{\rm GeV},$$ (10a) $$\displaystyle M(B_{s}+\bar{B}_{s})^{\rm th}$$ $$\displaystyle=10.45\,(32)\,{\rm GeV}.$$ (10b) Despite a positive experiment-model mass-balance, the comparison between Eqs. (8) and (10) indicates that, in each case, our model produces a two-body final state that is lighter than the initial tetraquark; hence, $bb\bar{n}\bar{n}$ and $bb\bar{s}\bar{s}$ tetraquarks are probably unstable. 3.2 $cc\bar{c}\bar{c}$, $bb\bar{b}\bar{b}$, $bc\bar{b}\bar{c}$, $bb\bar{c}\bar{c}$ ground states In $QQ\bar{Q}\bar{Q}$ systems treated as bound-states of colour triplet-antitriplet pairs, fermion statistics also precludes a role for scalar diquarks. Consequently, the ground-state $cc\bar{c}\bar{c}$ is an av-av combination; and using Eq. (1) we find $$M_{cc\bar{c}\bar{c}}^{\rm gs}=5.88\,(17)\,{\rm GeV}.$$ (11) This value is below the empirical $\eta_{\rm c}\eta_{\rm c}$ threshold (5.968 GeV); but a comparison with our computed value ($5.82\,(12)\,$GeV) is less favourable. We conclude, therefore, that the probability of a stable $cc\bar{c}\bar{c}$ bound-state constituted as $(cc)_{\bar{3}_{c}}(\bar{c}\bar{c})_{3_{c}}$ is marginal. 
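For orientation, the variational procedure behind the ground-state values quoted above can be sketched as follows: a single Gaussian (lowest harmonic-oscillator) trial wave function describes the diquark-antidiquark relative motion, the relativistic kinetic energy of Eq. (1b) is evaluated in momentum space, the linear plus Coulomb-like terms of Eqs. (2)-(3) in coordinate space, and the width is minimised numerically. This is only a schematic illustration: the spin-dependent terms and the constant ΔE are omitted, and every numerical value below (diquark mass, β, α_k, τ_k) is an illustrative placeholder, not an entry of Table 1.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import erf

# Illustrative placeholder parameters (NOT the values of Table 1)
m1 = m2 = 3.3                       # axial-vector (cc) diquark mass, GeV
beta = 0.18                         # GeV^2, linear-confinement strength
alphas = [0.25, 0.15, 0.20]         # weights of the Coulomb-like piece, Eq. (3)
taus = [0.5, 1.6, 15.0]             # GeV, error-function ranges tau_k

def psi2_r(r, a):
    # |psi(r)|^2 of a normalised Gaussian trial state of width a (GeV^-1)
    return np.exp(-r**2 / a**2) / (np.pi**1.5 * a**3)

def phi2_q(q, a):
    # corresponding momentum-space density |phi(q)|^2
    return a**3 * np.exp(-q**2 * a**2) / np.pi**1.5

def energy(a):
    # relativistic kinetic energy, Eq. (1b), evaluated in momentum space
    T = quad(lambda q: 4 * np.pi * q**2 * phi2_q(q, a)
             * (np.sqrt(q**2 + m1**2) + np.sqrt(q**2 + m2**2)),
             0, np.inf)[0]
    # linear confinement plus Coulomb-like potential, Eqs. (2)-(3)
    V = quad(lambda r: 4 * np.pi * r**2 * psi2_r(r, a)
             * (beta * r - sum(4 * ak / (3 * r) * erf(tk * r)
                               for ak, tk in zip(alphas, taus))),
             0, np.inf)[0]
    return T + V

res = minimize_scalar(energy, bounds=(0.2, 10.0), method="bounded")
print(f"optimal width a = {res.x:.2f} GeV^-1, variational mass = {res.fun:.3f} GeV")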
Table 2 lists our prediction for the mass of this system alongside a sample of values obtained elsewhere Anwar:2017toa ; Berezhnoy:2011xn ; Chen:2016jxd ; Karliner:2016zzc ; Wu:2016vtq ; Wang:2017jtz ; Liu:2019zuc . Our conclusion is supported by the fact that these other analyses produce masses larger than ours. Using the same framework, the calculated mass of the analogous $bb\bar{b}\bar{b}$ system is $$M_{bb\bar{b}\bar{b}}^{\rm gs}=18.75(07)\,{\rm GeV}.$$ (12) Once again, this value is below the empirical $\eta_{\rm b}\eta_{\rm b}$ threshold (18.797 GeV), but lies above our computed result ($18.66\,(13)\,$GeV). Notably, too, our tetraquark mass is lighter than that obtained in most other analyses. It follows that one cannot confidently predict existence of a stable $J=0^{++}$ $bb\bar{b}\bar{b}$ tetraquark.222Evidence for the existence of a fully-$b$ tetraquark is described in Ref. Bland:2019aha , which reports a resonance peak at $18.12\,(15)_{\rm stat}(60)_{\rm sys}\,$GeV. Within its $\sim 3.4$% error, this value is consistent with those results in the first four rows of Table. 2. Considering the $bc\bar{b}\bar{c}$ case, both sc-sc and av-av configurations can exist; and we find $$M_{bc\bar{b}\bar{c}}^{\rm gs}=\left\{\begin{array}[]{rl}12.52\,(08)\mbox{ GeV % }&(\mbox{sc-sc configuration})\\ 12.37\,(09)\mbox{ GeV }&(\mbox{av-av configuration})\end{array}\right.\mbox{ }.$$ (13) The pattern observed above is repeated here. The mass of the lighter av-av configuration is (slightly) below the empirical $\eta_{\rm b}\eta_{\rm c}$ threshold (12.383 GeV), but it lies above our computed value (12.24 (12) GeV). Given, too, that our predicted masses lie below those obtained elsewhere (see Table 2), a stable $(bc)_{\bar{3}_{c}}(\bar{b}\bar{c})_{3_{c}}$ system appears unlikely. One can also imagine $J=0^{++}$ $bb\bar{c}\bar{c}$ ($\bar{b}\bar{b}cc$) configurations. In this case, only the av-av $(bb)_{\bar{3}_{c}}(\bar{c}\bar{c})_{3_{c}}$ configuration is possible and its ground-state mass is $$M_{bb\bar{c}\bar{c}}^{\rm gs}=12.45\,(11)\,\mbox{ GeV }.$$ (14) The now standard pattern is evident here. Namely, our predicted mass lies below the empirical $B_{c}\bar{B}_{c}$ threshold (12.55 GeV) but above the model-consistent calculated value (12.36 (17) GeV). Again, therefore, a stable tetraquark in this configuration is unlikely. 3.3 Complete Tetraquark Spectra In the preceding subsections we showed that the internally consistent application of Eq. (1) does not support stable diquark${}_{\bar{3}_{c}}$+antidiquark${}_{3_{c}}$ $J=0^{++}$ tetraquark systems. Notwithstanding that, these states might exist as narrow resonance-like structures above the lightest breakup threshold, but development of a reliable reaction model for tetraquark production and decays would be necessary before the character of such systems could be elucidated. We remark on this problem in Sec. 4. Neglecting decays, the Hamiltonian in Eq. (1) predicts a rich spectrum; and in Tables 3, 4 we report the lightest states in the spectra of $cc\bar{c}\bar{c}$, $bb\bar{b}\bar{b}$, $bc\bar{b}\bar{c}$ and $bb\bar{c}\bar{c}$ systems. The typical level-ordering is illustrated using the $bb\bar{b}\bar{b}$ system in Fig. 1. These results should serve as useful benchmarks for other analyses, which are necessary in order to identify model-dependent artefacts and develop a perspective on those predictions which might only be weakly sensitive to model details. 
Moreover, given that the decay modes of $J=0^{++}$ tetraquarks may be difficult to access experimentally Esposito:2018cwh , our predictions for orbitally-excited and $J\neq 0$ tetraquarks may prove useful in guiding new experimental searches for fully-heavy four-quark states. As we have already highlighted, one source of uncertainty in our results is the choice of model Hamiltonian: Appendix A explains how we have attempted to estimate the size of this sensitivity. Another lies in the approximations used to simplify the tetraquark wave function. Within the diquark+antidiquark framework, this uncertainty arises because one can produce an overall colour singlet from both $\bar{3}_{c}\times 3_{c}$ and $6_{c}\times\bar{6}_{c}$. Consequently, to obtain the “physical” tetraquark colour wave function, a mixing angle should be introduced: $$\Phi_{\rm c}=\alpha\left|\Phi_{{1_{c}},\bar{3}_{c}{3}_{c}}\right\rangle+\beta\left|\Phi_{{1_{c}},{6_{c}}\bar{6}_{c}}\right\rangle\mbox{ },$$ (15) $\alpha^{2}+\beta^{2}=1$, as described, e.g. in Refs. Wu:2016vtq ; Wang:2019rdo . Here $$\begin{array}[]{l}\left|\Phi_{{1},\bar{3}_{c}{3}_{c}}\right\rangle=\left|\left[\Big{[}{3}_{c},{3}_{c}\Big{]}_{\bar{3}_{c}},\Big{[}\bar{3}_{c},\bar{3}_{c}\Big{]}_{3_{c}}\right]_{1_{c}}\right\rangle\mbox{ },\\ \left|\Phi_{{1},{6_{c}}\bar{6}_{c}}\right\rangle=\left|\left[\Big{[}{3_{c}},{3_{c}}\Big{]}_{6_{c}},\Big{[}\bar{3}_{c},\bar{3}_{c}\Big{]}_{\bar{6}_{c}}\right]_{1_{c}}\right\rangle\mbox{ },\end{array}$$ (16) where quarks and antiquarks in the fundamental representations $3_{c}$ and $\bar{3}_{c}$, respectively, are combined to obtain diquark (antidiquark) colour wave functions $\bar{3}_{c}$, $6_{c}$ ($3_{c}$, $\bar{6}_{c}$); and, finally, these diquark and antidiquark colour wave functions are combined into a colour singlet tetraquark configuration. If one considers systems with only a single diquark (antidiquark), as, e.g., when describing baryons as quark+diquark bound-states Cahill:1988dx ; Burden:1988dt ; Cahill:1988zi ; Reinhardt:1989rw ; Efimov:1990uz , the $6_{c}$ ($\bar{6}_{c}$) is ignored because one-gluon exchange is repulsive in this channel Cahill:1987qr . Additionally, with diquarks (antidiquarks) treated as elementary degrees-of-freedom, it is not possible to use a typical two-body Hamiltonian to determine the relative weights of the $|\Phi_{{1},\bar{3}_{c}{3}_{c}}\rangle$ and $|\Phi_{{1},{6_{c}}\bar{6}_{c}}\rangle$ components in the wave function. The mixing angle is then a free parameter, which may only be determined once substantial, reliable data become available. A similar problem is manifest in the spectroscopy of meson-meson molecular states, where $|\Phi_{{1_{c}},{1_{c}}{1_{c}}}\rangle=|[[{3}_{c},\bar{3}_{c}]_{1_{c}},[\bar{3}_{c},{3}_{c}]_{1_{c}}]_{1_{c}}\rangle$ and $|\Phi_{{1_{c}},{8_{c}}{\bar{8}_{c}}}\rangle=|[[{3_{c}},\bar{3}_{c}]_{8_{c}},[\bar{3}_{c},{3}_{c}]_{8_{c}}]_{1_{c}}\rangle$ are both admissible components of the wave function. Given these issues, herein, as in other analyses, e.g. Refs. Maiani:2004vq ; Anwar:2017toa ; Anwar:2018sol ; Esposito:2018cwh , we have only considered $\beta=0$ in Eq. (15). 4 Possible Tetraquark Decay Modes In considering the prospects for tetraquark discovery, it is important to discuss the likely decay modes. We begin with the $0^{++}$ fully-heavy systems. Plainly, no open-beauty final states exist for $bb\bar{b}\bar{b}$. Leptonic decays are possible, e.g. 
$bb\bar{b}\bar{b}\rightarrow\Upsilon\,\mu^{+}\mu^{-}$, and readily accessible experimentally, but estimates suggest the widths are small Esposito:2018cwh . Regarding $bb\bar{c}\bar{c}$ tetraquarks, numerous weak decays are possible and a few might be measurable Li:2019uch . Moreover, as noted above, resonance-like $QQ\bar{Q}\bar{Q}$ systems, $Q=c$ or $b$, can decay into purely hadronic final states Anwar:2017toa ; Karliner:2016zzc , perhaps with an appreciable phase space. Again, however, experimental detection would likely be challenging Esposito:2018cwh . One can also imagine the possibility of open-flavour baryonic decays: $QQ\bar{Q}\bar{Q}\rightarrow QQq+\bar{Q}\bar{Q}\bar{q}$ transitions, where $q=u,d,s$. Observation of such decay products would be a fairly unambiguous signal of a four-quark initial state; but unless one considers radial excitations of the tetraquark, the baryon-antibaryon threshold will be too high. (Computed spectra of doubly-heavy baryons are reported elsewhere, e.g. Refs. Yin:2019bxe ; Brown:2014ena ; Yoshida:2015tia ; Qin:2019hgk .) With $0^{++}$ systems difficult to observe, it may be better to search for the $J^{PC}\neq 0^{++}$ states listed in Tables 3, 4. A prime example is presented by the $1^{--}$ systems. Possessing the same Poincaré-invariant quantum numbers as the photon, such states would be accessible via photoproduction or using $e^{-}e^{+}$ colliders. Moreover, since even the lightest such states lie $\gtrsim 300\,$MeV above the lowest open heavy pseudoscalar meson thresholds, there is likely sufficient phase space to enable detection. The decays could proceed as illustrated in Fig. 2; but a reliable picture of the internal structure of fully-heavy diquarks must be developed before predictions for the widths become possible. Notably, since heavy-quark exchange/rearrangement is kinematically suppressed, both the production and decay of fully-heavy tetraquarks will be difficult to observe. 5 Summary and Perspective Adopting a perspective in which tetraquarks are viewed as bound-states of elementary colour-antitriplet diquarks and colour-triplet antidiquarks and using a well-constrained model Hamiltonian, built with relativistic kinetic energies, a one-gluon exchange potential and linear confinement [Sec. 2], we computed the masses of ground-state $b\bar{b}q\bar{q}$, $bb\bar{q}\bar{q}$ tetraquarks, $q=n,s$ $(n=u=d)$ and extensive spectra for $cc\bar{c}\bar{c}$, $bb\bar{b}\bar{b}$, $bc\bar{b}\bar{c}$, $bb\bar{c}\bar{c}$ states [Sec. 3 and Tables 3, 4]. The eigenvalue problems were solved using a numerical variational procedure in concert with harmonic-oscillator trial wave functions. In each channel, comparing our prediction for the mass of the $J^{PC}=0^{++}$ ground-state with the experimental value of the lowest meson-meson threshold, we found tetraquarks marginally stable against strong decays in almost all channels, viz. ${\mathcal{S}}=\{bs\bar{b}\bar{s},bb\bar{n}\bar{n},bb\bar{s}\bar{s},cc\bar{c}% \bar{c},bb\bar{b}\bar{b},bc\bar{b}\bar{c},bb\bar{c}\bar{c}\}$. The $bn\bar{b}\bar{n}$ system lies above the $\eta_{b}\eta$ threshold. On the other hand, when compared with meson thresholds computed using the same Hamiltonian, all ground-state tetraquarks are marginally unstable. We therefore judge that narrow resonance-like tetraquark structures might exist near the lowest meson+meson thresholds in those channels contained in ${\mathcal{S}}$. 
Our analysis can be improved, most notably by forgoing the elementary diquark approximation and solving a four-body problem in which the internal structure of diquark correlations is resolved, e.g. using methods such as those in Refs. Chen:2017mug ; Wang:2019rdo . One might also tackle tetraquark systems using few body methods in quantum field theory, following Refs. Heupel:2012ua ; Wallbott:2018lrl . It is perhaps most important, however, to emphasise that no clear picture of putative heavy-tetraquark states can be drawn before a reliable reaction model is developed to describe their production and decay. There is a pressing need for progress in this direction, which can yield estimates of production cross-sections and principal decay modes. Acknowledgements. We are grateful for constructive comments from P. Bicudo, Z.-F. Cui, G. Eichmann, A. Lovato and J. Segovia. Work supported by: CONACyT, México; INFN Sezione di Genova; Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán 58040, México; Jiangsu Province Hundred Talents Plan for Professionals; US Department of Energy, Office of Nuclear Physics, contract no. DE-FG-02-91ER-40608; and the Academy of Finland, Project no. 320062. Appendix A Estimate of Model Uncertainty We use a model Hamiltonian, Eq. (1), to compute tetraquark masses. Although constrained by an array of applications, it is still a model; hence, there is a model uncertainty. In order to provide an estimate of its size, we also computed tetraquark masses using the relativised quark model (RQM) Hamiltonian introduced in Ref. Godfrey:1985xj . Only a few obvious changes are necessary because this Hamiltonian was also constructed to bind a colour triplet-antitriplet pair into a colour-singlet system. When forming a $S$-wave system from two axial-vector constituents, the only contribution from spin-dependent interactions in the RQM is that produced by the contact term, $V_{\rm cont}$: $$\begin{array}[]{l}\left\langle S_{1}^{\prime}\mbox{ }S_{2}^{\prime}\mbox{ }S^{% \prime}\right|V_{\rm cont}({\bf r})\left|S_{1}\mbox{ }S_{2}\mbox{ }S\right% \rangle=\left\langle 1\mbox{ }1\mbox{ }0\right|V_{\rm cont}({\bf r})\left|1% \mbox{ }1\mbox{ }0\right\rangle\\ \hskip 14.226378pt\propto\frac{1}{2}\left({\bf S}^{2}-{\bf S}_{1}^{2}-{\bf S}_% {2}^{2}\right)=-2\mbox{ }.\end{array}$$ (17) Contrarily, in the case of tensor, $V_{\rm tens}$, and spin-orbit, $V_{\rm so}$, interactions, one obtains the matrix elements $$\begin{array}[]{l}\left\langle S^{\prime}\mbox{ }L^{\prime}\mbox{ }J^{\prime}% \right|V_{\rm tens}({\bf r})\left|S\mbox{ }L\mbox{ }J\right\rangle=\left% \langle 0\mbox{ }0\mbox{ }0\right|V_{\rm tens}({\bf r})\left|0\mbox{ }0\mbox{ % }0\right\rangle\\ \hskip 14.226378pt\propto\left\langle L^{\prime}\right|Y^{(2)}\left|L\right% \rangle\propto\left(\begin{array}[]{ccc}0&2&0\\ 0&0&0\end{array}\right)=0\mbox{ },\end{array}$$ (18) where $Y^{(2)}$ is a $L=2$ spherical harmonic Brown:2006cc , and $$\begin{array}[]{l}\left\langle S^{\prime}\mbox{ }L^{\prime}\mbox{ }J^{\prime}% \right|V_{\rm so}({\bf r})\left|S\mbox{ }L\mbox{ }J\right\rangle=\left\langle 0% \mbox{ }0\mbox{ }0\right|V_{\rm so}({\bf r})\left|0\mbox{ }0\mbox{ }0\right% \rangle\\ \hskip 14.226378pt\propto\sqrt{L(L+1)(2L+1)}=0\mbox{ }.\end{array}$$ (19) The smearing function coefficient employed in Ref. Godfrey:1985xj , $\sigma_{C_{1}C_{2}}$, with $C_{1,2}$ denoting the constituents, is the same as that we use, given by Eq. (4). 
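The numerical factor appearing in Eq. (17) is elementary spin algebra; a two-line check (a trivial sketch):

def spin_dot(S, S1, S2):
    # <S1 . S2> = [S(S+1) - S1(S1+1) - S2(S2+1)] / 2, in units of hbar^2
    return 0.5 * (S * (S + 1) - S1 * (S1 + 1) - S2 * (S2 + 1))

print(spin_dot(0, 1, 1))   # two axial-vector diquarks coupled to S = 0 -> -2.0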
As an illustrative example, consider the fully-$b$ $J=0$ tetraquark. Our prediction for the ground-state mass is reported in Eq. (12). Using the RQM Hamiltonian and a computed value of $\sigma_{(bb)(\bar{b}\bar{b})}=77.9$ fm${}^{-1}$, one finds $$E^{\rm gs,RQM}_{bb\bar{b}\bar{b}}=18822\,{\rm MeV}.$$ (20) Our mass prediction cannot be judged more accurate than the difference between this result and that in Eq. (12), viz. 74 MeV. We therefore list this value as the uncertainty in Eq. (12). References (1) M. Gell-Mann, Phys. Lett. 8, 214 (1964). (2) G. Zweig, (1964), An $SU(3)$ model for strong interaction symmetry and its breaking. Parts 1 and 2 (CERN Reports No. 8182/TH. 401 and No. 8419/TH. 412). (3) M. Tanabashi et al., Phys. Rev. D 98, 030001 (2018), (Particle Data Group). (4) S. K. Choi et al., Phys. Rev. Lett. 91, 262001 (2003). (5) R. Aaij et al., Phys. Rev. Lett. 115, 072001 (2015). (6) L. Capriotti, (arXiv:1906.09190 [hep-ex]), Pentaquarks. (7) R. Aaij et al., Phys. Rev. Lett. 119, 112001 (2017). (8) Y. Chen et al., Phys. Rev. D 73, 014516 (2006). (9) J. J. Dudek, R. G. Edwards, M. J. Peardon, D. G. Richards and C. E. Thomas, Phys. Rev. D 82, 034508 (2010). (10) J. J. Dudek, Phys. Rev. D 84, 074023 (2011). (11) C. A. Meyer and E. S. Swanson, Prog. Part. Nucl. Phys. 82, 21 (2015). (12) J.-M. Richard, Few Body Syst. 57, 1185 (2016). (13) S.-S. Xu et al., Eur. Phys. J. A 55, 113 (Lett.) (2019). (14) A. Rodas et al., Phys. Rev. Lett. 122, 042002 (2019). (15) R. F. Lebed, R. E. Mitchell and E. S. Swanson, Prog. Part. Nucl. Phys. 93, 143 (2017). (16) A. Esposito, A. Pilloni and A. D. Polosa, Phys. Rept. 668, 1 (2017). (17) A. Ali, J. S. Lange and S. Stone, Prog. Part. Nucl. Phys. 97, 123 (2017). (18) S. L. Olsen, T. Skwarnicki and D. Zieminska, Rev. Mod. Phys. 90, 015003 (2018). (19) Y.-R. Liu, H.-X. Chen, W. Chen, X. Liu and S.-L. Zhu, Prog. Part. Nucl. Phys. 107, 237 (2019). (20) M. Ablikim et al., Phys. Rev. Lett. 110, 252001 (2013). (21) Z. Q. Liu et al., Phys. Rev. Lett. 110, 252002 (2013). (22) M. Ablikim et al., Phys. Rev. Lett. 112, 132001 (2014). (23) M. Ablikim et al., Phys. Rev. Lett. 111, 242001 (2013). (24) A. Bondar et al., Phys. Rev. Lett. 108, 122001 (2012). (25) S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985). (26) E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane and T.-M. Yan, Phys. Rev. D 17, 3090 (1978), [Erratum: Phys. Rev. D 21, 313 (1980)]. (27) M. R. Pennington and D. J. Wilson, Phys. Rev. D 76, 077502 (2007). (28) P. G. Ortega, J. Segovia, D. R. Entem and F. Fernandez, Phys. Rev. D 81, 054023 (2010). (29) P. G. Ortega, D. R. Entem and F. Fernandez, J. Phys. G 40, 065107 (2013). (30) J. Ferretti, G. Galata, E. Santopinto and A. Vassallo, Phys. Rev. C 86, 015204 (2012). (31) J. Ferretti, G. Galatà and E. Santopinto, Phys. Rev. C 88, 015207 (2013). (32) J. Ferretti and E. Santopinto, Phys. Rev. D 90, 094022 (2014). (33) J. Ferretti, G. Galatà and E. Santopinto, Phys. Rev. D 90, 054010 (2014). (34) J. Ferretti and E. Santopinto, Phys. Lett. B 789, 550 (2019). (35) R. L. Jaffe, Phys. Rev. D 15, 267 (1977). (36) J. D. Weinstein and N. Isgur, Phys. Rev. D 27, 588 (1983), [, 261 (1983)]. (37) D. M. Brink and F. Stancu, Phys. Rev. D 57, 6778 (1998). (38) L. Maiani, F. Piccinini, A. D. Polosa and V. Riquer, Phys. Rev. D 71, 014028 (2005). (39) N. Barnea, J. Vijande and A. Valcarce, Phys. Rev. D 73, 054004 (2006). (40) E. Santopinto and G. Galàta, Phys. Rev. C 75, 045206 (2007). (41) C. Deng, J. Ping and F. Wang, Phys. Rev. D 90, 054009 (2014). (42) L. Zhao, W.-Z. 
Deng and S.-L. Zhu, Phys. Rev. D90, 094031 (2014). (43) P. Bicudo, K. Cichy, A. Peters, B. Wagenbach and M. Wagner, Phys. Rev. D 92, 014507 (2015). (44) Q.-F. Lü and Y.-B. Dong, Phys. Rev. D 94, 074007 (2016). (45) E. J. Eichten and C. Quigg, Phys. Rev. Lett. 119, 202002 (2017). (46) M. Karliner and J. L. Rosner, Phys. Rev. Lett. 119, 202001 (2017). (47) M. N. Anwar, J. Ferretti, F.-K. Guo, E. Santopinto and B.-S. Zou, Eur. Phys. J. C 78, 647 (2018). (48) C. Hughes, E. Eichten and C. T. H. Davies, Phys. Rev. D 97, 054505 (2018). (49) M. N. Anwar, J. Ferretti and E. Santopinto, Phys. Rev. D 98, 094015 (2018). (50) J. Wu, X. Liu, Y.-R. Liu and S.-L. Zhu, Phys. Rev. D 99, 014037 (2019). (51) Z.-G. Wang and Z.-Y. Di, Eur. Phys. J. C 79, 72 (2019). (52) X. Chen, Eur. Phys. J. A 55, 106 (2019). (53) G.-J. Wang, L. Meng and S.-L. Zhu, (arXiv:1907.05177 [hep-ph]), Spectrum of the fully-heavy tetraquark state $QQ\bar{Q}^{\prime}\bar{Q}^{\prime}$. (54) J. D. Weinstein and N. Isgur, Phys. Rev. D 41, 2236 (1990). (55) A. V. Manohar and M. B. Wise, Nucl. Phys. B 399, 17 (1993). (56) N. A. Tornqvist, Z. Phys. C 61, 525 (1994). (57) E. S. Swanson, Phys. Lett. B 588, 189 (2004). (58) C. Hanhart, Yu. S. Kalashnikova, A. E. Kudryavtsev and A. V. Nefediev, Phys. Rev. D 76, 034007 (2007). (59) C. E. Thomas and F. E. Close, Phys. Rev. D 78, 034007 (2008). (60) S. Dubynskiy and M. B. Voloshin, Phys. Lett. B 666, 344 (2008). (61) J. Yu. Panteleeva, I. A. Perevalova, M. V. Polyakov and P. Schweitzer, Phys. Rev. C 99, 045206 (2019). (62) J. Ferretti, Phys. Lett. B 782, 702 (2018). (63) J. Ferretti, E. Santopinto, M. Naeem Anwar and M. A. Bedolla, Phys. Lett. B 789, 562 (2019). (64) A. C. Aguilar, D. Binosi and J. Papavassiliou, Front. Phys. China 11, 111203 (2016). (65) E. Eichten and Z. Liu, (arXiv:1709.09605 [hep-ph]), Would a Deeply Bound $b\bar{b}b\bar{b}$ Tetraquark Meson be Observed at the LHC? (66) R. Aaij et al., JHEP 10, 086 (2018). (67) A. Esposito and A. D. Polosa, Eur. Phys. J. C 78, 782 (2018). (68) A. V. Berezhnoy, A. V. Luchinsky and A. A. Novoselov, Phys. Rev. D 86, 034004 (2012). (69) W. Chen, H.-X. Chen, X. Liu, T. G. Steele and S.-L. Zhu, Phys. Lett. B 773, 247 (2017). (70) M. Karliner, S. Nussinov and J. L. Rosner, Phys. Rev. D95, 034011 (2017). (71) J. Wu, Y.-R. Liu, K. Chen, X. Liu and S.-L. Zhu, Phys. Rev. D 97, 094015 (2018). (72) Z.-G. Wang, Eur. Phys. J. C 77, 432 (2017). (73) M.-S. Liu, Q.-F. Lü, X.-H. Zhong and Q. Zhao, Phys. Rev. D 100, 016006 (2019). (74) E. Santopinto, Phys. Rev. C 72, 022201 (2005). (75) J. Ferretti, A. Vassallo and E. Santopinto, Phys. Rev. C 83, 065204 (2011). (76) E. Santopinto and J. Ferretti, Phys. Rev. C 92, 025202 (2015). (77) M. Anselmino, E. Predazzi, S. Ekelin, S. Fredriksson and D. B. Lichtenberg, Rev. Mod. Phys. 65, 1199 (1993). (78) P.-L. Yin et al., Phys. Rev. D 100, 034008 (2019). (79) W. Celmaster, H. Georgi and M. Machacek, Phys. Rev. D 17, 879 (1978). (80) S. Capstick and N. Isgur, Phys. Rev. D 34, 2809 (1986). (81) L. C. Bland et al., (arXiv:1909.03124 [nucl-ex]), Observation of Feynman scaling violations and evidence for a new resonance at RHIC. (82) R. T. Cahill, C. D. Roberts and J. Praschifka, Austral. J. Phys. 42, 129 (1989). (83) C. J. Burden, R. T. Cahill and J. Praschifka, Austral. J. Phys. 42, 147 (1989). (84) R. T. Cahill, Austral. J. Phys. 42, 171 (1989). (85) H. Reinhardt, Phys. Lett. B 244, 316 (1990). (86) G. V. Efimov, M. A. Ivanov and V. E. Lyubovitskij, Z. Phys. C 47, 583 (1990). (87) R. T. Cahill, C. D. Roberts and J. 
Praschifka, Phys. Rev. D 36, 2804 (1987). (88) G. Li, X.-F. Wang and Y. Xing, Eur. Phys. J. D 79, 645 (2019). (89) Z. S. Brown, W. Detmold, S. Meinel and K. Orginos, Phys. Rev. D 90, 094507 (2014). (90) T. Yoshida, E. Hiyama, A. Hosaka, M. Oka and K. Sadato, Phys. Rev. D D92, 114029 (2015). (91) S.-X. Qin, C. D. Roberts and S. M. Schmidt, Few Body Syst. 60, 26 (2019). (92) X. Chen, J. Ping, C. D. Roberts and J. Segovia, Phys. Rev. D 97, 094016 (2018). (93) W. Heupel, G. Eichmann and C. S. Fischer, Phys. Lett. B 718, 545 (2012). (94) P. C. Wallbott, G. Eichmann and C. S. Fischer, J. Phys. Conf. Ser. 1024, 012035 (2018). (95) B. A. Brown, T. Duguet, T. Otsuka, D. Abe and T. Suzuki, Phys. Rev. C 74, 061303 (2006).
Interplay between resonant tunneling and spin precession oscillations in all-electric all-semiconductor spin transistors M. I. Alomar Institut de Física Interdisciplinària i Sistemes Complexos IFISC (CSIC-UIB), E-07122 Palma de Mallorca, Spain Departament de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain    Llorenç Serra Institut de Física Interdisciplinària i Sistemes Complexos IFISC (CSIC-UIB), E-07122 Palma de Mallorca, Spain Departament de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain    David Sánchez Institut de Física Interdisciplinària i Sistemes Complexos IFISC (CSIC-UIB), E-07122 Palma de Mallorca, Spain Departament de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain Abstract We investigate the transmission properties of a spin transistor coupled to two quantum point contacts acting as spin injector and detector. In the Fabry-Perot regime, transport is mediated by quasibound states formed between tunnel barriers. Interestingly, the spin-orbit interaction of the Rashba type can be tuned in such a way that nonuniform Rashba fields point along distinct directions at different points of the sample. We discuss both spin-conserving and spin-flipping transitions as the spin-orbit orientation angle increases from the parallel to the antiparallel configuration. Spin precession oscillations are clearly seen as a function of the length of the central channel. Remarkably, we find that these oscillations combine with the Fabry-Perot motion, giving rise to quasiperiodic transmissions in the purely one-dimensional case. Furthermore, we consider the more realistic case of a finite width in the transverse direction and find that the coherent oscillations become degraded for moderate values of the spin-orbit strength. Our results thus determine the precise role of the Rashba intersubband coupling potential in the intermixed Fabry-Perot-Datta-Das oscillations. pacs: 85.75.Hh, 85.75.Mm, 73.23.-b, 72.25.-b I Introduction Spin transistors operate under the action of a spin-orbit coupling potential that rotates the electronic spin traveling along a narrow channel Datta and Das (1990). Semiconductor heterostructures offer the possibility of generating spin-orbit interactions due to inversion asymmetry (Rashba type Rashba (1960)), thus rendering semiconductor spintronics a rewarding area for spin information processing applications Fabian et al. (2007); Bercioux and Lucignano (2015). Importantly, the strength of the Rashba coupling can be tuned with an external electric field Nitta et al. (1997); Engels et al. (1997), which provides the necessary gate tuning of the transistor switching mechanism. The last ingredient is the ability to both inject and detect spin polarized currents. This can be done by attaching ferromagnetic terminals to the semiconductor channel. Yet the conductivity mismatch between dissimilar materials in series, owing to unequal Fermi wavevectors, can hamper the device functionality Schmidt et al. (2000); Rashba (2000); Fert and Jaffrès (2001). Although spin precession oscillations have been detected in ferromagnetic-semiconductor junctions Koo et al. (2009) employing nonlocal voltage detection Jedema et al. (2002), the spin-injection efficiency between dissimilar materials tends to be low. The device performance can also be affected by the presence of multiple channels Jeong and Lee (2006); Gelabert et al. (2010), the destructive effect of spin decoherence Sherman and Sinova (2005); Nikolić and Souma (2005); Xu et al. 
(2014), the influence of gating Sun et al. (2011); Wójcik et al. (2014), and the fact that the system can behave as a two-dimensional spin transistor Pala et al. (2004); Agnihotri and Bandyopadhyay (2010); Zainuddin et al. (2011); Gelabert and Serra (2011); Alomar et al. (2015). An interesting alternative has very recently been put forward by Chuang et al. Chuang et al. (2015). A pair of quantum point contacts (QPCs) works as spin injectors and detectors Debray et al. (2009); Nowak and Szafran (2013). The electric confinement in the point constrictions leads to an effective magnetic field that polarizes the electrons in directions perpendicular to the Rashba field present in the central channel. As a consequence, the detector voltage becomes an oscillatory function of the middle gate voltage applied to the two-dimensional electron gas. Importantly, the device is fully nonmagnetic (neither ferromagnetic contacts nor external magnetic fields are needed for the operation principle) and relies on a semiconductor-only structure. This is an appealing feature that has been pursued in different proposals Schliemann et al. (2003); Hall et al. (2003); Wang et al. (2003); Awschalom and Samarth (2009); Wunderlich et al. (2010); Liu et al. (2012). This experiment Chuang et al. (2015) motivates us to consider the following problem. Consider the case when the conductance of both quantum point contacts is set below the value corresponding to a fully open mode. Then, the waveguide potentials can be described as tunnel barriers and transport across them occurs via evanescent states Serra et al. (2007); Sablikov and Tkach (2007). Effectively, the device electronic potential is globally seen as a double barrier with a quantum well of variable depth. It is well known that these potential landscapes in general support the presence of resonant scattering due to Fabry-Perot-like oscillations arising from wave interference between the tunnel barriers. But at the same time we have spin-orbit induced oscillations due to the precession of spins traveling between the barriers. Therefore, one would naturally expect a competition between resonant tunneling and spin precession oscillations in a device comprising two serially coupled QPCs. Below, we show that this is indeed the case and that the combination of both oscillation modes leads to rich physics not only in the strictly one-dimensional case but also when more realistic samples are studied. The subject of resonant tunneling effects and spin-orbit fields has been investigated in a number of works giving rise to interesting predictions. For instance, Voskoboynikov et al. find that the transmission probability significantly changes in the presence of the Rashba coupling Voskoboynikov et al. (1999) while de Andrada e Silva et al. obtain spin polarizations for an unpolarized beam of electrons impinging on a double-barrier nanostructure de Andrada e Silva and La Rocca (1999). Koga et al. analyze spin-filter effects in triple barrier diodes Koga et al. (2002) whereas Ting and Cartoixà examine the double barrier case Ting and Cartoixà (2002). The dependence of the electronic tunneling on the spin orientation is treated by Glazov et al. Glazov et al. (2005). These structures suffer from phase-breaking effects, as shown by Isić et al. Isić et al. (2010). In our work, we consider a purely ballistic system. Scattering is elastic and the transmission probabilities are determined within the quantum scattering approach. 
Scattering can take place at the interfaces between the quantum point contacts and the quantum well or due to interaction between the spins and the Rashba interaction. We find that the transmission depends on the relative angle between the spin-orbit fields in the QPCs and that this transmission differs for the direction of the injected spin. Whereas the transmission diagonal in the spin space shows resonant tunneling oscillations as a function of the Rashba strength when the relative angle is zero, the off-diagonal transmission always vanishes. Furthermore, for both transmissions the spin precession oscillations only appear when the QPCs have spin-orbit couplings with different angles. This effect can be also seen when the well length is varied. Importantly, we find that our results are robust against the Rashba intersubband mixing potential for moderately low values of the spin-orbit strength. This implies that the interplay discussed here can be probed with today’s experimental techniques. The content of our paper is structured as follows. Section II describes the system under consideration in the strict one-dimensional limit: a semiconductor channel with a double barrier potential and a Rashba spin-orbit interaction applied on the barriers and central region. We determine the eigenenergies and eigenfunctions in each region. In Sec. III, using matching methods we find the transmission probabilities for a fixed incident spin. We perform an analysis of the transmission oscillations as a function of the relative orientation between the QPC effective magnetic fields and the Rashba interaction, the strength of the spin-orbit coupling and the width of the middle cavity. We find that depending on the direction of the spin polarization in the QPC regions the transitions are dominated by processes that conserve or flip the spin direction. We also observe the combined effect of Datta-Das and Fabry-Perot oscillation and obtain their characteristic frequencies. We find that modifying the strength of the Rashba coupling and the width of central region we can control the transmission probability for each spin. Section IV contains our analysis of the quasi-one-dimensional case. This discussion is important because it quantifies the role of spin-orbit intersubband coupling effects in both the Fabry-Perot and the Datta-Das oscillation modes. Finally, our conclusions are summarized in Sec. V. II Theoretical model We consider a semiconductor layer organized into five different regions as in Figure 1. The blue areas are electrodes that form constrictions in the QPCs II and IV between the left (I) and right (V) reservoir and the central quantum well (QW) depicted in III (red). We take $x$ as the transport direction. For the moment, let us disregard transverse channel effects and consider a purely one-dimensional model. We expect that this is a good approximation when the point contacts support less than one mode. We will later discuss the more realistic case where the electronic waveguides have a nonzero transversal width. Moreover, we take different Rashba potentials acting on the QPCs (strength $\alpha_{1}$) and the QW (strength $\alpha_{2}$). 
Thus, our Hamiltonian reads $$\displaystyle\mathcal{H}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{H}_{0}+\mathcal{H}_{SO1}+\mathcal{H}_{SO2}\,,$$ (1) $$\displaystyle\mathcal{H}_{0}$$ $$\displaystyle=$$ $$\displaystyle\frac{p_{x}^{2}}{2m_{0}}+V_{0}(x)\,,$$ (2) $$\displaystyle\mathcal{H}_{SO1}$$ $$\displaystyle=$$ $$\displaystyle\frac{\alpha_{1}}{\hbar}\left[\left(\vec{\sigma}\times\vec{p}% \right)_{z}\cos\phi+\left(\vec{\sigma}\times\vec{p}\right)_{y}\sin\phi\right]\,,$$ (3) $$\displaystyle\mathcal{H}_{SO2}$$ $$\displaystyle=$$ $$\displaystyle\frac{\alpha_{2}}{\hbar}\left(\vec{\sigma}\times\vec{p}\right)_{z% }\,,$$ (4) where $\mathcal{H}_{0}$ represents the free part of the total Hamiltonian $\mathcal{H}$, with $p_{x}=-i\hbar\partial/\partial_{x}$ the linear momentum operator, $m_{0}$ the conduction-band effective mass of the electrons in the semiconductor heterostructure, and $V_{0}(x)$ the electrostatic potential along the structure. The spin-orbit terms of $\mathcal{H}$ are $\mathcal{H}_{SO1}$ and $\mathcal{H}_{SO2}$, where the first (second) is active on the QPC (QW) only. Here, $\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$ and $\vec{p}=(p_{x},0,0)$ are the Pauli matrices and the momentum vector, respectively. In the central region, the $\alpha_{2}$ spin-orbit field arises from the confining electric field perpendicular to the QW plane and thus lies along the $y$-direction. In the constrictions, there exists in the $\alpha_{1}$ spin-orbit potential an additional contribution from the lateral electric field applied to the QPCs. Therefore, we define the angle $\phi$ that quantifies the strength of the two different components. For $\phi=0$ the contribution from the lateral field is absent and spins are parallel across the system while for $\phi=\pi/2$ the two Rashba fields show a perpendicular configuration. The ability to manipulate the orientation of the Rashba field is crucial for the working principle of the device. For definiteness, we describe the QPC electrostatic potential with a double tunnel barrier of width $L_{1}$ and height $V_{0}$ and the in-between cavity with a quantum well of length $L$ and bottom aligned with that of the reservoirs energy bands, see the sketch in Figure 2. Since the potential is piecewise constant, the eigenstates of $\mathcal{H}$ are readily found, $$\displaystyle\Psi^{0}_{\ell s}(x)$$ $$\displaystyle\equiv$$ $$\displaystyle\!\Psi^{I}_{\ell s}\!=\Psi^{V}_{\ell s}\!=\!\frac{1}{\sqrt{2}}\!% \left(\!\!\begin{array}[]{c}\sqrt{1\!+\!s\,\sin\phi}\\ -is\sqrt{1\!-\!s\,\sin\phi}\end{array}\!\!\right)e^{ik^{0}_{\ell}x},$$ (5) $$\displaystyle\Psi^{1}_{\ell s}(x)$$ $$\displaystyle\equiv$$ $$\displaystyle\!\Psi^{II}_{\ell s}\!=\!\Psi^{IV}_{\ell s}\!=\!\frac{1}{\sqrt{2}% }\!\left(\!\!\begin{array}[]{c}\sqrt{1\!+\!s\,\sin\phi}\\ -is\sqrt{1\!-\!s\,\sin\phi}\end{array}\!\!\right)e^{ik^{1}_{\ell s}x},$$ (6) $$\displaystyle\Psi^{2}_{\ell s}(x)$$ $$\displaystyle\equiv$$ $$\displaystyle\Psi^{III}_{\ell s}=\frac{1}{\sqrt{2}}\left(\begin{array}[]{c}1\\ -is\end{array}\right)e^{ik^{2}_{\ell s}x},$$ (7) where $s=\pm$ is the spin index. For instance, $s=+$ corresponds to an electron with a spin pointing along $-y$ in the quantum well. We also label the states with the index $\ell=\pm$, which denotes the two possible momenta (i.e., the two possible wave propagation directions) for fixed values of spin and energy $E$. 
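As a quick numerical cross-check of Eqs. (5)-(7), the sketch below (our illustration, not part of the original analysis; plain Python with numpy assumed) builds the lead/QPC spinor for a given spin index $s=\pm 1$ and angle $\phi$, verifies its normalization, and confirms that the quantum-well spinor of Eq. (7) is recovered in the $\phi=0$ limit.

```python
import numpy as np

def qpc_spinor(s, phi):
    """Spinor of Eqs. (5)-(6): eigenspinor in the leads/QPCs for spin index s = +1 or -1."""
    up = np.sqrt(1.0 + s * np.sin(phi))
    down = -1j * s * np.sqrt(1.0 - s * np.sin(phi))
    return np.array([up, down]) / np.sqrt(2.0)

def qw_spinor(s):
    """Spinor of Eq. (7): eigenspinor in the central quantum well (Rashba field along y)."""
    return np.array([1.0, -1j * s]) / np.sqrt(2.0)

if __name__ == "__main__":
    phi = np.pi / 4
    for s in (+1, -1):
        chi = qpc_spinor(s, phi)
        print(s, np.vdot(chi, chi).real)                       # normalization -> 1
        print(np.allclose(qpc_spinor(s, 0.0), qw_spinor(s)))   # phi = 0 limit -> Eq. (7)
```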
The wave numbers read, $$\displaystyle k^{0}_{\ell}$$ $$\displaystyle\equiv$$ $$\displaystyle k^{I}_{\ell}=k^{V}_{\ell}=\ell\sqrt{\frac{2m_{0}}{\hbar^{2}}E}\,,$$ (8) $$\displaystyle k^{1}_{\ell s}$$ $$\displaystyle\equiv$$ $$\displaystyle k^{II}_{\ell s}\!=\!k^{IV}_{\ell s}\!\!\!=\!\ell\sqrt{\frac{2m_{% 0}}{\hbar^{2}}(E\!+\!E_{SO1}\!-\!V_{0})}\!-\!s\,k_{SO1}\,,$$ (9) $$\displaystyle k^{2}_{\ell s}$$ $$\displaystyle\equiv$$ $$\displaystyle k^{III}_{\ell s}=\ell\sqrt{\frac{2m_{0}}{\hbar^{2}}(E+E_{SO2})}-% s\,k_{SO2}\,,$$ (10) with $E_{SOi}=m_{0}\alpha_{i}^{2}/(2\hbar^{2})$ ($i=1,2$) the downshift of the energy spectra due to the spin-orbit coupling, which also causes a horizontal band splitting $\Delta k$ characterized by the momentum $k_{SOi}=m_{0}\alpha_{i}/\hbar^{2}$. Equations (8), (9), and (10) depend on the energy of the incident electrons, which in the following we set equal to the Fermi energy $E_{F}$. Finally, we observe that both Eqs. (5) and (6) have the same spinor. Since the spin quantization axis in the reservoirs is not fixed, we select it parallel to the spin direction on the adjacent QPCs. III Transmission properties We are now in a position to solve the scattering problem in Fig. 2. We focus on the case $0<E<V_{0}-E_{SO1}$. This indicates that we are working with evanescent states in the QPC regions (II and IV). Hence, $k^{1}_{\ell s}$ acquires an imaginary part but generally also possesses a real part. We emphasize that this differs from the case of tunnel barriers without spin-orbit coupling Serra et al. (2007). On the other hand, both $k^{0}_{\ell s}$ and $k^{2}_{\ell s}$ are always real numbers. The matching method allows us to determine all reflection and transmission amplitudes for an incoming electron, which we take as impinging from the left. The matching conditions are $$\displaystyle\Psi(\epsilon)-\Psi(-\epsilon)$$ $$\displaystyle=$$ $$\displaystyle 0$$ (11) $$\displaystyle\Psi^{\prime}(\epsilon)-\Psi^{\prime}(-\epsilon)$$ $$\displaystyle=$$ $$\displaystyle\frac{-im_{0}}{\hbar^{2}}\left[-\left(\alpha_{2}(\epsilon)-\alpha% _{2}(-\epsilon)\right)\sigma_{y}\right.$$ (12) $$\displaystyle+\left.\left(\alpha_{1}(\epsilon)\right.\right.$$ $$\displaystyle-$$ $$\displaystyle\left.\left.\alpha_{1}(-\epsilon)\right)\left(\sin\phi\sigma_{z}-% \cos\phi\sigma_{y}\right)\right]\Psi(\epsilon)\,,$$ where $\epsilon$ is an infinitesimal quantity around each interface. Equation (11) is a statement of wave function continuity. Equation (12) is derived from imposing flux conservation Molenkamp et al. (2001). Notice that in the absence of spin-orbit interaction we recover the condition of continuity for the wave function derivative. In the presence of Rashba coupling, this condition must be generalized according to Eq. (12). Since transport is elastic, energy is conserved and the transmission $T^{s^{\prime}s}$ and reflection $R^{s^{\prime}s}$ probabilities depend on a given $E$. However, spin can be mixed after scattering and an incident electron with spin $s$ is reflected or transmitted with spin $s^{\prime}$. First, we analyze in Fig. 3 the main properties of $T^{s^{\prime}s}$ and $R^{s^{\prime}s}$ when we change the relative orientation between the QPC and the QW spin-orbit fields. We choose realistic values for the Rashba strengths and the structure size (cf. Ref. Chuang et al. (2015)). We tune $\phi$ from $0$ (spins parallel-oriented along the system) to $\pi/2$ (spin axes perpendicularly oriented). In Fig. 
3(a) we observe that, independently of the value of $\phi$, the electrons are reflected in the same spin state that the incoming one and that the reflection probability is roughly constant as a function of $\phi$. We understand this effect as due to the spin orientation of electrons in regions I and II, which is the same. In contrast, the transmission probability has both spin contributions for all values of $\phi$ except for the parallel configuration, for which $T^{-+}=0$ since there exists no spin polarization. We also remark that as $\phi$ increases, i.e., as the injected spin direction is rotated from $-y$ to $z$, $T^{-+}$ increases while $T^{++}$ decreases since for higher $\phi$ the perpendicular component of the spin direction becomes larger and its contribution to the transmission thus increases. Let us further clarify the effects discussed above considering a few special cases. If we make $L_{1}=0$ (no tunnel barriers), the reflection probability is trivially zero, see Fig. 3(b), and the transmission functions follow the same behavior as in Fig. 3(a) for which $L_{1}$ is nonzero. In Fig. 3(c) we observe that if we turn off the Rashba coupling on the QPCs ($\alpha_{1}=0$), the transmission decreases as compared with the values in Fig. 3(a). As a consequence, we infer that the spin-orbit coupling enhances the transmission properties of our double-barrier system. This may seem counterintuitive—when the Rashba interaction is present, one would naively expect more scattering and smaller transmission. However, we stress that the spin-orbit coupling lowers the energy band bottom of the barrier, thus amplifying the role of the evanescent states (their characteristic decay length increases) and reducing consequently the reflection probability. Finally, when we take $\alpha_{2}=0$ (no Rashba interaction in the quantum well) all transport coefficients become independent of the angle $\phi$ [Fig. 3(d)] since the spin orientation in the central region is fixed. Furthermore, the reflection becomes higher due to the particular energy value, which lies around a resonance valley (see below). Before proceeding, we notice that the case $\phi=0$ can be considerably simplified. The second term in the right hand side of Eq. (3) cancels out and we can write the projection of the Schrödinger equation $(\mathcal{H}-E)\Psi=0$ onto the spinor pointing along the $-y$ direction as $$\displaystyle\Big{[}-\frac{\hbar^{2}}{2m_{0}}\frac{d^{2}}{dx^{2}}-is\left(% \alpha_{1}+\alpha_{2}\right)\frac{d}{dx}$$ (13) $$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad+V_{0}-E\Big{]}\Psi_{s}(x% )=0\,,$$ where $\alpha_{1}$ and $V_{0}$ are nonzero in regions II and IV whereas $\alpha_{2}$ is nonvanishing in region III only. Now, if we apply an appropriate gauge transformation $\Psi_{s}(x)=\Psi(x)\exp[-is\frac{m_{0}}{\hbar^{2}}\int dx^{\prime}(\alpha_{1}+% \alpha_{2})]$ we can recast Eq. (13) as $$\displaystyle\left(-\frac{\hbar^{2}}{2m_{0}}\frac{d^{2}}{dx^{2}}+V_{1}-V_{2}-E% \right)\Psi(x)=0\,,$$ (14) which is independent of the spin. Here, $V_{1}=V_{0}-E_{SO1}$ in regions II and IV and zero otherwise while $V_{2}=E_{SO2}$ in region III. This potential corresponds to a double barrier of renormalized height $V_{1}$ and a quantum well of depth $V_{2}$ in the central region. Clearly, the spin-orbit coupling effectively lowers the top of the barrier potential as discussed earlier. 
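Because Eq. (14) is an ordinary spin-independent scattering problem with a piecewise-constant potential, its transmission can be evaluated with a standard transfer-matrix construction. The sketch below is a minimal illustration of this idea (it is not the matching code used for the figures); the effective mass and all lengths and energies are placeholder values chosen only so that the example runs.

```python
import numpy as np

HB2_2ME = 38.1  # hbar^2/(2 m_e) in meV nm^2 (approximate)

def k_of(E, V, hb2_2m):
    """Complex wave number in a region of constant potential V (evanescent if E < V)."""
    return np.sqrt((E - V) / hb2_2m + 0j)

def interface(ka, kb, x):
    """Transfer matrix relating (A, B) plane-wave amplitudes across an interface at x."""
    ep = np.exp(1j * (ka - kb) * x)
    em = np.exp(-1j * (ka + kb) * x)
    return 0.5 * np.array([[(1 + ka / kb) * ep, (1 - ka / kb) * em],
                           [(1 - ka / kb) / em, (1 + ka / kb) / ep]])

def transmission(E, V_regions, x_interfaces, m_rel=0.033):
    """T(E) through a piecewise-constant potential; first/last regions are the leads."""
    hb2_2m = HB2_2ME / m_rel
    ks = [k_of(E, V, hb2_2m) for V in V_regions]
    M = np.eye(2, dtype=complex)
    for ka, kb, x in zip(ks[:-1], ks[1:], x_interfaces):
        M = interface(ka, kb, x) @ M
    r = -M[1, 0] / M[1, 1]          # no incoming wave from the right
    t = M[0, 0] + M[0, 1] * r
    return abs(t) ** 2              # equal lead wave numbers: no velocity factor needed

# Effective profile of Eq. (14): two barriers of height V1 = V0 - E_SO1 and width L1,
# separated by a well of depth V2 = E_SO2 and length L (illustrative numbers only).
L1, L, V1, V2 = 10.0, 200.0, 8.0, 1.5                 # nm, nm, meV, meV
interfaces = np.cumsum([0.0, L1, L, L1])              # interface positions in nm
profile = [0.0, V1, -V2, V1, 0.0]                     # lead, barrier, well, barrier, lead
print(transmission(E=5.0, V_regions=profile, x_interfaces=interfaces))
```

Scanning the well depth or length in such a sketch reproduces the qualitative picture discussed next: new quasibound states enter the well and show up as transmission resonances.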
Solving the scattering problem, we obtain a resonant condition that depends on all the parameters of our system, $$\displaystyle k_{\ell s}^{2}L=n\pi+f(\alpha_{1},\alpha_{2},L_{1})\,,$$ (15) where $k_{\ell s}^{2}$ is the wave number in the central region [Eq. (10)], $n=1,2\ldots$ labels the different resonances and $f(\alpha_{1},\alpha_{2},L_{1})$ is a complicated function of $\alpha_{1}$, $\alpha_{2}$ and $L_{1}$ but independent of the QW length. The condition given by Eq. (15) can be numerically shown to hold also for the general case $\phi\neq 0$. However, in this case spin precession effects must also be taken into account. Figure 4 shows how our system reacts to changes applied to the Rashba strength in the central region, $\alpha_{2}$. The parallel configuration ($\phi=0$) is plotted in Fig. 4(a), where we observe resonance peaks for certain values of spin-orbit interaction and a fixed Fermi energy. As the Rashba coupling increases, the quantum well becomes deeper and, as a consequence, there appear new quasibound states between the two barriers that fulfill Eq. (15). When the energy of the incident electron hits one of these states, the transmission probability is maximal. Therefore, the spin-orbit interaction acts in our system as a gate voltage by shifting the resonances of the quantum well López et al. (2007). Our device then behaves as an analog of a Fabry-Perot resonator tuned with a spin-orbit potential. Note that the resonances appear for $T^{++}$ only since for $\phi=0$ the spins are parallel and one obtains $T^{-+}=0$ always. This can be better understood if we take $L_{1}=0$, in which case the double barrier potential disappears and we obtain a perfectly transparent system independently of the depth of the quantum well [Fig. 4(b)]. Here, the energy of the electron is sufficiently high that its wave is mostly unaffected by the well discontinuity. Only for strong enough Rashba strengths does the transmission show weak oscillations (Ramsauer effect). We also find that the off-diagonal transmission coefficient is zero. This originates from the fact that in the parallel configuration the spin cannot be flipped, in agreement with the case $\phi=0$ in Fig. 3(d). In Figs. 4(c) and (d) we take $\phi=\pi/4$, i.e., the wave is spin polarized at $45^{\circ}$ with respect to $-y$. Let us first eliminate the double-barrier potential ($L_{1}=0$) and focus on the effects from the central region only, see Fig. 4(d). We observe that both $T^{++}$ and $T^{-+}$ are nonzero and oscillate out of phase. These oscillations are a consequence of the spin transistor effect predicted by Datta and Das Datta and Das (1990). We find $T^{++}=1$ and $T^{-+}=0$ for $\alpha_{2}=0$, but then both transmissions become modulated as we increase the spin-orbit strength since the QW energy bands show a larger spin splitting $\Delta k=m_{0}\alpha_{2}/\hbar^{2}$. For certain values of $\alpha_{2}$, $T^{++}$ ($T^{-+}$) attains its minimum (maximum) value of $0.5$. Importantly, the nature of these transmission oscillations fundamentally differs from the resonances in Fig. 4(a). To see this, we next obtain the spin-precession frequency from the relation Datta and Das (1990) $$\displaystyle T^{++}\propto\cos^{2}(\Delta kL)\,.$$ (16) This expression implies that the maximum condition is reached at $\Delta kL=n^{\prime}\pi$ ($n^{\prime}=1,2\ldots$). For the parameters of Fig. 4(d) this corresponds to $\alpha_{2}\simeq 13.6n^{\prime}$ meV nm. 
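To make the numbers in Eq. (16) concrete, the short sketch below evaluates the spin-orbit strengths at which $\Delta kL=n^{\prime}\pi$, i.e. $\alpha_{2}=n^{\prime}\pi\hbar^{2}/(m_{0}L)$, using $\Delta k=m_{0}\alpha_{2}/\hbar^{2}$ as defined in the text. The effective mass and the cavity length below are assumed values chosen for illustration; they are not the parameters of Fig. 4(d).

```python
import numpy as np

HB2_ME = 76.2                       # hbar^2 / m_e in meV nm^2 (approximate)

def alpha2_maxima(L, n_max=5, m_rel=0.033):
    """alpha_2 values (meV nm) satisfying Delta k * L = n' * pi, n' = 1..n_max (Eq. 16)."""
    hb2_m0 = HB2_ME / m_rel         # hbar^2 / m_0 for the assumed effective mass
    return np.array([n * np.pi * hb2_m0 / L for n in range(1, n_max + 1)])

print(alpha2_maxima(L=500.0))       # L in nm is an assumed cavity length
```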
More interestingly, we now turn on the double barrier potential and allow for the interplay between Fabry-Perot and Datta-Das oscillations. The superposition of the two effects can be seen in Fig. 4(c). We observe that (i) the resonance peaks for $T^{++}$ become somewhat quenched and (ii) the off-diagonal coefficient $T^{-+}$ shows an irregular series of oscillating peaks. The effect is more intense in the perpendicular configuration ($\phi=\pi/2$), see Fig. 4(e). Both transmissions oscillate now between $0$ and $1$ with opposite phases [Fig. 4(f)] and the combination of both types of oscillations yields the curves depicted in Fig. 4(e). It is now natural to ask about the effect of tuning the QW length $L$. We show this in Fig. 5 for the same orientation angles as in Fig. 4 but fixing the Rashba strength $\alpha_{2}$. When $\phi=0$, Fig. 5(a) presents narrowly spaced oscillations for $T^{++}$ since, as we increase the width of the central cavity, there appear more internal modes that, at fixed values of $L$, are resonant with the incident wave (Fabry-Perot effect). The resonant condition from Eq. (15) implies that the transmission is peaked at $L\simeq(47.5n+8.3)$ nm ($n=1,2\ldots$). For $\phi=0$ spin flipping is not possible and $T^{-+}=0$. When the constrictions are turned off ($L_{1}=0$), we have a completely open system and the transmission stays constant at its maximum value, see Fig. 5(b). As we increase the spin orientation angle [$\phi=\pi/4$ in Figs. 5(c) and (d) and $\phi=\pi/2$ in Figs. 5(e) and (f)], the spin transistor effect begins to contribute as we observe a spin precession for both $T^{++}$ and $T^{-+}$, modulated by their characteristic frequency, namely, $L\simeq 237.6n^{\prime}$ nm ($n^{\prime}=1,2\ldots$). We find that when $L_{1}=0$ (no tunnel barriers) the Fabry-Perot resonances disappear and only the Datta-Das oscillations are present [Figs. 5(d) and (f)], as expected. Remarkably, when both oscillation modes are present we find that the transmission becomes quasiperiodic [Figs. 5(c) and (e)]. This effect arises from the combination of at least two oscillations whose characteristic frequencies are incommensurate Ott (1993). In our system, the Fabry-Perot frequency is given by $f_{FP}=\frac{1}{\pi}\sqrt{\frac{2m_{0}}{\hbar^{2}}E+k_{SO2}^{2}}$ whereas that of the spin precession motion is expressed as $f_{sp}=2k_{SO2}/\pi$. Clearly, their ratio $f_{FP}/f_{sp}$ is quite generally an irrational number. In related systems, quasiperiodic oscillations have been predicted to occur in double quantum dots with incommensurate capacitance couplings Ruzin et al. (1992) and in ac-driven superlattices where the ratio between the ac frequency and the internal frequency is not a rational number Sánchez et al. (2001). Importantly, in our case the origin of both oscillations is purely quantum (wave interference and spin precession). IV Quasi-one dimensional case The above discussion demonstrates that two types of transmission oscillations can coexist in a double-barrier spin-orbit coupled resonant tunneling diode. However, the results were strictly limited to the one-dimensional case. We now consider the more realistic situation of a double QPC embedded in a quantum wire of finite width. The problem is not a mere extension that takes into account transverse channels since these channels become coupled via the Rashba intersubband mixing potential. This term causes spin-flip transitions between adjacent channels and generally destroys the spin coherent oscillations Gelabert et al. (2010). 
Furthermore, it yields Fano lineshapes Sánchez and Serra (2006) that dramatically alter the conductance curves Sánchez and Serra (2006); Shelykh and Galkin (2004); Zhang et al. (2005); López et al. (2007). We consider a planar waveguide formed in a two-dimensional electron gas lying on the $x$–$y$ plane. We keep $x$ as the transport direction. Hence, the $\mathcal{H}_{0}$ term in Eq. (2) is replaced with $$\mathcal{H}_{0}=\frac{p_{x}^{2}+p_{y}^{2}}{2m_{0}}+V(x,y)\,,$$ (17) where $p_{y}=-i\hbar\partial/\partial_{y}$. Equations (3) and (4) remain valid but we now take $\vec{p}=(p_{x},p_{y},0)$. The potential $V(x,y)$ confines electrons in the (transversal) $y$ direction and includes in $x$ two identical constrictions that define an intermediate region (the cavity) of length $L$. In the numerical simulations we consider a hard-wall confinement potential along $y$ and two square quantum point contacts in the $x$ direction. The system parameters are depicted in Fig. 6. We take a given quantization axis $\hat{n}$ for the spin in the left and right contacts. The spin eigenfunctions are then denoted with $\chi_{s}(\eta)$, with $s=\pm$ the eigenstate label and $\eta=\uparrow,\downarrow$ the discrete variable. The full wave function $\Psi(x,y,\eta)$ is expanded in spin channels $\psi_{s}(x,y)$ as $$\Psi(x,y,\eta)=\sum_{s^{\prime}}\psi_{s^{\prime}}(x,y)\chi_{s^{\prime}}(\eta)\,.$$ (18) Projecting the Schrödinger equation onto the spin basis, we obtain coupled channel equations, $$\displaystyle\left[-\frac{\hbar^{2}\nabla^{2}}{2m}+V(x,y)\right]\psi_{s}(x,y)$$ (19) $$\displaystyle-$$ $$\displaystyle\frac{i\hbar}{2}\sum_{s^{\prime}}{\langle s|\sigma_{y}|s^{\prime}% \rangle\left(V_{A}(x)\frac{\partial}{\partial x}+\frac{\partial}{\partial x}V_% {A}(x)\right)\,\psi_{s^{\prime}}(x,y)}$$ $$\displaystyle-$$ $$\displaystyle\frac{i\hbar}{2}\sum_{s^{\prime}}{\langle s|\sigma_{z}|s^{\prime}% \rangle\left(V_{B}(x)\frac{\partial}{\partial x}+\frac{\partial}{\partial x}V_% {B}(x)\right)\,\psi_{s^{\prime}}(x,y)}$$ $$\displaystyle+$$ $$\displaystyle\frac{i\hbar}{2}\sum_{s^{\prime}}{\langle s|\sigma_{x}|s^{\prime}% \rangle\,V_{A}(x)\frac{\partial}{\partial y}}\,\psi_{s^{\prime}}(x,y)\;,$$ where the potentials $V_{A}(x)$ and $V_{B}(x)$ are responsible for the coupling between the different spin channels $s=\pm$. In general, the Pauli-matrix elements in Eq. (19) depend on $\hat{n}$. To connect with the 1D system discussed in Sec. III we take $\hat{n}=-\hat{y}$, which makes the $\sigma_{y}$ term diagonal, but the other two with $\sigma_{x}$ and $\sigma_{z}$ remain nondiagonal. Coupling between opposite spin states is, therefore, always present in the quasi-1D case when $(V_{A},V_{B})\neq 0$. In Eq. (19) the potentials $V_{A}$ and $V_{B}$ read $$\displaystyle V_{A}(x)$$ $$\displaystyle=$$ $$\displaystyle\alpha_{1}\cos\phi\,\mathcal{P}_{1}(x)+\alpha_{2}\mathcal{P}_{2}(% x)+\alpha_{1}\cos\phi\,\mathcal{P}_{3}(x)\;,$$ $$\displaystyle V_{B}(x)$$ $$\displaystyle=$$ $$\displaystyle-\alpha_{1}\sin\phi\,\mathcal{P}_{1}(x)-\alpha_{1}\sin\phi\,% \mathcal{P}_{3}(x)\,,$$ (20) where the projectors $\mathcal{P}_{i}(x)$ partition the $x$ domain into regions $i=1$ (left QPC), $i=2$ (QW) and $i=3$ (right QPC). These two potentials yield qualitatively different spin-flip couplings, since $V_{B}$ only appears with $\partial/\partial x$, while $V_{A}$ appears with both $\partial/\partial x$ and $\partial/\partial y$. As before, $\phi$ is the angle with the $y$-axis of the Rashba field in the two QPCs (assumed in the $yz$ plane). 
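The structure of Eq. (20) is straightforward to tabulate on a grid, which is also how these couplings would be supplied to a numerical solver. The sketch below is a minimal illustration only: the grid, lengths and Rashba strengths are placeholder values, and the coordinate convention for where the projectors switch on and off is our own.

```python
import numpy as np

def projectors(x, L1, L):
    """Indicator functions P_1, P_2, P_3 of Eq. (20): left QPC, central well, right QPC.
    The region boundaries below follow our own coordinate convention."""
    P1 = ((x >= 0.0) & (x < L1)).astype(float)
    P2 = ((x >= L1) & (x < L1 + L)).astype(float)
    P3 = ((x >= L1 + L) & (x < 2.0 * L1 + L)).astype(float)
    return P1, P2, P3

def coupling_potentials(x, alpha1, alpha2, phi, L1, L):
    """Spin-channel coupling potentials V_A(x) and V_B(x) of Eq. (20) on a grid x."""
    P1, P2, P3 = projectors(x, L1, L)
    VA = alpha1 * np.cos(phi) * (P1 + P3) + alpha2 * P2
    VB = -alpha1 * np.sin(phi) * (P1 + P3)
    return VA, VB

# Illustrative geometry (nm) and Rashba strengths (meV nm); all values are placeholders.
x = np.linspace(-50.0, 270.0, 641)
VA, VB = coupling_potentials(x, alpha1=20.0, alpha2=30.0, phi=np.pi / 4, L1=10.0, L=200.0)
```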
Remarkably, $V_{B}(x)$ vanishes with $\phi=0$ and then, for quantization axis along $y$, the only spin-flip coupling in Eq. (19) is through the last term depending on $\partial/\partial_{y}$. To be effective, this spin-flip coupling requires that at least two transverse modes (differing in the nodes along $y$) are propagating in the asymptotic leads Sánchez and Serra (2006). Otherwise, as we show below, there is no spin-flip when $\hat{n}$ lies along $y$. Equation (19) is solved with the quantum-transmitting boundary method Lent and Kirkner (1990) on a uniform grid. The resulting transmission probability as a function of the middle spin-orbit strength $\alpha_{2}$ is shown in Fig. 7. As mentioned, the transmission is expressed in the $-y$ direction basis. Like in Fig. 4 we distinguish the case with the constrictions (left panels) from the case without the QPCs (right panels). For $\phi=0$ [Fig. 7(a)] we quench the spin precession oscillations since the injected spins are parallel to the Rashba field. Then, the cross transmission $T^{-+}$ vanishes identically. The resonant tunneling oscillations qualitatively agree with the one-dimensional case [cf. Fig. 4(a)]. Likewise, the Ramsauer oscillations that arise when the QPCs are absent [Fig. 7(b)] are visible at large values of $\alpha_{2}$ [cf. Fig. 4(b)]. The agreement in both cases is good for small values of $\alpha_{2}$. This is reasonable since Rashba intersubband coupling is negligible if $\alpha_{2}\ll\hbar^{2}/mL_{y}$ Datta and Das (1990). For larger $\alpha_{2}$ we observe in Fig. 7(b) sharp dips that originate from the Fano-Rashba effect Sánchez and Serra (2006) and that are unique to quasi-one dimensional waveguides with nonuniform spin-orbit coupling as in our case. Strikingly enough, as $\alpha_{2}$ increases we detect in Fig. 7(a) more resonant peaks than in the strict one-dimensional case. We explain this effect as follows. For $\alpha_{1}=\alpha_{2}=0$ the cavity works as a resonator with multiple resonances. If the cavity is closed, the bound levels can be described with a pair of natural numbers $(n_{1},n_{2})$ since its potential corresponds to a two-dimensional infinite well Cohen-Tannoudji et al. (1977). To a good approximation, the electronic scattering when the cavity is open obeys a conservation law that fixes the transversal component of motion Baskin et al. (2015). Accordingly, $n_{2}$ is conserved upon traversing the cavity and the transmission shows less peaks than bound states in the closed cavity. In the presence of spin-orbit coupling, the conservation law does not have to hold and more resonances then emerge. For $\phi=\pi/4$ the injected electrons are spin rotated with regard to the $\alpha_{2}$ field and spin precession oscillations of the Datta-Das type are expected. This can be more distinctly seen in Fig. 7(d), where the QPC widths are set to zero. Up to $\alpha_{2}\simeq 30$ meV nm the oscillations are smooth as in Fig. 4(d). For larger $\alpha_{2}$ the subband mixing potential starts to play a significant role. As a consequence of the spin mixing induced by the $p_{y}$ term, the precession oscillations become irregular Gelabert et al. (2010) and the transmission curves can no longer be determined by a single frequency. When combined with the Fabry-Perot oscillations, the transmission lineshapes are transformed into nonharmonic functions of $\alpha_{2}$ [see Fig. 7(c)] and our previous analysis in terms of quasiperiodic oscillations does not hold. 
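The criterion quoted above, $\alpha_{2}\ll\hbar^{2}/(m_{0}L_{y})$, is easy to evaluate. The one-liner below does so for an assumed wire width $L_{y}$ and effective mass (both placeholders, not the values behind Fig. 7), giving the scale against which the $\alpha_{2}$ values of Fig. 7 can be compared.

```python
HB2_ME = 76.2                                   # hbar^2 / m_e in meV nm^2 (approximate)

def intersubband_scale(L_y, m_rel=0.033):
    """Spin-orbit scale hbar^2/(m_0 L_y), in meV nm, below which subband mixing is weak."""
    return (HB2_ME / m_rel) / L_y

print(intersubband_scale(L_y=50.0))             # ~46 meV nm for these assumed parameters
```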
For completeness, we also show the case $\phi=\pi/2$ for which the Datta-Das frequency is higher (the spins are injected perpendicular to the Rashba field) but the spin oscillations turn out to be nonuniform as $\alpha_{2}$ grows, as illustrated in Fig. 7(f). The overall transmission curves [Fig. 7(e)] qualitatively follow the pattern observed in the case $\phi=\pi/4$. In Fig. 8 we analyze the dependence on the central cavity width $L$. We set the spin-orbit strength $\alpha_{2}$ to a moderate value to highlight the effects due to the Rashba intersubband coupling term. Figure 8(a) shows the transmission for $L_{1}=0$ and $\phi=\pi/2$. This implies that only oscillations from the spin dynamics are present since resonant tunneling effects are not allowed. Unlike in Fig. 5(f), here the oscillations are not uniform for both transmission probabilities, $T^{++}$ and $T^{-+}$. The Fabry-Perot peaks are more regular as shown in Fig. 8(b), where $L_{1}$ is nonzero and $\phi=0$ in order to forbid spin precession oscillations. This suggests that the Rashba intersubband potential has a stronger impact on the Datta-Das oscillations. Finally, in Fig. 8(c) we show characteristic transmission curves for nonzero $L_{1}$ and $\phi=\pi/2$, in which case both oscillation modes come into play. As compared to the one-dimensional case in Fig. 5(e) the oscillations are now more intricate: their amplitudes strongly fluctuate with increasing $L$ and their frequency cannot be described in terms of combinations of individual frequencies. V Conclusions To sum up, we have investigated a resonant tunneling device with spin-orbit coupling in the tunnel barriers and the quantum well. We have found that both Fabry-Perot and spin precession oscillations combine into complex patterns that can be explained with the aid of quasiperiodic modes in the strict one-dimensional case. For the more realistic setup where the conducting channel has a finite width we have discussed the important role of the Rashba intersubband coupling term as the spin-orbit strength increases. Our analysis is relevant for future designs of spin transistors with tunnel barriers since it emphasizes the quick deterioration of coherent oscillations even in the absence of disorder. The low-temperature conductance at linear response would be straightforwardly derived from the transmission curves reported here. One could also address high-field transport properties, in which case inelastic transitions in three-dimensional resonant tunneling diodes can change the current–voltage characteristics Stone and Lee (1985); Buttiker (1988). Another important issue for future works is the role of electron-electron interactions, which may lead to instabilities and hysteretic curves in double barrier systems Martin et al. (1994). Furthermore, magnetically doped resonant tunneling devices have been shown to be quite sensitive to external magnetic fields Slobodskyy et al. (2003, 2007). However, the effect of spin-orbit coupling in these systems is still an open issue. Finally, we would like to mention the closely related systems known as chaotic dots Marcus et al. (1992) as they are built as semiconductor cavities between a pair of quantum point contacts, similarly to the two-dimensional cavities considered in the last part of our work. In contrast, our cavities have a regular shape. Interestingly, closed chaotic dots exhibit Coulomb blockade peak fluctuations Aleiner et al. 
(2002) and subsequent discussions might then consider if these fluctuations are affected by the presence of spin-orbit interactions. References Datta and Das (1990) S. Datta and B. Das, Electronic analog of the electro-optic modulator, Applied Physics Letters 56, 665 (1990). Rashba (1960) E. I. Rashba, Properties of semiconductors with an extremum loop I. Cyclotron and combinational resonance in a magnetic field perpendicular to the plane of the loop, Sov. Phys. Solid State 2, 1109 (1960). Fabian et al. (2007) J. Fabian, A. Matos-Abiague, C. Ertler, P. Stano, and I. Zutic, Semiconductor spintronics, Acta Phys. Slov. 57, 565 (2007). Bercioux and Lucignano (2015) D. Bercioux and P. Lucignano, Quantum transport in Rashba spin–orbit materials: a review, Reports on Progress in Physics 78, 106001 (2015). Nitta et al. (1997) J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Gate control of spin-orbit interaction in an inverted $\mathrm{{I}n}_{0.53}\mathrm{{G}a}_{0.47}\mathrm{{A}s}/\mathrm{{I}n}_{0.52}% \mathrm{{A}l}_{0.48}\mathrm{{A}s}$ heterostructure, Phys. Rev. Lett. 78, 1335 (1997). Engels et al. (1997) G. Engels, J. Lange, T. Schäpers, and H. Lüth, Experimental and theoretical approach to spin splitting in modulation-doped $\mathrm{{I}n}_{\mathrm{x}}\mathrm{{G}a}_{1\mathrm{-}\mathrm{x}}\mathrm{{A}s}/% \mathrm{{I}n{P}}$ quantum wells for B$\rightarrow$0, Phys. Rev. B 55, R1958 (1997). Schmidt et al. (2000) G. Schmidt, D. Ferrand, L. W. Molenkamp, A. T. Filip, and B. J. van Wees, Fundamental obstacle for electrical spin injection from a ferromagnetic metal into a diffusive semiconductor, Phys. Rev. B 62, R4790 (2000). Rashba (2000) E. I. Rashba, Theory of electrical spin injection: Tunnel contacts as a solution of the conductivity mismatch problem, Phys. Rev. B 62, R16267 (2000). Fert and Jaffrès (2001) A. Fert and H. Jaffrès, Conditions for efficient spin injection from a ferromagnetic metal into a semiconductor, Phys. Rev. B 64, 184420 (2001). Koo et al. (2009) H. C. Koo, J. H. Kwon, J. Eom, J. Chang, S. H. Han, and M. Johnson, Control of spin precession in a spin-injected field effect transistor, Science 325, 1515 (2009). Jedema et al. (2002) F. J. Jedema, H. B. Heersche, A. T. Filip, J. J. A. Baselmans, and B. J. van Wees, Electrical detection of spin precession in a metallic mesoscopic spin valve, Nature (London) 416, 713 (2002). Jeong and Lee (2006) J.-S. Jeong and H.-W. Lee, Ballistic spin field-effect transistors: Multichannel effects, Phys. Rev. B 74, 195311 (2006). Gelabert et al. (2010) M. M. Gelabert, L. Serra, D. Sánchez, and R. López, Multichannel effects in Rashba quantum wires, Phys. Rev. B 81, 165317 (2010). Sherman and Sinova (2005) E. Y. Sherman and J. Sinova, Physical limits of the ballistic and nonballistic spin-field-effect transistor: Spin dynamics in remote-doped structures, Phys. Rev. B 72, 075318 (2005). Nikolić and Souma (2005) B. K. Nikolić and S. Souma, Decoherence of transported spin in multichannel spin-orbit-coupled spintronic devices: Scattering approach to spin-density matrix from the ballistic to the localized regime, Phys. Rev. B 71, 195328 (2005). Xu et al. (2014) L. Xu, X.-Q. Li, and Q.-f. Sun, Revisit the spin-fet: Multiple reflection, inelastic scattering, and lateral size effects, Sci. Rep. 4, 7527 (2014). Sun et al. (2011) B. Y. Sun, P. Zhang, and M. W. Wu, Voltage-controlled spin precession in InAs quantum wells, Semiconductor Science and Technology 26, 075005 (2011). Wójcik et al. (2014) P. Wójcik, J. Adamowski, B. J. Spisak, and M. 
Wołoszyn, Spin transistor operation driven by the Rashba spin-orbit coupling in the gated nanowire, Journal of Applied Physics 115, 104310 (2014). Pala et al. (2004) M. G. Pala, M. Governale, J. König, and U. Zülicke, Universal Rashba spin precession of two-dimensional electrons and holes, EPL (Europhysics Letters) 65, 850 (2004). Agnihotri and Bandyopadhyay (2010) P. Agnihotri and S. Bandyopadhyay, Analysis of the two-dimensional Datta–Das spin field effect transistor, Physica E: Low-dimensional Systems and Nanostructures 42, 1736 (2010). Zainuddin et al. (2011) A. N. M. Zainuddin, S. Hong, L. Siddiqui, S. Srinivasan, and S. Datta, Voltage-controlled spin precession, Phys. Rev. B 84, 165306 (2011). Gelabert and Serra (2011) M. M. Gelabert and L. Serra, Conductance oscillations of a spin-orbit stripe with polarized contacts, The European Physical Journal B 79, 341 (2011). Alomar et al. (2015) M. I. Alomar, L. Serra, and D. Sánchez, Seebeck effects in two-dimensional spin transistors, Phys. Rev. B 91, 075418 (2015). Chuang et al. (2015) P. Chuang, S. Ho, L. Smith, F. Sfigakis, M. Pepper, C. Chen, J. Fan, J. Griffiths, I. Farrer, H. Beere, et al., All-electric all-semiconductor spin field-effect transistors, Nature Nanotech. 10, 35 (2015). Debray et al. (2009) P. Debray, S. M. S. Rahman, J. Wan, R. S. Newrock, M. Cahay, A. T. Ngo, S. E. Ulloa, S. T. Herbert, M. Muhammad, and M. Johnson, All-electric quantum point contact spin-polarizer, Nature Nanotech. 4, 759 (2009). Nowak and Szafran (2013) M. P. Nowak and B. Szafran, Spin current source based on a quantum point contact with local spin-orbit interaction, Applied Physics Letters 103, 202404 (2013). Schliemann et al. (2003) J. Schliemann, J. C. Egues, and D. Loss, Nonballistic spin-field-effect transistor, Phys. Rev. Lett. 90, 146801 (2003). Hall et al. (2003) K. C. Hall, W. H. Lau, K. Gündoğdu, M. E. Flatté, and T. F. Boggess, Nonmagnetic semiconductor spin transistor, Applied Physics Letters 83, 2937 (2003). Wang et al. (2003) B. Wang, J. Wang, and H. Guo, Quantum spin field effect transistor, Phys. Rev. B 67, 092408 (2003). Awschalom and Samarth (2009) D. Awschalom and N. Samarth, Spintronics without magnetism, Physics 2, 50 (2009). Wunderlich et al. (2010) J. Wunderlich, B.-G. Park, A. C. Irvine, L. P. Zârbo, E. Rozkotová, P. Nemec, V. Novák, J. Sinova, and T. Jungwirth, Spin Hall effect transistor, Science 330, 1801 (2010). Liu et al. (2012) J.-F. Liu, K. S. Chan, and J. Wang, Nonmagnetic spin-field-effect transistor, Applied Physics Letters 101, (2012). Serra et al. (2007) L. Serra, D. Sánchez, and R. López, Evanescent states in quantum wires with Rashba spin-orbit coupling, Phys. Rev. B 76, 045339 (2007). Sablikov and Tkach (2007) V. A. Sablikov and Y. Y. Tkach, Evanescent states in two-dimensional electron systems with spin-orbit interaction and spin-dependent transmission through a barrier, Phys. Rev. B 76, 245321 (2007). Voskoboynikov et al. (1999) A. Voskoboynikov, S. S. Liu, and C. P. Lee, Spin-dependent tunneling in double-barrier semiconductor heterostructures, Phys. Rev. B 59, 12514 (1999). de Andrada e Silva and La Rocca (1999) E. A. de Andrada e Silva and G. C. La Rocca, Electron-spin polarization by resonant tunneling, Phys. Rev. B 59, R15583 (1999). Koga et al. (2002) T. Koga, J. Nitta, H. Takayanagi, and S. Datta, Spin-filter device based on the Rashba effect using a nonmagnetic resonant tunneling diode, Phys. Rev. Lett. 88, 126601 (2002). Ting and Cartoixà (2002) D. Z.-Y. Ting and X. 
Cartoixà, Resonant interband tunneling spin filter, Applied Physics Letters 81, 4198 (2002). Glazov et al. (2005) M. M. Glazov, P. S. Alekseev, M. A. Odnoblyudov, V. M. Chistyakov, S. A. Tarasenko, and I. N. Yassievich, Spin-dependent resonant tunneling in symmetrical double-barrier structures, Phys. Rev. B 71, 155313 (2005). Isić et al. (2010) G. Isić, D. Indjin, V. Milanović, J. Radovanović, Z. Ikonić, and P. Harrison, Phase-breaking effects in double-barrier resonant tunneling diodes with spin-orbit interaction, Journal of Applied Physics 108, 044506 (2010). Molenkamp et al. (2001) L. W. Molenkamp, G. Schmidt, and G. E. W. Bauer, Rashba hamiltonian and electron transport, Phys. Rev. B 64, 121202 (2001). López et al. (2007) R. López, D. Sánchez, and L. Serra, From Coulomb blockade to the Kondo regime in a Rashba dot, Phys. Rev. B 76, 035307 (2007). Ott (1993) E. Ott, Chaos in dynamical systems (Cambridge University Press, 1993). Ruzin et al. (1992) I. M. Ruzin, V. Chandrasekhar, E. I. Levin, and L. I. Glazman, Stochastic Coulomb blockade in a double-dot system, Phys. Rev. B 45, 13469 (1992). Sánchez et al. (2001) D. Sánchez, G. Platero, and L. L. Bonilla, Quasiperiodic current and strange attractors in ac-driven superlattices, Phys. Rev. B 63, 201306 (2001). Sánchez and Serra (2006) D. Sánchez and L. Serra, Fano-Rashba effect in a quantum wire, Phys. Rev. B 74, 153313 (2006). Shelykh and Galkin (2004) I. A. Shelykh and N. G. Galkin, Fano and Breit-Wigner resonances in carrier transport through Datta and Das spin modulators, Phys. Rev. B 70, 205328 (2004). Zhang et al. (2005) L. Zhang, P. Brusheim, and H. Q. Xu, Multimode electron transport through quantum waveguides with spin-orbit interaction modulation: Applications of the scattering matrix formalism, Phys. Rev. B 72, 045347 (2005). Lent and Kirkner (1990) C. S. Lent and D. J. Kirkner, The quantum transmitting boundary method, Journal of Applied Physics 67, 6353 (1990). Cohen-Tannoudji et al. (1977) C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics, vol. 1 (Wiley-Interscience, New York, USA, 1977). Baskin et al. (2015) L. Baskin, P. Neittaanmäki, B. Plamenevskii, and O. Sarafanov, Resonant tunneling. Quantum waveguides of variable cross-section, asymptotics, numerics, and applications (Springer, Cham, Switzerland, 2015). Stone and Lee (1985) A. D. Stone and P. A. Lee, Effect of inelastic processes on resonant tunneling in one dimension, Phys. Rev. Lett. 54, 1196 (1985). Buttiker (1988) M. Buttiker, Coherent and sequential tunneling in series barriers, IBM Journal of Research and Development 32, 63 (1988). Martin et al. (1994) A. D. Martin, M. L. F. Lerch, P. E. Simmonds, and L. Eaves, Observation of intrinsic tristability in a resonant tunneling structure, Applied Physics Letters 64, 1248 (1994). Slobodskyy et al. (2003) A. Slobodskyy, C. Gould, T. Slobodskyy, C. R. Becker, G. Schmidt, and L. W. Molenkamp, Voltage-controlled spin selection in a magnetic resonant tunneling diode, Phys. Rev. Lett. 90, 246601 (2003). Slobodskyy et al. (2007) A. Slobodskyy, C. Gould, T. Slobodskyy, G. Schmidt, L. W. Molenkamp, and D. Sánchez, Resonant tunneling diode with spin polarized injector, Applied Physics Letters 90, 122109 (2007). Marcus et al. (1992) C. M. Marcus, A. J. Rimberg, R. M. Westervelt, P. F. Hopkins, and A. C. Gossard, Conductance fluctuations and chaotic scattering in ballistic microstructures, Phys. Rev. Lett. 69, 506 (1992). Aleiner et al. (2002) I. Aleiner, P. Brouwer, and L. 
Glazman, Quantum effects in Coulomb blockade, Physics Reports 358, 309 (2002).
SEARCHES FOR BSM HIGGS AT THE TEVATRON L. SCODELLARO (on behalf of the CDF and D$\O$ Collaborations) Instituto de Fisica de Cantabria, Avda de los Castros s/n, Santander 39005, Spain In this paper, we present the latest results of the searches for beyond standard model Higgs boson production at the Tevatron collider of Fermilab. Analyses have been carried out on samples of about 1-4 fb${}^{-1}$ of data collected by the CDF $\!{}^{{\bf?}}$ and D$\O$ $\!{}^{{\bf?}}$ detectors. In particular, Higgs bosons in supersymmetric models and in the fermiophobic scenario have been investigated, and limits on production cross sections and theory parameters have been established. 1 Introduction The CDF and D$\O$ experiments are finally reaching sensitivity to standard model Higgs boson production in $p\bar{p}$ collisions at the Tevatron $\!{}^{{\bf?}}$. Nevertheless, no hint for Higgs has been observed yet. Moreover, the experiments cannot yet probe the low mass region $M_{H}<160$ GeV/c${}^{2}$, which is favored by the fit to the electroweak observables. Searches for Higgs boson production in the context of beyond standard model theories are then well motivated and have been carried out by both the CDF and D$\O$ collaborations. We will summarize here the latest results by focusing on four different scenarios: neutral Higgs bosons in the minimal supersymmetric standard model (MSSM), charged Higgs bosons, Higgs in the next to minimal supersymmetric standard model (nMSSM), and fermiophobic Higgs bosons. 2 Neutral Higgs Bosons in the MSSM The MSSM requires the existence of two isodoublets of Higgs fields, which couple to up-type and down-type fermions respectively. Out of the eight degrees of freedom, three are absorbed by the masses of the $Z$ and $W$ bosons, and five are associated with new scalar particles: three neutral Higgs bosons ($h$, $H$, $A$) and two charged ones ($H^{\pm}$). At tree level, Higgs phenomenology in the MSSM is described by two parameters: the ratio $\tan\beta$ of the vacuum expectation values of the Higgs doublets, and the mass $m_{A}$ of the pseudoscalar boson $A$. The couplings of neutral Higgs bosons to bottom quark $b$ and tau $\tau$ (down-type fermions) scale as $\tan\beta$ with respect to the standard model value. For $\tan\beta\sim 1$, therefore, limits on standard model Higgs production apply to neutral Higgs in the MSSM too. At high values of $\tan\beta$, production processes involving $b$ quarks are enhanced by a factor $\tan^{2}\beta$. Moreover, the pseudoscalar boson $A$ becomes degenerate with either one of the other neutral Higgs particles, which provides a further enhancement of the searched-for signal. Finally, in the high $\tan\beta$ region the neutral Higgs bosons decay dominantly into $b\bar{b}$ (Br$\sim 90\%$) or $\tau^{+}\tau^{-}$ (Br$\sim 5$-$13\%$) pairs. The CDF and D$\O$ collaborations looked for signals of MSSM neutral Higgs boson production both inclusively and in association with a bottom quark. While offering a higher cross section, the inclusive production can only be exploited in the decay mode to taus, due to the high background from QCD processes which can mimic a $b\bar{b}$ signal. Associated production has instead been investigated both in the $\tau^{+}\tau^{-}$ and $b\bar{b}$ decay channels. The reconstruction of the hadronic decays of the tau and the identification of jets coming from b quark hadronization are key ingredients of these searches. 
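Since the $b$-associated production grows as $\tan^{2}\beta$ at large $\tan\beta$, an observed upper limit on the cross section translates directly into an excluded range of $\tan\beta$ at each $m_{A}$. The sketch below illustrates that scaling in its crudest form (our illustration only; it ignores width effects, radiative corrections and the benchmark-scenario dependence that enter the real interpretation, and all numbers are placeholders).

```python
import math

def tanbeta_limit(sigma_limit, sigma_ref, tanbeta_ref=1.0):
    """Smallest excluded tan(beta), assuming sigma scales as (tan(beta)/tanbeta_ref)^2
    relative to a reference cross section sigma_ref evaluated at tanbeta_ref."""
    return tanbeta_ref * math.sqrt(sigma_limit / sigma_ref)

# Placeholder numbers (both in fb): a 5000 fb observed limit against a 5 fb reference
# cross section at tan(beta) = 1 would exclude tan(beta) above roughly 32 at that m_A.
print(tanbeta_limit(sigma_limit=5000.0, sigma_ref=5.0))
```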
Upper limits on production cross sections can be interpreted as exclusion regions in the plane $m_{A}$-$\tan\beta$. Since at higher order other parameters of the MSSM become important for Higgs phenomenology, a particular set (benchmark scenario) for their values have to be considered when drawing the exclusion regions. Fig. 1 shows the results for the maximum Higgs mass and the no-mixing scenarios $\!{}^{{\bf?}}$, and for Higgs mixing parameter $\mu=\pm 200$ GeV. 3 Charged Higgs Bosons Searches for charged Higgs bosons $H^{\pm}$ have been carried out at the Tevatron experiments by looking in top quark samples. In particular, the CDF collaboration looked for the decay of top quark into charged Higgs and bottom quark in $t\bar{t}$ pair production events. In order to reduce background, the other top was required to decay in a $W$ boson which then decay to leptons, and the bottom quarks are required to be tagged. The charged Higgs is assumed to decay exclusively to quarks. This search is sensitive to MSSM production for $\tan\beta<1$ and $M_{H}^{\pm}<130$ GeV/c${}^{2}$. By fitting the observed dijet mass distribution to $H^{\pm}\rightarrow q\bar{q}^{\prime}$, $W^{\pm}\rightarrow q\bar{q}^{\prime}$ and background templates, an upper limit on the branching ratio of the $t\rightarrow H^{+}b$ decay has been set as a function of the Higgs boson mass (see Fig. 3). The D$\O$ experiment searched for charged Higgs boson by using a different approach, which consists in computing the effects that a $t\rightarrow H^{+}b$ decay would have on the yields of events in the different $t\bar{t}$ decay channels, and then comparing the expectations to the observed number of events to set limit on the branching ratio of top quark decay to charged Higgs boson. Fig. 3 shows the results in two scenarios for the Higgs decay: a tauonic model where the Higgs decays exclusively into tau and neutrino (which is equivalent to the MSSM for very high values of $\tan\beta$), and a leptophobic model assuming Br($H^{+}\rightarrow c\bar{s}$)$=100~{}\%$ (realized by a general multi-Higgs-doublet model $\!{}^{{\bf?}}$). 4 Higgs Bosons in the nMSSM The nMSSM $\!{}^{{\bf?}}$ adds a singlet superfield to the MSSM, allowing the theory to generate dynamically the mixing term $\mu H_{u}H_{d}$ in the Higgs sector, and solving in this way the $\mu$ problem. It also turns out to be the simplest supersymmetric model in which the electroweak scale originates only from the scale of supersymmetry breaking. Two additional Higgs boson states appear in the nMSSM: a neutral CP-even Higgs $s$ and a CP-odd Higgs $a$. While the lightest CP-even Higgs boson $h$ remains SM-like in the nMSSM, its dominant decay may not be necessarily into a $b\bar{b}$ pair, since the mass of the new state $a$ is allowed to be small enough for the decay $h\rightarrow aa$ to become dominant. LEP limits on the mass of the $h$ boson can then be avoided if $M_{a}<2m_{b}$, obtaining in this way a theory free from fine-tuning problems. The D$\O$ collaboration searched for the nMSSM process $h\rightarrow aa$. At low $M_{a}<2m_{\tau}$, a 4 muon signature is required, and upper limits on $\sigma(p\bar{p}\rightarrow hX)\times$Br$(h\rightarrow aa)\times$Br$(a\rightarrow\mu\mu)^{2}$ at about $10$ fb have been set. 
Assuming Br$(h\rightarrow aa)\approx 100\%$ and $M_{h}=120$ GeV/c${}^{2}$, which correspond to a production cross section of 1000 fb within the SM, it should be Br$(a\rightarrow\mu^{+}\mu^{-})\apprle 10\%$ to avoid detection, while the nMSSM predicts a branching ratio for the decay $a\rightarrow\mu^{+}\mu^{-}$ greater than $10\%$ for $a$ boson mass up to $2m_{c}$, and, depending on the branching ratio of $a$ to charm quarks, possibly even up to $2m_{\tau}$. For $M_{a}>2m_{\tau}$, the decay channel to $\mu^{+}\mu^{-}\tau^{+}\tau^{-}$ has been investigated and the limits set on Higgs production are still a factor of $\sim 4$ larger than predictions. 5 Fermiophobic Higgs Bosons A fermiophobic Higgs boson would greatly enhance the sensitivity of the Tevatron experiments to Higgs production in the low mass region ($M_{H}\apprle 130$ GeV/c${}^{2}$), where the dominant SM decay to $b\bar{b}$ provides a difficult signature due to the background from QCD processes. Theoretically, null (or highly suppressed) coupling of the Higgs boson to fermions could indicate a different origin for fermion and boson masses. The benchmark fermiophobic model assumes the same Higgs couplings to gauge boson as in the SM, and no couplings to fermions. In such a scenario, Higgs direct production is forbidden, and productions in association with a $W$ or a $Z$ boson and via vector boson fusion become the dominant mechanisms. The CDF and D$\O$ collaborations looked for $WH\rightarrow WWW^{*}$ production in events with two leptons (electrons or muons) with the same charge. Observed limits on the production cross section times the branching ratio for the decay $H\rightarrow W^{+}W^{-}$ are compared to SM and fermiophobic model predictions in Fig. 5. Inclusive production of a Higgs boson decaying to photons has also been searched by the two experiments by exploiting the high resolution (about $3\%$) on the reconstructed mass of the diphoton system provided by their calorimeters. When comparing the observed limits on the production cross section to the benchmark model expectations, a lower limit on the mass of a fermiophobic Higgs boson is set at $106$ GeV/c${}^{2}$ (see Fig. 5). 6 Conclusions The CDF and D$\O$ collaborations looked actively for Higgs bosons in the context of physics beyond the standard model in about 1-4 fb${}^{-1}$ of $p\bar{p}$ collisions at the Tevatron collider. Advanced techniques have been established and several limits on relevant parameters for different theories have been set, but no Higgs production signal has been observed yet. Lot of improvements are to come: increased statistics (both experiments already have 5 fb${}^{-1}$ of data on tape) and combination of different search channels and experiment results will enhance the sensitivity to Higgs production, eventually leading to new insights on the mechanism of electroweak symmetry breaking. References References [1] D. Acosta et al, PRD 71, 032001 (2005). [2] V. Abazov et al, Nucl. Instrum. Methods Phys. Res. A 565, 463 (2006). [3] arXiv:0903.4001v1 [hep-ex]. [4] M. Carena et al, Eur. Phys. J. C 26, 601 (2003). [5] Y. Grossman, Nucl. Phys. B 426, 355 (1994). [6] U. Ellwanger et al, Nucl. Phys. B 492, 21 (1997).
Scattering theory of walking droplets in the presence of obstacles Rémy Dubertrand, Maxime Hubert, Peter Schlagheck, Nicolas Vandewalle, Thierry Bastin, John Martin${}^{1}$ ${}^{1}$ Département de Physique, University of Liège, 4000 Liège, Belgium Abstract We aim to describe a droplet bouncing on a vibrating bath. Due to the Faraday instability, a surface wave is created at each bounce and serves as a pilot wave of the droplet. This leads to so-called walking droplets or walkers. Since the seminal experiment by Couder et al. [Phys. Rev. Lett. 97, 154101 (2006)] there have been many attempts to accurately reproduce the experimental results. Here we present a simple and highly versatile model inspired by quantum mechanics. We propose to describe the trajectories of a walker using a Green function approach. The Green function is related to the Helmholtz equation with Neumann boundary conditions on the obstacle(s) and outgoing conditions at infinity. For a single slit geometry our model is exactly solvable and reproduces some general features observed experimentally. It stands as a promising candidate to account for the presence of any boundaries in the walkers’ dynamics. pacs: 47.55.D-,03.65.-w 1 Introduction Considerable attention has recently been paid to the study of a hydrodynamic analogue of quantum wave-particle duality. It started originally from an experiment where an oil droplet falls on a vertically vibrating bath [1, 2, 3]. In the appropriate regime of viscosity and vibrating frequency the drop starts bouncing periodically on the surface. This leads to nontrivial effects due to the coupling between the dynamics of the surface wave and the drop, see e.g. [4]. When appropriately tuning the vibrating frequency the droplet starts to move horizontally. This is referred to as a “walking droplet” or “walker”. While this walk is rectilinear and at constant speed in a homogeneous tank, it becomes significantly perturbed in the vicinity of boundaries. In the pioneering experiment [1] individual droplets were walking through a single or double slit. Measuring the droplet positions at a large distance behind the slit(s) yielded similar single-slit diffraction and double-slit interference patterns as in quantum mechanics. In subsequent experiments other quantum phenomena could be mimicked, such as tunnelling [5], orbit quantisation and Landau levels [6]. To our knowledge, such walking droplets stand as the very first example of systems outside the quantum world that can reproduce some features of pilot wave theory [7]. Indeed, the droplet can be identified with a particle that creates a wave at each bounce. The surface wave has a back action on the droplet when the latter impacts it, hence acting like a pilot wave. This phenomenon is strongly reminiscent of de Broglie’s early formulation of quantum theory [7], later pursued by Bohm [8]. A more quantitative comparison between walking droplets and quantum particles has been the motivation of many recent studies, see e.g. [9, 10, 11], without quantitative claims related to the experiment. We aim here to focus on a simple yet precise description of the dynamics of the droplet. We are especially interested in the influence of the presence of boundaries on the trajectories of the walkers. In Sect. 2 we present our model for the dynamics of a walker in the presence of a boundary. In Sect. 3 numerical simulations are presented for the single slit obstacle. 
The geometry has been chosen as it contributed quite significantly to the interest towards walkers. This geometry is also exactly solvable in our framework. In Sect. 4 we discuss the benefit and the limitations of our model and expose possible extensions of it. 2 Green function approach for walking droplets 2.1 Walkers in free space It is useful to recall the models that were previously used in the absence of obstacles [4]. The starting assumption is that when the droplet impacts the surface wave it creates a perturbation of small amplitude so that the equations describing the bath surface can be linearised. Then it is customary to decompose the motion of the walkers in the directions along and transverse to the vertical vibrating direction. The first refers to the bouncing and can be approximated to be periodic, if the wave amplitude at the surface is small enough [4]. The second will be our main focus. Let us denote by the $2-$dimensional vector ${\bf r}(t)$ the position of the droplet’s impact on the interface between liquid and air at time $t$. We want to write a dynamical equation for ${\bf r}(t)$. The historically first model [4] assumes that the droplet is a material point as in classical mechanics. It is subject to three types of forces: • a force originating from the coupling between the surface wave of the bath and the droplet. This coupling is taken to be of the form $-A{\bf\nabla}h({\bf r},t)$, where $h({\bf r},t)$ is the height of the fluid surface at the position ${\bf r}$ and time $t$ and $A$ is a coupling coefficient to be discussed below, • a friction force due to the viscosity of the air layer when the droplet surfs during the contact time. At the leading order of small velocities it is modelled by $-D\mathrm{d}{\bf r}/\mathrm{d}t$ where the coefficient $D$ depends on the mass and the size of the drop; as well as on the density, the viscosity and the surface tension of the fluid [9], • any external force ${\bf F}_{\rm ext}$ applied to the droplet. For example droplets with metallic core have been designed and put in a magnetic field in order to create a harmonic potential [12]. Under these assumptions we can write a Newton-like law for the droplet horizontal dynamics $$m\frac{\mathrm{d}^{2}{\bf r}}{\mathrm{d}t^{2}}={\bf F}_{\rm ext}-D\frac{% \mathrm{d}{\bf r}}{\mathrm{d}t}-A{\bf\nabla}h({\bf r},t)\ ,$$ (1) where $m$ denotes the mass of the droplet. In the present study, and for the sake of simplicity, we will assume that ${\bf F}_{\rm ext}={0}$. The last and highly nontrivial part of the model is the ansatz for the surface of the fluid. To our opinion this is the very source of all the complexity of the droplet’s dynamics. A first ansatz for the fluid surface has been proposed in [6]. It reads: $$h({\bf r},t_{n})=\sum_{p=-\infty}^{n-1}\mathrm{Re}\left[\dfrac{C_{0}e^{\mathrm% {i}{\bf k}_{F}.({\bf r}-{\bf r}_{p})+\mathrm{i}\phi}}{|{\bf r}-{\bf r}_{p}|^{1% /2}}\right]e^{-|{\bf r}-{\bf r}_{p}|/\delta}e^{-\frac{n-p}{{\cal M}}}\ ,$$ (2) where ${\bf k}_{F}$ is the wave vector of the Faraday waves created by the bath vibration. The parameters $C_{0}$ and $\phi$ are the intrinsic amplitude and phase of emitted Faraday waves, which can be estimated experimentally, see e.g. [13]. The spatial damping term comes from the viscosity: $\delta$ is the typical length scale that a wave can travel at the fluid surface. It has been estimated from experimental data in [13] although the details of the determination of $\delta$ were not explicited. 
The vector ${\bf r}_{p}$ stands for the impact position of the droplet at time $t_{p}\equiv pT_{F}<t_{n}$, where $T_{F}$ is the period of Faraday waves. Notice that the index $n$ for the time recalls that we are interested in the surface profile only at a discrete sequence of times, when the droplet interacts with it. Eventually Faraday waves are subject to a temporal damping, which is characterised by the key parameter $\cal M$, often called the memory. $\cal M$ is related to the difference between the vibration amplitude $\Gamma$ of the bath and the Faraday threshold $\Gamma_{F}$ $${\cal M}=\dfrac{\Gamma_{F}}{\Gamma_{F}-\Gamma}\ .$$ (3) Physically, as the vibration amplitude is always below the threshold in the walking regime, this means that a perturbation of the surface profile will lead to a Faraday wave, which will last typically for the duration ${\cal M}T_{F}$. Another ansatz was derived for the surface height from a more fundamental perspective [9]. When there is no obstacle the surface height can be modelled by: $$h({\bf r},t_{n})=h_{0}\sum_{p=-\infty}^{n-1}J_{0}(k_{F}|{\bf r}-{\bf r}_{p}|)e% ^{-\frac{n-p}{{\cal M}}}\ ,$$ (4) where $h_{0}$ is a function of the fluid and droplet parameters. $J_{0}$ denotes the Bessel function of the first kind of zeroth order. While there are obvious similarities between both ansatz (2) and (4), we want to comment important differences. The main viscosity effects in (4) are located in $h_{0}$ (there is no spatial damping). Equation (4) offers a smoother spatial profile at the vicinity of the impact, while it behaves in the same way (the amplitude decreasing like $1/\sqrt{r}$) as Eq. (2) at larger distances. This model can be generalised by taking a time continuum limit [11]. 2.2 Obstacles for walkers It is worth recalling that in the previous experiments [3, 17, 18] an obstacle consists in a submerged plot. This changes the local depth of the bath and hence the dispersion relation for the Faraday waves. For a small enough depth no wave can travel and this leads to a region the walking droplets cannot go. With this definition of a boundary there have been several geometries considered to study the dynamics of a walker: the circular cavity [14], the annular cavity [18], the square cavity [16, 23] and a droplet in a rotating tank [15]. One should emphasise that one of the most intriguing results obtained with the walkers has been encountered within the single and double slit geometries, where an interference pattern was experimentally observed [3]. On the theoretical side, the presence of obstacles seems to resist to a systematic treatment. Secondary sources were suggested in [3] with poor physical justification. A recent study focused on the circular cavity [25]. It relies on a decomposition of the surface wave into the eigenmodes of the cavity. In this model the surface wave is assumed to obey Neumann conditions at the boundary, i.e., the normal derivative of the modes vanishes at the boundary. So far this model only deals with confined geometries. 2.3 Our model: Green function approach We choose to adopt here a conceptually simpler and more direct approach. The main goal is to account for any geometry of the tank as well as for any shape of one or several obstacles inside it. As usual in fluid dynamics, the main problem is to describe precisely the boundary conditions. To this end, we recall that in the vanishing viscosity limit the Faraday waves can be described by imposing Neumann boundary conditions [24]. 
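Before boundaries are brought in, the free-space ingredients above — the memory parameter of Eq. (3) and the surface ansatz of Eq. (4) — can be made concrete with a short numerical sketch. The snippet below is illustrative only: all numerical values (in particular the Faraday threshold $\Gamma_F$, the Faraday wavelength and the impact history) are placeholders, not parameters fitted to the experiments.

```python
import numpy as np
from scipy.special import j0  # Bessel function J_0 appearing in Eq. (4)

def memory(Gamma, Gamma_F):
    """Memory parameter of Eq. (3): M = Gamma_F / (Gamma_F - Gamma)."""
    return Gamma_F / (Gamma_F - Gamma)

def surface_height(r, impacts, h0, kF, M):
    """Free-space surface height of Eq. (4) just before the next bounce,
    given the list of past impact positions (most recent last)."""
    n = len(impacts)
    return h0 * sum(j0(kF * np.linalg.norm(r - rp)) * np.exp(-(n - p) / M)
                    for p, rp in enumerate(impacts))

# Illustrative values: Faraday wavelength 4.75 mm, a straight walk of 50 bounces.
kF = 2.0 * np.pi / 4.75e-3
M = memory(Gamma=4.2, Gamma_F=4.5)                         # placeholder threshold
impacts = [np.array([0.1e-3 * p, 0.0]) for p in range(50)]
print(surface_height(np.array([5.0e-3, 0.0]), impacts, h0=1e-6, kF=kF, M=M))
```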
As the small viscosity approach was already successfully applied to describe the walking droplet, we choose to assume that the waves should obey these boundary conditions along the boundary of any obstacle. Generalisations to other boundary conditions is straightforward. Our model then relies on the Newton-like description of the droplet via Eq. (1) for the horizontal motion as it has been the approach which allows for the best agreement with the experimental data. The starting point is to notice that the ansatz (4) for the bath surface without any boundary can be rewritten as $$h({\bf r},t_{n})=-4h_{0}\sum_{p=-\infty}^{n-1}\mathrm{Im}\left[G_{0}({\bf r},{% \bf r}_{p})\right]e^{-\frac{n-p}{{\cal M}}}\ ,$$ (5) where the Green function for the Helmholtz equation in the $2-$dimensional Euclidean plane has been introduced: $$G_{0}({\bf r},{\bf r}_{0})=\frac{H_{0}^{(1)}(k_{F}|{\bf r}-{\bf r}_{0}|)}{4% \mathrm{i}}\ .$$ (6) Here $H_{0}^{(1)}(z)=J_{0}(z)+\mathrm{i}Y_{0}(z)$ denotes the Hankel function of the first kind of order $0$. Our model consists in generalising Eq. (5) in the presence of obstacles by considering the relevant Green function. More precisely the bath surface will be described by: $$h({\bf r},t_{n})=-4h_{0}\sum_{p=-\infty}^{n-1}\mathrm{Im}\left[G({\bf r},{\bf r% }_{p})\right]e^{-\frac{n-p}{{\cal M}}}\ ,$$ (7) where $G({\bf r},{\bf r}_{0})$ is the kernel of a certain Green operator. It is defined through the following requirements: • $G({\bf r},{\bf r}_{0})$ is the Green function for the Helmholtz equation with the wave number $k_{F}$: $$({\bf\nabla}^{2}+k_{F}^{2})G({\bf r},{\bf r}_{0})=\delta({\bf r}-{\bf r}_{0})\;,$$ (8) where $\delta({\bf r})$ stands for the Dirac distribution, and ${\bf r}_{0}$ usually refers to a source (see below), • it obeys Neumann boundary conditions on the obstacles, see Sect. 2.2, • it obeys outgoing boundary conditions at infinity: $$G({\bf r},{\bf r}_{0})\propto\frac{e^{+\mathrm{i}k_{F}r}}{\sqrt{k_{F}r}},\quad r% \to\infty\ .$$ (9) The model containing Eq. (7) for the bath surface together with the above listed requirements in order to uniquely define $G({\bf r},{\bf r}_{0})$ constitute the main ingredients of the present theory. For sake of completeness it was assumed that $k_{F}$ has a small positive imaginary part so that $G({\bf r},{\bf r}_{0})$ stand for the retarded Green function. We will now explain why the imaginary part of $G({\bf r},{\bf r}_{0})$ is relevant for our model. When a droplet of infinitesimal spatial extent hits the bath at the point ${\bf r}_{0}$, one can model the bath surface receiving one point impact by $$h_{p}({\bf r})\propto\delta({\bf r}-{\bf r}_{0})=\int\varphi_{\bf k}({\bf r}_{% 0})^{*}\varphi_{\bf k}({\bf r})\mathrm{d}{\bf k}\ $$ (10) where the closure relation for the scattering states $\varphi_{\bf k}({\bf r})$ has been used. We assume that after one bounce the capillary waves emitted by the impact of the droplet has entirely left the impact region of the droplet111This argument needs to be refined when the impact occurs very close to the boundary. Such impacts, however, are not expected to occur often along the walking trajectory of a droplet.. The surface profile is then dominantly governed by standing Faraday waves. This is in agreement with the observations reported in [13]. Consequently we now assume that only those components of the decomposition of $h_{0}$ in Eq. (10) survive, whose wave number (or, more precisely, the eigenvalue of Helmholtz equation) is identical to the Faraday wave number $k_{F}$. 
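As a consistency check on this construction, the equivalence between the obstacle-free limit of Eq. (5), with $G_0$ given by Eq. (6), and the Bessel ansatz of Eq. (4) reduces to the identity $\mathrm{Im}\left[H_0^{(1)}(x)/(4\mathrm{i})\right]=-J_0(x)/4$. A minimal numerical check of this identity, assuming SciPy is available:

```python
import numpy as np
from scipy.special import hankel1, j0

x = np.linspace(0.1, 20.0, 400)       # stands for k_F |r - r_p|, away from the origin
G0 = hankel1(0, x) / 4j               # free Green function of Eq. (6)
# -4 Im[G0] coincides with J_0, so Eq. (5) reduces to Eq. (4) without obstacles:
assert np.allclose(-4.0 * np.imag(G0), j0(x))
```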
A more detailed description of the decomposition between capillary and Faraday waves will be provided in a forthcoming publication. This yields the expression for the surface profile at the next bounce of the droplet: $$h_{p+1}({\bf r})\propto\int\varphi_{\bf k}({\bf r}_{0})^{*}\varphi_{\bf k}({% \bf r})\delta(|{\bf k}|-k_{F})\mathrm{d}{\bf k}\ ,$$ (11) where the proportionality factor accounts for temporal decay due to the memory. Using the decomposition of the retarded Green function defined in (8) into eigenstates $$G({\bf r},{\bf r}_{0})=\int\dfrac{\varphi_{\bf k}({\bf r_{0}})^{*}\varphi_{\bf k% }({\bf r})}{{\bf k}^{2}-k_{F}^{2}+\mathrm{i}\epsilon}\mathrm{d}{\bf k}$$ (12) we then obtain that the surface profile formed by one bounce is modelled by: $$h_{\textrm{1 bounce}}({\bf r})\propto\mathrm{Im}\left[G({\bf r},{\bf r}_{0})% \right]\ .$$ (13) The final expression in Eq. (7) comes from a superposition argument: the resulting surface profile is the sum of all the Faraday waves emitted during by the previous impacts of the droplet. It is worth giving some remarks about our model. First it reproduces the dynamics of a walker as derived from fluid dynamics arguments in [9] when there is no obstacle. The description using Helmholtz equation for the bath profile is also used in [25], but in a different manner: the surface profile is expanded as a superposition of eigenmodes of the cavity. Our approach is similar to this idea. It is more general as it applies to both closed and open geometries. Another perspective, which can be drawn for the description of the bath surface is to adopt a quantum point of view. We shall very briefly recall what the physical meaning of the Green function is. As a wave in the plane always obeys a certain wave equation, a useful way to study its dynamics is to start investigating what the wave profile would be if the wave is initially concentrated at one point. This point is referred to as the ”source” throughout this section. For example consider a free quantum particle in the $2D$ plane. The wave equation for the wave function $\Psi({\bf r},t)$ describing the particle is simply the Schrödinger equation: $$\mathrm{i}\hbar\dfrac{\partial\Psi}{\partial t}=-\frac{\hbar^{2}}{2m}{\bf% \nabla}^{2}\Psi\ ,$$ (14) where $\hbar$ is the Planck constant divided by $2\pi$ and $m$ the mass of the particle. Looking for a stationary solution — or equivalently, being restricted to one fixed frequency by linearity the trial solution $\Psi({\bf r},t)=e^{-\mathrm{i}\omega t}\varphi({\bf r})$ is inserted into Eq. (14) to obtain: $$({\bf\nabla}^{2}+k^{2})\varphi=0\ ,$$ (15) together with the dispersion relation $$\omega=\dfrac{\hbar k^{2}}{2m}\ .$$ (16) In other words Eq. (15) describes the stationary wave equation of a quantum particle in the plane. If a point source is now added one has to solve Eq. (8). Hence, $G({\bf r},{\bf r}_{0})$ is the stationary wave amplitude for a solution of Eq. (15) propagating with the wave number $k=k_{F}$, when there is a source located at the position ${\bf r}_{0}$. It is also well known in quantum mechanics that the same approach can be generalised for a nonzero potential and/or the presence of barriers. The definition of the corresponding Green function is achieved through the assumptions listed above. 3 Walkers going through a single slit We will illustrate the model introduced in the previous section by considering a special choice for the obstacle. More precisely, the trajectories of walkers going through a single slit are considered. 
This obstacle is motivated by two main reasons: first, it was among the first geometries to be considered in the experiments [3]. Second, it is among the few shapes that allow for an explicit and analytical expression of the Green function. 3.1 Green function of the single slit In order to write the Green function of Eq. (8) with Neumann boundary conditions on a single slit, it is convenient to introduce the elliptic coordinates $(u,v)$ in a $2D$ geometry: $$\displaystyle x$$ $$\displaystyle=$$ $$\displaystyle\frac{a}{2}\cosh u\cos v\ ,$$ (17) $$\displaystyle y$$ $$\displaystyle=$$ $$\displaystyle\frac{a}{2}\sinh u\sin v\ ,$$ (18) where $(x,y)$ are the Cartesian coordinates. The range of the new coordinates is: $$u\geq 0,\ -\pi<v\leq\pi\ .$$ In this definition of the elliptic coordinates, $a$ denotes the width of the slit and the arms of the slit are along the $x$ axis, see also Fig. 1. The elliptic coordinates are very convenient for the single slit problem because both arms have very simple equations: the left arm in Fig. 1 is defined as $v=\pi$, whereas the right arm is $v=0$. With our determination of $v$, the upper half plane is $u>0,v>0$ while the lower half plane is $u>0,v<0$. The slit is described by $u=0$. Along the slit, the points with coordinates $(u,v)=(0,v)$ and $(u,v)=(0,-v)$ coincide for $0<v<\pi$. There have been several studies for the derivation of the Green function for the single slit [19, 20, 21]. The technical details of its evaluation are beyond the scope of this paper and are to be published elsewhere. A reminder of the derivation is given in A. In the following the point ${\bf r}$ in the plane is assumed to have $(u,v)$ as elliptical coordinates while ${\bf r}_{0}$ is identified with $(u_{0},v_{0})$. Without loss of generality one can consider that the source is located below the slit, i.e. $v_{0}<0$. Then the Green function for the single slit with Neumann boundary conditions is in the upper half plane ($0<v<\pi$): $$G({\bf r},{\bf r}_{0})=\displaystyle\sum_{n\geq 0}\dfrac{Me^{(1)}_{n}(q,u)Me^{% (1)}_{n}(q,u_{0})ce_{n}(q,v)ce_{n}(q,v_{0})}{\pi Me^{(1)}_{n}(q,0)Me^{(1)\;% \prime}_{n}(q,0)}$$ (19) and in the lower half plane ($-\pi<v<0$): $$G({\bf r},{\bf r}_{0})=\displaystyle\sum_{n\geq 0}\left[2\dfrac{Me^{(1)}_{n}(q% ,u_{>})Ce_{n}(q,u_{<})}{\pi ce_{n}(q,0)}-\dfrac{Me^{(1)}_{n}(q,u)Me^{(1)}_{n}(% q,u_{0})}{\pi Me^{(1)}_{n}(q,0)}\right]\dfrac{ce_{n}(q,v_{0})ce_{n}(q,v)}{Me^{% (1)\;\prime}_{n}(q,0)}\ .$$ (20) $ce_{n}(q,v)$ refers to the even Mathieu functions while $Ce_{n}(q,u)$ and $Me^{(1)}_{n}(q,u)$ are solutions of the associated (also known as radial) Mathieu equation, see B. We also introduced the symbols $u_{<}\equiv{\rm min}(u,u_{0})$ and $u_{>}\equiv{\rm max}(u,u_{0})$. The second parameter entering the Mathieu equation is: $$q=\left(\dfrac{k_{F}a}{4}\right)^{2}\ .$$ (21) An illustration of the Green function resulting from the expressions (19) and (20) is shown in Fig. 2. 3.2 Trajectories of walkers in the presence of a single slit The fluid parameters are taken such that they reproduce the experimental data for silicon oil with a viscosity $\nu=20$ cSt. The acceleration provided by the shaker is: $$\Gamma(t)=\Gamma\cos\omega_{0}t\ ,$$ (22) with $\Gamma=4.2g$ and $\frac{\omega_{0}}{2\pi}=80$ Hz. In our numerical implementation the series (19) and (20) to compute the Green function have been truncated to $n\leq 100$ and the superposition of sources in Eq. (7) has been taken to start at $p=-5{\cal M}+1$. First we want to stress an important effect. 
In our numerics the generic shape of the trajectory depends strongly on the friction time $\tau_{v}$, during which the droplet surfs on the surface. This time is related to the coefficient $D$ in Eq. (1) via $$\frac{D}{m}=\frac{1}{\tau_{v}}\ .$$ (23) From a more physical perspective this friction is also related to the droplet’s mass or its size: the larger the droplet, the larger $\tau_{v}$. In the following, the effect of varying the memory parameter is investigated first, as it is expected to play a crucial role in obtaining interference and diffraction patterns. The dependence on $\tau_{v}$ is then also shown. In Fig. 3 the trajectories obtained by integrating Eq. (1) with the surface height of Eq. (7) are shown in the single slit geometry. They all start from a line parallel to the slit’s arms. The trajectories are assumed to be rectilinear at the starting point, i.e. all the previous impacts lie along a vertical line at $t=0$. The initial velocity of the droplet is taken orthogonal to the slit with a magnitude of $10$ mm.s${}^{-1}$, and the time between two bounces is $0.025$ s. The histogram of the angular distribution in the far field behind the slit is shown in the bottom row of Fig. 3; it was checked that the trajectories remain rectilinear at subsequent times. The histogram shows an oscillating pattern similar to the one observed in [3], but its angular range is much narrower. The same computation has been repeated for different values of the memory parameter; the values ${\cal M}=10$ and ${\cal M}=30$ are illustrated in Fig. 3. The histograms of the far-field directions behind the slit show a clear difference between ${\cal M}=10$ and ${\cal M}=30$: both display a selection of directions in the far field, but the oscillation pattern in the histogram has a smaller amplitude for the larger memory. Next we investigate the sensitivity of our results to a variation of $\tau_{v}$, see Fig. 4. The range of the histogram of far-field directions is similar, but the far-field pattern of the droplet trajectories clearly depends on the value of $\tau_{v}$. In other words, the viscous friction during the surfing phase has a strong influence on the properties of the trajectories. Notice also that increasing the memory parameter (3) leads to a sharper angular selection of the trajectories behind the slit. Last, we show how the trajectories depend on the width of the slit. In Fig. 5 the trajectories and the histogram of far-field directions are shown for $a=\lambda_{F}$. The range of diffracted angles in the far field is very similar to that of Fig. 4. Nevertheless, for the higher memory, a large proportion of trajectories are close to vertical in the far field. Finally, we have analysed how the trajectory of a walking droplet can be simulated with our model in different situations. First, the observed pattern is symmetric under $x\mapsto-x$, which is a symmetry of the whole problem. Second, our model qualitatively reproduces the diffraction pattern observed in [3], which becomes stronger when $\tau_{v}$ is decreased. However, the diffracted angles computed from our model are much smaller than in the experiment. We also noticed that the pattern depends strongly on some parameters of the experiment (droplet mass, friction time, etc.).
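The far-field histograms discussed above are obtained by binning the exit direction of each simulated trajectory once it is well behind the slit. The following sketch shows one way of doing this; the trajectory data structure, the far-field cutoff and the binning are illustrative choices, not the ones used to produce the figures (the droplet mass, friction time and memory enter only through the trajectories passed in).

```python
import numpy as np

def far_field_angles(trajectories, y_far=50e-3):
    """Exit angle of each trajectory, measured from the slit normal (the y axis),
    once the droplet is farther than y_far behind the slit."""
    angles = []
    for traj in trajectories:              # traj: array of impact positions, shape (n, 2)
        far = traj[traj[:, 1] > y_far]     # keep the impacts in the far field
        if len(far) < 2:
            continue                       # this droplet never reached the far field
        d = far[-1] - far[0]               # net far-field displacement
        angles.append(np.arctan2(d[0], d[1]))
    return np.array(angles)

# Example: counts, edges = np.histogram(far_field_angles(trajs), bins=36,
#                                        range=(-np.pi / 2, np.pi / 2))
```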
These parameters, which are only approximately known in the experiment, may change the output of our model. 4 Conclusion We have introduced a model, defined by Eqs. (1) and (7), which aims to describe a walking droplet on a vibrating bath in the presence of boundaries. Our model is based on a Green function approach, a standard tool for linear partial differential equations. It allows us to treat any geometry of the tank and any number and shape of obstacles inside it. The main limitation of our model is its sensitivity to some experimental parameters, which are difficult to measure with high precision. An immediate benefit of our approach is that it handles any geometry without any additional parameter. Moreover, while our model is specifically devoted to the description of walking droplets, it sheds new light on the possible analogy between a walker and a quantum particle. In particular, clear differences have been observed between the trajectories predicted by our model and Bohmian trajectories in the single slit geometry. Our model opens several perspectives. First, it should be checked against the cavity shapes that have been used in the experiments; we note that our formalism applies to both confined and open geometries, and that the previously considered geometries (square, circle, annulus) allow for an exact and explicit expression of the Green function. In particular, it will be necessary to estimate the contribution of viscous effects in comparison with the predictions of our model. A second direction is to use more intrinsic arguments from fluid dynamics in order to give a more fundamental justification of our model; this is especially needed for a more realistic description of the boundary conditions at the obstacle. Last, we suggest that our model can be straightforwardly generalised to describe several interacting droplets. While this is a much more challenging problem, our Green function approach is a promising candidate for understanding the highly complex dynamics of walking droplets in the presence of obstacles. R.D. acknowledges fruitful discussions with J.-B. Shim and W. Struyve. M.H. acknowledges fruitful discussions with M. Labousse. This work was financially supported by the ’Actions de Recherches Concertées (ARC)’ of the Wallonia-Brussels Federation of Belgium under contract No. 12-17/02. Computational resources have been provided by the Consortium des Équipements de Calcul Intensif (CÉCI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No. 2.5020.11. Appendix A Derivation and evaluation of the Green function of the single slit We look for the Green function of the single slit in the form of a series expansion. This form is more suitable for numerical implementation and is more accurate at small or moderate wavelengths. As explained in the main text, it is especially useful to work in elliptic coordinates in order to account for a single slit obstacle: each arm of the slit has a very simple expression in these coordinates. Moreover, we are interested here in Neumann boundary conditions on the obstacle, which restricts the set of functions that can be used for the expansion. In the following we write an ansatz for the Green function as a series of Mathieu functions (see B for their definition and basic properties), and then fix the conditions for that ansatz to actually solve Eq. (8) with the required boundary conditions.
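Since the whole derivation below is carried out in elliptic coordinates, a small conversion helper is convenient when implementing the resulting expressions. The sketch below (assuming NumPy) inverts Eqs. (17)–(18) through the principal branch of the complex $\mathrm{arccosh}$, which matches the ranges $u\geq 0$, $-\pi<v\leq\pi$ used in the main text; the numerical slit width is an arbitrary example.

```python
import numpy as np

def elliptic_to_cartesian(u, v, a):
    """Eqs. (17)-(18): x = (a/2) cosh(u) cos(v), y = (a/2) sinh(u) sin(v)."""
    return 0.5 * a * np.cosh(u) * np.cos(v), 0.5 * a * np.sinh(u) * np.sin(v)

def cartesian_to_elliptic(x, y, a):
    """Inverse map: u + i v = arccosh(2 (x + i y) / a), principal branch."""
    w = np.arccosh(2.0 * (x + 1j * y) / a)
    return w.real, w.imag

# Round-trip check at an arbitrary point, with a the slit width:
a = 4.75e-3
x, y = elliptic_to_cartesian(1.3, 0.7, a)
assert np.allclose(cartesian_to_elliptic(x, y, a), (1.3, 0.7))
```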
Our derivation follows closely the steps described in [21]. Start by writing an ansatz for the Green function in both half planes. Without loss of generality the source is supposed to be in the lower half plane, which means that $v_{0}<0$. Then the Green function can be written as: $$G({\bf r},{\bf r}_{0})=\displaystyle\sum_{n\geq 0}\alpha_{n}^{(+)}Me^{(1)}_{n}% (q,u)ce_{n}(q,v)$$ (24) in the upper half plane $0<v<\pi$, and $$G({\bf r},{\bf r}_{0})=\dfrac{2}{\pi}\displaystyle\sum_{n\geq 0}\dfrac{Me^{(1)% }_{n}(q,u_{>})Ce_{n}(q,u_{<})ce_{n}(q,v_{0})ce_{n}(q,v)}{Me^{(1)\;\prime}_{n}(% q,0)ce_{n}(q,0)}+\displaystyle\sum_{n\geq 0}\alpha_{n}^{(-)}Me^{(1)}_{n}(q,u)% ce_{n}(q,v)\ .$$ (25) in the lower half plane $-\pi<v<0$. These expansion obey Neumann boundary conditions along the slit’s arms and outgoing boundary conditions when $|{\bf r}|\to\infty$. The first sum in Eq. (25) has been chosen to fulfil the matching conditions at ${\bf r}={\bf r}_{0}$. Indeed we used the following decomposition [26]: $$\frac{H_{0}^{(1)}(k|{\bf r}-{\bf r}_{0}|)}{4\mathrm{i}}+\frac{H_{0}^{(1)}(k|{% \bf r}-{\bf r}^{\prime}_{0}|)}{4\mathrm{i}}=\frac{2}{\pi}\sum_{n\geq 0}\dfrac{% Me^{(1)}_{n}(q,u_{>})Ce_{n}(q,u_{<})ce_{n}(q,v_{0})ce_{n}(q,v)}{Me^{(1)\;% \prime}_{n}(q,0)ce_{n}(q,0)}\ .$$ (26) where ${\bf r}^{\prime}_{0}$ stand for the image of ${\bf r}_{0}$ under the symmetry $y\mapsto-y$. The next steps is to determine the remaining unknown coefficients $\alpha_{n}^{(+)}$ in Eq. (24) and $\alpha_{n}^{(-)}$ in Eq. (25). It is achieved by requiring the continuity of both the function and its normal derivative across the slit. Recall first that the slit is described in elliptic coordinates by $u=0$ and $-\pi<v<\pi$. More precisely the slit is seen in these coordinates as an ellipse with a unit eccentricity. The continuity condition for $G({\bf r},{\bf r}_{0})$ at the slit reads: $$\sum_{n\geq 0}\alpha_{n}^{(+)}Me^{(1)}_{n}(q,0)ce_{n}(q,v)=\dfrac{2}{\pi}% \displaystyle\sum_{n\geq 0}\dfrac{Me^{(1)}_{n}(q,u_{0})}{Me^{(1)\;\prime}_{n}(% q,0)}ce_{n}(q,v_{0})ce_{n}(q,-v)+\sum_{n\geq 0}\alpha_{n}^{(-)}Me^{(1)}_{n}(q,% 0)ce_{n}(q,-v)\ .$$ (27) The condition for the continuity of the normal derivative across the slit is: $$\sum_{n\geq 0}\alpha_{n}^{(+)}Me^{(1)\;\prime}_{n}(q,0)ce_{n}(q,v)=-\sum_{n% \geq 0}\alpha_{n}^{(-)}Me^{(1)\;\prime}_{n}(q,0)ce_{n}(q,-v)\ .$$ (28) We used that $Ce_{n}(q,u)=ce_{n}(q,\mathrm{i}u)$ so $Ce_{n}(q,0)=ce_{n}(q,0)$. It is crucial to notice that both Eqs.(27) and (28) are written for $0<v<\pi$. The orthogonality of the angular Mathieu functions on this restricted range $$\displaystyle\displaystyle\int_{0}^{\pi}ce_{n}(q,v)ce_{p}(q,v)\mathrm{d}v=% \dfrac{\pi}{2}\delta_{n,p}\ .$$ (29) is used to obtain a linear system for the unknown coefficients: $$\displaystyle\alpha_{p}^{(+)}Me^{(1)}_{p}(q,0)$$ $$\displaystyle-\alpha_{p}^{(-)}Me^{(1)}_{p}(q,0)=$$ $$\displaystyle\dfrac{2}{\pi}\dfrac{Me^{(1)}_{p}(q,u_{0})ce_{p}(q,v_{0})}{Me^{(1% )\;\prime}_{p}(q,0)}\ ,$$ (30) $$\displaystyle\alpha_{p}^{(+)}Me^{(1)\;\prime}_{p}(q,0)$$ $$\displaystyle+\alpha_{p}^{(-)}Me^{(1)\;\prime}_{p}(q,0)=$$ $$\displaystyle 0\ .$$ (31) The determinant of this linear system is $2Me^{(1)}_{p}(q,0)Me^{(1)\;\prime}_{p}(q,0)$, hence is finite for $q>0$. 
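Before the solution is read off in the next step, the $2\times 2$ system (30)–(31) can also be solved symbolically as a sanity check. A sketch assuming SymPy, with Me0 $=Me^{(1)}_{p}(q,0)$, dMe0 $=Me^{(1)\;\prime}_{p}(q,0)$, Meu0 $=Me^{(1)}_{p}(q,u_{0})$ and cev0 $=ce_{p}(q,v_{0})$ treated as abstract nonzero symbols:

```python
import sympy as sp

a_plus, a_minus = sp.symbols('alpha_plus alpha_minus')
Me0, dMe0, Meu0, cev0 = sp.symbols('Me0 dMe0 Meu0 cev0', nonzero=True)

rhs = 2 / sp.pi * Meu0 * cev0 / dMe0                        # right-hand side of Eq. (30)
sol = sp.solve([sp.Eq(a_plus * Me0 - a_minus * Me0, rhs),   # Eq. (30)
                sp.Eq(a_plus * dMe0 + a_minus * dMe0, 0)],  # Eq. (31)
               [a_plus, a_minus])

print(sp.simplify(sol[a_plus]))    # ->  Meu0*cev0/(pi*Me0*dMe0),  cf. Eq. (32)
print(sp.simplify(sol[a_minus]))   # -> -Meu0*cev0/(pi*Me0*dMe0),  cf. Eq. (33)
```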
The coefficients are then uniquely determined: $$\displaystyle\alpha_{p}^{(+)}$$ $$\displaystyle=$$ $$\displaystyle\dfrac{1}{\pi}\dfrac{Me^{(1)}_{p}(q,u_{0})ce_{p}(q,v_{0})}{Me^{(1% )}_{p}(q,0)Me^{(1)\;\prime}_{p}(q,0)}\ ,$$ (32) $$\displaystyle\alpha_{p}^{(-)}$$ $$\displaystyle=$$ $$\displaystyle-\dfrac{1}{\pi}\dfrac{Me^{(1)}_{p}(q,u_{0})ce_{p}(q,v_{0})}{Me^{(% 1)}_{p}(q,0)Me^{(1)\;\prime}_{p}(q,0)}\ .$$ (33) Putting the expression (32) back into Eq. (24) on the one hand and (33) into Eq. (25) on the other hand gives Eq. (19) and Eq. (20) respectively. The numerical evaluation of the Green function requires the computation of the Mathieu function of both first and third kinds for a large range of orders. An efficient way to evaluate these functions was to store with very high accuracy the Fourier components $A^{(n)}_{p}(q)$ defined in Eq. (37) and Eq. (38) for $q=\pi^{2}$, cf. Eq. (21). These coefficients were then used to evaluate $ce_{n}(q,v)$, $Ce_{n}(q,u)$ $Me^{(1)}_{n}(q,u)$. The radial Mathieu functions have been expanded into a series of products of Bessel functions, see e.g. [27]. It is worth stressing that this common way to evaluate the radial Mathieu functions becomes rapidly inaccurate for large orders and small arguments. We then relied on a WKB$-$like approach to keep a sufficient accuracy. Technical details referring to the numerical evaluation will be provided in a forthcoming publication. Appendix B Brief reminder about Mathieu functions The Mathieu functions are defined [22] as the solutions of the Mathieu equation: $$y^{\prime\prime}(x)+\left[h-2q\cos(2x)\right]y(x)=0\ ,$$ (34) where the prime denotes differentiation with respect to $x$. From Floquet theory Eq. (34) admits periodic solutions for a discrete set of values of $h(q)$, called the characteristic value. For a fixed $q$ and $h=h(q)$ the periodic solution can be made real and it is usually called the Mathieu function. It is standard to distinguish between two symmetry classes: • if one wants $y^{\prime}(0)=0$ and $y^{\prime}(\pi)=0$ then $h(q)=a_{n}(q)$ and the solution is denoted by $ce_{n}(q,v)$ for $n\geq 0$, • if one wants $y(0)=0$ and $y(\pi)=0$ then $h(q)=b_{n}(q)$ and the solution is denoted by $se_{n}(q,v)$ for $n>0$. The so obtained functions are normalised so as to form an orthogonal family: $$\int_{0}^{2\pi}ce_{n}(q,v)ce_{m}(q,v)\mathrm{d}v=\int_{0}^{2\pi}se_{n}(q,v)se_% {m}(q,v)\mathrm{d}v=\pi\delta_{m,n}\ ,$$ (35) where $\delta_{m,n}$ denotes Kronecker symbol. Last by convention one has: $$ce_{n}(q,0)>0,\quad\dfrac{\mathrm{d}se_{n}}{\mathrm{d}v}(q,v)\Big{|}_{v=0}>0\ .$$ (36) In the current study we are only considering Neumann boundary condition. Therefore we will be restricted from now on to the first symmetry class. As any periodic function Mathieu functions can be expanded as Fourier series. It is useful to distinguish whether $n$ is odd or even: $$\displaystyle ce_{2n}(q,v)$$ $$\displaystyle=$$ $$\displaystyle\displaystyle\sum_{p=0}^{\infty}A^{(2n)}_{2p}(q)\cos(2pv),$$ (37) $$\displaystyle ce_{2n+1}(q,v)$$ $$\displaystyle=$$ $$\displaystyle\displaystyle\sum_{p=0}^{\infty}A^{(2n+1)}_{2p+1}(q)\cos\left[(2p% +1)v\right]\ .$$ (38) In the same spirit the radial (or modified or associated) Mathieu functions are defined as solution of the radial Mathieu equation: $$y^{\prime\prime}(x)-\left[h-2q\cosh(2x)\right]y(x)=0\ .$$ (39) When $h$ is equal to a characteristic value $a_{n}(q)$, it is useful to define the following solutions of Eq. 
(39): $$\displaystyle h=a_{n}(q),$$ $$\displaystyle\quad y(u)=Ce_{n}(q,u)\textrm{ or }y(u)=Me^{(1)}_{n}(q,u)\ ,$$ (40) obeying the following constraints: • $Ce_{n}(q,u)$ is a real even smooth solution of Eq. (39) for $h=a_{n}(q)$, • $Me^{(1)}_{n}(q,u)$ is the only solution of Eq. (39) obeying Sommerfeld’s radiation condition at infinity for $h=a_{n}(q)$ and such that $\mathrm{Re}Me^{(1)}_{n}(q,u)=Ce_{n}(q,u)$. Notice that one has $Ce_{n}(q,u)=ce_{n}(q,\mathrm{i}u)$. The functions $Ce_{n},Me^{(1)}_{n}$ can be shown to be linearly independent. They can be used to expand any solution of Eq. (39) when $h=a_{n}(q)$. References References [1] Couder Y, Protière S, Fort E, and Boudaoud A 2005 Nature 437 208 [2] Couder Y, Fort E, Gautier C H, and Boudaoud A 2005 Phys. Rev. Lett. 94 177801 [3] Couder Y, and Fort E 2006 Phys. Rev. Lett. 97 154101 [4] Protière S, Boudaoud A, and Couder Y 2006 J. Fluid Mech. 554 85 [5] Eddi A, Fort E, Moisy F, Couder Y 2009 Phys. Rev. Lett. 102 240401 [6] Fort E, Eddi A, Boudaoud A, Moukhtar J, and Couder Y 2010 Proc. Nat. Acad. Sci. 107 17515 [7] de Broglie L 1926 Ondes et mouvements Gauthier-Villars [8] Bohm D 1952 Phys. Rev. 85 166; ibid. Phys. Rev. 180 [9] Molác̆ek J, and Bush J W M 2013 J. Fluid Mech. 727, 582 ;ibid. 727 612 [10] Andersen A, Madsen J, Reichelt C, Ahl S R, Lautrup B, Ellegaard C, Levinsen M T, and Bohr T 2015 Phys. Rev. E 92 013006 [11] Milewski P A, Galeano-Rios C A, Nachbin A, Bush J W M 2015, J Fluid Mech 778, 361 [12] Perrard S, Labousse M, Miskin M, Fort E, and Couder Y 2014 Nature Comm. 5 3219 [13] Eddi A, Sultan E, Moukhtar J, Fort E, Rossi M, Couder Y 2011 J Fluid Mech 674 433 [14] Harris D M, Moukhtar J, Fort E, Couder Y, Bush J W M 2013 Phys. Rev. E 88 011001(R) [15] Oza A U, Harris D M, Rosales R R, Bush J W M 2014 J Fluid Mech 744, 404 [16] Shirokov D 2013 Chaos 23 013115 [17] Filoux B, Hubert M, Schlagheck P, Vandewalle N 2015 arxiv:physics.flu-dyn.1507.08228 [18] Filoux B, Hubert M, Vandewalle N 2015 Phys. Rev. 92 041004(R) [19] Schwarzschild K 1901 Math. Ann. 55 177 [20] Sieger B 1908 Ann. der Phys. 332, 626 [21] Strutt M J O 1931 Z für Physik 69, 597 [22] Erdelyi A et al 1955 Higher Transcendental Functions, vol. 3, McGraw-Hill [23] Gilet T 2014 Phys. Rev. E 90 052917 [24] Benjamin T B, Ursell F 1954 Proc Roy Soc Lond A 225 505 [25] Gilet T 2016 Phys. Rev. E 93 042202 [26] Sips R 1953 Bull. Soc. Royale des Sci. de Liège, 22, 341 [27] McLachlan N W 1964 Theory and Applications of Mathieu functions, Dover Public. Inc.
Lifespan of Classical Solutions to Quasilinear Wave Equations Outside of a Star-Shaped Obstacle in Four Space Dimensions Dongbing Zha ,  Yi Zhou Corresponding author. School of Mathematical Sciences, Fudan University, Shanghai 200433, PR China. E-mail address: ZhaDongbing@fudan.edu.cn(D.Zha), yizhou@fudan.edu.cn(Y.Zhou). (May 5, 2014) Abstract We study the initial-boundary value problem of quasilinear wave equations outside of a star-shaped obstacle in four space dimensions, in which the nonlinear term under consideration may explicitly depend on the unknown function itself. By some new $L^{\infty}_{t}L^{2}_{x}$ and weighted $L^{2}_{t,x}$ estimates for the unknown function itself, together with energy estimates and KSS estimates, for the quasilinear obstacle problem we obtain a lower bound of the lifespan $T_{\varepsilon}\geq\exp{(\frac{c}{\varepsilon^{2}})}$, which coincides with the sharp lower bound of lifespan estimate for the corresponding Cauchy problem. Key words Quasilinear wave equations; Star-shaped obstacles; Lifespan. 2010 MR Subject Classification 35L05; 35L10; 35L20; 35L70. 1 Introduction and Main Result This paper is devoted to study the lifespan of classical solutions to the following initial-boundary value problem for nonlinear wave equations: $$\left\{\begin{array}[]{llll}\Box u(t,x)=F(u,\partial u,\partial\nabla u),~{}(t% ,x)\in\mathbb{R}^{+}\times\mathbb{R}^{4}\backslash\mathcal{K},\\ u|_{\partial\mathcal{K}}=0,\\ t=0:u=\varepsilon f,u_{t}=\varepsilon g,~{}x\in\mathbb{R}^{4}\backslash% \mathcal{K},\\ \par\par\par\end{array}\right.$$ (1.1) where $\Box=\partial_{t}^{2}-\Delta$ is the wave operator, $\varepsilon>0$ is a small parameter, the obstacle $\mathcal{K}\subset\mathbb{R}^{4}$ is compact, smooth and strictly star-shaped with respect to the origin, and $f,g$ in (1.1) belongs to $C_{c}^{\infty}(\mathbb{R}^{4}\backslash\mathcal{K}).$ Moreover $(t,x)=(x_{0},x_{1},x_{2},x_{3},x_{4}),\partial_{\alpha}=\frac{\partial}{% \partial x_{\alpha}}(\alpha=0,\cdots,4),\nabla=(\partial_{1},\partial_{2},% \partial_{3},\partial_{4}),\partial=(\partial_{0},\nabla)$. Let $$\widehat{\lambda}=(\lambda;(\lambda_{i}),i=0,\cdots,4;(\lambda_{ij}),i,j=0,% \cdots,4,i+j\geq 1).$$ (1.2) Suppose that in a neighborhood of $\widehat{\lambda}=0$, say, for $|\widehat{\lambda}|\leq 1$, the nonlinear term $F=F(\widehat{\lambda})$ is a smooth function satisfying $$F(\widehat{\lambda})=\mathcal{O}(|\widehat{\lambda}|^{2})$$ (1.3) and being affine with respect to $\lambda_{ij}(i,j=0,\cdots,4,i+j\geq 1)$ . Our aim is to study the lifespan of classical solutions to (1.1). By definition, the lifespan $T_{\varepsilon}$ is the supremum of all $T>0$ such that there exists a classical solution to (1.1) on $0\leq t\leq T,$ i.e. $$\displaystyle T_{\varepsilon}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\sup\{T% >0:\eqref{Quasilinear}{\text{ has a unique classical solution on}~{}[0,T]}\}.$$ (1.4) First, it is needed to illustrate why we consider the case of spatial dimension $n=4$. For this purpose, we have to review the history on the corresponding Cauchy problem in four space dimensions. 
In [12], Hörmander considered the following Cauchy problem of nonlinear wave equations in four space dimensions: $$\displaystyle\begin{cases}\Box u(t,x)=F(u,\partial u,\partial\nabla u),~{}t% \geq 0,~{}x\in\mathbb{R}^{4},\\ t=0:~{}u=\varepsilon f,~{}\partial_{t}u=\varepsilon g.\end{cases}$$ (1.5) Here in a neighborhood of $\widehat{\lambda}=0$, the nonlinear term $F$ is a smooth function with quadratic order with respect to its arguments. $f,g\in C_{c}^{\infty}(\mathbb{R}^{4})$, and $\varepsilon>0$ is a small parameter. He proved that if $\partial_{u}^{2}F(0,0,0)=0$, then (1.5) admits a unique global classical solution. For general $F$, he got a lower bound of the lifespan $T_{\varepsilon}\geq\exp({\frac{c}{\varepsilon}}),$ where $c$ is a positive constant independent of $\varepsilon$. But this result is not sharp. In [31], Li and Zhou showed that Hörmander’s estimate can be improved by $$\displaystyle T_{\varepsilon}\geq\exp{(\frac{c}{\varepsilon^{2}})}.$$ (1.6) Li and Zhou’s proof was simplified by Lindblad and Sogge in [33] later. Recently, the sharpness of Li and Zhou’s estimate was shown by Takamura and Wakasa in [37] (see also Zhou and Han [39]). They proved that for the following Cauchy problem of semilinear wave equations: $$\displaystyle\begin{cases}\Box u(t,x)=u^{2},~{}t\geq 0,~{}x\in\mathbb{R}^{4},% \\ t=0:~{}u=\varepsilon f,~{}\partial_{t}u=\varepsilon g,\end{cases}$$ (1.7) the lifespan of classical solutions admits a upper bound: $T_{\varepsilon}\leq\exp{(\frac{c}{\varepsilon^{2}})}$ for some special functions $f,g\in C_{c}^{\infty}(\mathbb{R}^{4})$. In fact, when the spatial dimension $n=4$, the equation in (1.7) is just corresponding to the critical case of the Strauss conjecture, so it is the most difficult case to be handled. For the Strauss conjecture, we refer the reader to Strauss [36] and the survey article Wang and Yu [38]. The pioneering works by F. John and S. Klainerman open the field of lifespan estimate of classical solutions to the Cauchy problem of nonlinear wave equations. In other spatial dimensions, classical references can be found in John [14, 15, 17], John and Klainerman [16], Klainerman [21, 22, 23, 24, 25, 26], Christodoulou [5], Hörmander [11, 13], Lindblad [32], Li and Chen [27], Li and Zhou [28, 29, 30], Alinhac [1, 2, 4] etc. . Especially, Klainerman’s commutative vector field method in [23] offer the basic framework for treating this kind of problem. For wave equations, a natural extension of the Cauchy problem is the initial-boundary value problem in exterior domains, which describes the wave propagation outside a bounded obstacle. Similarly to the Cauchy problem, the wave in exterior domains will propagate to the infinity(if the shape of the obstacle is sufficiently regular, say, for a star-shaped obstacle), But one should note the effect of the boundary condition, which will enhance the difficulty of the corresponding problem. In analogy with the Cauchy problem, we also want to get the lifespan estimate for classical solutions to the initial-boundary value problem in exterior domains. For the problem (1.1) with obstacle in four space dimensions , Du et al. [6] established a lower bound of the lifespan $T_{\varepsilon}\geq\exp({\frac{c}{\varepsilon}}).$ In [35], under the assumption $\partial_{u}^{2}F(0,0,0)=0$, Metcalfe and Sogge proved that (1.1) has a unique global classical solution. Their results extend Hörmander’s results from the Cauchy problem to the obstacle problem. 
In this paper, for the obstacle problem (1.1) with a general nonlinear term $F$, we obtain the lower bound of the lifespan $T_{\varepsilon}\geq\exp{(\frac{c}{\varepsilon^{2}})}$, which extends the result of Li and Zhou [31] from the Cauchy problem to the obstacle problem. The sharpness of this estimate, however, is yet to be proved. In other dimensions, for the obstacle problem, when $n=3$, Du and Zhou [7] showed that the analogue of (1.1) admits a unique classical solution with lifespan $T_{\varepsilon}\geq\frac{c}{\varepsilon^{2}}$ (see also [8]). For the special case where the nonlinear term does not explicitly depend on $u$, we refer the reader to [18, 20, 34] and the references therein. When $n\geq 5$, Metcalfe and Sogge [35] showed that the analogue of (1.1) has a unique global classical solution (see also Du et al. [6]). The case $n=2$ is still open. To prove our result, we will use Klainerman’s commutative vector field method in [23]. For the Cauchy problem, the Lorentz invariance of the wave operator is the key point of this method; however, for the obstacle problem, the Lorentz invariance does not hold. Another difficulty we encounter in the obstacle case is that the homogeneous Dirichlet boundary condition is not preserved when some generalized derivatives act on the solution. To overcome these difficulties, Keel et al. [18] established some weighted space-time estimates (KSS estimates) for first order derivatives of solutions to the linear wave equation. In [34], using only energy methods, Metcalfe and Sogge established KSS estimates for perturbed linear wave equations on an exterior domain. By elliptic regularity estimates, higher order KSS estimates (involving only general derivatives and spatial rotation operators) have also been established. Using these estimates, they proved long time existence for $n\geq 3$ when the nonlinear term does not depend explicitly on the unknown function. In this paper, we will prove our result within the framework of Metcalfe and Sogge [34]. To handle the case where the nonlinear term depends explicitly on the unknown function, we will first prove a new $L^{\infty}_{t}L^{2}_{x}$ estimate for solutions to the Cauchy problem of the linear wave equation in four space dimensions, based on the Morawetz estimate in [10]. After that, starting from the $L^{\infty}_{t}L^{2}_{x}$ estimate, using the original method employed to establish the KSS estimates in [18], and combining it with some pointwise estimates of the fundamental solution of the wave operator in four space dimensions, we give a new weighted space-time $L^{2}_{t,x}$ estimate for the unknown function itself. The key point of these two estimates is that, on the right-hand side of the inequalities, we must take the $L^{2}$ norm with respect to the time variable. By a cut-off argument, we can extend these estimates to the obstacle problem for the linear wave equation in four space dimensions. As for the estimates of the derivatives of the solution, we can use the energy estimate and the KSS estimate in [34].
Since we consider the problem with small initial data, the higher order terms have no essential influence on the discussion of the lifespan of solutions with small amplitude, without loss of generality, we assume that the nonlinear term $F$ can be taken as $$F(u,\partial u,\partial\nabla u)=H(u,\partial u)+\sum^{4}_{\begin{subarray}{c}% \alpha,\beta=0\\ \alpha+\beta\geq 1\end{subarray}}\gamma^{\alpha\beta}(u,\partial u)\partial_{% \alpha}\partial_{\beta}u,$$ (1.8) where $H(u,\partial u)$ is a quadratic form, $\gamma^{\alpha\beta}$ is a linear form of $(u,\partial u)$ and satisfies the symmetry condition: $$\gamma^{\alpha\beta}(u,\partial u)=\gamma^{\beta\alpha}(u,\partial u),~{}% \alpha,\beta=0,1,\cdots,4,\alpha+\beta\geq 1.$$ (1.9) Without loss of generality, we may assume that the obstacle satisfying $$\displaystyle\mathcal{K}\subset\mathbb{B}_{\frac{1}{2}}=\{x\in\mathbb{R}^{4}:|% x|<\frac{1}{2}\}.$$ (1.10) To solve (1.1), the data must be assumed to satisfy the relevant compatibility conditions. Setting $J_{k}u=\{\partial_{x}^{a}u:0\leq|a|\leq k\},$ we know that for a fixed $m$ and a formal $H^{m}$ solution $u$ of (1.1), we can write $\partial_{t}^{k}u(0,\cdot)=\psi_{k}(J_{k}f,J_{k-1}g),0\leq k\leq m,$ in which the compatibility functions $\psi_{k}(0\leq k\leq m)$ depend on the nonlinearity, $J_{k}f$ and $J_{k-1}g$. For $(f,g)\in H^{m}\times H^{m-1}$, the compatibility condition simply requires that $\psi_{k}$ vanish on $\partial\mathcal{K}$ for $0\leq k\leq m-1$. For smooth $(f,g)$, we say that the compatibility condition is satisfied to infinite order if this vanishing condition holds for all $m$. For some further descriptions, see Keel et al. [19]. The main theorem of this paper is the following Theorem 1.1. For the quasilinear initial-boundary problem (1.1), where the obstacle $\mathcal{K}\subset\mathbb{R}^{4}$ is compact, smooth and strictly star-shaped with respect to the origin, and satisfies (1.10), assume that initial data $~{}f,g\in C_{c}^{\infty}(\mathbb{R}^{4}\backslash\mathcal{K})$, satisfies the compatibility conditions to infinite order, and $F$ in (1.1) satisfies hypotheses (1.8) and (1.9). Then for any given parameter $\varepsilon$ small enough, (1.1) admits a unique solution $u\in C^{\infty}([0,T_{\varepsilon})\times\mathbb{R}^{4})$ with $$T_{\varepsilon}\geq\exp({\frac{c}{\varepsilon^{2}}}),$$ (1.11) where $c$ is a positive constant independent of $\varepsilon$. An outline of this paper is as follows. In Section 2, we give some notations. In Section 3, some $L^{\infty}_{t}L^{2}_{x}$ and weighted $L^{2}_{t,x}$ estimates for the wave equation in Minkowski space-time $\mathbb{R}^{1+4}$ will be established. In Section 4, we will give some estimates needed for obstacle problem. And then, the proof of Theorem 1.1 will be presented in Section 5. 2 Some Notations Denote the spatial rotations $$\Omega=(\Omega_{ij};1\leq i<j\leq 4),$$ (2.1) where $$\Omega_{ij}=x_{i}\partial_{j}-x_{j}\partial_{i},1\leq i<j\leq 4,$$ (2.2) and the vector fields $$Z=(\partial,\Omega)=(Z_{1},Z_{2},\cdots,Z_{11}).$$ (2.3) For any given multi-index  $\varsigma=(\varsigma_{1},\cdots,\varsigma_{11}),$ we denote $$Z^{\varsigma}=Z_{1}^{\varsigma_{1}}\cdots Z_{11}^{\varsigma_{11}}.$$ (2.4) As introduced in Li and Chen [27], we say that $f\in L^{p,q}(\mathbb{R}^{4}),$ if $$\displaystyle f(r\omega)r^{\frac{3}{p}}\in L^{p}_{r}(0,\infty;L^{q}_{w}(S^{3})),$$ (2.5) where $r=|x|,\omega=(\omega_{1},\cdots,\omega_{4})\in S^{3},$ $S^{3}$ being the unit sphere in $\mathbb{R}^{4}$. 
For $1\leq p,q\leq+\infty,$ equipped with the norm $$||f||_{L^{p,q}(\mathbb{R}^{4})}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}||f(r% \omega)r^{\frac{3}{p}}||_{L^{p}_{r}(0,\infty;L^{q}_{w}(S^{3}))},$$ (2.6) $L^{p,q}(\mathbb{R}^{4})$ is a Banach space. It is easy to see that, if $p=q$, then $L^{p,q}(\mathbb{R}^{4})$ becomes the usual Lebesgue space $L^{p}(\mathbb{R}^{4})$. 3 $L^{\infty}_{t}L^{2}_{x}$ and Weighted $L^{2}_{t,x}$ Estimates in Minkowski Space-time $\mathbb{R}^{1+4}$ 3.1 $L^{\infty}_{t}L^{2}_{x}$ Estimate We first give a weighted Sobolev inequality which will be used in the proof of the  $L^{\infty}_{t}L_{x}^{2}$ estimate of solutions. Lemma 3.1. If $\frac{1}{2}<s_{0}<2,$ then we have the estimate $$|||x|^{2-s_{0}}f||_{L^{\infty,2}(\mathbb{R}^{4})}\leq C||f||_{{\dot{H}}^{s_{0}% }(\mathbb{R}^{4})}$$ (3.1) and the corresponding dual estimate $$||f||_{{\dot{H}}^{-s_{0}}(\mathbb{R}^{4})}\leq C|||x|^{s_{0}-2}f||_{L^{1,2}(% \mathbb{R}^{4})},$$ (3.2) where $C$ is a positive constant independent of $f$. Proof. By an 1-D Sobolev embedding, the localization technique and the dyadic decomposition, we can get (3.1). For the details, see Li and Zhou [31] Theorem 2.10 or the Appendix of Wang and Yu [38]. ∎ Lemma 3.2. (Morawetz Estimate) Let $v$ be the solution to the following problem: $$\left\{\begin{array}[]{ll}\Box v(t,x)=0,(t,x)\in\mathbb{R}^{+}\times\mathbb{R}% ^{4},\\ t=0:v=0,\partial_{t}v=g(x),x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.3) For any given $T>0$, denoting $S_{T}=[0,T]\times\mathbb{R}^{4}$, we have the following weighted space-time estimate: $$|||x|^{-s}v||_{L^{2}_{t,x}(S_{T})}\leq C||g||_{\dot{H}^{-(\frac{3}{2}-s)}(% \mathbb{R}^{4})},$$ (3.4) where $\frac{1}{2}<s<1$ and $C$ is a positive constant independent of $T$. Proof. See Hidano et al. [10] Lemma 3.1. ∎ Lemma 3.3. (Dual Estimate) Let $v$ be the solution to the following problem: $$\left\{\begin{array}[]{ll}\Box v(t,x)=G(t,x),(t,x)\in\mathbb{R}^{+}\times% \mathbb{R}^{4},\\ t=0:v=0,\partial_{t}v=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.5) Then for any given $T>0,$ we have $$\sup_{0\leq t\leq T}||v(t)||_{\dot{H}^{(\frac{3}{2}-s)}(\mathbb{R}^{4})}\leq C% |||x|^{s}G||_{L^{2}(S_{T})},$$ (3.6) where $\frac{1}{2}<s<1$ and $C$ is a positive constant independent of $T$. Proof. By Duhamel principle and Lemma 3.2, we have $$|||x|^{-s}v||_{L^{2}(S_{T})}\leq C\int_{0}^{T}||G(t)||_{\dot{H}^{-(\frac{3}{2}% -s)}(\mathbb{R}^{4})}dt.$$ (3.7) By duality, $$\sup_{0\leq t\leq T}||v(t)||_{\dot{H}^{\frac{3}{2}-s}(\mathbb{R}^{4})}=\sup\{% \int_{0}^{T}\int_{\mathbb{R}^{4}}v(t,x)P(t,x)dxdt:\int_{0}^{T}||P(t)||_{\dot{H% }^{-(\frac{3}{2}-s)}}dt=1\}.$$ (3.8) Let $w$ satisfy $$\left\{\begin{array}[]{ll}\Box w(t,x)=P(t,x),(t,x)\in\mathbb{R}^{+}\times% \mathbb{R}^{4},\\ t=T:w=0,\partial_{t}w=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.9) Integrating by parts, we have $$\displaystyle\int_{0}^{T}\int_{\mathbb{R}^{4}}v(t,x)P(t,x)dxdt$$ $$\displaystyle=\int_{0}^{T}\int_{\mathbb{R}^{4}}v(t,x)\Box w(t,x)dxdt$$ $$\displaystyle=\int_{0}^{T}\int_{\mathbb{R}^{4}}\Box v(t,x)w(t,x)dxdt$$ $$\displaystyle=\int_{0}^{T}\int_{\mathbb{R}^{4}}G(t,x)w(t,x)dxdt$$ $$\displaystyle\leq C|||x|^{s}G||_{L^{2}(S_{T})}|||x|^{-s}w||_{L^{2}(S_{T})}.$$ (3.10) By (3.7), we get $$|||x|^{-s}w||_{L^{2}(S_{T})}\leq C\int_{0}^{T}||P(t)||_{\dot{H}^{-(\frac{3}{2}% -s)}(\mathbb{R}^{4})}dt\leq C.$$ (3.11) So we finally obtain (3.6). ∎ Lemma 3.4. 
($L^{\infty}_{t}L^{2}_{x}$ Estimate) Let $v$ satisfy $$\left\{\begin{array}[]{ll}\Box v(t,x)=G(t,x),(t,x)\in\mathbb{R}^{+}\times% \mathbb{R}^{4},\\ t=0:~{}v=0,\partial_{t}v=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.12) Then for any given $T>0,$ we have $$\displaystyle\sup_{0\leq t\leq T}||v(t)||_{L^{2}(\mathbb{R}^{4})}\leq C|||x|^{% -\frac{1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))},$$ (3.13) where $C$ is a positive constant independent of $T$. Proof. Let $|D|=\sqrt{-\Delta}$, where $\Delta$ is the Laplacian operator on $\mathbb{R}^{4}$. Acting  $|D|^{-(\frac{3}{2}-s)}$ on both sides of (3.12), and noting Lemma 3.3, we get, $$\displaystyle\sup_{0\leq t\leq T}||v(t)||_{L^{2}(\mathbb{R}^{4})}\leq C|||x|^{% s}|D|^{-(\frac{3}{2}-s)}G||_{L^{2}(S_{T})}.$$ (3.14) Since we have $$\displaystyle|x|^{s}|D|^{-(\frac{3}{2}-s)}G(t,x)$$ $$\displaystyle=C|x|^{s}\int_{\mathbb{R}^{4}}\frac{G(t,y)}{~{}~{}|x-y|^{\frac{5}% {2}+s}}dy$$ $$\displaystyle=C|x|^{s}\int_{|y|\leq\frac{|x|}{4}}\frac{G(t,y)}{~{}~{}|x-y|^{% \frac{5}{2}+s}}dy+C|x|^{s}\int_{|y|\geq\frac{|x|}{4}}\frac{G(t,y)}{~{}~{}|x-y|% ^{\frac{5}{2}+s}}dy,$$ (3.15) then we get the pointwise estimate $$\displaystyle|x|^{s}||D|^{-(\frac{3}{2}-s)}G(t,x)|$$ $$\displaystyle\leq C\int_{\mathbb{R}^{4}}\frac{|G(t,y)|}{~{}~{}|x-y|^{\frac{5}{% 2}}}dy+C\int_{\mathbb{R}^{4}}\frac{|y|^{s}|G(t,y)|}{~{}~{}|x-y|^{\frac{5}{2}+s% }}dy.$$ (3.16) Consequently, $$\displaystyle|||x|^{s}||D|^{-(\frac{3}{2}-s)}G(t)|||_{L^{2}_{x}(\mathbb{R}^{4})}$$ $$\displaystyle\leq C|||G(t)|||_{\dot{H}^{-\frac{3}{2}}(\mathbb{R}^{4})}+C|||x|^% {s}|G(t)|||_{\dot{H}^{-(\frac{3}{2}-s)}(\mathbb{R}^{4})}.$$ (3.17) It follows from (3.2) that $$\displaystyle|||G(t)|||_{\dot{H}^{-\frac{3}{2}}(\mathbb{R}^{4})}+|||x|^{s}|G(t% )|||_{\dot{H}^{-(\frac{3}{2}-s)}(\mathbb{R}^{4})}$$ $$\displaystyle\leq C|||x|^{-\frac{1}{2}}G(t)||_{L^{1,2}(\mathbb{R}^{4})},$$ (3.18) so we get $$\displaystyle|||x|^{s}|D|^{-(\frac{3}{2}-s)}G(t)||_{L_{x}^{2}(\mathbb{R}^{4})}% \leq C|||x|^{-\frac{1}{2}}G(t)||_{L^{1,2}(\mathbb{R}^{4})}.$$ (3.19) Consequently, $$\displaystyle|||x|^{s}|D|^{-(\frac{3}{2}-s)}G||_{L^{2}(S_{T})}\leq C|||x|^{-% \frac{1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.20) By (3.14) and (3.20), we get the conclusion. ∎ 3.2 Weighted $L^{2}_{t,x}$ Estimate Now we will prove the weighted space-time $L^{2}_{t,x}$ estimate based on the $L^{\infty}_{t}L^{2}_{x}$ estimate given in Lemma 3.4. Our proof is inspired by the original proof of KSS inequality in [18]. But in four space dimensions, there is no strong Huygens principle, so we must do some extra pointwise estimates by using the fundamental solution of wave equation. We will follow the argument used in Section 6.6 of Alinhac [3]. Lemma 3.5. (Weighted $L^{2}_{t,x}$ Estimate) Let $v$ satisfy $$\left\{\begin{array}[]{ll}\Box v(t,x)=G(t,x),(t,x)\in\mathbb{R}^{+}\times% \mathbb{R}^{4},\\ t=0:v=0,\partial_{t}v=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.21) Then, for any given $T>0$, we have $$\displaystyle(\log(2+T))^{-1/2}||<x>^{-1/2}v||_{L^{2}_{t,x}(S_{T})}+||<x>^{-3/% 4}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle\leq C|||x|^{-\frac{1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}% ))},$$ (3.22) where $C$ is a positive constant independent of $T$. Proof. Step 1. 
(Localization) First we prove that $$\displaystyle||v||_{L^{2}([0,T];L^{2}(|x|\leq 2))}\leq C|||x|^{-\frac{1}{2}}G|% |_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.23) Denoting $$\displaystyle R_{k}=\{(x,t):k\leq|x|+t<k+1\},~{}k=0,1,2,\cdots,$$ (3.24) let $\chi_{k}$ be the characteristic function of $R_{k}$, and $G_{k}=G\chi_{k}$, we have $G=\sum_{k}G_{k}$. Let $v_{k}$ satisfy $$\left\{\begin{array}[]{ll}\Box v_{k}(t,x)=G_{k}(t,x),(t,x)\in\mathbb{R}^{+}% \times\mathbb{R}^{4},\\ t=0:v_{k}=0,\partial_{t}v_{k}=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.25) Obviously,  $v=\sum_{k}v_{k}$. In order to obtain (3.23), without loss of generality, we assume that $T\geq 10$ and $T$ is an integer. We have $$\displaystyle||v||^{2}_{L^{2}([8,T];L^{2}(|x|\leq 2))}$$ $$\displaystyle=\int_{8}^{T}\int_{|x|\leq 2}|v|^{2}dxdt$$ $$\displaystyle=\sum_{l=8}^{T-1}\int_{l}^{l+1}\int_{|x|\leq 2}|v|^{2}dxdt$$ $$\displaystyle\leq C\sum_{l=8}^{T-1}\int_{l}^{l+1}\int_{|x|\leq 2}|\sum_{|k-l|% \leq 7}v_{k}|^{2}dxdt+C\sum_{l=8}^{T-1}\int_{l}^{l+1}\int_{|x|\leq 2}|\sum_{k>% l+7}v_{k}|^{2}dxdt$$ $$\displaystyle+C\sum_{l=8}^{T-1}\int_{l}^{l+1}\int_{|x|\leq 2}|\sum_{k<l-7}v_{k% }|^{2}dxdt.$$ (3.26) Now we estimate the three terms on the right-hand side of (3.2), respectively. For the first term, it follows from Lemma 3.4 that $$\displaystyle\sum_{l=8}^{T-1}\int_{l}^{l+1}\int_{|x|\leq 2}|\sum_{|k-l|\leq 7}% v_{k}|^{2}dxdt$$ $$\displaystyle\leq C\sum_{l=8}^{T-1}\int_{l}^{l+1}\sum_{|k-l|\leq 7}\int_{|x|% \leq 2}|v_{k}|^{2}dxdt$$ $$\displaystyle\leq C\sum_{l=8}^{T-1}\sum_{|k-l|\leq 7}\sup_{0\leq t\leq T}\int_% {|x|\leq 2}|v_{k}|^{2}dx$$ $$\displaystyle\leq C\sum_{k=1}^{T+6}|||x|^{-\frac{1}{2}}G_{k}||^{2}_{L^{2}([0,T% ];L^{1,2}(\mathbb{R}^{4}))}$$ $$\displaystyle\leq C|||x|^{-\frac{1}{2}}G||^{2}_{L^{2}([0,T];L^{1,2}(\mathbb{R}% ^{4}))}.$$ (3.27) To get the last inequality in (3.2), we have used the Minkowski inequality. For the second term on the right-hand side of (3.2), noting the support of $G_{k}$, by Huygens principle we know that when $|x|\leq 2,l\leq t\leq l+1,k>l+7$, $v_{k}(t,x)=0$. Consequently, $$\displaystyle\sum_{l=8}^{T-1}\int_{l}^{l+1}\int_{|x|\leq 2}|\sum_{k>l+7}v_{k}|% ^{2}dxdt=0.$$ (3.28) Now we deal with the third term on the right-hand side of (3.2). We have $$\displaystyle v_{k}(t,x)=C\int_{R_{k}}\chi_{+}^{-\frac{3}{2}}((t-\tau)^{2}-|x-% y|^{2})G_{k}(\tau,y)dyd\tau,$$ (3.29) where $\chi_{+}^{-\frac{3}{2}}$ is the fundamental solution of wave operator in four space dimensions(see Section 6.2 of [13]). Noting the support of $G_{k}$, we see that when $|x|\leq 2,l\leq t\leq l+1,k<l-7,(\tau,y)\in R_{k}$ , we have $$\displaystyle t-\tau-|x-y|\geq t-\tau-|x|-|y|=t-|x|-(\tau+|y|)\geq C(l-k).$$ (3.30) So by the properties of $\chi_{+}^{-\frac{3}{2}}$, we get $$\displaystyle\chi_{+}^{-\frac{3}{2}}((t-\tau)^{2}-|x-y|^{2})$$ $$\displaystyle\leq C((t-\tau)^{2}-|x-y|^{2})^{-3/2}$$ $$\displaystyle\leq C(t-\tau-|x-y|)^{-3/2}(t-\tau+|x-y|)^{-3/2}$$ $$\displaystyle\leq C(t-|x|-(\tau+|y|))^{-3/2}(t-|x|-(\tau+|y|)+2|y|)^{-1+\delta% }(t-\tau)^{-(1/2+\delta)}$$ $$\displaystyle\leq C(t-|x|-(\tau+|y|))^{-3/2}(t-|x|-(\tau+|y|))^{-1/2+\delta}|y% |^{-1/2}(t-\tau)^{-(1/2+\delta)}$$ $$\displaystyle\leq C(l-k)^{-2+\delta}|y|^{-1/2}(t-\tau)^{-(1/2+\delta)},$$ (3.31) where $\delta$ is a fixed real number and $0<\delta<\frac{1}{8}$. 
Using Hölder inequality, and noting that $t-\tau\geq C(l-k)\geq 7C,2(\frac{1}{2}+\delta)>1$, we get $$\displaystyle|v_{k}(t,x)|$$ $$\displaystyle\leq C(l-k)^{-2+\delta}\int_{R_{k}}(t-\tau)^{-(1/2+\delta)}|y|^{-% 1/2}G_{k}(\tau,y)dyd\tau$$ $$\displaystyle\leq C(l-k)^{-2+\delta}|||x|^{-\frac{1}{2}}G_{k}||_{L^{2}([0,T];L% ^{1,2}(\mathbb{R}^{4}))}.$$ (3.32) It follows from Cauchy-Schwarz inequality and (3.2) that $$\displaystyle|\sum_{k<l-7}v_{k}|^{2}$$ $$\displaystyle\leq\sum_{k<l-7}(l-k)^{-2(\frac{1}{2}+\delta)}\sum_{k<l-7}(l-k)^{% 2(\frac{1}{2}+\delta)}|v_{k}|^{2}$$ $$\displaystyle\leq C\sum_{k<l-7}(l-k)^{1+2\delta}|v_{k}|^{2}$$ $$\displaystyle\leq C\sum_{k<l-7}(l-k)^{-3+4\delta}|||x|^{-\frac{1}{2}}G_{k}||^{% 2}_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.33) We then have $$\displaystyle\sum_{l=8}^{T-1}\int_{l}^{l+1}\int_{|x|\leq 2}|\sum_{k<l-7}v_{k}|% ^{2}dxdt$$ $$\displaystyle\leq C\sum_{l=8}^{T-1}\sum_{k<l-7}(l-k)^{-3+4\delta}|||x|^{-\frac% {1}{2}}G_{k}||^{2}_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}$$ $$\displaystyle\leq C\sum_{k=0}^{T}|||x|^{-\frac{1}{2}}G_{k}||^{2}_{L^{2}([0,T];% L^{1,2}(\mathbb{R}^{4}))}$$ $$\displaystyle\leq C|||x|^{-\frac{1}{2}}G||^{2}_{L^{2}([0,T];L^{1,2}(\mathbb{R}% ^{4}))},$$ (3.34) here, Minkowski inequality is used in the last step of (3.2). By (3.2)–(3.28) and (3.2), we get (3.23). Step 2. (Scaling) By scaling argument, we will pass from the inequality of step 1, i.e., (3.23), to the following inequality in an annulus: $$\displaystyle|||x|^{-\frac{1}{2}}v||_{L^{2}([0,T];L^{2}(R\leq|x|\leq 2R))}\leq C% |||x|^{-\frac{1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))},$$ (3.35) where $R>0$, $C$ is a positive constant independent of $R$. Denoting $$\displaystyle v_{R}(x,t)=v(Rx,Rt),~{}G_{R}(t,x)=R^{2}G(Rx,Rt),$$ (3.36) $v_{R}$ satisfies $$\left\{\begin{array}[]{ll}\Box v_{R}(t,x)=G_{R}(t,x),(t,x)\in\mathbb{R}^{+}% \times\mathbb{R}^{4},\\ t=0:v_{R}=0,\partial_{t}v_{R}=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.37) It follows from (3.23) that $$\displaystyle||v_{R}||_{L^{2}([0,\frac{T}{R}];L^{2}(1\leq|x|\leq 2))}\leq C|||% x|^{-\frac{1}{2}}G_{R}||_{L^{2}([0,\frac{T}{R}];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.38) By simple calculation, we have $$\displaystyle||v_{R}||_{L^{2}([0,\frac{T}{R}];L^{2}(1\leq|x|\leq 2))}=R^{-5/2}% ||v||_{L^{2}([0,{T}];L^{2}(R\leq|x|\leq 2R))},$$ (3.39) $$\displaystyle|||x|^{-\frac{1}{2}}G_{R}||_{L^{2}([0,\frac{T}{R}];L^{1,2}(% \mathbb{R}^{4}))}=R^{-2}|||x|^{-\frac{1}{2}}G||_{L^{2}([0,{T}];L^{1,2}(\mathbb% {R}^{4}))},$$ (3.40) so we get $$\displaystyle|||x|^{-1/2}v||_{L^{2}([0,T];L^{2}(R\leq|x|\leq 2R))}\leq C|||x|^% {-\frac{1}{2}}G||_{L^{2}([0,{T}];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.41) Step 3. (Dyadic decomposition) To get (3.5) , we will prove $$\displaystyle(\log(2+T))^{-1/2}||<x>^{-1/2}v||_{L^{2}_{t,x}(S_{T})}\leq C|||x|% ^{-\frac{1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))},$$ (3.42) and $$\displaystyle||<x>^{-3/4}v||_{L^{2}_{t,x}(S_{T})}\leq C|||x|^{-\frac{1}{2}}G||% _{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))},$$ (3.43) respectively. 
To get (3.42), noting that by Lemma 3.4 we have $$\displaystyle||<x>^{-1/2}v||_{L^{2}([0,T];L^{2}(|x|\geq T))}$$ (3.44) $$\displaystyle\leq C(\log(2+T))^{1/2}\sup_{0\leq t\leq T}||v(t)||_{L^{2}(% \mathbb{R}^{4})}$$ (3.45) $$\displaystyle\leq C(\log(2+T))^{1/2}|||x|^{-\frac{1}{2}}G||_{L^{2}([0,T];L^{1,% 2}(\mathbb{R}^{4}))},$$ (3.46) we need only to prove $$\displaystyle(\log(2+T))^{-1/2}||<x>^{-1/2}v||_{L^{2}([0,T];L^{2}(|x|\leq T))}% \leq C|||x|^{-\frac{1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.47) Taking $N$ such that $2^{N}<T\leq 2^{N+1}$, by (3.23) and (3.35), we get $$\displaystyle\int_{0}^{T}\int_{|x|\leq T}(1+|x|)^{-1}|v|^{2}dxdt$$ $$\displaystyle=\int_{0}^{T}\int_{|x|\leq 1}(1+|x|)^{-1}|v|^{2}dxdt+\sum_{k=0}^{% N}\int_{0}^{T}\int_{2^{k}\leq|x|\leq 2^{k+1}}(1+|x|)^{-1}|v|^{2}dxdt$$ $$\displaystyle\leq C(N+2)|||x|^{-\frac{1}{2}}G||^{2}_{L^{2}([0,T];L^{1,2}(% \mathbb{R}^{4}))}$$ $$\displaystyle\leq C\log(2+T)|||x|^{-\frac{1}{2}}G||^{2}_{L^{2}([0,T];L^{1,2}(% \mathbb{R}^{4}))}.$$ (3.48) This completes the proof of (3.42). Now we prove (3.43). Taking $R=2^{k}(k=0,1,2,\cdots)$ in (3.35), we have $$\displaystyle\int_{0}^{T}\int_{2^{k}\leq|x|\leq 2^{k+1}}(1+|x|)^{-1}|v|^{2}% dxdt\leq C|||x|^{-\frac{1}{2}}G||^{2}_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.49) Multiplying $2^{-\frac{k}{2}}$ on both sides of (3.49), and then summing up with respect to $k$, we get $$\displaystyle||<x>^{-3/4}v||_{L^{2}([0,T];L^{2}(|x|\geq 1))}\leq C||x|^{-\frac% {1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}.$$ (3.50) Combining (3.50) with (3.23), we can get (3.43). ∎ 3.3 Higher Order $L^{\infty}_{t}L^{2}_{x}$ and Weighted $L^{2}_{t,x}$ Estimates By Lemma 3.4 and Lemma 3.5, we have the following Lemma 3.6. Let $v$ satisfy $$\left\{\begin{array}[]{ll}\Box v(t,x)=G(t,x),(t,x)\in\mathbb{R}^{+}\times% \mathbb{R}^{4},\\ t=0:v=0,\partial_{t}v=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.51) Then for any given $T>0$ we have $$\displaystyle\sup_{0\leq t\leq T}||v(t)||_{L^{2}(\mathbb{R}^{4})}+(\log(2+T))^% {-1/2}||<x>^{-1/2}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle+||<x>^{-3/4}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle\leq C|||x|^{-\frac{1}{2}}G||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}% ))},$$ (3.52) where $C$ is a positive constant independent of $T$. Since the wave operator $\square$ commutes with $Z$, it is not hard to get the higher order versions of (3.6) as follows. Lemma 3.7. Let $v$ satisfy $$\left\{\begin{array}[]{ll}\Box v(t,x)=G(t,x),(t,x)\in\mathbb{R}^{+}\times% \mathbb{R}^{4},\\ t=0:v=0,\partial_{t}v=0,x\in\mathbb{R}^{4}.\\ \end{array}\right.$$ (3.53) Then for any given $T>0$ and $N=0,1,2,\cdots$, we have $$\displaystyle\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||Z^{\mu}v(t)||_{L^{2}(% \mathbb{R}^{4})}+\sum_{|\mu|\leq N}(\log(2+T))^{-1/2}||<x>^{-1/2}Z^{\mu}v||_{L% ^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sum_{|\mu|\leq N}||<x>^{-3/4}Z^{\mu}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N}|||x|^{-\frac{1}{2}}Z^{\mu}G||_{L^{2}([0,% T];L^{1,2}(\mathbb{R}^{4}))}$$ (3.54) and $$\displaystyle\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||\partial_{t,x}^{\mu}v(t)|% |_{L^{2}(\mathbb{R}^{4})}+\sum_{|\mu|\leq N}(\log(2+T))^{-1/2}||<x>^{-1/2}% \partial_{t,x}^{\mu}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sum_{|\mu|\leq N}||<x>^{-3/4}\partial_{t,x}^{\mu}v||_{L^{2}_{t,% x}(S_{T})}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N}|||x|^{-\frac{1}{2}}\partial_{t,x}^{\mu}G% ||_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))},$$ (3.55) where $C$ is a positive constant independent of $T$. 
4 Some Estimates Outside of a Star-Shaped Obstacle 4.1 Higher Order $L^{\infty}_{t}L^{2}_{x}$ and Weighted $L^{2}_{t,x}$ Estimates In this section, we will give some estimates outside of a star-shaped obstacle which is needed in the proof of lifespan estimate. By Lemma 3.7 and a cutoff argument, $L^{\infty}_{t}L^{2}_{x}$ and weighted $L^{2}_{t,x}$ estimates for the unknown function itself can be established. Lemma 4.1. Let $v\in C^{\infty}(\mathbb{R}^{+}\times\mathbb{R}^{4}\backslash\mathcal{K})$ satisfy $$\displaystyle\begin{cases}\square v(t,x)=G(t,x),~{}(t,x)\in\mathbb{R}^{+}% \times\mathbb{R}^{4}\backslash\mathcal{K},\\ v(t,x)=0,~{}x\in\partial\mathcal{K},\\ t=0:~{}v=0,~{}\partial_{t}v=0,\end{cases}$$ (4.1) where the obstacle $\mathcal{K}$ is bounded, smooth and strictly star-shaped with respect to the origin. Assume that (1.10) holds. For any given $T>0$, denoting $S_{T}=[0,T]\times\mathbb{R}^{4}\backslash\mathcal{K}$ and $N=1,2\cdots,$ we have $$\displaystyle\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||Z^{\mu}v(t)||_{L^{2}(% \mathbb{R}^{4}\backslash\mathcal{K})}+\sum_{|\mu|\leq N}(\log(2+T))^{-1/2}||<x% >^{-1/2}Z^{\mu}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sum_{|\mu|\leq N}||<x>^{-3/4}Z^{\mu}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle\leq C\sum_{|\alpha|\leq N-1}\sup_{0\leq t\leq T}||\partial_{t,x}% ^{\mu}v^{\prime}(t)||_{L^{2}(|x|\leq 1)}+C\sum_{|\mu|\leq N}||\partial_{t,x}^{% \mu}v^{\prime}||_{L^{2}_{t,x}([0,T]\times\{|x|\leq 1\})}$$ $$\displaystyle+C\sum_{|\mu|\leq N}||<x>^{-\frac{1}{2}}Z^{\mu}G||_{L^{2}([0,T];L% ^{1,2}(|x|>\frac{3}{4}))}$$ (4.2) and $$\displaystyle\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||\partial_{t,x}^{\mu}v(t)|% |_{L^{2}(\mathbb{R}^{4}\backslash\mathcal{K})}+\sum_{|\mu|\leq N}(\log(2+T))^{% -1/2}||<x>^{-1/2}\partial_{t,x}^{\mu}v||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sum_{|\mu|\leq N}||<x>^{-3/4}\partial_{t,x}^{\mu}v||_{L^{2}_{t,% x}(S_{T})}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N-1}\sup_{0\leq t\leq T}||\partial_{t,x}^{% \mu}v^{\prime}(t)||_{L^{2}(|x|\leq 1)}+C\sum_{|\mu|\leq N}||\partial_{t,x}^{% \mu}v^{\prime}||_{L^{2}_{t,x}([0,T]\times\{|x|\leq 1\})}$$ $$\displaystyle+C\sum_{|\mu|\leq N}||<x>^{-\frac{1}{2}}\partial_{t,x}^{\mu}G||_{% L^{2}([0,T];L^{1,2}(|x|>\frac{3}{4}))},$$ (4.3) where $C$ is a positive constant independent of $T$. Proof. 
Noting that $v$ satisfies the homogeneous Dirichlet boundary condition, by Poincaré inequality we have $$\displaystyle\sup_{0\leq t\leq T}||v(t)||_{L^{2}(|x|\leq 1)}\leq C\sup_{0\leq t% \leq T}||v^{\prime}(t)||_{L^{2}(|x|\leq 1)}.$$ (4.4) Then we get $$\displaystyle\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||Z^{\mu}v(t)||_{L^{2}(|x|% \leq 1)}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||\partial_{t,x}^{\mu% }v(t)||_{L^{2}(|x|\leq 1)}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N-1}\sup_{0\leq t\leq T}||\partial_{t,x}^{% \mu}v^{\prime}(t)||_{L^{2}(|x|\leq 1)}+C\sup_{0\leq t\leq T}||v(t)||_{L^{2}(|x% |\leq 1)}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N-1}\sup_{0\leq t\leq T}||\partial_{t,x}^{% \mu}v^{\prime}(t)||_{L^{2}(|x|\leq 1)}+C\sup_{0\leq t\leq T}||v^{\prime}(t)||_% {L^{2}(|x|\leq 1)}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N-1}\sup_{0\leq t\leq T}||\partial_{t,x}^{% \mu}v^{\prime}(t)||_{L^{2}(|x|\leq 1)}.$$ Similarly, we have $$\displaystyle\sum_{|\mu|\leq N}||Z^{\mu}v||_{L^{2}_{t,x}([0,T]\times\{|x|\leq 1% \})}\leq C\sum_{|\mu|\leq N-1}||\partial_{t,x}^{\mu}v^{\prime}||_{L^{2}_{t,x}(% [0,T]\times\{|x|\leq 1\})}.$$ (4.6) So $$\displaystyle\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||Z^{\mu}v(t)||_{L^{2}(|x|% \leq 1)}+\sum_{|\mu|\leq N}(\log(2+T))^{-1/2}||<x>^{-1/2}Z^{\mu}v||_{L^{2}([0,% T]\times\{|x|\leq 1\})}$$ $$\displaystyle+\sum_{|\mu|\leq N}||<x>^{-3/4}Z^{\mu}v||_{L^{2}([0,T]\times\{|x|% \leq 1\})}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||Z^{\mu}v(t)||_{L^{2% }(|x|\leq 1)}+C\sum_{|\mu|\leq N}||Z^{\mu}v||_{L^{2}_{t,x}([0,T]\times\{|x|% \leq 1\})}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N-1}\sup_{0\leq t\leq T}||\partial_{t,x}^{% \mu}v^{\prime}(t)||_{L^{2}(|x|\leq 1)}+C\sum_{|\mu|\leq N-1}||\partial_{t,x}^{% \mu}v^{\prime}||_{L^{2}_{t,x}([0,T]\times\{|x|\leq 1\})}.$$ (4.7) Taking a smooth cutoff function $\rho$ such that $$\displaystyle\rho(x)=\begin{cases}1,~{}|x|\geq 1,\\ 0,~{}|x|\leq\frac{3}{4},\end{cases}$$ (4.8) and denoting $\phi=\rho v$, $\phi$ satisfies the following wave equation in the whole space: $$\displaystyle\square\phi=\rho G-2\nabla\rho\cdot\nabla v-\Delta\rho v:=% \widetilde{G}.$$ (4.9) It follows from (3.7) and Poincaré inequality that $$\displaystyle\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||Z^{\mu}v(t)||_{L^{2}(|x|% \geq 1)}+\sum_{|\mu|\leq N}(\log(2+T))^{-1/2}||<x>^{-1/2}Z^{\mu}v||_{L^{2}_{t,% x}([0,T]\times{|x|\geq 1})}$$ $$\displaystyle+\sum_{|\mu|\leq N}||<x>^{-3/4}Z^{\mu}v||_{L^{2}_{t,x}([0,T]% \times{|x|\geq 1})}$$ $$\displaystyle\leq\sum_{|\mu|\leq N}\sup_{0\leq t\leq T}||Z^{\mu}\phi(t)||_{L^{% 2}(\mathbb{R}^{4})}+\sum_{|\mu|\leq N}(\log(2+T))^{-1/2}||<x>^{-1/2}Z^{\mu}% \phi||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sum_{|\mu|\leq N}||<x>^{-3/4}Z^{\mu}\phi||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N}|||x|^{-\frac{1}{2}}Z^{\mu}\widetilde{G}|% |_{L^{2}([0,T];L^{1,2}(\mathbb{R}^{4}))}$$ $$\displaystyle\leq C\sum_{|\mu|\leq N}||\partial_{t,x}^{\mu}v^{\prime}||_{L^{2}% _{t,x}([0,T]\times\{|x|\leq 1\})}+C\sum_{|\mu|\leq N}||<x>^{-\frac{1}{2}}Z^{% \mu}G||_{L^{2}([0,T];L^{1,2}(|x|>\frac{3}{4}))}.$$ (4.10) By (4.1) and (4.1), we get (4.1). Similarly, we can get (4.1). ∎ We point out that two localized linear terms $$\displaystyle\sum_{|\mu|\leq N-1}\sup_{0\leq t\leq T}||\partial_{t,x}^{\mu}v^{% \prime}(t)||_{L^{2}(|x|\leq 1)}~{}\text{and}~{}\sum_{|\mu|\leq N}||\partial_{t% ,x}^{\mu}v^{\prime}||_{L^{2}_{t,x}([0,T]\times\{|x|\leq 1\})}$$ (4.11) appear on the right-hand side of  (4.1) and (4.1). 
These terms can be estimated by energy estimate and KSS inequalities, which have been obtained by Metalfe and Sogge in [34]. Now we list these estimates without proofs in the next section(for details, see Lemma 3.2, Lemma 3.3, Lemma 5.2 and Lemma 5.3 in [34]). 4.2 Energy Estimate and KSS Estimate Lemma 4.2. Let $w$ satisfy $$\left\{\begin{array}[]{llll}\Box_{h}w(t,x)=Q(t,x),~{}(x,t)\in\mathbb{R}^{+}% \times\mathbb{R}^{4}\backslash\mathcal{K},\\ w|_{\partial\mathcal{K}}=0,\\ \end{array}\right.$$ (4.12) where $$\displaystyle\Box_{h}w=(\partial_{t}^{2}-\Delta)w+\sum_{\alpha,\beta=0}^{4}h^{% \alpha\beta}(t,x)\partial_{\alpha}\partial_{\beta}w.$$ (4.13) Assume that, without loss of generality, $h^{\alpha\beta}$ satisfy the symmetry conditions $$h^{\alpha\beta}=h^{\beta\alpha},$$ (4.14) and the smallness condition $$|h|\ll 1.$$ (4.15) Here we denote $$|h|=\sum_{\alpha,\beta=0}^{4}|h^{\alpha\beta}|,$$ (4.16) $$|\partial h|=\sum_{\alpha,\beta,\gamma=0}^{4}|\partial_{\gamma}h^{\alpha\beta}|.$$ (4.17) Then we have $$\displaystyle\sup_{0\leq t\leq T}\sum_{\begin{subarray}{c}|\mu|\leq N\\ |a|=0,1\end{subarray}}||\partial^{\mu}\partial^{a}w^{\prime}(t)||^{2}_{L^{2}(% \mathbb{R}^{4}\backslash\mathcal{K})}+\sum_{\begin{subarray}{c}|\mu|\leq N\\ |a|=0,1\end{subarray}}||<x>^{-\frac{3}{4}}\partial^{\mu}\partial^{a}w^{\prime}% ||^{2}_{L^{2}(S_{T})}$$ $$\displaystyle\leq C\sum_{\begin{subarray}{c}|\mu|\leq N\\ |a|=0,1\end{subarray}}||\partial^{\mu}\partial^{a}w^{\prime}(0)||^{2}_{L^{2}(% \mathbb{R}^{4}\backslash\mathcal{K})}+C\sum_{\begin{subarray}{c}|{\mu}|,|\nu|% \leq N\\ |a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|\partial^{\mu}\partial^{a}w^{\prime}|+\frac{|\partial^{\mu}\partial^{a}w|}% {r})|\partial^{\nu}\partial^{b}Q|dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|{\mu}|,|\nu|\leq N\\ |a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|\partial h|+\frac{|h|}{r})|\partial^{\mu}\partial^{a}w^{\prime}|(|\partial% ^{\nu}\partial^{b}w^{\prime}|+\frac{|\partial^{\nu}\partial^{b}w|}{r})dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|{\mu}|,|\nu|\\ |a|,|b|=0,1\end{subarray}}\sum_{\alpha,\beta=0}^{4}\int_{0}^{T}\int_{\mathbb{R% }^{4}\backslash\mathcal{K}}(|\partial^{\mu}\partial^{a}w^{\prime}|+\frac{|% \partial^{\mu}\partial^{a}w|}{r})|[h^{\alpha\beta}\partial_{\alpha\beta},% \partial^{\nu}\partial^{b}]w|dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|\leq N-1\\ |a|=0,1\end{subarray}}||\partial^{\mu}\partial^{a}\square w||^{2}_{L^{2}_{t,x}% (S_{T})}+C\sum_{\begin{subarray}{c}|\mu|\leq N-1\\ |a|=0,1\end{subarray}}||\partial^{\mu}\partial^{a}\square w||^{2}_{L^{\infty}(% [0,T];L^{2}(\mathbb{R}^{4}\backslash\mathcal{K}))}$$ (4.18) and $$\displaystyle\sup_{0\leq t\leq T}\sum_{\begin{subarray}{c}|\mu|\leq N\\ |a|=0,1\end{subarray}}||Z^{\mu}\partial^{a}w^{\prime}(t)||^{2}_{L^{2}(\mathbb{% R}^{4}\backslash\mathcal{K})}+\sum_{\begin{subarray}{c}|\mu|\leq N\\ |a|=0,1\end{subarray}}||<x>^{-\frac{3}{4}}Z^{\mu}\partial^{a}w^{\prime}||^{2}_% {L^{2}(S_{T})}$$ $$\displaystyle\leq C\sum_{\begin{subarray}{c}|\mu|\leq N\\ |a|=0,1\end{subarray}}||Z^{\mu}\partial^{a}w^{\prime}(0)||^{2}_{L^{2}(\mathbb{% R}^{4}\backslash\mathcal{K})}+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq N\\ |a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|Z^{\mu}\partial^{a}w^{\prime}|+\frac{|Z^{\mu}\partial^{a}w|}{r})|Z^{\nu}% \partial^{b}Q|dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq N\\ 
|a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|\partial h|+\frac{|h|}{r})|Z^{\mu}\partial^{a}w^{\prime}|(|Z^{\nu}\partial% ^{b}w^{\prime}|+\frac{|Z^{\nu}\partial^{b}w|}{r})dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq N\\ |a|,|b|=0,1\end{subarray}}\sum_{\alpha,\beta=0}^{4}\int_{0}^{T}\int_{\mathbb{R% }^{4}\backslash\mathcal{K}}(|Z^{\mu}\partial^{a}w^{\prime}|+\frac{|Z^{\mu}% \partial^{a}w|}{r})|[h^{\alpha\beta}\partial_{\alpha\beta},Z^{\nu}\partial^{b}% ]w|dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|\leq N+1\\ |a|=0,1\end{subarray}}||\partial^{\mu}_{x}\partial^{a}w^{\prime}||^{2}_{L_{t,x% }^{2}([0,T]\times\{|x|\leq 1\})}+C\sum_{\begin{subarray}{c}|\mu|\leq N+1\\ |a|=0,1\end{subarray}}||\partial^{\mu}_{x}\partial^{a}w^{\prime}||^{2}_{L^{% \infty}([0,T];L^{2}(\{|x|\leq 1\}))},$$ (4.19) where $[~{},~{}]$ stands for the Poisson’s bracket, and $C$ is a positive constant independent of $T$. Remark 4.1. The energy estimate and KSS estimates [34] involve only higher order estimates of the first order derivatives of the unknown function. Higher order estimates of the second order derivatives of the unknown function can be obtained by the same method. 4.3 Decay Estimate Lemma 4.3. Assume that $f\in C^{\infty}(\mathbb{R}^{4}\backslash\mathcal{K})$ and $f$ vanishes for large $x$. Then for all $x\in\mathbb{R}^{4}\backslash\mathcal{K}$ we have $$\displaystyle<r>^{\frac{3}{2}}|f(x)|\leq C\sum_{|a|\leq 3}||Z^{a}f||_{L^{2}(% \mathbb{R}^{4}\backslash\mathcal{K})},$$ (4.20) where $r=|x|,<r>=(1+r^{2})^{\frac{1}{2}}$, and $C$ is a positive constant independent of $f$ and $x$. Proof. When $0<r\leq 1,$ $<r>\sim 1$, by usual Sobolev embedding $$\displaystyle H^{3}(\mathbb{B}_{1})\hookrightarrow L^{\infty}(\mathbb{B}_{1}),$$ (4.21) we can get (4.20). When $r\geq 1$ , $<r>\sim|r|$, by the Sobolev embedding on $S^{3}$: $$\displaystyle H^{2}(S^{3})\hookrightarrow L^{\infty}(S^{3}),$$ (4.22) we get that $$\displaystyle r^{3}|f(x)|^{2}$$ $$\displaystyle\leq Cr^{3}\sum_{|\alpha|\leq 3}\int_{S^{3}}|\Omega^{\alpha}f(rw)% |^{2}dw$$ $$\displaystyle\leq Cr^{3}\sum_{|\alpha|\leq 3}\int_{S^{3}}\int_{r}^{\infty}|% \partial_{\rho}\Omega^{\alpha}f(\rho w)||\Omega^{\alpha}f(\rho w)|dwd\rho$$ $$\displaystyle\leq C\sum_{|\alpha|\leq 3}||\partial_{r}\Omega^{\alpha}f\Omega^{% \alpha}f||_{L^{1}(\mathbb{R}^{4}\backslash\mathcal{K})}$$ $$\displaystyle\leq C\sum_{|a|\leq 3}||Z^{a}f||^{2}_{L^{2}(\mathbb{R}^{4}% \backslash\mathcal{K})}.$$ (4.23) Thus when $r\geq 1$, (4.20) still holds. ∎ 5 Lifespan Estimate of Classical Solutions to Problem (1.1) In this section, we will prove Theorem 1.1 by a bootstrap argument. Let $u$ satisfy (1.1). 
For any $T>0$, denote $$\displaystyle M(T)$$ $$\displaystyle=\sup_{0\leq t\leq T}\sum_{|\mu|\leq 50}||\partial^{\mu}u(t)||_{L% ^{2}(\mathbb{R}^{4}\backslash\mathcal{K})}+(\log(2+T))^{-1/2}\sum_{|\mu|\leq 5% 0}||<x>^{-1/2}\partial^{\mu}u||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sum_{|\mu|\leq 50}||<x>^{-3/4}\partial^{\mu}u||_{L^{2}_{t,x}(S_% {T})}+\sup_{0\leq t\leq T}\sum_{|\mu|\leq 50}||\partial^{\mu}\partial u(t)||_{% L^{2}(\mathbb{R}^{4}\backslash\mathcal{K})}$$ $$\displaystyle+\sup_{0\leq t\leq T}\sum_{|\mu|\leq 50}||\partial^{\mu}\partial^% {2}u(t)||_{L^{2}(\mathbb{R}^{4}\backslash\mathcal{K})}$$ $$\displaystyle+\sum_{|\mu|\leq 50}||<x>^{-3/4}\partial^{\mu}\partial u||_{L^{2}% _{t,x}(S_{T})}+\sum_{|\mu|\leq 50}||<x>^{-3/4}\partial^{\mu}\partial^{2}u||_{L% ^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sup_{0\leq t\leq T}\sum_{|\mu|\leq 49}||Z^{\mu}u(t)||_{L^{2}(% \mathbb{R}^{4}\backslash\mathcal{K})}+(\log(2+T))^{-1/2}\sum_{|\mu|\leq 49}||<% x>^{-1/2}Z^{\mu}u||_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle+\sum_{|\mu|\leq 49}||<x>^{-3/4}Z^{\mu}u||_{L^{2}_{t,x}(S_{T})}+% \sup_{0\leq t\leq T}\sum_{|\mu|\leq 49}||Z^{\mu}\partial u(t)||_{L^{2}(\mathbb% {R}^{4}\backslash\mathcal{K})}$$ $$\displaystyle+\sup_{0\leq t\leq T}\sum_{|\mu|\leq 49}||Z^{\mu}\partial^{2}u(t)% ||_{L^{2}(\mathbb{R}^{4}\backslash\mathcal{K})}$$ $$\displaystyle+\sum_{|\mu|\leq 49}||<x>^{-3/4}Z^{\mu}\partial u||_{L^{2}_{t,x}(% S_{T})}+\sum_{|\mu|\leq 49}||<x>^{-3/4}Z^{\mu}\partial^{2}u||_{L^{2}_{t,x}(S_{% T})}.$$ (5.1) Assume $$\displaystyle M(0)\leq C_{0}\varepsilon.$$ (5.2) We will prove that if $\varepsilon>0$ is small enough, then for all $T\leq\exp{(\frac{c}{\varepsilon^{2}})}$ we have $$\displaystyle M(T)\leq 2A\varepsilon.$$ (5.3) Here $A$ and $c$ are positive constants independent of $\varepsilon$, to be determined later. 
Assume that $$\displaystyle M(T)\leq 4A\varepsilon,$$ (5.4) it follows from Lemma 4.1 and Lemma 4.2 that $$\displaystyle M^{2}(T)$$ $$\displaystyle\leq C_{1}^{2}\varepsilon^{2}+C\sum_{|\mu|\leq 49}||<x>^{-\frac{1% }{2}}Z^{\mu}F||^{2}_{L^{2}([0,T];L^{1,2}(|x|>\frac{3}{4}))}$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq 49\\ |a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|Z^{\mu}\partial^{a}u^{\prime}|+\frac{|Z^{\mu}\partial^{a}u|}{r})|Z^{\nu}% \partial^{b}H|dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq 49\\ |a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|\partial\gamma|+\frac{|\gamma|}{r})|Z^{\mu}\partial^{a}u^{\prime}|(|Z^{\nu% }\partial^{b}u^{\prime}|+\frac{|Z^{\nu}\partial^{b}u|}{r})dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq 49\\ |a|,|b|=0,1\end{subarray}}\sum_{\alpha,\beta=0}^{4}\int_{0}^{T}\int_{\mathbb{R% }^{4}\backslash\mathcal{K}}(|Z^{\mu}\partial^{a}u^{\prime}|+\frac{|Z^{\mu}% \partial^{a}u|}{r})|[\gamma^{\alpha\beta}\partial_{\alpha\beta},Z^{\nu}% \partial^{b}]u|dxdt$$ $$\displaystyle+C\sum_{|\mu|\leq 50}||<x>^{-\frac{1}{2}}\partial^{\mu}F||^{2}_{L% ^{2}([0,T];L^{1,2}(|x|>\frac{3}{4}))}$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq 50\\ |a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|\partial^{\mu}\partial^{a}u^{\prime}|+\frac{|\partial^{\mu}\partial^{a}u|}% {r})|\partial^{\nu}\partial^{b}H|dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq 50\\ |a|,|b|=0,1\end{subarray}}\int_{0}^{T}\int_{\mathbb{R}^{4}\backslash\mathcal{K% }}(|\partial\gamma|+\frac{|\gamma|}{r})|\partial^{\mu}\partial^{a}u^{\prime}|(% |\partial^{\nu}\partial^{b}u^{\prime}|+\frac{|\partial^{\nu}\partial^{b}u|}{r}% )dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|,|\nu|\leq 50\\ |a|,|b|=0,1\end{subarray}}\sum_{\alpha,\beta=0}^{4}\int_{0}^{T}\int_{\mathbb{R% }^{4}\backslash\mathcal{K}}(|\partial^{\mu}\partial^{a}u^{\prime}|+\frac{|% \partial^{\mu}\partial^{a}u|}{r})|[\gamma^{\alpha\beta}\partial_{\alpha\beta},% \partial^{\nu}\partial^{b}]u|dxdt$$ $$\displaystyle+C\sum_{\begin{subarray}{c}|\mu|\leq 49\\ |a|=0,1\end{subarray}}||\partial^{\mu}\partial^{a}F||^{2}_{L^{\infty}([0,T];L^% {2}(\mathbb{R}^{4}\backslash\mathcal{K}))}+C\sum_{\begin{subarray}{c}|\mu|\leq 4% 9\\ |a|=0,1\end{subarray}}||\partial^{\mu}\partial^{a}F||^{2}_{L^{2}_{t,x}(S_{T})}$$ $$\displaystyle:=C_{1}^{2}\varepsilon^{2}+I+II+\cdots X.$$ (5.5) Now we estimate all terms on the right-hand side of (5), respectively. 
For $I$, by Hölder inequality we have $$\displaystyle I$$ $$\displaystyle\leq C\sum_{|\mu|\leq 27}||<x>^{-\frac{1}{2}}Z^{\mu}u||^{2}_{L^{2% }([0,T];L^{2,\infty}(|x|\geq\frac{3}{4}))}\sum_{|\mu|\leq 50}\sum_{|b|\leq 2}|% |Z^{\mu}\partial^{b}u||^{2}_{L^{\infty}([0,T];L^{2}(|x|\geq\frac{3}{4})}$$ $$\displaystyle\leq CA^{2}\varepsilon^{2}\sum_{|\mu|\leq 27}||<x>^{-\frac{1}{2}}% Z^{\mu}u||^{2}_{L^{2}([0,T];L^{2,\infty}(|x|\geq\frac{3}{4}))}.$$ (5.6) It follows from the Sobolev embedding on $S^{3}$: $$\displaystyle H^{2}(S^{3})\hookrightarrow L^{\infty}(S^{3})$$ (5.7) that $$\displaystyle\sum_{|\mu|\leq 27}||<x>^{-\frac{1}{2}}Z^{\mu}u||^{2}_{L^{2}([0,T% ];L^{2,\infty}(|x|\geq\frac{3}{4}))}$$ $$\displaystyle\leq\sum_{|\mu|\leq 29}||<x>^{-\frac{1}{2}}Z^{\mu}u||^{2}_{L^{2}(% [0,T];L^{2}(|x|\geq\frac{3}{4}))}$$ $$\displaystyle\leq C(\log(2+T))A^{2}\varepsilon^{2}.$$ (5.8) Thus, we get $$\displaystyle I\leq C(\log(2+T))A^{4}\varepsilon^{4}.$$ (5.9) Similarly, we have $$\displaystyle V\leq C(\log(2+T))A^{4}\varepsilon^{4}.$$ (5.10) For $II$, noting that $0\in\mathcal{K}$ and then $1/r$ is bounded on $\mathbb{R}^{4}\backslash\mathcal{K}$, by Hölder inequality and Lemma 4.3 we have $$\displaystyle II\leq$$ $$\displaystyle C\sum_{|\mu|\leq 49,|b|\leq 2}||<x>^{-\frac{3}{4}}Z^{\mu}% \partial^{b}u||^{2}_{L^{2}_{t,x}(S_{T})}\sum_{|\mu|\leq 27}||<x>^{\frac{3}{2}}% Z^{\mu}u||_{L^{\infty}([0,T];L^{\infty}(\mathbb{R}^{4}\backslash\mathcal{K}))}$$ $$\displaystyle\leq C\sum_{|\mu|\leq 49,|b|\leq 2}||<x>^{-\frac{3}{4}}Z^{\mu}% \partial^{b}u||^{2}_{L^{2}_{t,x}(S_{T})}\sum_{|\mu|\leq 30}||Z^{\mu}u||_{L^{% \infty}([0,T];L^{2}(\mathbb{R}^{4}\backslash\mathcal{K}))}$$ $$\displaystyle\leq CA^{3}\varepsilon^{3}.$$ (5.11) Similarly, $$\displaystyle III,VI,VII\leq CA^{3}\varepsilon^{3}.$$ (5.12) Noting the fact that $[\partial,Z]$ belongs to the span of $\{\partial\}$, for all $|\nu|\leq 49,|b|\leq 1,0\leq\alpha,\beta\leq 4$, we have $$\displaystyle|[\gamma^{\alpha\beta}\partial_{\alpha\beta},Z^{\nu}\partial^{b}]% u|\leq\sum_{|\nu|\leq 49,|b|\leq 2}|Z^{\nu}\partial^{b}u|\sum_{|\nu|\leq 27}|Z% ^{\nu}u|.$$ (5.13) By the same method used to deal with $II$, we can obtain $$\displaystyle IV\leq CA^{3}\varepsilon^{3}.$$ (5.14) Similarly, $$\displaystyle VIII\leq CA^{3}\varepsilon^{3}.$$ (5.15) For the last two terms, it follows from Hölder inequality and Lemma 4.3 that $$\displaystyle IX$$ $$\displaystyle\leq C\sum_{|\mu|\leq 50,|b|\leq 2}||\partial^{\mu}\partial^{b}u|% |^{2}_{L^{\infty}([0,T];L^{2}(\mathbb{R}^{4}\backslash\mathcal{K}))}\sum_{|\mu% |\leq 27}||\partial^{\mu}u||^{2}_{L^{\infty}([0,T];L^{\infty}(\mathbb{R}^{4}% \backslash\mathcal{K}))}$$ $$\displaystyle\leq CA^{2}\varepsilon^{2}\sum_{|\mu|\leq 30}||Z^{\mu}u||^{2}_{L^% {\infty}([0,T];L^{2}(\mathbb{R}^{4}\backslash\mathcal{K}))}$$ $$\displaystyle\leq CA^{4}\varepsilon^{4}$$ (5.16) and $$\displaystyle X$$ $$\displaystyle\leq C\sum_{|\mu|\leq 50,|b|\leq 2}||<r>^{-\frac{3}{4}}\partial^{% \mu}\partial^{b}u||^{2}_{L^{2}([0,T];L^{2}(\mathbb{R}^{4}\backslash\mathcal{K}% ))}\sum_{|\mu|\leq 27}||<r>^{\frac{3}{4}}\partial^{\mu}u||^{2}_{L^{\infty}([0,% T];L^{\infty}(\mathbb{R}^{4}\backslash\mathcal{K}))}$$ $$\displaystyle\leq C\sum_{|\mu|\leq 50,|b|\leq 2}||<r>^{-\frac{3}{4}}\partial^{% \mu}\partial^{b}u||^{2}_{L^{2}([0,T];L^{2}(\mathbb{R}^{4}\backslash\mathcal{K}% ))}\sum_{|\mu|\leq 30}||Z^{\mu}u||^{2}_{L^{\infty}([0,T];L^{2}(\mathbb{R}^{4}% \backslash\mathcal{K}))}$$ $$\displaystyle\leq CA^{4}\varepsilon^{4}.$$ (5.17) Combining all the previous estimates, we get 
$$\displaystyle M(T)\leq C_{1}\varepsilon+C(\log(2+T))^{1/2}A^{2}\varepsilon^{2}+CA^{3/2}\varepsilon^{3/2}+CA^{2}\varepsilon^{2}.$$ (5.18) Taking $A=2\max\{C_{0},C_{1}\}$, we see that if $$\displaystyle C(\log(2+T))^{1/2}A\varepsilon,~{}CA^{1/2}\varepsilon^{1/2},~{}CA\varepsilon\leq\frac{1}{2},$$ (5.19) then $$\displaystyle M(T)\leq 2A\varepsilon.$$ (5.20) From the above argument, we know that if the parameter $\varepsilon>0$ is small enough, then for all $T\leq\exp(\frac{c}{\varepsilon^{2}})$, we have $$\displaystyle M(T)\leq 2A\varepsilon.$$ (5.21) Consequently, we get the lifespan estimate of classical solutions to problem (1.1): $$\displaystyle T_{\varepsilon}\geq\exp(\frac{c}{\varepsilon^{2}}),$$ (5.22) where $c$ is a positive constant independent of $\varepsilon$. Acknowledgements The authors would like to express their sincere gratitude to Professor Ta-Tsien Li for his helpful suggestions and encouragement. References [1] S. Alinhac, The null condition for quasilinear wave equations in two space dimensions I, Invent. Math. 145 (2001), 597–618. [2] S. Alinhac, The null condition for quasilinear wave equations in two space dimensions II, Amer. J. Math. 123 (2001), 1071–1101. [3] S. Alinhac, Hyperbolic Partial Differential Equations. Universitext, Springer, 2009. [4] S. Alinhac, Geometric Analysis of Hyperbolic Differential Equations: An Introduction. London Mathematical Society Lecture Note Series 374, Cambridge University Press, 2010. [5] D. Christodoulou, Global solutions of nonlinear hyperbolic equations for small initial data, Comm. Pure Appl. Math. 39 (1986), 267–282. [6] Y. Du, J. Metcalfe, C. D. Sogge, and Y. Zhou, Concerning the Strauss conjecture and almost global existence for nonlinear Dirichlet-wave equations in 4-dimensions, Comm. Partial Differential Equations 33 (2008), 1487–1506. [7] Y. Du and Y. Zhou, The lifespan for nonlinear wave equation outside of star-shaped obstacle in three space dimensions, Comm. Partial Differential Equations 33 (2008), 1455–1486. [8] J. Helms and J. Metcalfe, The lifespan for 3-dimensional quasilinear wave equations in exterior domains, arXiv:1204.4689v1. [9] K. Hidano, An elementary proof of global or almost global existence for quasilinear wave equations, Tohoku Math. J. 56 (2004), 271–287. [10] K. Hidano, J. Metcalfe, H. Smith, C. D. Sogge and Y. Zhou, On abstract Strichartz estimates and the Strauss conjecture for nontrapping obstacles, Trans. Amer. Math. Soc. 362 (2010), 2789–2809. [11] L. Hörmander, The lifespan of classical solutions of nonlinear hyperbolic equations, Institut Mittag-Leffler, Report No. 5, 1988, 211–234. [12] L. Hörmander, On the fully nonlinear Cauchy problem with small data II. Microlocal Analysis and Nonlinear Waves, Vol. 30, IMA Volumes in Mathematics and its Applications, Springer-Verlag, Berlin, 1991, 51–81. [13] L. Hörmander, Lectures on Nonlinear Hyperbolic Differential Equations. Springer-Verlag, Berlin, 1997. [14] F. John, Blow-up of solutions of nonlinear wave equations in three space dimensions. Manuscripta Math. 28 (1979), 235–268. [15] F. John, Blow-up for quasilinear wave equations in three space dimensions. Comm. Pure and Appl. Math. 34 (1981), 29–51. [16] F. John and S. Klainerman, Almost global existence to nonlinear wave equations in three space dimensions, Comm. Pure Appl. Math. 37 (1984), 443–455. [17] F. John, Nonlinear Wave Equations, Formation of Singularities. University Lecture Series, Amer. Math. Soc., Providence, RI, 1990. [18] M. Keel, H. F. Smith and C. D. 
Sogge, Almost global existence for some semilinear wave equations, J. Anal. Math. 87 (2002), 265–279. [19] M. Keel, H. F. Smith and C. D. Sogge, Global existence for a quasilinear wave equation outside of star-shaped domains, J. Funct. Anal. 189 (2002), 155–226. [20] M. Keel, H. Smith and C. D. Sogge, Almost global existence for quasilinear wave equations in three space dimensions, J. Am. Math. Soc. 17 (2004), 109–153. [21] S. Klainerman, Global existence for nonlinear wave equations, Comm. Pure and Appl. Math. 33 (1980), 43–101. [22] S. Klainerman, On “almost global” solutions to quasilinear wave equations in three space dimensions, Comm. Pure and Appl. Math. 36 (1983), 325–344. [23] S. Klainerman, Uniform decay estimates and the Lorentz invariance of the classical wave equation, Comm. Pure and Appl. Math. 38 (1985), 321–332. [24] S. Klainerman, The null condition and global existence to nonlinear wave equations, Nonlinear Systems of Partial Differential Equations in Applied Mathematics, Part 1 (Santa Fe, N.M., 1984), Lectures in Appl. Math., Amer. Math. Soc., 23 (1986), 293–326. [25] S. Klainerman, Remarks on the global Sobolev inequalities in the Minkowski space $\mathbb{R}^{n+1}$, Comm. Pure Appl. Math. 40 (1987), 111–117. [26] S. Klainerman and T. C. Sideris, On almost global existence for nonrelativistic wave equations in 3D, Comm. Pure Appl. Math. 49 (1996), 307–321. [27] T. T. Li and Y. M. Chen, Global Classical Solutions for Nonlinear Evolution Equations, Longman Scientific & Technical, UK, 1992. [28] T. T. Li and Y. Zhou, Life-span of classical solutions to nonlinear wave equations in two space dimensions, J. Math. Pures Appl. 73 (1994), 223–249. [29] T. T. Li and Y. Zhou, Life-span of classical solutions to nonlinear wave equations in two space dimensions II, J. Partial Diff. Eqs. 6 (1993), 17–38. [30] T. T. Li and Y. Zhou, Nonlinear stability for two space dimensional wave equations with higher order perturbations, Nonlinear World 1 (1994), 35–58. [31] T. T. Li and Y. Zhou, A note on the life-span of classical solutions to nonlinear wave equations in four space dimensions, Indiana Univ. Math. J. 44 (1995), 1207–1248. [32] H. Lindblad, On the lifespan of solutions of nonlinear wave equations with small data, Comm. Pure Appl. Math. 43 (1990), 445–472. [33] H. Lindblad and C. D. Sogge, Long-time existence for small amplitude semilinear wave equations, Amer. J. Math. 118 (1996), 1047–1135. [34] J. Metcalfe and C. D. Sogge, Long-time existence of quasilinear wave equations exterior to star-shaped obstacles via energy methods, SIAM J. Math. Anal. 38 (2006), 188–209. [35] J. Metcalfe and C. D. Sogge, Global existence for high dimensional quasilinear wave equations exterior to star-shaped obstacles, Discrete Contin. Dyn. Sys. 28 (2010), 1589–1601. [36] W. A. Strauss, Nonlinear scattering theory at low energy, J. Funct. Anal. 41 (1981), 110–133. [37] H. Takamura and K. Wakasa, The sharp upper bound of the lifespan of solutions to critical semilinear wave equations in high dimensions, J. Diff. Eqs. 251 (2011), 1157–1171. [38] C. B. Wang and X. Yu, Recent works on the Strauss conjecture, Recent Advances in Harmonic Analysis and Partial Differential Equations, 235–256, Contemp. Math., 581, Amer. Math. Soc., Providence, RI, 2012. [39] Y. Zhou and W. Han, Life-span of solutions to critical semilinear wave equations, arXiv:1103.3758v1.
An Attention-Aided Deep Learning Framework for Massive MIMO Channel Estimation Jiabao Gao, Mu Hu, Caijun Zhong, Geoffrey Ye Li, and Zhaoyang Zhang J. Gao, M. Hu, C. Zhong, Z. Zhang are with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China (Email: {gao_jiabao, muhu, caijunzhong}@zju.edu.cn). Geoffrey Ye Li is with the Faculty of Engineering, Department of Electrical and Electronic Engineering, Imperial College London, England (Email: Geoffrey.Li@imperial.ac.uk). Abstract Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing the “divide-and-conquer” policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios, the channel estimation performance is significantly improved with the aid of attention at the cost of small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability. Index Terms Massive MIMO, channel estimation, deep learning, attention mechanism, hybrid analog-digital, divide-and-conquer. I Introduction Massive multiple-input multiple-output (MIMO) is a key enabling technology for future wireless communication systems due to its high spectral and energy efficiency[1, 2]. However, the realization of various theoretical gains of massive MIMO is critically dependent on the quality of channel state information (CSI). Because of the large number of antennas and users, the CSI acquisition has long been a major challenge in practical massive MIMO systems. In the prior works, least square (LS) and minimal mean-squared error (MMSE)[3] are two most commonly used estimators for channel estimation. The LS is relatively simple and easy to implement while its performance is unsatisfactory. On the other hand, MMSE can refine the LS estimation if accurate channel correlation matrix (CCM) is available. However, the complexity of MMSE estimation is much higher than that of LS estimation due to the matrix inversion operation. On the other hand, to reduce the hardware and energy cost, the hybrid analog-digital (HAD) architecture is usually adopted in practical massive MIMO systems, where the multi-antenna array is connected to only a limited number of radio-frequency (RF) chains through phase shifters in analog domain[4, 5, 6]. With HAD, channel estimation becomes even more difficult since the received signals at the BS are only a few linear combinations of the original signals. If LS is used, multiple estimations are required since only part of the antennas’ channels can be estimated once due to limited number of RF chains. 
To avoid the dramatically increased overhead of LS, the slowly changing directions of arrival of channel paths are obtained first in the preamble stage in [7], then only channel gains of each path are re-estimated in a long period. Another alternative is to exploit the channel sparsity and estimate all antennas’ channels at once using the compressed sensing (CS) based methods, such as orthogonal matching pursuit[8] and sparse Bayesian learning[9]. In [10, 11], several improved CS algorithms have been developed through embedding the structural characteristics of channel sparsity, which can achieve better estimation performance without extra pilot overhead. Nevertheless, CS algorithms require high computational complexity and perform poor for channels with low sparsity. Therefore, it is highly desirable to develop channel estimators with less requirement for prior information and better performance-complexity trade-off. Inspired by the great performance and the low complexity during online prediction, deep learning (DL) has been applied to many wireless communication problems[12, 13], such as spectrum sensing[14], resource management[15, 16, 17, 18], beamforming[19, 20], signal detection[21, 22, 23], and channel estimation[26, 24, 25, 27, 28, 29, 30, 31, 32]. By exploiting the structural characteristics of the modulated signals, the customized deep neural network (DNN) in [14] significantly outperforms energy detection in spectrum sensing. In [15], a DNN has been proposed for resource management, which can achieve comparable performance as the iterative optimization algorithm. An unsupervised learning-based beamforming network has been developed for intelligent reconfigurable surface aided massive MIMO systems in [19]. In [21], channel estimation and signal detection in orthogonal frequency division multiplexing systems have been performed jointly by a DNN. Then, a model-driven based approach is further proposed in [22] to exploit the advantages of both conventional algorithms and DNN. In [23], rather than directly using a black-box DNN, the conventional orthogonal approximate message passing algorithm (OAMP) is unfolded for the detection network. There are mainly two categories of approaches for DL-based massive MIMO channel estimation. In the first category, “deep unfolding” methods unfold various iterative optimization algorithms and enhance their estimation performance by inserting learnable parameters. In [24], the AMP algorithm is unfolded into a cascaded neural network for millimeter wave channel estimation, where the denoiser is learned by a DNN. Thanks to the power of DL, the proposed method can outperform a series of conventional denoising-AMP based algorithms. In [25], the iterative shrinkage thresholding algorithm is unfolded to solve sparse linear inverse problems, where massive MIMO channel estimation is used as a case study. However, “unfolding” is only feasible to the iterative algorithms with simple structures, and the computational complexity is also high. In the other category, DL is used to directly learn the mapping from available channel-related information to the CSI for performance improvement or complexity reduction. In [26], a DNN has been proposed to refine the coarse estimation in HAD massive MIMO systems, where the channel correlation in the frequency and time domains is exploited for further performance improvement. 
In [28], the estimation performance is further improved by jointly training the pilot signals and channel estimator with an autoencoder in downlink massive MIMO systems. In [29], graph neural network has been used for massive MIMO channel tracking. Deep multimodal learning has been used for massive MIMO channel estimation and prediction in [30]. To reduce the complexity, the amplitudes of beamspace channels are predicted by a DNN and the dominant entries are estimated by LS in [31], thus avoiding the greedy search commonly adopted by CS algorithms. In [32], the uplink-to-downlink channel mapping in frequency-division duplex (FDD) systems is learned by a sparse complex valued network. Nevertheless, current DL-based channel estimation methods have seldom exploited the characteristics of channel distribution. In practice, the BS is often located in a high altitude with few surrounding scatters[33], so the angular spread of each user’s incident signal at the BS is narrow. Thus, the global distribution of channels corresponding to different users in the entire angular space can be viewed as the composition of many local distributions, where each local distribution represents channels within a small angular region. Due to narrow angular spread, a certain angular region contains much fewer channel cases than the entire angular space because of the limited angular range of channel paths, making the local distributions much simpler than the global distribution. Besides, different local distributions can be highly distinguishable from each other if the entire angular space is properly segmented into different angular regions. Under such a condition, the classic “divide-and-conquer” policy, which tackles a complex main problem by solving a series of its simplified sub-problems, is very suitable. Specifically, the estimation of channels in the entire angular space can be regarded as the main problem and the estimation of channels in different small angular regions can be regarded as different sub-problems. Motivated by this, in this paper, we propose a novel attention-aided DL-based channel estimation framework, where the “divide-and-conquer” policy is realized automatically through the dynamic adaptation of attention maps. The main contributions of this paper are summarized as follows: • An attention-aided DL-based channel estimation framework is proposed for massive MIMO systems, which achieves better performance than its counterpart without attention in simulation. To the best knowledge of the authors, this is the first work that introduces the attention mechanism to DL-based channel estimation111There are already some literature that uses attention-aided DL to solve communication problems, such as CSI compression[34, 35] and joint source and channel coding[36]. Nevertheless, the considered channel distribution in [34] does not possess strong separable property, and the proposed method in [36] requires extra side information. As for [35], the non-local neural network model is utilized to exploit the self-attention in the spatial dimension of channels.. • We extend the above framework to the scenario with HAD and an embedding method is proposed to effectively integrate the attention mechanism into the fully connected neural network (FNN), which expands the application range of the proposed approach. • We visually explain the “divide-and-conquer” policy reflected in the distributions of learned attention maps, which enhances the interpretability and rationality of the proposed approach. 
• Based on our results, the performance gain of attention mainly comes from the narrow angular spread characteristic of channels. Therefore, the proposed approach can be extended to many other problems apart from channel estimation as long as the channel distribution has certain separability, such as multi-user beamforming, FDD downlink channel prediction, and so forth. The rest of this paper is organized as follows. Section II introduces the system model, channel model, and problem formulation. Section III presents the attention-aided DL-based channel estimation framework, which is extended to the HAD scenario in Section IV. Extensive simulation results are demonstrated in Section V. Eventually, the paper is concluded in Section VI. Here are some notations used subsequently. We use italic, bold-face lower-case and bold-face upper-case letter to denote scalar, vector, and matrix, respectively. ${\bf A}^{T}$ and ${\bf A}^{H}$ denote the transpose and Hermitian or complex conjugate transpose of matrix $A$, respectively. $[\mbox{\boldmath$A$}]_{i,j}$ denotes the element at the $i$-th row and $j$-th column of matrix $A$. ${\left\|{\bf{x}}\right\|}$ denotes the $l$-2 norm of vector $x$, and $|a|$ denotes the amplitude of complex number $a$. ${\mathbb{C}^{x\times y}}$ denotes the ${x\times y}$ complex space. $\mathcal{CN}(\mu,\sigma^{2})$ denotes the distribution of a circularly symmetric complex Gaussian random variable with mean $\mu$ and variance $\sigma^{2}$. $\mathcal{U}[a,b]$ denotes the uniform distribution between $a$ and $b$. II System model and problem formulation In this section, system model and channel model are first introduced. Then, the conventional massive MIMO channel estimation problem is formulated. II-A System Model Consider a single cell massive MIMO system, where the BS is equipped with an $N$-antenna uniform linear array (ULA) and $K$ single-antenna users are randomly distributed in the cell of the corresponding BS, as illustrated in Fig. 1. II-B Channel Model Following the same channel model as in [37], the uplink channel from user $k$ to the BS can be expressed as $$\mbox{\boldmath$h$}_{k}=\frac{1}{\sqrt{N_{p}}}\sum_{i=1}^{N_{p}}\alpha_{ki}\mbox{\boldmath$a$}(\theta_{ki})\in\mathbb{C}^{N\times 1},$$ (1) where $N_{p}$ is the number of paths, $\alpha_{ki}$ and $\theta_{ki}$ are the complex gain and angle of arrival (AoA) at the BS of the $i$-th path from the $k$-th user, respectively. Without loss of generality, we consider half-wavelength antenna spacing in this paper, then the steering vector of the ULA can be written as $\mbox{\boldmath$a$}(\theta)=[1,e^{j\pi\sin(\theta)},\cdots,e^{j\pi\sin(\theta)(N-1)}]^{T}$. Define the average AoA and the angular spread of user $k$’s channel paths as $\bar{\theta}_{k}$ and $\bigtriangleup_{\theta}$, respectively, that is, $\theta_{ki}$ follows a uniform distribution $\mathcal{U}[\bar{\theta}_{k}-\bigtriangleup_{\theta},\bar{\theta}_{k}+\bigtriangleup_{\theta}]$. As in [37, 11], the narrow angular spread assumption is adopted, i.e., $\bigtriangleup_{\theta}\ll\pi$. 
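For concreteness, the multipath model (1) can be summarized in a few lines of Python. The snippet below is only an illustrative sketch: the parameter values (64 antennas, 3 paths, a 10° angular spread) and the assumption of $\mathcal{CN}(0,1)$ path gains are not the simulation settings of this paper, and the function names are ours.

```python
import numpy as np

def steering_vector(theta, N):
    """ULA steering vector a(theta) of (1) with half-wavelength antenna spacing."""
    return np.exp(1j * np.pi * np.sin(theta) * np.arange(N))

def generate_channel(N=64, N_p=3, mean_aoa=None, spread=np.pi / 18, rng=None):
    """Draw one user's channel h_k from the multipath model (1).

    N_p paths with (assumed) i.i.d. CN(0,1) complex gains and AoAs drawn
    uniformly from [mean_aoa - spread, mean_aoa + spread], i.e. a narrow
    angular spread around the user's mean AoA.
    """
    rng = np.random.default_rng() if rng is None else rng
    if mean_aoa is None:
        mean_aoa = rng.uniform(-np.pi / 2, np.pi / 2)   # random mean AoA
    thetas = rng.uniform(mean_aoa - spread, mean_aoa + spread, size=N_p)
    alphas = (rng.standard_normal(N_p) + 1j * rng.standard_normal(N_p)) / np.sqrt(2)
    return sum(a * steering_vector(t, N) for a, t in zip(alphas, thetas)) / np.sqrt(N_p)
```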
To better understand this channel characteristic, we convert the original channel to the angular domain by $$\displaystyle\mbox{\boldmath$x$}_{k}=\mbox{\boldmath$F$}\mbox{\boldmath$h$}_{k}\in\mathbb{C}^{N\times 1},$$ (2) where $\mbox{\boldmath$x$}_{k}$ denotes the angular domain channel of user $k$, and $\mbox{\boldmath$F$}\in\mathbb{C}^{N\times N}$ is a shift-version discrete Fourier transform matrix[11], with the $n$-th row given by $\mbox{\boldmath$f$}_{n}=\frac{1}{\sqrt{N}}[1,e^{-j\pi\eta_{n}},\cdots,e^{-j\pi\eta_{n}(N-1)}]$, for $\eta_{n}=\frac{-N+1}{N},\frac{-N+3}{N},\cdots,\frac{N-1}{N}$. Due to narrow angular spread assumption, the angular domain channel exhibits the spatial-clustered sparsity structure[11]. Specifically, as shown in the right half of Fig. 1, $\mbox{\boldmath$x$}_{k}$ only has a few significant elements appearing in a cluster. If properly exploited, such sparsity structure can help to improve estimation performance and reduce estimation overhead. II-C Problem Formulation During the uplink training, orthogonal pilot sequences are sent by different users. Denote the pilot sequence of the $k$-th user as $\mbox{\boldmath$p$}_{k}\in{\mathbb{C}}^{1\times Lp}$, where $L_{p}\geq K$ is the length of pilot sequences. Notice that the channel during pilot training phase is assumed to be unchanged[11] since $L_{p}$ is relatively small. Therefore, the superimposed received signal at the BS can be expressed as $$\displaystyle\mbox{\boldmath$Y$}=\sum^{K}_{k=1}\mbox{\boldmath$h$}_{k}\mbox{\boldmath$p$}_{k}+\mbox{\boldmath$N$}\in\mathbb{C}^{N\times L_{p}},$$ (3) where $\mbox{\boldmath$N$}\sim\mathcal{CN}(0,\sigma^{2})\in{\mathbb{C}}^{N\times L_{p}}$ is the zero-mean additive white Gaussian noise at the BS with variance $\sigma^{2}$. Without loss of generality, we fix the power of pilot sequences to unit and adjust the transmit signal-to-noise ratio (SNR) by changing the noise variance. Then, we have $\mbox{\boldmath$p$}_{i}\mbox{\boldmath$p$}_{j}^{H}=0,\forall i\neq j$ and $\mbox{\boldmath$p$}_{i}\mbox{\boldmath$p$}_{i}^{H}=1,\forall i$. Exploiting the orthogonality of the pilot sequences, the LS estimation of user $k$’s channel can be obtained as $$\hat{\mbox{\boldmath$h$}}_{k}=\mbox{\boldmath$Y$}\mbox{\boldmath$p$}_{k}^{H}=\mbox{\boldmath$h$}_{k}+\widetilde{\mbox{\boldmath$n$}}_{k}\in\mathbb{C}^{N\times 1},$$ (4) where $\widetilde{\mbox{\boldmath$n$}}_{k}\triangleq\mbox{\boldmath$N$}\mbox{\boldmath$p$}_{k}^{H}$ is the effective noise for user $k$. For brevity, we will consider a specific user from now on and omit subscript $k$. Besides, we use $\hat{\mbox{\boldmath$h$}}_{\text{LS}}$ to denote the LS estimation. Therefore, the goal of channel estimation222Here we use the term “channel estimation” for consistency, actually “channel refinement” is more proper. is to find a function that maps $\hat{\mbox{\boldmath$h$}}_{\text{LS}}$ to $h$. One of the conventional methods is the MMSE estimation, where the LS estimation is refined by the CCM. However, accurate CCM is hard to obtain in practice and the complexity of matrix inversion in MMSE estimation is very high, especially when the antenna number is large. In [38], DL-based methods have been proposed to refine the channel estimation. In this paper, we will develop an attention-aided DL framework for conventional massive MIMO channel estimation by exploiting the characteristics of channel distribution. 
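As a concrete reference for the formulation above, the angular-domain transform (2), the pilot transmission (3), and the per-user LS estimation (4) can be sketched as follows. The DFT-based pilot construction and the SNR convention below are illustrative assumptions only (any unit-power orthogonal pilot set satisfies the conditions stated above), and the function names are ours.

```python
import numpy as np

def shifted_dft_matrix(N):
    """Shift-version DFT matrix F of (2): the n-th row uses eta_n = (2n - N + 1)/N."""
    eta = (2 * np.arange(N) - N + 1) / N
    return np.exp(-1j * np.pi * np.outer(eta, np.arange(N))) / np.sqrt(N)

def ls_estimates(H, snr_db=10.0, rng=None):
    """Per-user LS estimates (4) from the superimposed received signal (3).

    H: (N, K) matrix whose columns are the users' channels h_k.
    Pilots are the K rows of a normalized DFT matrix with L_p = K, which
    satisfy p_i p_j^H = 0 for i != j and p_i p_i^H = 1 as required above.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, K = H.shape
    P = np.fft.fft(np.eye(K)) / np.sqrt(K)              # (L_p, L_p) orthonormal pilot rows
    sigma2 = 10.0 ** (-snr_db / 10.0)                   # unit pilot power, noise sets SNR
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, K))
                                   + 1j * rng.standard_normal((N, K)))
    Y = H @ P + noise                                   # received signal (3)
    return Y @ P.conj().T                               # column k: h_k + effective noise, (4)

# Angular-domain input to the estimation network: X_ls = shifted_dft_matrix(N) @ H_ls.
```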
III Attention-aided DL framework for massive MIMO channel estimation In this section, input and output processing, network structure design, and detailed network training method of the proposed framework are introduced. III-A Input and Output Processing Since channel parameters can be canonically expressed in the angular domain, the input and output of the networks are all in the angular domain in the proposed framework. In simulation, we find that the more sparse angular domain input and output can lead to better channel estimation performance than the original ones. Once the angular domain channel estimation, $\hat{\mbox{\boldmath$x$}}$, is obtained, the original channel estimation can be readily recovered by $\hat{\mbox{\boldmath$h$}}=\mbox{\boldmath$F$}^{H}\hat{\mbox{\boldmath$x$}}$. Besides, the real and imaginary parts have to be separately processed since complex training is still not well supported by current DL libraries. To promote efficient training, we also perform standard normalization preprocessing on the input. III-B Attention-Aided Channel Estimation Network Structure Design As shown in Fig. 2, convolutional neural network (CNN) is a suitable choice for the network structure to exploit the local correlation in the input data due to the spatial-clustered sparsity structure of the angular domain channel. In this paper, one-dimensional convolution (Conv1D) is used due to the shape of input data. The input of a Conv1D layer is organized as a $(F,C)$-dimensional feature matrix, where $C$ denotes the number of channels333Here channel is a term in CNN representing a dimension of feature matrix, not the communication channel. and $F$ denotes the number of features in each channel. Then, the convolution operation slides $C^{\prime}$ filters over the input feature matrix in certain strides to obtain the output feature matrix, which is also the input of the next layer. Specifically, each filter contains a $(L,C)$-dimensional trainable weight matrix and a scalar bias term, where $L$ denotes the filter size. When a filter is located in a certain position of the feature matrix, the cross-correlation between the corresponding chunk of the feature matrix and the weight matrix of the filter is computed and the bias is added to obtain the convolution output of the position[39]. In the proposed channel estimation network, $N_{B}$ convolutional blocks and an output Conv1D layer are used to refine the LS coarse channel estimation. As depicted in the dashed box, in each convolutional block, a batch normalization (BN) layer to prevent gradient explosion or vanishing[43] and a ReLU activation function are inserted after the Conv1D layer. Besides, the Conv1D layer in the first block has $F$ filters of size $L_{I}$ and the Conv1D layers in the next $N_{B}-1$ blocks have $F$ filters of size $L_{H}$. The optimal values of $N_{B}$ and $F$ can be determined through simulation. Finally, the output Conv1D layer has 2 filters of size $L_{O}$, corresponding to the real and imaginary parts of the channel prediction, respectively. The stride is set to $S$ and all the Conv1D layers pad zeros to keep the dimension $N$ of the feature matrix unchanged. To effectively exploit the distribution characteristics of channel, the attention mechanism444Notice that, the term attention can refer to many related methods including [40, 42, 41]. In this paper, we use the classic “SENet” proposed in [40]. is applied in the network structure design. 
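Before the attention module is described in detail, the following PyTorch sketch outlines one possible realization of the convolutional blocks above together with the channel-wise attention module specified in the remainder of this subsection. The hyperparameter values ($F=64$ filters, $N_B=4$ blocks, kernel sizes 9, 5 and 3, stride $S=1$) are placeholders, since the optimal values are determined through simulation, and the class and argument names are ours rather than the paper's. Note that PyTorch stores a feature matrix as (batch, channels, features), i.e. transposed relative to the $(F,C)$ convention used in the text.

```python
import torch.nn as nn

class AttentionModule(nn.Module):
    """Channel-wise attention: squeeze by global average pooling over the feature
    dimension, excite with two bias-free FC layers (ReLU, then Sigmoid), and
    reweight the channels of the feature matrix."""
    def __init__(self, channels, r=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r, bias=False), nn.ReLU(),
            nn.Linear(channels // r, channels, bias=False), nn.Sigmoid())

    def forward(self, x):                  # x: (batch, C, F)
        z = x.mean(dim=-1)                 # squeezed feature matrix z, (batch, C)
        m = self.fc(z)                     # attention map m, (batch, C)
        return x * m.unsqueeze(-1)         # channel-wise reweighting

class AttentionAidedCNN(nn.Module):
    """Sketch of the estimator: N_B conv blocks (Conv1D + BN + ReLU + attention)
    followed by a 2-filter output Conv1D for the real/imaginary channel parts."""
    def __init__(self, F=64, N_B=4, L_I=9, L_H=5, L_O=3, r=2):
        super().__init__()
        blocks, c_in = [], 2               # input: real/imag parts of the LS estimate
        for b in range(N_B):
            L = L_I if b == 0 else L_H
            blocks += [nn.Conv1d(c_in, F, L, padding=L // 2),
                       nn.BatchNorm1d(F), nn.ReLU(), AttentionModule(F, r)]
            c_in = F
        self.blocks = nn.Sequential(*blocks)
        self.out = nn.Conv1d(F, 2, L_O, padding=L_O // 2)

    def forward(self, x):                  # x: (batch, 2, N) angular-domain LS estimate
        return self.out(self.blocks(x))    # (batch, 2, N) refined estimate
```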
In the original CNN, all the features are used for all data samples with equal importance. However, certain features can definitely be more important or informative than others to certain data samples in practice, especially for highly separable data like narrow angular spread channel. For instance, key features, which are only aimed at dealing with channel distribution in a specific angular region, might be useless or even disruptive for the estimation of channels in another region far apart. Therefore, the idea of feature importance reweighting can be used here to improve network performance. As is demonstrated in Fig. 3, the original feature matrix is multiplied by an attention map in a channel-wise manner to obtain the reweighted feature matrix in the attention module, where more important or informative features to the current data sample will be paid more “attention” to. For the learning process of the attention map, global average pooling is performed first on the original feature matrix, $\mbox{\boldmath$Z$}_{O}$, to embed the global information into a $(1,C)$-dimensional squeezed feature matrix, $z$. Specifically, the $c$-th element of $z$ is calculated by $z_{c}=\sum_{f=1}^{F}[\mbox{\boldmath$Z$}_{O}]_{f,c}/F$[40]. Then, the $(1,C)$-dimensional attention map, $m$, is predicted by a dedicated attention network based on $z$. The attention network contains two fully connected (FC) layers. The first FC layer with $C/r$ neurons is followed by a ReLU activation, $f_{\text{ReLU}}(x)=\text{max}(0,x)$, where $r\geq 1$ denotes the reduction ratio. The second FC layer with $C$ neurons is followed by a Sigmoid activation, $f_{\text{Sigmoid}}(x)=1/(1+e^{-x})$, which limits the elements of $m$ between 0 and 1. As can be seen in Fig. 2, an attention module is inserted at the end of each convolutional block in the proposed channel estimation network. Besides, $r$ is set to 2 to balance performance and complexity and the FC layers in the attention network do not use bias to facilitate channel dependency modeling. III-C Network Training To train the designed network, the mean-squared error (MSE) between the true angular domain channel, $x$, and the predicted angular domain channel, $\hat{\mbox{\boldmath$x$}}$, is used as the loss function, which can be calculated by $$\text{MSE Loss}=\frac{1}{n}\sum_{i=1}^{n}\left\|\hat{\mbox{\boldmath$x$}}_{i}-\mbox{\boldmath$x$}_{i}\right\|^{2},$$ (5) where subscript $i$ denotes the $i$-th data sample in a mini-batch and $n=500$ is the size of the mini-batch. Xavier[44] is used as the weight initializer and Adam[45] is used as the weight optimizer. The initial learning rate is set to 0.001. To balance the training complexity and testing performance, we generate totally 200,000 data samples according to the adopted channel and transmission models. Then, the generated dataset is split into training, validation, and testing set with a ratio of 3:1:1. In order to accelerate loss convergence at the beginning and reduce loss oscillation near the end of training, the learning rate is set to decay 10 times if the validation loss does not decrease in 10 consecutive epochs. Besides, early stopping [46] with a patience of 25 epochs is applied to prevent overfitting and speed up the training process. IV Extension to the HAD scenario In practice, the HAD architecture is often adopted in massive MIMO systems to save hardware and energy cost. 
Due to the effect of phase shifters in the analog domain in the HAD architecture, the problem formulation of channel estimation changes and the channel estimation network structure has to be customized correspondingly as well. In the HAD architecture, we assume there is only $M\ll N$ RF chains available at the BS, as illustrated in Fig. 4. IV-A Problem Reformulation with HAD With HAD, the signals arriving at the antennas have to go through the phase shifters first before received by the RF chains. So, the eventual received signal on the baseband can be expressed as $$\displaystyle\mbox{\boldmath$Y$}_{\text{HAD}}=\mbox{\boldmath$WY$}\in\mathbb{C}^{M\times L_{p}},$$ (6) where $\mbox{\boldmath$W$}\in{\mathbb{C}}^{M\times N}$ denotes the analog combining matrix. As the phase shifters only change the phase of signals, we have $|[\mbox{\boldmath$W$}]_{i,j}|=1/\sqrt{N}$, $\forall i,j$ after normalization. We set $W$ to a matrix whose rows are length-$N$ Zadoff-Chu sequences with different shifting steps as in[10]. Again, exploiting the orthogonality of the pilot sequences, the received signal corresponding to user $k$ can be obtained as $$\mbox{\boldmath$y$}_{k}=\mbox{\boldmath$Y$}_{\text{HAD}}\mbox{\boldmath$p$}_{k}^{H}=\mbox{\boldmath$Wh$}_{k}+\widetilde{\mbox{\boldmath$n$}}^{\prime}_{k}\in\mathbb{C}^{M\times 1},$$ (7) where $\widetilde{\mbox{\boldmath$n$}}^{\prime}_{k}\triangleq\mbox{\boldmath$W$}\widetilde{\mbox{\boldmath$n$}}_{k}$ is the effective noise for user $k$ with HAD. Consider a specific user and omit the subscript $k$, the goal of channel estimation now becomes to find a function that maps $y$ to $h$. Since the overhead of LS estimation increases dramatically due to limited number of RF chains, CS algorithms are more often adopted to solve the channel estimation problem in HAD massive MIMO systems conventionally. However, the performance of CS algorithms is highly dependent on channel sparsity and the computational complexity is relatively high due to complex operations and a large number of iterations. Therefore, we extend the proposed framework to the HAD scenario and use DL to overcome these issues. IV-B Attention-Aided Channel Estimation Network Structure Design With HAD Different from the former scenario, in the problem of channel estimation with HAD, the input data becomes the received signal $y$, where little local correlation exists due to the compression of matrix $W$. Therefore, FNN should be used rather than CNN to achieve better performance. Although the attention mechanism has been originally proposed in the area of computer vision and is only compatible with CNN, its key idea, feature importance reweighting, is actually independent of network structure. Therefore, to exploit the benefit of the attention mechanism, we propose a simple but effective method to embed it into FNN. As introduced earlier, the attention module is inserted after a feature matrix and the attention map is learned from the squeezed feature matrix obtained by global average pooling. FNN can not directly use attention since all the neurons of the neighboring FC layers are fully connected and features of FC layers appear in the form of vectors instead of matrices. Therefore, as depicted in the dashed box in Fig. 5, we reshape the feature vector of a FC layer into a matrix first, like the feature matrix of a Conv1D layer. Then, with the matrix-shaped feature, the original attention mechanism can be normally applied. 
Finally, the reweighted feature vector can be obtained by flattening the reweighted feature matrix. The detailed network design is illustrated in Fig. 5. The first FC layer consists of $F\times C$ neurons, which is followed by a ReLU activation and a BN layer. The feature vector is then reshaped into a $(F,C)$ feature matrix, where $C$ and $F$ can be regarded as the number of channels and the number of features of each channel, respectively. Based on the feature matrix, the original attention module is inserted to get the reweighted feature matrix, which is then flattened back to the reweighted feature vector. Eventually, an output FC layer with $2N$ neurons is used to obtain the real and imaginary parts of the angular domain channel prediction. We only use one hidden FC layer here since experiments indicate that more hidden FC layers are not helpful to further improve the performance but increases the complexity dramatically. IV-C Complexity Analysis In this subsection, the complexity of various algorithms are analyzed. Two metrics are used to measure the complexity, namely the required number of floating point operations (FLOPs) and the total number of parameters. For brevity, only multiplication is considered and one complex multiplication is counted as four real multiplications when computing FLOPs, and the weights and biases of BN layers are ignored and one complex parameter is counted as two real parameters when computing parameter number. When analyzing the complexity of neural networks, we ignore the offline training phase and focus on the online testing phase since the network training only needs to be executed once and the BS usually has sufficient computational ability in practice. Using the notations in Section III, the FLOPs of the Conv1D layer and the $l$-th FC layer are $LFCC^{\prime}$ and $N_{l-1}N_{l}$, respectively, where $N_{l}$ denotes the number of neurons in the $l$-th FC layer. Without HAD, the overall FLOPs of CNN can be readily obtained as $(2L_{I}F+2L_{O}F+L_{H}F^{2}(N_{B}-1))N$ and the additional FLOPs of attention modules is $N_{B}F(N+F+1)$. The FLOPs of MMSE estimation is $4(2N^{3}+N^{2})$. Besides, for both algorithms, the LS estimation has to be obtained first, which also requires $4NL_{p}^{2}$ FLOPs. In the scenario with HAD, the FLOPs of the attention-aided FNN can be obtained as $FC(2M+2N+1)+C(C+1)$. In this paper, structured variational Bayesian inference (S-VBI) is selected as the CS-based baseline algorithm, whose FLOPs is $I_{E}(\frac{2}{3}M^{3}+(2M+2)N^{2})$ with $I_{E}$ denoting the number of iterations[11]. Again, for both algorithms, obtaining the received signal corresponding to a single user requires $4ML_{p}^{2}$ FLOPs. Notice that in both scenarios, the FLOPs of DL-based algorithms only scale linearly with $N$ and $M$, which is an attractive practical advantage, especially in large scale systems. By contrast, the FLOPs of conventional algorithms are much higher and grow cubically with $N$ and $M$. As for the total number of parameters, the Conv1D layer and the $l$-th FC layer contains $LCC^{\prime}$ and $N_{l-1}N_{l}$ parameters, respectively. Without HAD, CNN contains totally $2(L_{I}+L_{0})F+L_{H}(N_{B}-1)F^{2}$ parameters and the additional number of parameters of attention modules is $N_{B}F^{2}$. The CCM used in MMSE requires $2N^{2}$ parameters. In the scenario with HAD, attention-aided FNN contains totally $FC(2M+2N)+C^{2}$ parameters, while S-VBI does not need any parameters. 
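As a worked example of the complexity expressions above, the short script below evaluates them at the settings later used in Section V (Table II). It is a bookkeeping sketch of the closed-form counts only; the exact table entries additionally reflect implementation details such as removing the last attention module at test time.

```python
# Evaluate the FLOP expressions of Section IV-C at N = 128, M = 32, L_p = 10, I_E = 50,
# with N_B = 4, F = 96, L_I = 7, L_H = 5, L_O = 1 and F*C = 3072 (C = 192) for the HAD network.
N, M, L_p, I_E = 128, 32, 10, 50
N_B, F, L_I, L_H, L_O = 4, 96, 7, 5, 1
F_fnn, C_fnn = 16, 192                       # hidden layer of the HAD network: F*C = 3072

# --- without HAD ---
flops_cnn       = (2*L_I*F + 2*L_O*F + L_H*F**2*(N_B - 1)) * N
flops_attention = N_B * F * (N + F + 1)
flops_mmse      = 4 * (2*N**3 + N**2)
flops_ls        = 4 * N * L_p**2             # common LS preprocessing

# --- with HAD ---
flops_fnn_attn  = F_fnn*C_fnn*(2*M + 2*N + 1) + C_fnn*(C_fnn + 1)
flops_svbi      = I_E * ((2/3)*M**3 + (2*M + 2)*N**2)
flops_rx        = 4 * M * L_p**2             # forming the per-user received signal

print(f"CNN + attention : {flops_cnn + flops_attention + flops_ls:,.0f} FLOPs")
print(f"MMSE            : {flops_mmse + flops_ls:,.0f} FLOPs")
print(f"FNN + attention : {flops_fnn_attn + flops_rx:,.0f} FLOPs")
print(f"S-VBI           : {flops_svbi + flops_rx:,.0f} FLOPs")
```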
V Simulation Results

In this section, extensive simulation results are presented to evaluate the performance of the proposed DL-based channel estimation framework in scenarios with and without HAD. MSE is adopted as the performance metric. Notice that converting the channel to the angular domain does not change the MSE since $F$ is a unitary matrix. Some of the parameters used in simulation are summarized in Table I, unless otherwise specified. As for the network hyper-parameters in the scenario without HAD, $L_{I}$, $L_{H}$, and $L_{O}$ are set to 7, 5, and 1, respectively, and $S$ is set to 1. We compare the proposed algorithm with the following baseline algorithms. The structures of all DL-based baselines are carefully determined by cross validation for fairness.

V-1 Without HAD

The following algorithms are selected as baselines:

• MMSE Single: Refine the LS estimation by the CCM, $\mbox{\boldmath$R$}_{hh}\triangleq\mathbb{E}({\mbox{\boldmath$h$}\mbox{\boldmath$h$}^{H}})\in\mathbb{C}^{N\times N}$, as [3] $$\hat{\mbox{\boldmath$h$}}_{\text{MMSE}}=\mbox{\boldmath$R$}_{hh}(\mbox{\boldmath$R$}_{hh}+\mbox{\boldmath$I$}/\text{SNR})^{-1}\hat{\mbox{\boldmath$h$}}_{\text{LS}}.$$ (8) A numerical sketch of this refinement step is given below, after the baseline list.

• MMSE $3^{\circ}$: Split the entire angular space into many $3^{\circ}$ angular regions and estimate a dedicated CCM for each region using only channel samples whose average AoAs fall in that region. During the testing process of a channel sample, the angular region it belongs to is estimated first (the angular region estimates of the samples are assumed to be accurate for simplicity) and the corresponding CCM is selected for channel refinement. Compared with using a single CCM for all channel samples, using multiple CCMs matching different angular regions can effectively exploit the narrow angular spread characteristic of channels and improve performance significantly. Actually, it can be regarded as a manual implementation of the “divide-and-conquer” policy, i.e., the channel samples are “divided” by their angular regions and “conquered” by different corresponding CCMs.

• FNN: The FNN structure consists of three FC layers with 512, 1024, and 256 neurons, respectively, with one BN layer inserted between every two FC layers. The activation function of the first two FC layers is ReLU while the last FC layer does not use an activation.

• CNN without attention: The same CNN structure but with all the attention modules removed.

V-2 With HAD

The following algorithms are selected as baselines:

• Separate LS: A total of $N/M$ estimates are executed. In each estimate, only $M$ antennas are switched on by adjusting $W$, and their channels are obtained by LS estimation[7].

• S-VBI: One of the state-of-the-art CS-based algorithms designed for narrow angular spread channel estimation in HAD massive MIMO systems, where the spatial-clustered channel sparsity is embedded to improve the estimation performance[11]. The source code is provided by the authors of [11].

• FNN without attention: The same FNN structure as in the former scenario, but with the numbers of neurons reduced to 256, 512, and 256, respectively, to match the smaller input dimension.

• CNN: The CNN structure is also similar to the former scenario, except that the output layer is changed from Conv1D to FC for dimension conversion.

• CNN without attention: The same CNN structure but with all the attention modules removed.
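The following NumPy sketch illustrates the MMSE Single refinement of Eq. (8) referenced in the baseline list above. The sample-covariance construction and the synthetic i.i.d. channel draws are illustrative placeholders (the paper's channels follow the clustered angular model), and all names are ours.

```python
import numpy as np

def mmse_refine(h_ls, R_hh, snr_linear):
    """Eq. (8): refine an LS estimate with the channel covariance matrix R_hh."""
    N = R_hh.shape[0]
    return R_hh @ np.linalg.inv(R_hh + np.eye(N) / snr_linear) @ h_ls

# Usage sketch: estimate the CCM from stored channel realizations (columns of H),
# then refine a noisy LS estimate at 10 dB SNR. Synthetic i.i.d. samples stand in
# for realizations of the clustered angular-domain channel model.
rng = np.random.default_rng(0)
N, K = 128, 10000
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
R_hh = H @ H.conj().T / K                        # sample estimate of E[h h^H]
snr = 10 ** (10 / 10)
h_true = H[:, 0]
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr)
h_mmse = mmse_refine(h_true + noise, R_hh, snr)
```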
V-A Impacts of Network Parameters To determine the best network structures for two scenarios, we investigate the impacts of key network parameters on network performance. Without HAD, the structure of CNN is mainly determined by the number of convolutional blocks, $N_{B}$, and the number of filters of each Conv1D layer, $F$. As illustrated in Fig. 6(a), attention can improve the performance of CNNs with various numbers of convolutional blocks and filters and the performance of a two-layer attention-aided CNN is even better than a four-layer CNN without attention, which indicates the superiority of the attention mechanism. In general, the performance of networks is better with stronger representation capability brought by more convolutional blocks. However, with enough filters, the performance improvement of attention-aided CNN is marginal if the number of filters keeps growing and it can even be harmful to CNN without attention sometimes. Besides, deeper and wider CNNs also have heavier computing and storage burdens. To strike a balance between performance and complexity, we choose to use four convolutional blocks and 96 filters for each Conv1D layer. With HAD, the structure of the attention-aided FNN is mainly determined by the number of neurons of the hidden FC layer $F\times C$ and the way of reshaping in the attention embedding module. As in Fig. 6(b), the network performs best when $F\times C=3072$ and the performance will deteriorate with either too few or many neurons. Besides, as can be indicated from the bowl shape of curves, a medium number of features in each channel performs best when $F\times C$ is fixed. The reason is that the number of channels is too small and there is not enough degrees of freedom for dynamic adjustment of attention maps when $F$ is too large, while each channel does not contain enough features to effectively capture the global information[40] when $F$ is too small. So, we choose to reshape the feature vector into 192 channels with 16 features in each channel. V-B Impacts of System Parameters In this subsection, the impacts of various system parameters are investigated to validate the superiority and universality of the proposed approach. V-B1 Impact of SNR As illustrated in Fig. 7(a), without HAD, all DL-based methods can refine and improve the channel quality of LS coarse estimation. The performance improvement of FNN decreases as the SNR increases while CNN outperforms LS significantly in various SNR regimes thanks to the exploitation of local correlation of input data. Then, with the aid of attention, the MSE of CNN further decreases moderately. Besides, the performance gain of the attention mechanism increases with SNR. When SNR is 0 dB, the MSE of CNN with attention is $89.55\%$ of that of CNN without attention while this ratio decreases to $71.83\%$ when SNR is 20 dB. The reason is that the narrow angular spread characteristic of the angular domain channel is more exposed and easier to be exploited with less noise, thereby amplifying the benefits of attention. As for MMSE, the performance improvement of MMSE Single is marginal while MMSE $3^{\circ}$ performs much better due to the exploitation of the narrow angular spread characteristic of channel. Nevertheless, the proposed attention-aided CNN still slightly outperforms MMSE $3^{\circ}$, demonstrating its superiority. From Fig. 
7(b), the performance of FNN is much better than CNN and outperforms separate LS except in high SNR regimes when HAD is considered and attention is not used, but it is still obviously inferior to S-VBI. However, with the aid of attention, the performance of both CNN and FNN improves significantly. As can be observed, attention-aided CNN outperforms S-VBI except when SNR is higher than 15 dB while the attention-aided FNN is even better and outperforms S-VBI consistently in all SNR regimes. Besides, compared with Fig. 7(a), the performance gain of the attention mechanism is much more significant since the attention mechanism can not only help denoise but also plays an important role in reversing the effect of $W$ in the HAD scenario. Specifically, when restoring the high-dimensional channel from the low-dimensional received signal, the performance deterioration can be effectively reduced if the approximate AoA range of channel paths is known. Thanks to the attention mechanism, such processing can be automatically realized by the dynamic adjustment of attention maps. V-B2 Impact of Angular Spread As is illustrated in Fig. 8, attention-aided CNN has close performance to MMSE $3^{\circ}$ and consistently outperforms LS significantly with various angular spreads. As angular spread increases, the performance of all algorithms decreases in both scenarios since the channel estimation problem becomes more complex with less sparse angular domain channel. Besides, the performance gain of attention also decreases because the channel distribution is less separable, which makes the attention mechanism more difficult to realize the “divide-and-conquer” policy. In the scenario with HAD, the performance of attention-aided FNN is better than separate LS unless the angular spread is too large while only $M/N$ resource overhead is required. V-B3 Impacts of Antenna Number and RF Chain Ratio As can be observed from Fig. 9, in both scenarios, the performance of all algorithms improves as $N$ increases. Since the power leakage of angular domain channel is inversely proportional to the antenna number[37], the increased channel sparsity caused by more antennas can simplify channel estimation. Without HAD, attention-aided CNN has close performance to MMSE $3^{\circ}$ and the performance gain of attention can be amplified by sparser channel. With HAD, the performance of all algorithms improve as the RF chain ratio $M/N$ increases since more information is kept during the sensing phase. Besides, attention-aided FNN outperforms S-VBI consistently with various $M$ and $N$ and the performance gap increases with less antennas with fixed RF chain ratio, indicating that the DL-based approach is less dependent on channel sparsity. From the perspective of resource saving, attention-aided FNN is also superior to S-VBI. In particular, the MSE of attention-aided FNN with only $1/4$ RF chains is comparable to that of S-VBI with $1/2$ RF chains. As a result, the hardware and energy cost can be halved. Furthermore, given strict target MSE performance and a limited number of RF chains, S-VBI may need to estimate multiple times while attention-aided FNN completes the estimation at once, saving more resources for data transmission. Such an advantage can be very appealing in scenarios like high-mobility communication, where the channel is fast time-varying with short channel coherence time. V-C Generalization Ability The generalization ability to different parameters heavily influences the practicality of neural networks. 
In the considered problem, there are two categories of parameters, namely system parameters and channel parameters. System parameters include the number of antennas, RF chains, and users, which determine the input and output dimensions of the network. Channel parameters include SNR, number of paths, angular spread, and gain distribution of channel paths, which influence the input and output distributions of the network. For system parameters, the numbers of antennas and RF chains are usually fixed in practice, and different user numbers can also be handled by the same network since a multi-user channel estimation problem is decomposed into multiple single-user problems by exploiting the orthogonality of pilot sequences. Therefore, we focus on the generalization performance of channel parameters. The generalization to different SNRs is illustrated in Fig. 10. The legend “trained with accurate SNRs” denotes that for each SNR, a dedicated model trained with accurate SNR data is used for testing. In both scenarios, the proposed networks can only handle tiny SNR mismatch between the training and testing phases when the model is trained with a single SNR point and the performance degradation can be very severe when the SNR mismatch is large. To alleviate this issue, one common method is training with data under a variety of SNRs, then the characteristics of different SNRs can be captured by a single network. In simulation, we select five SNR points, namely 0, 5, 10, 15, and 20 dB for training. Besides, the number of training samples from each SNR point is kept same as when trained separately out of fairness. Based on our simulation results, directly using MSE as loss can lead to poor performance when different SNR points are trained together since the loss of high SNR data will be overwhelmed by the loss of low SNR data. To ensure that all SNR regimes get sufficient training, we use a heuristic loss function computed as $$\text{Weighted MSE Loss}=\frac{1}{n}\sum_{i=1}^{n}(\text{SNR}_{i}\cdot\left\|\hat{\mbox{\boldmath$x$}}_{i}-\mbox{\boldmath$x$}_{i}\right\|^{2}),$$ (9) where the MSE is weighted by the SNR of data sample. As can be indicated by the two close curves marked with circle and cross, networks trained with mixed SNRs achieve similar performance as trained with accurate SNRs and significantly outperform networks trained with a single SNR point. As for the generalization to other parameters, detailed results are omitted here due to space limitation while the trends and patterns are also similar. In conclusion, through mixed parameters training and proper design of the loss function, a single network with strong robustness can be obtained to handle all situations during testing, which is very appealing in practical applications. V-D The Role of Attention Although it is hard to rigorously analyze the representations learned by DNNs, we still try to attain at least a primitive understanding of the role of attention. Intuitively, the performance gain of attention can be considered to come from the “divide-and-conquer” policy realized by the dynamic adjustment of attention maps. In this way, sample-specific processing can be performed on different data samples to improve the performance. Without attention, the processing performed by the network is fixed for all data samples, which is less advanced. Next, we would like to analyze the distributions of learned attention maps to roughly corroborate this. 
Due to the narrow angular spread characteristic, the channel distribution is highly related to the average AoA parameter, or, more precisely, its sine value. So, we select three sine value ranges for comparison, where the first two ranges are close to each other and the third range is far away from the first two ranges. The average attention maps of validation data samples whose average AoAs are inside the three ranges are plotted in Fig. 11. The number of elements of each attention map equals to the corresponding channel number of the feature matrix and the values of the elements represent the scale factors acting on the original features. Due to space limitation, only the 16-th to the 48-th channels are displayed here. A larger scale factor indicates more important channel of features. From the figure, we have the following observations: – Without HAD, the role of attention is different in different depths of the attention-aided CNN. Specifically, as is shown in the first two subfigures, features are scaled in an angle-agnostic manner in shallower layers with small differences among average attention maps of different sine value ranges while the distributions of average attention maps become increasingly angle-specific in deeper layers. Notice that, the mean value of the 38-th scale factor of the third attention map varies significantly with sine value ranges. Reasonably, it can be inferred as a key angle-related feature in the considered problem. Such a phenomenon is also consistent with a typical discipline in DNNs that earlier layer features are more general while later layer features exhibit greater specificity[47]. – The distributions of average attention maps of closer sine value ranges are more similar. From the second subfigure, the curves of the first two ranges are very close to each other, while the curve of the third range is apparently different from them. It can be regarded as the embodiment of “divide-and-conquer” since the channel estimation for data samples in the first two ranges and the third range can be regarded as two different subproblems, which are “divided” by different attention maps first and then “conquered” subsequently. – As is illustrated in the third subfigure, all scale factors in the fourth attention map are 0.5, which is due to the zero output of the former ReLU activation function and the Sigmoid activation function used to predict the attention map. Therefore, the last attention module is actually useless and can be removed during testing to further reduce the complexity[40]. – From the fourth subfigure, the differences of average attention maps between sine value ranges are bigger and the binarization level of scale factors is higher in the HAD scenario. Only one attention module is used in the attention-aided FNN, so the “divide” process has to be realized more intensely, which is different from the attention-aided CNN used in the scenario without HAD. Another reason might be that compared with the denoising process in the former scenario, reversing the effect of $W$ is more angle-related, therefore the “divide-and-conquer” policy is reflected more fully. When dealing with a certain subproblem, only specific features are kept and others are totally abandoned. Apart from the statistical characteristics, Fig. 12 also presents the attention maps of two exemplary data samples with close average AoAs. 
Although the average AoAs are almost same, the attention maps of these two data samples are still dramatically different, which reveals the sample-specific nature of attention. The reason is that although average AoA can reflect most of the channel’s characteristics, there are still some features, such as the specific AoAs and gains of channel paths, which can also be exploited by attention for further performance improvement. V-E Complexity Comparison Under typical system settings where $N=128$, $M=32$, $L_{p}=K=10$, and $I_{E}=50$, the specific complexity of different algorithms is compared in Table II. Notice that the last attention layer in attention-aided CNN is removed during testing as mentioned above. Besides, for MMSE $3^{\circ}$, CCMs computed by channel samples whose average AoAs have same sine values can be shared to halve the number of parameters. As we can see, without HAD, the number of parameters only increases $19.86\%$ with the use of attention, and the additional FLOPs overhead introduced by attention is almost negligible. Although the FLOPs of attention-aided CNN are slightly higher than MMSE currently, it will be much smaller than MMSE if the antenna number keeps growing. Besides, the parameter number of MMSE $3^{\circ}$ is also quite large since tens of CCMs are required to exploit the narrow angular spread characteristic of channels. In the scenario with HAD, we only compare three algorithms with practical performance. Both attention-aided CNN and FNN have similar parameter numbers while the FLOPs of attention-aided FNN is much lower. Remember that, its performance is also better than attention-aided CNN, which indicates the superiority of the proposed design. The FLOPs of S-VBI is significantly higher than the DL-based methods. In simulation, when both run on CPU, attention-aided FNN can be hundreds of times faster than S-VBI in terms of clock time and the advantage is even more exaggerated if accelerated by GPU. VI Conclusion In this paper, we have proposed a novel attention-aided DL framework for massive MIMO channel estimation. Both the scenarios without and with HAD are considered and scenario-specific neural networks are customized correspondingly. By integrating the attention mechanism into CNN and FNN, the narrow angular spread characteristic of channel can be effectively exploited, which is realized by the “divide-and-conquer” policy to dynamically adjust attention maps. The proposed approach can significantly improve the performance but is with relatively low complexity. References [1] F. Rusek et al., “Scaling up MIMO: Opportunities and challenges with very large arrays,” IEEE Signal Process. Mag., vol. 30, no. 1, pp. 40–60, Jan. 2013. [2] E. G. Larsson, O. Edfors, F. Tufvesson, and T. Marzetta, “Massive MIMO for next generation wireless systems,” IEEE Commun. Mag., vol. 52, no. 2, pp. 186–195, Feb. 2014. [3] Y. S. Cho, J. Kim, W. Y. Yang, and C.-G. Kang, MIMO-OFDM Wireless Communications with MATLAB. Singapore: John Wiley $\&$ Sons (Asia) Pte Ltd. 2010. [4] F. Sohrabi and W.Yu, “Hybrid digital and analog beamforming design for large-scale antenna arrays,” IEEE J. Sel. Topics Signal Process., vol. 10, no. 3, pp. 501–513, Apr. 2016. [5] A. F. Molisch et al., “Hybrid beamforming for massive MIMO: A survey,” IEEE Commun. Mag., vol. 55, no. 9, pp. 134–141, Sep. 2017. [6] S. Guo, H. Zhang, P. Zhang, P. Zhao, L. Wang, and M. Alouini, “Generalized beamspace modulation using multiplexing: A breakthrough in mmWave MIMO,” IEEE J. Sel. Areas in Commun., vol. 37, no. 
9, pp. 2014–2028, Jul. 2019. [7] D. Fan et al., “Angle domain channel estimation in hybrid millimeter wave massive MIMO systems,” IEEE Trans. Wireless Commun., vol. 17, no. 12, pp. 8165–8179, Dec. 2018. [8] J. Lee, G. Gil, and Y. Lee, “Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications,” IEEE Trans. Commun., vol. 64, no. 6, pp. 2370–2386, Jun. 2016. [9] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, Jun. 2008. [10] Y. Wang, A. Liu, X. Xia, and K. Xu, “Learning the structured sparsity: 3D massive MIMO channel estimation and adaptive spatial interpolation,” IEEE Trans. Veh. Technol., vol. 68, no. 11, pp. 10663–10678, Nov. 2019. [11] X. Xia, K. Xu, S. Zhao, and Y. Wang, “Learning the time-varying massive MIMO channels: Robust estimation and data-aided prediction,” IEEE Trans. Veh. Technol., vol. 69, no. 8, pp. 8080–8096, Aug. 2020. [12] Z. Qin, H. Ye, G. Y. Li, and B. F. Juang, “Deep Learning in Physical Layer Communications,” IEEE Wireless Commun., vol. 26, no. 2, pp. 93–99, Apr. 2019. [13] H. Ye, L. Liang, G. Y. Li, and B. Juang, “Deep learning-based end-to-end wireless communication systems with conditional GANs as unknown channels,” IEEE Trans. Wireless Commun., vol. 19, no. 5, pp. 3133–3143, May 2020. [14] J. Gao, X. Yi, C. Zhong, X. Chen, and Z. Zhang, “Deep learning for spectrum sensing,” IEEE Wireless Commun. Lett., vol. 8, no. 6, pp. 1727–1730, Dec. 2019. [15] H. Sun, et al., “Learning to optimize: Training deep neural networks for wireless resource management”, IEEE Trans. Signal Process., vol. 66, no. 20, pp. 5438–5453, Oct. 2018. [16] H. Ye, G. Y. Li, and B. F. Juang, “Deep reinforcement learning based resource allocation for V2V communications,” IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3163–3173, Apr. 2019. [17] L. Liang, H. Ye, and G. Y. Li, “Spectrum sharing in vehicular networks based on multi-agent reinforcement learning,” IEEE J. Sel. Areas Commun., vol. 37, no. 10, pp. 2282–2292, Oct. 2019. [18] L. Liang, H. Ye, G. Yu, and G. Y. Li, “Deep-learning-based wireless resource allocation with application to vehicular networks,” Proc. IEEE, vol. 108, no. 2, pp. 341–356, Feb. 2020. [19] J. Gao, C. Zhong, X. Chen, H. Lin, and Z. Zhang, “Unsupervised learning for passive beamforming”, IEEE Commun. Lett., vol. 24, no. 5, pp. 1052–1056, May 2020. [20] H. Song, M. Zhang, J. Gao, and C. Zhong, “Unsupervised learning based joint active and passive beamforming design for recongurable intelligent surfaces aided wireless networks,” IEEE Commun. Lett., early access, Dec. 2020. doi: 10.1109/LCOMM.2020.3041510. [21] H. Ye, G. Y. Li, and B. Juang, “Power of deep learning for channel estimation and signal detection in OFDM systems,” IEEE Wireless Commun. Lett., vol. 7, no. 1, pp. 114–117, Feb. 2018. [22] P. Jiang, T. Wang, B. Han, X. Gao, J. Zhang, C. Wen, S. Jin, and G. Y. Li, “Artificial intelligence-aided OFDM receiver: Design and experimental results,” Dec. 2018, arXiv:1812.06638. [Online]. Available: https://arxiv.org/abs/1812.06638 [23] H. He, C. Wen, S. Jin, and G. Y. Li, “Model-driven deep learning for MIMO detection,” IEEE Trans. Signal Process., vol. 68, pp. 1702–1715, Feb. 2020. [24] H. He, C. Wen, S. Jin, and G. Y. Li, “Deep learning-based channel estimation for beamspace mmWave massive MIMO systems,” IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 852–855, Oct. 2018. [25] W. Chen, B. Zhang, S. Jin, B. Ai, and Z. 
Zhong, “Solving sparse linear inverse problems in communication systems: A deep learning approach with adaptive depth,” IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 4–17, Jan. 2021. [26] P. Dong, H. Zhang, G. Y. Li, I. S. Gaspar, and N. NaderiAlizadeh, “Deep CNN-based channel estimation for mmWave massive MIMO systems,” IEEE J. Sel. Topics Signal Process., vol. 13, no. 5, pp. 989–1000, Sept. 2019. [27] P. Wu and J. Cheng, “Acquiring measurement matrices via deep basis pursuit for sparse channel estimation in mmWave massive MIMO systems,” July 2020, arXiv:2007.05177. [Online]. Available: https://arxiv.org/abs/2007.05177 [28] X. Ma and Z. Gao, “Data-driven deep learning to design pilot and channel estimator for massive MIMO,” IEEE Trans. Veh. Technol., vol. 69, no. 5, pp. 5677–5682, May. 2020. [29] Y. Yang, S. Zhang, F. Gao, J. Ma, and O. A. Dobre, “Graph neural network based channel tracking for massive MIMO networks,” IEEE Commun. Lett., vol. 24, no. 8, pp. 1747–1751, Aug. 2020. [30] Y. Yang, F. Gao, C. Xing, J. An, and A. Alkhateeb, “Deep multimodal learning: Merging sensory data for massive MIMO channel prediction,” Jul. 2020, arXiv:2007.09366. [Online]. Available: https://arxiv.org/abs/2007.09366 [31] W. Ma, C. Qi, Z. Zhang, and J. Cheng, “Sparse channel estimation and hybrid precoding using deep learning for millimeter wave massive MIMO,” IEEE Trans. Commun., vol. 68, no. 5, pp. 2838–2849, Jan. 2020. [32] Y. Yang, F. Gao, G. Y. Li, and M. Jian, “Deep learning based downlink channel prediction for FDD massive MIMO system,” IEEE Commun. Lett., vol. 23, no. 11, pp. 1994–1998, Nov. 2019. [33] A. Al-Hourani, S. Kandeepan, and A. Jamalipour, “Modeling air-to-ground path loss for low altitude platforms in urban environments,” in Proc. IEEE Global Commun. Conf., Austin, TX, 2014, pp. 2898–2904. [34] Q. Cai, C. Dong, and K. Niu,“Attention model for massive MIMO CSI compression feedback and recovery,” 2019 IEEE WCNC, Marrakesh, Morocco, 2019, pp. 1–5. [35] D. J. Ji and D. -H. Cho, “ChannelAttention: Utilizing attention layers for accurate massive MIMO channel feedback,” IEEE Wireless Commun. Lett., vol. 10, no. 5, pp. 1079–1082, May 2021. [36] J. Xu, B. Ai, W. Chen, A. Yang, and P. Sun, “Wireless image transmission using deep source channel coding with attention modules,” Nov. 2020, arXiv:2012.00533. [Online]. Available: https://arxiv.org/abs/2012.00533 [37] H. Xie, F. Gao, S. Zhang, and S. Jin, “A unified transmission strategy for TDD/FDD massive MIMO systems with spatial basis expansion model,” IEEE Trans. Veh. Technol., vol. 66, no. 4, pp. 3170–3184, April 2017. [38] Y. Yang, F. Gao, X. Ma, and S. Zhang, “Deep learning-based channel estimation for doubly selective fading hannels,” IEEE Access, vol. 7, pp. 36579–36589, Mar. 2019. [39] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. [40] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-Excitation networks,” Sept. 2017, arXiv:1709.01507. [Online]. Available: https://arxiv.org/abs/1709.01507 [41] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” Nov. 2017, arXiv:1711.07971. [Online]. Available: https://arxiv.org/abs/1711.07971 [42] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, “Dual Attention network for scene segmentation,” Sept. 2018, arXiv:1809.02983. [Online]. Available: https://arxiv.org/abs/1809.02983 [43] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in ICML 2015, Lille, France, Jul. 
6-11, 2015, pp. 448–456. [44] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in AISTATS 2010, vol. 9, pp. 249–256, May 2010. [45] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” Dec. 2014, arXiv:1412.6980. [Online]. Available: https://arxiv.org/abs/1412.6980 [46] R. Caruana, S. Lawrence, and L. Giles, “Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping,” in NIPS 2000, Denver, CO, USA, Dec. 2000. [47] A. S. Morcos, D. G. Barrett, N. C. Rabinowitz, and M. Botvinick, “On the importance of single directions for generalization,” in ICLR 2018, Vancouver, BC, Canada, Apr. 30-May 3, 2018.
RKKY interactions of CeB${}_{6}$ based on effective Wannier model

Takemi Yamada and Katsurou Hanzawa

Department of Physics, Faculty of Science and Technology, Tokyo University of Science, Noda, Chiba 278-8510, Japan

t-yamada@rs.tus.ac.jp

Abstract
We examine the RKKY interactions of CeB${}_{6}$ between multipole moments based on the effective Wannier model obtained from the bandstructure calculation including 14 Ce-$f$ orbitals and 60 conduction orbitals of Ce-$d,s$ and B-$p,s$. By using the $f$-$c$ mixing matrix elements of the Wannier model together with the conduction band dispersion, the multipole couplings with the RKKY oscillation are obtained for the active moments in the $\Gamma_{8}$ subspace. Both the $\Gamma_{5g}$ quadrupole $O_{xy}$ and the $\Gamma_{2u}$ octupole $T_{xyz}$ couplings are largely enhanced at $\bm{q}=(\pi,\pi,\pi)$, which naturally explains the antiferro-quadrupolar phase II, and are also enhanced at $\bm{q}=(0,0,0)$, corresponding to the elastic softening of $C_{44}$. The coupling of the $\Gamma_{5u}$ octupole $T_{z}^{\beta}$ is also quite large for $\bm{q}=(0,0,\pi)$, which is related to the antiferro-octupolar ordering that is a possible candidate for the phase IV of Ce${}_{x}$La${}_{1-x}$B${}_{6}$.

CeB${}_{6}$, RKKY interaction, multipole, bandstructure calculation

\recdate September 17, 2019

1 Introduction

The electronic state of CeB${}_{6}$ has been one of the central issues in heavy-fermion systems, since it exhibits a rich phase diagram of multipole orderings[1, 2] due to the $\Gamma_{8}$ quartet ground state with the degrees of freedom of the multipole moments shown in Table 1. Several multipole orderings have been observed in the temperature-magnetic field $(T,H)$ phase diagram, such as the antiferro-quadrupolar (AFQ) ordering of the $\Gamma_{5g}$ quadrupole moments $(O_{yz},O_{zx},O_{xy})$ with a critical transition temperature $T_{Q}=3.2$ K (phase II) and the antiferro-magnetic ordering of the $\Gamma_{4u}$ magnetic multipoles $(\sigma^{x},\sigma^{y},\sigma^{z})$ with $T_{N}=2.3$ K (phase III). The antiferro-octupolar (AFO) ordering of the $\Gamma_{5u}$ octupoles $(T^{\beta}_{x},T^{\beta}_{y},T^{\beta}_{z})$ is also discussed as a possible candidate for the phase IV in the La-substitution system Ce${}_{x}$La${}_{1-x}$B${}_{6}$. Several experiments indicate that the 4$f$ electrons of CeB${}_{6}$ are almost localized. The Fermi surface (FS) has been observed in de Haas-van Alphen (dHvA) experiments[3], angle-resolved photoemission spectroscopy (ARPES)[4, 5] and high-resolution photoemission tomography[6], where an ellipsoidal FS centered at the X point of the Brillouin zone (BZ) has been confirmed and is almost the same as that of LaB${}_{6}$ with the 4$f^{0}$ state. In such a localized $f$ electron system, the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction[7, 8, 9] plays an important role in the multipole ordering and must determine the ordering moments and wavevectors. The phenomenological RKKY models of CeB${}_{6}$[10, 11] succeeded in reproducing the basic phase diagram in the $T$-$H$ plane and gave a great advance in the multipole physics of the $\Gamma_{8}$ ground-state system. However, in these studies, only the nearest-neighbor RKKY Hamiltonian with a symmetric coupling and/or asymmetric correction terms was used to explain the experimental phase diagram.
As shown in the original studies[7, 8, 9], the RKKY interaction has a decaying and oscillating function of $2k_{\rm F}R$ with the Fermi wavenumber $k_{\rm F}$ of conduction $(c)$ band and distance between multipole moments $R$, and thus these effects should be taken into account through the microscopic description of the $f$-$c$ mixing and $c$ band states. The explicit derivation of the RKKY multipole couplings based on the realistic bandstructure calculation still has been an important challenge for the microscopic understanding of the multipole order. In this study, we derive the RKKY interaction of CeB${}_{6}$ based on the 74-orbital effective Wannier model derived from the bandstructure calculation directly[12]. By using the realistic $c$ band dispersion together with the $f$-$c$ mixing matrix elements from the Wannier model of CeB${}_{6}$, we calculate the RKKY couplings between the active multipole moments in $\Gamma_{8}$ subspace mediated by the realistic $c$ band states explicitly. The obtained RKKY multipole interactions show that the 1st leading multipole mode is the $\bm{q}=(\pi,\pi,\pi)$-AFQ ordering with $\Gamma_{5g}$ quadrupole $O_{xy}$ together with $\Gamma_{2u}$ octupole $T_{xyz}$ and the 2nd leading mode is the $\bm{q}=(0,0,\pi)$-AFO ordering with $\Gamma_{5u}$ octupole $T_{z}^{\beta}$. 2 Model & Formulation First we perform the bandstructure calculation of CeB${}_{6}$ and LaB${}_{6}$ by using the WIEN2k code[13], which is based on the density-functional theory (DFT) and includes the effect of the spin-orbit coupling (SOC) within the second variation approximation. The explicit bandstructures and FSs are shown in Fig.1 (a)-(d) of Ref. [12], where three FSs are obtained in CeB${}_{6}$, while an ellipsoidal FS centered at X point is obtained in LaB${}_{6}$, which well accounts for the experimental results[3, 4, 5, 6]. Next we construct the 74-orbital effective Wannier model based on the maximally localized Wannier functions (MLWFs) method[14, 15] from the DFT bandstructure of CeB${}_{6}$, where 14 $f$-states from Ce-$f$ (7 orbital $\times$ 2 spin) and 60 $c$-states from Ce-$d$ (5 orbital $\times$ 2 spin), Ce-$s$ (1 orbital $\times$ 2 spin), B-$p$ (6 site $\times$ 3 orbital $\times$ 2 spin) and B-$s$ (6 site $\times$ 1 orbital $\times$ 2 spin) are fully included and the obtained bandstructure well reproduces the DFT band as shown in Fig.1 (e) and (f) of Ref. [12]. The obtained tight-binding (TB) Hamiltonian is given by the following form as, $$\displaystyle H_{\rm TB}$$ $$\displaystyle=\sum_{\bm{k}}\sum_{mm^{\prime}{}}h_{mm^{\prime}{}}^{ff}(\bm{k})f% _{\bm{k}m}^{\dagger}f_{\bm{k}m^{\prime}{}}+\sum_{\bm{k}}\sum_{\ell\ell^{\prime% }{}}h_{\ell\ell^{\prime}{}}^{cc}(\bm{k})c_{\bm{k}\ell}^{\dagger}c_{\bm{k}\ell^% {\prime}{}}+\sum_{\bm{k}}\sum_{m\ell}\left(V_{\bm{k}m\ell}f_{\bm{k}m}^{\dagger% }c_{\bm{k}\ell}+h.c.\right)$$ (1) where $f_{\bm{k}m}^{\dagger}~{}(c_{\bm{k}\ell}^{\dagger})$ is a creation operator for a $f~{}(c)$ electron with wavevector $\bm{k}$ and 14 (60) spin-orbital states $m~{}(\ell)$. Here 14 $f$ states of $m$ are represented by the CEF eigenstates as $\Gamma_{8}$ quartet and $\Gamma_{7}$ doublet with the total angular momentum $J=5/2$, and $\Gamma_{6}$, $\Gamma_{7}$ doublets and $\Gamma_{8}$ quartet with $J=7/2$. 
The $f$-$f$ [$c$-$c$] matrix element of $h_{mm^{\prime}{}}^{ff}(\bm{k})~{}[h_{\ell\ell^{\prime}{}}^{cc}(\bm{k})]$ includes the $f~{}(c)$ energy levels, SOC couplings, CEF splittings and $f$-$f$ ($c$-$c$) hopping integrals, and $V_{\bm{k}m\ell}$ is the $f$-$c$ mixing element. Here we consider the RKKY interaction between the multipole moments of $\Gamma_{8}$ quartet. For this purpose, we start from the localized $f$ limit where a $4f^{1}$ state is realized in $\Gamma_{8}$ at each Ce site and the $f$-$f$ hopping and $f$-$c$ mixing become zero. Hence the remained $c$ electron Hamiltonian $H_{\rm TB}^{c}$ is diagonalized as follows, $$\displaystyle H_{\rm TB}^{c}=\sum_{\bm{k}}\sum_{\ell\ell^{\prime}{}}h_{\ell% \ell^{\prime}{}}^{cc}(\bm{k})c_{\bm{k}\ell}^{\dagger}c_{\bm{k}\ell^{\prime}{}}% =\sum_{\bm{k}s}\varepsilon_{\bm{k}s}^{c}a_{\bm{k}s}^{\dagger}a_{\bm{k}s},$$ (2) where $a_{\bm{k}s}^{\dagger}$ is a creation operator for a electron with $\bm{k}$ and band-index $s$ with the $c$ band dispersion $\varepsilon_{\bm{k}s}^{c}$ and the eigenvector $u_{\bm{k}s\ell}^{c}$ where $a_{\bm{k}s}=\sum_{\ell}u_{\bm{k}s\ell}^{c}c_{\bm{k}\ell}$. Figure 1 shows the $c$ bandstructure [Fig. 1(a)] and their FSs of the 21st band [Fig. 1(b) & (c)] and the obtained $c$ state well describes the experimental FSs[3, 4, 5, 6] and is also almost same as the DFT-bandstructure of LaB${}_{6}$ without $f$ electron. By using the second-order perturbation w. r. t. the $f$-$c$ mixing $V_{\bm{k}m\ell}$ in the third term of $H_{\rm TB}$, we obtain the multi-orbital Kondo lattice Hamiltonian which is given by, $$\displaystyle H_{\rm MKL}$$ $$\displaystyle=\sum_{im}\left(\varepsilon_{\Gamma_{8}}^{f}+\Delta\varepsilon_{% \Gamma_{8}}^{f}\right)f_{im}^{\dagger}f_{im}+\sum_{\bm{k}s}\varepsilon_{\bm{k}% s}^{c}a_{\bm{k}s}^{\dagger}a_{\bm{k}s}+\sum_{i}\sum_{mm^{\prime}{}}\sum_{\bm{k% }\bm{k^{\prime}{}}}\sum_{\ell\ell^{\prime}{}}J_{imm^{\prime}{}}^{\bm{k}\ell,% \bm{k}^{\prime}{}\ell^{\prime}{}}f_{im}^{\dagger}f_{im^{\prime}{}}c_{\bm{k}% \ell}^{\dagger}c_{\bm{k}^{\prime}{}\ell^{\prime}{}},$$ (3) where $f_{im}^{\dagger}$ is a creation operator for a $f$ electron with a Ce-atom $i$ and 4-states in $\Gamma_{8}$ quartet $\Ket{m}=\Ket{1}\sim\Ket{4}$ which are given with the $J_{z}$-base of $J=5/2$ $\Ket{M}$ explicitly as, $$\displaystyle\Ket{1}=-\sqrt{\frac{1}{6}}\Ket{+\frac{3}{2}}-\sqrt{\frac{5}{6}}% \Ket{-\frac{5}{2}}$$ (4) $$\displaystyle\Ket{2}=+\Ket{+\frac{1}{2}}$$ (5) $$\displaystyle\Ket{3}=-\Ket{-\frac{1}{2}}$$ (6) $$\displaystyle\Ket{4}=+\sqrt{\frac{1}{6}}\Ket{-\frac{3}{2}}+\sqrt{\frac{5}{6}}% \Ket{+\frac{5}{2}},$$ (7) where $\varepsilon_{\Gamma_{8}}^{f}$ is the bare $f$ energy-level and $\Delta\varepsilon_{\Gamma_{8}}^{f}$ is a energy shift due to the DFT potential which is of the order of a few eV. The Kondo coupling $J_{imm^{\prime}{}}^{\bm{k}\ell,\bm{k}^{\prime}{}\ell^{\prime}{}}$ can be written by the following simple form, $$\displaystyle J_{imm^{\prime}{}}^{\bm{k}\ell,\bm{k}^{\prime}{}\ell^{\prime}{}}% =\frac{2}{N}\frac{V_{\bm{k}m\ell}V_{\bm{k}^{\prime}{}m^{\prime}{}\ell^{\prime}% {}}^{*}}{\mu-\varepsilon_{\Gamma_{8}}^{f}}e^{-i(\bm{k}-\bm{k}^{\prime}{})\cdot% \bm{R}_{i}},$$ (8) where only $f^{0}$-intermediate process is considered and the scattered $c$ orbital energies are fixed to $\mu$. The RKKY Hamiltonian can be obtained from the second-order perturbation w. r. t. the third term of $H_{\rm MKL}$ together with the thermal average for the $c$ states. 
The final form is given by, $$\displaystyle H_{\rm RKKY}\!=\!-\!\sum_{\Braket{ij}}\!\sum_{m_{1}m_{2}}\!\sum_% {m_{3}m_{4}}\!K_{m_{1}m_{2}m_{3}m_{4}}(\bm{R}_{ij})f_{im_{1}}^{\dagger}f_{im_{% 2}}f_{jm_{4}}^{\dagger}f_{jm_{3}},$$ (9) $$\displaystyle K_{m_{1}m_{2}m_{3}m_{4}}(\bm{R}_{ij})=\frac{1}{N}\sum_{\bm{q}}K_% {m_{1}m_{2}m_{3}m_{4}}(\bm{q})~{}e^{i\bm{q}\cdot(\bm{R}_{i}-\bm{R}_{j})},$$ (10) where $K_{m_{1}m_{2}m_{3}m_{4}}(\bm{R}_{ij})$ is the RKKY coupling between $\{m_{1},m_{2}\}$ at Ce-atom $\bm{R}_{i}$ and $\{m_{3},m_{4}\}$ at $\bm{R}_{j}$ and $\Braket{ij}$ represents a summation for Ce-Ce vectors $\bm{R}_{ij}=\bm{R}_{i}-\bm{R}_{j}$ and $K_{m_{1}m_{2}m_{3}m_{4}}(\bm{q})$ is given by, $$\displaystyle K_{m_{1}m_{2}m_{3}m_{4}}(\bm{q})=\frac{1}{N}\sum_{\bm{k}ss^{% \prime}{}}\frac{v_{\bm{k}s}^{m_{3}m_{1}}v_{\bm{k}+\bm{q}s^{\prime}{}}^{m_{2}m_% {4}}}{(\mu-\varepsilon_{\Gamma_{8}}^{f})^{2}}\frac{f(\varepsilon_{\bm{k}+\bm{q% }s^{\prime}{}}^{c})-f(\varepsilon_{\bm{k}s}^{c})}{\varepsilon_{\bm{k}s}^{c}-% \varepsilon_{\bm{k}+\bm{q}s^{\prime}{}}^{c}},$$ (11) where $f(x)$ is the Fermi distribution function $f(x)=1/(e^{(x-\mu)/T}+1)$ and $\mu$ is a chemical potential. Here $v_{\bm{k}s}^{mm^{\prime}{}}$ is a $f$-$c$ mixing matrix between $m$ and $m^{\prime}{}$ via the $c$ band state with $\bm{k},s$ given by, $$\displaystyle v_{\bm{k}s}^{mm^{\prime}{}}=\sum_{\ell\ell^{\prime}{}}V_{\bm{k}m% \ell}^{*}V_{\bm{k}m^{\prime}{}\ell^{\prime}{}}u_{\bm{k}s\ell}^{c*}u_{\bm{k}s% \ell^{\prime}{}}^{c},$$ (12) which has all information about the $f$ state scattering between $\{m,m^{\prime}{}\}$ through the $c$ state with $\bm{k},s$. The multipole interaction in $\bm{q}$-space is explicitly given by the follwing form, $$\displaystyle\overline{K}_{O_{\Gamma}}(\bm{q})=\sum_{m_{1}m_{2}}\sum_{m_{3}m_{% 4}}O_{m_{1}m_{2}}^{\Gamma}O_{m_{4}m_{3}}^{\Gamma}\left(K_{m_{1}m_{2}m_{3}m_{4}% }(\bm{q})-K_{m_{1}m_{2}m_{3}m_{4}}^{\rm loc}\right)$$ (13) where $O_{mm^{\prime}{}}^{\Gamma}$ is the matrix element of the multipole operator and $K_{m_{1}m_{2}m_{3}m_{4}}^{\rm loc}=(1/N)\sum_{\bm{q}}K_{m_{1}m_{2}m_{3}m_{4}}(% \bm{q})$. The mean-field multipole susceptibility $\chi_{O_{\Gamma}}(\bm{q})$ is written by, $$\displaystyle\chi_{O_{\Gamma}}(\bm{q})=\frac{\chi_{O_{\Gamma}}^{0}(\bm{q})}{1-% \chi_{O_{\Gamma}}^{0}(\bm{q})\overline{K}_{O_{\Gamma}}(\bm{q})},$$ (14) which is enhanced towards the multipole ordering instability for the ordering moment $O_{\Gamma}$ and wavevector $\bm{q}=\bm{Q}$, and diverges at a critical point of the multipole ordering transition temperature $T=T_{O_{\Gamma}}^{\bm{Q}}$ where $\chi_{O_{\Gamma}}^{0}(\bm{q})\overline{K}_{O_{\Gamma}}(\bm{q})$ reaches unity. In the localized $f$ limit, the $\Gamma_{8}$ ground state is degenerate and the single-site susceptibility for all multipole moments exhibits the Curie law, $\chi_{O_{\Gamma}}^{0}(\bm{q})=1/T$, and then the transition temperature for a certain multipole ordering is determined by the condition $T_{O_{\Gamma}}^{\bm{Q}}=\overline{K}_{O_{\Gamma}}^{\rm max}(\bm{Q})$. Therefore the sign and maximum value of $\overline{K}_{O_{\Gamma}}(\bm{q})$ plays an central role for the multipole ordering. Hereafter we set $\mu-\varepsilon_{\Gamma_{8}}^{f}=2$ eV, and $\mu$ is determined so as to keep $n_{tot}=n^{c}=21$ and $T$ is set to $T=0.005$ eV throughout the calculation. 3 Results The RKKY couplings $\overline{K}_{O_{\Gamma}}(\bm{q})$ for several multipole moments as a function of the wavevector $\bm{q}$ along the high symmetry line in the BZ are plotted as shown in Fig. 
2, where the positive (negative) coupling for a certain multipole with ($O_{\Gamma}$, $\bm{q}$) enhances (suppresses) the corresponding multipole fluctuation and its positive maximum value gives a leading multipole ordering mode. All leading modes are summarized in Table II of Ref. [12]. The couplings of the $\Gamma_{5g}$ quadrupole $O_{xy}$ and $\Gamma_{2u}$ ocutupole $T_{xyz}$ for $\bm{q}=(\pi,\pi,\pi)$ become largest among all moments and $\bm{q}$, which corresponds to the AFQ ordering of CeB${}_{6}$ as phase II. From the analysis of the real space couplings of $O_{xy}$ and $T_{xyz}$, we have found that the main origin of this mode comes from the fact that the couplings with the 1st and 2nd neighbor Ce-Ce vectors exhibit an anti-ferro (AF) and ferro (F) interaction respectively, which indicates the realization of the RKKY oscillation as shown in Fig. 7 in Ref. [12] but the absolute value of the 2nd neighbor coupling is almost same or slightly larger than that of the 1st neighbor coupling. The 2nd neighbor F couplings of $O_{xy}$ and $T_{xyz}$ also increase the uniform mode with a substantial peak for $\bm{q}=(0,0,0)$ as shown in Fig. 2 corresponding to the elastic softening of $C_{44}$[16]. The difference of the couplings between $O_{xy}$ and $T_{xyz}$ has often been discussed in the early studies[17, 18], where the two couplings must have the same value within the 1st neighbor coupling due to the point group symmetry. In the present calculation, the same coupling value of the 1st neighbor $O_{xy}$ and $T_{xyz}$ has been obtained, while in the 2nd neighbor couplings, a slightly but finite difference between $O_{xy}$ and $T_{xyz}$ has been observed for the Ce-Ce vectors $\bm{R}=a(011),a(101)$ where $a$ is the lattice constant, which enhances the peak of $O_{xy}$ over that of $T_{xyz}$ at $\bm{q}=(\pi,\pi,\pi)$. The anisotropy of the present interaction Hamiltonian including the origin of the above correction together with the so-called ‘bond density’[17] will be presented in the subsequent paper[19]. The next largest coupling is the $\Gamma_{5u}$ octupole $\zeta^{z}$ at $\bm{q}=(0,0,\pi)$ [X(Z) point] which is degenerate for $\zeta^{x}~{}[\zeta^{y}]$ octupole at $\bm{q}=(\pi,0,0)~{}[(0,\pi,0)]$. In addition to this, the $\Gamma_{3g}$ quadrupole $O_{v}=O_{x^{2}-y^{2}}$ coupling is quite large for $\bm{q}=(0,0,\pi)$ and becomes similar value of the octupole coupling $\zeta^{z}$, which is also degenerate for the rotated moments to the each principle-axis $O_{y^{2}-z^{2}}$ and $O_{z^{2}-x^{2}}$ for $\bm{q}=(\pi,0,0)$ and $\bm{q}=(0,\pi,0)$ respectively. In real space, the coupling of $\zeta^{z}$ and $O_{v}$ are highly anisotropic with the AF couplings for the Ce-Ce vector $\bm{R}=a(001)$ and the F couplings for $\bm{R}=a(100),a(010)$, which induces the enhancement of the $\bm{q}=(0,0,\pi)$ mode. In these situation, the triple-$\bm{Q}$ mode of $(\zeta^{x},\zeta^{y},\zeta^{z})$ and $(O_{y^{2}-z^{2}},O_{z^{2}-x^{2}},O_{x^{2}-y^{2}})$ with $\bm{Q}=(\pi,0,0),(0,\pi,0),(0,0,\pi)$ may become possible for the phase IV in Ce${}_{x}$La${}_{1-x}$B${}_{6}$ with $x<0.8$, which is different from the $\bm{q}=(\pi,\pi,\pi)$ AFO ordering of $(\zeta^{x}+\zeta^{y}+\zeta^{z})/\sqrt{3}$ [20]. The present development of the $\zeta^{z}$ and $O_{v}$ X-point mode appears to be related to the recent inelastic neutron scattering experiments in Ce${}_{x}$La${}_{1-x}$B${}_{6}$[21], where the intensity $\bm{q}=(\pi,0,0)$ is enhanced and becomes dominant mode for $x<0.8$. 
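To convey the structure of the evaluation of Eq. (11), the toy sketch below computes a $\bm{q}$-dependent coupling for a single conduction band with a constant $f$-$c$ mixing amplitude, in which case the multipole structure collapses to a Lindhard-type particle-hole sum. The simple-cubic dispersion, chemical potential, mixing strength, and grid size are illustrative placeholders, not the 74-orbital Wannier inputs used in the present calculation.

```python
import numpy as np

def fermi(e, mu, T):
    # numerically stable Fermi function, 1 / (exp((e - mu)/T) + 1)
    return 0.5 * (1.0 - np.tanh((e - mu) / (2.0 * T)))

def coupling_q(q, nk=24, t=1.0, mu=-1.0, T=0.005, V=0.5, e_f=-2.0):
    """Toy single-band analogue of Eq. (11) summed on an nk^3 k-grid (arbitrary units)."""
    ks = 2.0 * np.pi * np.arange(nk) / nk
    kx, ky, kz = np.meshgrid(ks, ks, ks, indexing="ij")
    eps_k  = -2.0 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz))
    eps_kq = -2.0 * t * (np.cos(kx + q[0]) + np.cos(ky + q[1]) + np.cos(kz + q[2]))
    fk, fkq = fermi(eps_k, mu, T), fermi(eps_kq, mu, T)
    num, den = fkq - fk, eps_k - eps_kq
    small = np.abs(den) < 1e-8
    ratio = np.where(small, fk * (1.0 - fk) / T,          # degenerate limit: -df/de
                     num / np.where(small, 1.0, den))
    return (V**2 / (mu - e_f))**2 * ratio.mean()          # prefactor |V|^4 / (mu - e_f)^2

for q in [(0.0, 0.0, 0.0), (np.pi, 0.0, 0.0), (np.pi, np.pi, 0.0), (np.pi, np.pi, np.pi)]:
    print(q, coupling_q(np.array(q)))
```

In the full calculation the constant $V^{2}$ is replaced by the orbital- and band-resolved mixing matrices $v_{\bm{k}s}^{mm^{\prime}}$ of Eq. (12), which is what produces the different $\bm{q}$-dependences for the different multipole channels.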
As for the phase III of the AFM order, the $\Gamma_{4u}$ magnetic multipole couplings of $\sigma^{z}$ and $\eta^{z}$ shall be dominant when the system enters into phase II. They does not become so large in the present paramagnetic system (phase I) and their maximum values are less than half of the 1st leading peak value of $\Gamma_{5g}$-$(\pi,\pi,\pi)$. The realistic description of the successive transition from phase I (paramagnetic) to phase II (AFQ) and from phase II (AFQ) to phase III (AFM) is an important future problem. 4 Summary In summary, we have performed a direct calculation of the RKKY interactions based on the 74-orbital effective Wannier model derived from the bandstructure calculation of CeB${}_{6}$. We obtain the RKKY couplings for the active multipole moments in $\Gamma_{8}$ subspace explicitly as functions of wavevector $\bm{q}$. The couplings of the $\Gamma_{5g}$ quadrupole $O_{xy}$ together with the $\Gamma_{2u}$ octupole $T_{xyz}$ are highly enhanced for $\bm{q}=(\pi,\pi,\pi)$ and $\bm{q}=(0,0,0)$ where the former explains the AFQ ordering of the phase II and the latter corresponds to the elastic softening of $C_{44}$. The present approach enables us to access the possible multipole ordering moments and wavevectors without any assumption and to provide a good insight for searching the multipole ordering in connect with the inherent feature and the concrete situation of actual compounds. References [1] A. S. Cameron, G. Friemel, and D. S. Inosov, Rev. Prog. Phys. 79 066502 (2016). [2] P. Thalmeier, A. Akbari, R. Shiina, arXiv:1907.10967. [3] Y. Ōnuki et al., J. Phys. Soc. Jpn. 58 3698 (1989). [4] M. Neupane et al., Phys. Rev. B 92 104420 (2015). [5] S. V. Ramankuttya el al., J. Electron Spectrosc. Relat. Phenom. 208 43 (2016). [6] A. Koitzsch et al., Nat. Commun. 7 10876 (2016). [7] M. A. Ruderman and C. Kittel, Phys. Rev. 96 99 (1954). [8] T. Kasuya, Prog. Theor. Phys. 16 45 (1956). [9] K. Yosida, Phys. Rev. 106 893 (1957). [10] F. J. Ohkawa, J. Phys. Soc. Jpn. 52 3897 (1983). [11] R. Shiina, H. Shiba, and O. Thalmeier, J. Phys. Soc. Jpn. 66 1741 (1997). [12] T. Yamada and K. Hanzawa, J. Phys. Soc. Jpn. 88 084703 (2019). [13] P. Blaha et al., WIEN2k, A Full-Potential Linearized Augmented-Plane Wave Package for Calculating Crystal Properties (Vienna University of Technology, 2002, http://www.wien2k.at/.). [14] A. Mostofi et al., Comput. Phys. Commun. 178 685 (2008). [15] J. Kune$\check{\rm s}$ et al., Comput. Phys. Commun. 181 1888 (2010). [16] B. L$\ddot{\rm u}$thi et al., Z. Phys. B 58 31 (1984). [17] H. Shiba, O. Sakai, and R. Shiina, J. Phys. Soc. Jpn. 68 1988 (1999). [18] K. Hanzawa, J. Phys. Soc. Jpn. 69 510 (2000). [19] K. Hanzawa and T. Yamada, submitted to J. Phys. Soc. Jpn. [20] K. Kubo and Y. Kuramoto, J. Phys. Soc. Jpn. 73 216 (2004). [21] S. E. Nikitin et al., Phys. Rev. B 97 075116 (2018).
The extreme high frequency peaked BL Lac 1517+656

Based on observations from the German-Spanish Astronomical Center, Calar Alto, operated by the Max-Planck-Institut für Astronomie, Heidelberg, jointly with the Spanish National Commission for Astronomy.

V. Beckmann$^{1,2}$, N. Bade$^{1}$, and O. Wucknitz$^{1}$

$^{1}$Hamburger Sternwarte, Gojenbergsweg 112, D-21029 Hamburg, Germany
$^{2}$Osservatorio Astronomico di Brera, Via Brera 28, I-20121 Milano, Italy

(Received date; accepted date)

Abstract
We present optical spectroscopy data that allowed a measurement of the redshift of the X-ray selected BL Lacertae object 1517+656. With a redshift of $z=0.702$ this object has an absolute magnitude of $M_{B}=-26.4$ and is also an extremely powerful radio and X-ray source. Although it is a high frequency peaked BL Lac, this object is one of the most luminous BL Lac objects known so far. Being also a candidate for gravitational lensing, this object is of high interest for BL Lac research. Assuming several cosmological models and a realistic redshift for the lensed object, we find that 1517+656 has a mass $>2\cdot 10^{12}M_{\sun}$ and a high velocity dispersion $>350\;{\rm km\;sec^{-1}}$.

Key words: BL Lacertae objects: general - BL Lacertae objects: individual: 1517+656 - Galaxies: distances and redshifts - Cosmology: gravitational lensing

Offprint requests to: vbeckmann@hs.uni-hamburg.de

1 Introduction

The physical nature of BL Lacertae objects is not well understood yet. The most common view of BL Lac objects is that we are looking into a highly relativistic jet (Blandford & Rees blandford ). This model can explain several observational parameters, but there are still unsolved problems, such as the nature of the mechanisms that generate and collimate the jet or the physical nature and evolution along the jet. An important question is also whether there is a fundamental difference between BL Lac objects that are found through their emission in the radio or in the X-ray range, respectively. In order to study the nature of this class of BL Lacs, extreme objects help to constrain the physics involved. One of the greatest problems in studying BL Lacs is the difficulty in determining their redshifts because of the absence of strong emission and absorption lines. Usually large telescopes and long exposure times are needed to detect the absorption lines of the surrounding host galaxy.

2 History of 1517+656

Even though 1517+656 is an X-ray selected BL Lac, this object was detected in the radio band before being known as an X-ray source. It was first noted in the NRAO Green Bank $4.85\;{\rm GHz}$ catalog with a radio flux density of $39\pm 6\;{\rm mJy}$ (Becker et al. becker ) and was also included in the 87 Green Bank Catalog of Radio Sources with a similar flux density of $35\;{\rm mJy}$ (Gregory & Condon 87GB ), but in both cases without identification of the source. The NRAO Very Large Array at $1.4\;{\rm GHz}$ confirmed 1517+656 as having an unresolved core with no evidence of extended emission, although a very low surface brightness halo could not be ruled out (Kollgaard et al. kollgaard ). The source was first included as an X-ray source in the HEAO-1 A-3 Catalog and was also detected in the Einstein Slew Survey (Elvis et al. elvis ) in the soft X-ray band ($\sim 0.2-3.5\;{\rm keV}$) with the Imaging Proportional Counter (IPC, Gorenstein et al. IPC ).
The IPC count rate was $0.91\;{\rm cts\;sec^{-1}}$, but the total Slew Survey exposure time was only $13.7\;{\rm sec}$. Even though 1517+656 by then was a confirmed BL Lac object (Elvis et al. elvis ) with an apparent magnitude of $B=15.5\;{\rm mag}$, no redshift data were available. Known as a bright BL Lac, 1517+656 has been studied several times at different wavelengths in recent years. Brinkmann & Siebert (brinkmann ) presented ROSAT PSPC ($0.07-2.4\;{\rm keV}$) data and determined a flux of $f_{\rm X}=2.89\;\cdot 10^{-11}\;{\rm erg\;cm^{-2}\;sec^{-1}}$ and a spectral index of $\Gamma=2.01\pm 0.08$ (the energy index $\alpha_{E}$ is related to the photon index by $\Gamma=\alpha_{E}+1$). Observations of 1517+656 with BeppoSAX in the $2-10\;{\rm keV}$ band in March 1997 gave an X-ray flux of $f_{\rm X}=1.03\;\cdot 10^{-11}\;{\rm erg\;cm^{-2}\;sec^{-1}}$ and a steeper spectral slope of $\Gamma=2.44\pm 0.09$ (Wolter et al. wolter ). The Energetic Gamma Ray Experiment Telescope (EGRET, Kanbach et al. EGRET ) on the Compton Gamma Ray Observatory did not detect 1517+656 but gave an upper flux limit of $8\cdot 10^{-8}\;{\rm photons\;cm^{-2}\;sec^{-1}}$ for $E>100\,{\rm MeV}$ (Fichtel et al. fichtel ). In the hard X-rays 1517+656 was first detected with OSSE at $3.6\pm 1.2\cdot 10^{-3}\;{\rm photons\;cm^{-2}\;sec^{-1}}$ in the $0.05-10\;{\rm MeV}$ band (McNaron-Brown et al. mcnaron ). The BL Lac was then detected in the EUVE All-Sky Survey with a Gaussian significance of $2.6\sigma$ during a $1362\;{\rm sec}$ exposure, giving lower and upper count rate limits of $0.0062\;{\rm cps}$ and $0.0189\;{\rm cps}$, respectively (Marshall et al. EUVE ). For a plot of the spectral energy distribution see Wolter et al. wolter . 3 Optical Data The BL Lac 1517+656 was also included in the Hamburg BL Lac sample selected from the ROSAT All-Sky Survey. This complete sample consists of 35 objects forming a flux limited sample down to $f_{\rm X}(0.5-2.0\;{\rm keV})=8\cdot 10^{-13}\;{\rm erg\;cm^{-2}\;sec^{-1}}$ (Bade et al. bade98 , Beckmann beckmann ). To study evolutionary effects, we had to determine the redshifts of the objects in our sample. In February 1998 we took a half-hour exposure of 1517+656 with the 3.5m telescope on Calar Alto, Spain, equipped with MOSCA. Using a grism sensitive in the $4200-6600\,{\rm\AA}$ range with a resolution of $\sim 3\,{\rm\AA}$ it was possible to detect several absorption lines. The spectrum was sky subtracted and flux calibrated by using the standard star HZ44. Identifying the lines with iron and magnesium absorption, we determined the redshift of 1517+656 to be $z\geq 0.7024\pm 0.0006$ (see Fig. 1). The part of the spectrum with the FeII and MgII doublets is shown in Fig. 3. The BL Lac had also been a target for follow-up observation for the Hamburg Quasar Survey (HQS; Hagen et al. Hagen ) in 1993, because it had no published identification then and was independently found by the quasar selection of the HQS. The $2700\;{\rm sec}$ exposure, taken with the 2.2m telescope on Calar Alto and the Boller & Chivens spectrograph, showed a power-law like continuum; the significance of the absorption lines in the spectrum was not clear due to the moderate resolution of $\simeq 10\,{\rm\AA}$ (Fig. 2). Nevertheless the MgII doublet at 4761 and $4774{\rm\AA}$ is also detected in the 1993 spectrum, though only marginally resolved (see Table 2). The equivalent width of the doublet is comparable in both spectra ($W_{\rm\AA}=0.8/0.9$ for the 1993/1998 spectrum, respectively).
Also the Fe II absorption doublet at $4403/4228\,{\rm\AA}$ ($\lambda_{\rm rest}=2586.6/2600.2\,{\rm\AA}$) and Mg I at $4859\,{\rm\AA}$ ($\lambda_{\rm rest}=2853.0\,{\rm\AA}$) are detectable. For a list of the detected lines, see Table 1. Comparison with equivalent widths of absorption lines in known elliptical galaxies is difficult because of the underlying non-thermal continuum of the BL Lac jet. But the relative line strengths in the FeII and MgII doublets are comparable to those measured in other absorption systems detected in BL Lac objects (e.g. 0215+015, Blades et al. blades ). Because no emission lines are present and the redshift is measured using absorption lines, the redshift could belong to an absorbing system in the line of sight, as detected e.g. in the absorption line systems in the spectrum of 0215+015 (Bergeron & D’Odorico bergeron ). A higher redshift would make 1517+656 even more luminous; we will consider this case in the discussion below, though we assume that the absorption is caused by the host galaxy of the BL Lac. Assuming a single power law spectrum with $f_{\nu}\propto\nu^{\alpha_{o}}$, the spectral slope in the $4700-6600\,{\rm\AA}$ band can be described by $\alpha_{o}=0.86\pm 0.07$. A high redshift for this object is also highly plausible because it was not possible to resolve its host galaxy on HST snapshot exposures (Scarpa et al. scarpa ). The apparent magnitude varies slightly between the different epochs, having reached its faintest values of $R=15.9$ mag and $B=16.6$ mag in February 1999 (direct imaging with the Calar Alto 3.5m telescope and MOSCA). These values were derived by comparison with photometric standard stars in the field of view (Villata et al. villata ). $H_{0}=50\;{\rm km\;sec^{-1}\;Mpc^{-1}}$ and $q_{0}=0.5$ lead to an absolute optical magnitude of at least $M_{R}=-27.2\;{\rm mag}$ and $M_{B}\leq-26.4$ (including the K-correction). 4 Mass of 1517+656 Scarpa et al. (scarpa ) report the discovery of three arclike structures around 1517+656 in their HST snapshot survey of BL Lac objects. The radius of this possible fragmented Einstein ring is 2.4 arcsec. If this feature indeed represents an Einstein ring, the mass of the host galaxy of 1517+656 can easily be estimated. As the redshift of these background objects is not known, we can only derive a lower limit for the mass of the lens. For a spherically symmetric mass distribution (with $\theta$ being the radius of the Einstein ring, $D_{\rm d}$ the angular size distance from the observer to the lens, $D_{\rm s}$ from the observer to the source, and $D_{\rm ds}$ the distance from the lens to the source) we get (cf. Schneider et al. schneider ): $$M=\theta^{2}\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}}\frac{c^{2}}{4G}$$ (1) Thus the lower limit for the mass inside the Einstein ring is $M=1.5\cdot 10^{12}\,M_{\sun}$ for an Einstein-de Sitter cosmology and $H_{0}=50\,\rm km\,sec^{-1}\,Mpc^{-1}$. For other realistic world models (also including a positive cosmological constant), this limit is even higher. Assuming an isothermal sphere for the lens, the velocity dispersion in the rest frame can be calculated from $$\sigma_{v}^{2}=\frac{\theta}{4\pi}\frac{D_{\rm s}}{D_{\rm ds}}c^{2}$$ (2) Independent of $H_{0}$ we get a value of at least $330\,\rm km\,sec^{-1}$ for an Einstein-de Sitter cosmology, and slightly less ($320\,\rm km\,sec^{-1}$) for a flat low-density universe ($\Omega_{\rm M}=0.3$, $\Omega_{\Lambda}=0.7$). Other models again lead to even higher values.
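The lower limits quoted above follow directly from Eqs. (1) and (2) once angular size distances are fixed. The short sketch below is our own illustration (not part of the original analysis): it assumes Einstein-de Sitter angular size distances with $H_{0}=50\;{\rm km\;sec^{-1}\;Mpc^{-1}}$ and takes the source redshift to infinity, which minimizes $D_{\rm s}/D_{\rm ds}$ and therefore yields the lower limits.

```python
import numpy as np

# Constants (SI)
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22         # metres per megaparsec
Msun = 1.989e30        # solar mass, kg

H0 = 50e3 / Mpc        # 50 km/s/Mpc in s^-1
z_d = 0.702            # lens (BL Lac host) redshift
theta = 2.4 * np.pi / (180 * 3600)   # Einstein radius in radians

def d_ang_eds(z1, z2):
    """Angular size distance between z1 and z2 for an Einstein-de Sitter universe."""
    return (2 * c / H0) / (1 + z2) * (1 / np.sqrt(1 + z1) - 1 / np.sqrt(1 + z2))

D_d = d_ang_eds(0.0, z_d)
# Lower limits: D_s/D_ds is minimised for z_s -> infinity, where it tends to sqrt(1+z_d).
ratio_min = np.sqrt(1 + z_d)

M = theta**2 * D_d * ratio_min * c**2 / (4 * G)          # Eq. (1)
sigma_v = np.sqrt(theta / (4 * np.pi) * ratio_min) * c   # Eq. (2)

print(f"M     > {M / Msun:.2e} Msun")          # ~1.5e12 Msun
print(f"sigma > {sigma_v / 1e3:.0f} km/s")     # ~330 km/s
```

With these assumptions the sketch reproduces the quoted limits of $1.5\cdot 10^{12}\,M_{\sun}$ and $330\,\rm km\,sec^{-1}$.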
The true values of the mass and velocity dispersion might be much higher if the redshift of the source is significantly below $z\approx 2$. Figures 4 and 5 show the mass and velocity dispersion as a function of the source redshift. If the observed absorption is caused by a foreground object and the redshift of 1517+656 is higher than 0.7, the mass and velocity dispersion of the host galaxy have to be even higher. More detailed modelling of this system will be possible when the redshift of the background object is measured. If the arcs are caused by galaxies at different redshifts, the mass distribution in the outer parts of the host galaxy of 1517+656 can be determined, which will provide very important data for the understanding of galaxy halos. High resolution and high S/N direct images may allow the use of more realistic models than symmetric mass distributions by providing further constraints. 5 Discussion The BL Lac 1517+656 with $M_{R}\leq-27.2\;{\rm mag}$ and $M_{B}\leq-26.4$ is the most luminous BL Lac object in the optical band. Padovani & Giommi (padovani ) presented in their catalogue of 233 known BL Lacertae objects an even brighter candidate than 1ES1517+656: PKS 0215+015 (redshift $z=1.715$, $V=15.4\;{\rm mag}$, Véron-Cetty & Véron veron93 ). This radio source has been identified by Bolton & Wall (bolton ) as an $18.5\;{\rm mag}$ QSO. The object was mainly in a bright phase starting from 1978, and has been faint again since mid-1983 (Blades et al. blades ). Its brightness is now $V=18.8\;{\rm mag}$ ($M_{V}=-26.2\;{\rm mag}$; Kirhakos et al. kirhakos , Véron-Cetty & Véron veron98 ). The X-ray properties of 1517+656 are also extreme: with an X-ray flux of $f_{\rm X}=2.89\;\cdot 10^{-11}\;{\rm erg\;cm^{-2}\;sec^{-1}}$ in the ROSAT PSPC band we have a luminosity of $L_{X}=7.9\;\cdot 10^{46}\;{\rm erg\;sec^{-1}}$, which corresponds to a monochromatic luminosity at $2\;{\rm keV}$ of $L_{X}=4.6\;\cdot 10^{21}\;{\rm W\;Hz^{-1}}$. The radio flux of $37.7\;{\rm mJy}$ at $1.4\;{\rm GHz}$ leads to $L_{R}=1.02\;\cdot 10^{26}\;{\rm W\;Hz^{-1}}$. Thus 1517+656 is up to now one of the most luminous BL Lac objects known in the X-ray, radio and optical bands, also in comparison with the newest results from HST observations (Falomo et al. falomo ), which give a detailed analysis of more than 50 BL Lac objects with redshift $z<0.5$ and show that none of them has an absolute magnitude $M_{R}<-26$. Compared to the 22 BL Lacs in the complete EMSS sample (Morris et al. morris ), 1517+656 is more luminous in the radio, optical and X-ray bands than all of those high frequency peaked BL Lac objects (HBL). Finding an HBL of such brightness, like 1517+656 with $\nu_{\rm peak}=4.0\cdot 10^{16}\;{\rm Hz}$ (Wolter et al. wolter ), is even more surprising, because HBL are usually thought to be less luminous than the low frequency peaked ones (e.g. Fossati et al. fossati , Perlman & Stocke perlman , Januzzi et al. januzzi ). In comparison to the SEDs of different types of blazars, as shown in Fossati et al. (fossati ), 1517+656 shows a remarkable behaviour: its radio properties are similar to those of an HBL ($\log(\nu L_{4.85\;{\rm GHz}})=42.7$), while in the V band ($\log(\nu L_{5500\;\AA})=46.1$) and in the X-rays ($\log(\nu L_{1\;{\rm keV}})=46.4$) it lies between bright LBL and faint FSRQ objects. On the other hand, it is not surprising to find one of the most luminous BL Lac objects in a very massive galaxy with $M>2\cdot 10^{12}M_{\sun}$.
This mass is a lower limit, as long as the redshift of 1517+656 could be larger than $z=0.702$, and is depending on the cosmological model and on the redshift of the lensed object (see Fig. 4). Acknowledgements.We would like to thank H.-J. Hagen for developing the optical reduction software and for taking the 1993 spectrum of HS 1517+656. Thanks to Anna Wolter and the other colleagues from the Osservatorio Astronomico di Brera for fruitful discussion. This work has received partial financial support from the Deutsche Akademische Austauschdienst. References (1998) Bade N., Beckmann V., Douglas N. G., et al., 1998, A&A 334, 459 (1991) Becker R. H., White R. L., Edwards A. L., 1991, ApJS 75, 1 (1999) Beckmann V., 1999, in: PASPC Vol. 159, eds. L. O. Takalo, A. Silanpää (1986) Bergeron J., D’Odorico S., 1986, MNRAS 220, 833 (1985) Blades J. C., Hunstead R. W., Murdoch H. S., Pettini M., 1985, ApJ 288, 580 (1978) Blandford R., Rees M. J., 1978, in Pittsburgh Conference on BL Lac objects, ed. A. M. Wolfe (1969) Bolton J. G., and Wall J. V., 1969, Astrophysical Letters 3, 177 (1994) Brinkmann W., Siebert J., 1994, A&A 285, 812 (1992) Elvis M., Plummer D., Schachter J., Fabbiano G., 1992, A&AS 80, 257 (1999) Falomo R., Urry C. M., Scarpa R., Pesce J. E., Treves A., 1999, in: PASPC Vol. 159, eds. L. O. Takalo, A. Silanpää (1994) Fichtel C. E., Bertsch D. L., Chiang J., et al., 1994, ApJS 94, 551 (1998) Fossati G., Maraschi L., Celotti A, et al., 1998, MNRAS 299, 433 (1981) Gorenstein P., Harnden R. F., Fabricant D. E., 1981, IEEE Trans. Nucl. Sci. NS-28, 869 (1991) Gregory P. C., Condon J. J., 1991, ApJS 75, 1011 (1995) Hagen H.-J., Groote D., Engels D., Reimers D., 1995, A&AS 111, 195 (1994) Januzzi B. T., Smith P. S., Elstan R., 1994, ApJ 428, 130 (1988) Kanbach G., Bertsch D. L., Fichtel C. E., et al., 1988, Space Sci. Rev. 49, 69 (1994) Kirhakos S., Sargent W. L. W., Schneider D. P., et al., 1994, PASP 106, 646 (1996) Kollgaard R. I., Palma C., Laurent-Muehleisen S. A., Feigelson E. D., 1996, ApJ 465, 115 (1995) Marshall H. L., Fruscione A., Carone T. E., 1995, ApJ 439, 90 (1995) McNaron-Brown K., Johnson W. N., Jung G. V., et al., 1995, ApJ 451, 575 (1991) Morris S. L., Stocke J. T., Gioia I. M., et al., 1991, ApJ 380, 49 (1995) Padovani P., Giommi P., 1995, MNRAS 277, 1477 (1993) Perlman E. S., Stocke J. T., 1993, ApJ 406, 430 (1999) Scarpa R., Urry C. M., Falomo R., Pesce J. E., Treves A., 1999, in: PASPC Vol. 159, eds. L. O. Takalo, A. Silanpää (1993) Schneider P., Ehlers J., Falco E. E., 1993, “Gravitational Lenses”, Springer (1993) Véron-Cetty M.-P., Véron P., 1993, A&AS 100, 521 (1998) Véron-Cetty M.-P., Véron P., 1998, ESO Sci. Rep. 18, 1 (1998) Villata M., Raiteri C. M., Lanteri L., et al., 1998, A&AS 130, 305 (1998) Wolter A., Comastri A., Ghisellini G., et al., 1998, A&A 335, 899
Ambiguity, Invisibility, and Negativity (Contribution to the Stanley Deser Memorial Volume “Gravity, Strings and Beyond”) Frank Wilczek Center for Theoretical Physics, MIT, Cambridge, MA 02139 USA; T. D. Lee Institute and Wilczek Quantum Center, Shanghai Jiao Tong University, Shanghai, China; Arizona State University, Tempe, AZ, USA; Stockholm University, Stockholm, Sweden MIT-CTP/5609 Abstract Many widely different problems have a common mathematical structure wherein limited knowledge leads to ambiguity that can be captured conveniently using a concept of invisibility, which requires the introduction of negative values for quantities that are inherently positive. Here I analyze three examples taken from perception theory, rigid body mechanics, and quantum measurement. Stanley Deser’s generosity and humor lifted my spirits on many occasions over many years. Our professional work in physics had very different centers, but there was some overlap. We even wrote a short paper together [1]. That paper is a minor work by any standard, though it does touch on a significant point. In it, we gave several examples of nonabelian gauge potentials that generate the same gauge fields but different gauge structures, so that (for instance) $F^{1}_{\alpha\beta}=F^{2}_{\alpha\beta}$ but $\nabla_{\gamma}F^{1}_{\alpha\beta}\neq\nabla_{\gamma}F^{2}_{\alpha\beta}$. This contrasts with the abelian case, where the fields determine the gauge potentials up to a gauge transformation, locally. (Globally, of course, they do not [2, 3].) The problem of classifying the ambiguity in cases like this has no general solution; indeed, the closely related problem of classifying spaces with equal curvature data of different kinds in different dimensions up to isometry quickly points us to some milestone theorems, famous unsolved problems, and unexplored territory. Here I will describe a trio of more down-to-earth problems that have the same flavor, but which share a mathematical structure that is much more tractable. In the context of “Gravity, Strings, and Beyond” they fall firmly within “Beyond”, not in the sense of “Transcending”, but rather just “Outside”. They are sufficiently direct and simple that further introduction seems unwarranted. 1 Metamers in Visual Perception Within the vast and complex subject of visual perception [4] there is a useful idealization, with roots in the work of Maxwell [5], that captures important aspects of the primary perception of color. This is called colorimetry. The book by Koenderink [6] is a very attractive presentation of many aspects of theoretical colorimetry. The central concept of colorimetry is that the primary perception of the color of an illumination source – essentially meaning, in this context, a uniform beam of light – can be predicted using a few linear functions of its spectrum. Thus we summarize the responses of several detectors $\alpha$ with response functions $c_{\alpha}(\lambda)$ to different illumination sources $k$ with intensity spectra $I_{k}(\lambda)$ according to $$M_{\alpha k}~{}=~{}\int\,d\lambda\,c_{\alpha}(\lambda)I_{k}(\lambda)$$ (1) What we mean by “predicting” the primary perception is that illumination sources that induce the same values of $M_{\alpha k}$ will be indistinguishable to the detectors. This is the possibility we will be analyzing. “Normal” – i.e., majority – human color perception is trichromatic.
That is to say, most people share three very similar sensitivity functions, often called “blue, green, red” after the location of their peak values. They are rather broadly tuned, however, and in the scientific literature “S, M, L” (for “short, medium, long”) is generally preferred. Maxwell did ingenious psychophysical experiments to establish the linearity and three-dimensionality of normal human color perception. Nowadays we can trace its molecular origin. There are three basic pigments, concentrated in three types of cone cells in the fovea, that can undergo shape changes upon absorbing photons. The shape changes trigger electrical impulses that are the primary events in color vision. These absorption events are probabilistic and all-or-none. Human color vision is a beautiful case study in quantum mechanics at work! An illumination $I(b_{k},\lambda)\equiv\sum\limits_{k}b_{k}I_{k}(\lambda)$ that satisfies $$0~{}=~{}\int\,d\lambda c_{\alpha}(\lambda)\,I(b_{k},\lambda)~{}=~{}\sum\limits_{k}\,M_{\alpha k}b_{k}$$ (2) will be invisible to all the detectors. Given $M_{\alpha k}$, conditions (2) are a system of linear equations for the $b_{k}$. Their solutions define a linear space that we will refer to as the space of invisible metamers. (The term “black metamers” is often used, but – like “dark matter” and “dark energy” – it tends to evoke misleading imagery.) Since the $c_{\alpha}(\lambda)$ and $I_{k}(\lambda)$ are intrinsically positive, so are the $M_{\alpha k}$. To obey Eqn. (2), therefore, some of the $b_{k}$ will have to be negative. Since $b_{k}$ represents the strength with which illumination source $k$ is present, however, only $b_{k}\geq 0$ are physically realizable. Nevertheless, the invisible metamer concept is quite useful, because it parameterizes the ambiguity left open by perception. The point is that two illumination choices $b^{(1)}_{k},b^{(2)}_{k}$ look the same to all the detectors if and only if $b^{(1)}_{k}-b^{(2)}_{k}$ belongs to the space of invisible metamers. Thus, given any physical illumination choice $b^{\rm phys.}_{k}$, we can find all the perceptually equivalent illuminations by adding in vectors from the space of invisible metamers, as $b^{\rm phys.}_{k}+b^{\rm inv.}_{k}$. The situation becomes richer, and our conceptual clarity bears fruit, when we come to compare different sets of detectors [7]. Let me describe a sample application from that paper. There are common forms of variant color perception, usually called color blindness, that result from mutations of the S, M, or L receptor molecules. Now suppose that we want to make a differential diagnosis among them. The invisible metamer concept suggests a powerful and efficient way to do that. Indeed, if we have four illumination sources (say four types of LEDs) with adjustable brightness, then there will be different one-dimensional invisible metamer spaces associated with the normal and variant receptor sets. Let us call the basis vectors $b^{N}_{k},b^{S^{\prime}}_{k},b^{M^{\prime}}_{k},b^{L^{\prime}}_{k}$, in an obvious notation.
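Before turning to the diagnostic application, it may help to make the construction concrete. The toy sketch below is entirely our own illustration (not taken from [7]); the Gaussian detector response curves and narrow-band source spectra are invented for the purpose. It builds the matrix $M_{\alpha k}$ of Eq. (1) for three detectors and four sources, and extracts the one-dimensional space of invisible metamers of Eq. (2) as the null space of $M$.

```python
import numpy as np

wl = np.linspace(400, 700, 301)                      # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical detector response functions (S, M, L) and four LED spectra.
C = np.stack([gauss(440, 30), gauss(540, 40), gauss(570, 45)])       # 3 x n
I = np.stack([gauss(450, 15), gauss(510, 15), gauss(560, 15),
              gauss(620, 15)]).T                                     # n x 4

M = C @ I * (wl[1] - wl[0])          # Eq. (1): M[alpha, k] = integral of c_alpha * I_k

# Null space of M = space of invisible metamers (Eq. (2)); here it is one-dimensional.
_, s, Vt = np.linalg.svd(M)
b_inv = Vt[-1]
print("residual |M b_inv| =", np.linalg.norm(M @ b_inv))    # ~0

# Two physical illuminations differing by an invisible metamer look identical.
b1 = np.array([1.0, 1.0, 1.0, 1.0])
lam = 0.5 / np.max(np.abs(b_inv))    # keep b2 componentwise positive
b2 = b1 + lam * b_inv
print("detector response difference:", M @ (b2 - b1))       # ~0
```

The basis vectors $b^{N}$, $b^{S^{\prime}}$, etc. referred to above are obtained in exactly this way, one null-space vector per receptor set.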
Then, starting with a reference color combination $b^{\rm O}_{k}$ that has all positive components, we can dial in illumination patterns of the types $$\displaystyle{\rm normal\ metamers}:$$ $$\displaystyle b^{\rm O}+\lambda b^{N}$$ $$\displaystyle{\rm S\ mutant\ metamers}:$$ $$\displaystyle b^{\rm O}+\lambda b^{S^{\prime}}$$ $$\displaystyle{\rm M\ mutant\ metamers}:$$ $$\displaystyle b^{\rm O}+\lambda b^{M^{\prime}}$$ $$\displaystyle{\rm L\ mutant\ metamers}:$$ $$\displaystyle b^{\rm O}+\lambda b^{L^{\prime}}$$ (3) with variable $\lambda$. The first type will provide, for different values of $\lambda$, a set of colors that cannot be distinguished by normal trichromats, but that are distinguishable by the mutants. This phenomenon shows, rather dramatically, why it is not entirely appropriate to refer to the mutations as “color blindness”. The second type provides colors that cannot be distinguished by S mutants, but can be distinguished by normal trichromats and M or L mutants, and so forth. By choosing appropriate illumination sources we can accentuate the differences. Following this strategy, we have made good, simple practical devices. Along similar lines, one can design quantitative tests for different hypothetical forms of “super” color vision. Indeed, since the relevant genes lie on the X chromosome, females (with two X chromosomes) can carry both majority and mutant forms of the different receptors, allowing different kinds of tetrachromacy or even pentachromacy. For more on this and other applications, see [7]. 2 Equivalent Rigid Bodies In classical mechanics, a rigid body is defined by a distribution of masses $m_{j}$ in space, at positions $x_{j}^{\alpha}$. According to the definition of a rigid body, we only consider motions that correspond to common rotation and translation of all the masses, induced by given summed forces (and torques). The degrees of freedom can be taken as the overall position and orientation of a “body-fixed” reference system. As is shown in textbooks, the dynamics of a rigid body – i.e., the evolution of its position and orientation – depends only on its total mass and its inertia tensor $$I^{\alpha\beta}~{}=~{}\sum\limits_{j}\,m_{j}(|x_{j}|^{2}\delta^{\alpha\beta}-x_{j}^{\alpha}x_{j}^{\beta})$$ (4) referred to a coordinate system where the center of mass $$x^{\alpha}_{\rm CM}~{}=~{}\frac{\sum\limits_{j}m_{j}x_{j}^{\alpha}}{\sum\limits_{j}m_{j}}$$ (5) is at the origin. It is possible for different distributions of mass, i.e. different bodies, to agree in those properties. In that case, if we have access only to those bodies’ overall motion – for example, if they are rigidly attached within identical opaque shells – then we will not be able to distinguish them. We can say that they are dynamically equivalent. The problem then arises of clarifying and exemplifying this ambiguity mathematically. The conditions for equality of total mass and inertia tensors, and zeroing of centers of mass, are all linear in the component mass variables $m_{j}$. It is therefore natural, by analogy to our treatment of metamerism, to introduce a space of “dynamically invisible bodies”.
Dynamically invisible bodies are defined by distributions of mass such that $$\displaystyle 0~{}$$ $$\displaystyle=$$ $$\displaystyle~{}\sum\limits_{j}m_{j}$$ $$\displaystyle 0~{}$$ $$\displaystyle=$$ $$\displaystyle~{}\sum\limits_{j}m_{j}x_{j}^{\alpha}$$ $$\displaystyle 0~{}$$ $$\displaystyle=$$ $$\displaystyle~{}\sum\limits_{j}\,m_{j}(|x_{j}|^{2}\delta^{\alpha\beta}-x_{j}^{\alpha}x_{j}^{\beta})$$ (6) In order for Eqn. (6) to be satisfied, some of the $m_{j}$ will need to be negative. Thus, dynamically invisible bodies, like invisible metamers, are not directly physical. But dynamically invisible bodies are relatively simple to construct, because their defining conditions are linear and highly symmetric. Dynamically invisible bodies are a useful conceptual tool, because we can construct physical, dynamically equivalent objects by adding invisible bodies (i.e., their mass distributions) to a positive mass distribution. Simple but flexible constructions based on these ideas can be used to generate complex, non-obvious examples of dynamically equivalent bodies. Here are three such constructions: 1. Parity construction: To any distribution of masses $m_{j}$ at positions $x_{j}^{\alpha}$, $j=1,...,n$, whose center of mass is at the origin, add reflected negative masses at the inverted positions, according to $$\displaystyle m_{-j}~{}$$ $$\displaystyle=$$ $$\displaystyle~{}-m_{j}$$ $$\displaystyle x_{-j}^{\alpha}~{}$$ $$\displaystyle=$$ $$\displaystyle~{}-x_{j}^{\alpha}$$ (7) This creates a dynamically invisible body. 2. Rotation construction: To any distribution of masses $m_{j}$ at positions $x_{j}^{\alpha}$, $j=1,...,n$, whose center of mass is at the origin, and whose inertia tensor is proportional to the unit tensor, and any rotation $R^{\alpha}_{\beta}$, add negative masses at the rotated positions, according to $$\displaystyle m_{-j}~{}$$ $$\displaystyle=$$ $$\displaystyle~{}-m_{j}$$ $$\displaystyle x_{-j}^{\alpha}~{}$$ $$\displaystyle=$$ $$\displaystyle~{}R^{\alpha}_{\beta}x_{j}^{\beta}$$ (8) Here we can allow improper rotations, or use an equal-mass, equal-inertia-tensor body of different form. Naturally, this raises the question of constructing non-trivial distributions whose inertia tensor is proportional to the unit tensor. Mass distributions that are symmetric under appropriate discrete subgroups of the rotation group, such as the symmetry groups of the Platonic solids, will have that property. 3. Superposition: The invisible bodies form a linear manifold: their mass distributions can be multiplied by constants, and added together freely. To ground the discussion, let us consider a minimal example of an invisible body. We put masses $m_{1}\equiv m,m_{2}=ml_{1}/l_{2}$ at positions $l_{1}\hat{z},-l_{2}\hat{z}$. The parity construction gives us a dynamically invisible body if we add in $m_{3}=-m,m_{4}=-ml_{1}/l_{2}$ at positions $-l_{1}\hat{z},l_{2}\hat{z}$. Now if we add this to a mass distribution $M_{1},M_{2},M_{3},M_{4}$ at $l_{1}\hat{z},-l_{2}\hat{z},-l_{1}\hat{z},l_{2}\hat{z}$ with $$\displaystyle M_{3}~{}$$ $$\displaystyle\geq$$ $$\displaystyle~{}m$$ $$\displaystyle M_{4}~{}$$ $$\displaystyle\geq$$ $$\displaystyle~{}ml_{1}/l_{2}$$ (9) we will define a physical mass distribution. By varying $m>0$ within these constraints, we produce a family of dynamically equivalent physical mass distributions. Untethered point masses are an extreme idealization of any actual rigid body, of course.
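A quick numerical check of this minimal example may be useful; the following sketch is our own illustration (not part of the original). It verifies that the parity-constructed masses satisfy Eq. (6) and that adding them to a physical distribution leaves the total mass, the center of mass, and the inertia tensor of Eqs. (4)-(5) unchanged.

```python
import numpy as np

def inertia(m, x):
    # Eq. (4): I_ab = sum_j m_j (|x_j|^2 delta_ab - x_j^a x_j^b)
    r2 = np.einsum('ja,ja->j', x, x)
    return np.einsum('j,ab->ab', m * r2, np.eye(3)) - np.einsum('j,ja,jb->ab', m, x, x)

l1, l2, m = 1.0, 2.0, 0.3
zhat = np.array([0.0, 0.0, 1.0])
positions = np.array([l1 * zhat, -l2 * zhat, -l1 * zhat, l2 * zhat])

# Minimal invisible body: (m, m*l1/l2) plus its parity-reflected negative copy (Eq. (7)).
m_inv = np.array([m, m * l1 / l2, -m, -m * l1 / l2])
print(m_inv.sum(), m_inv @ positions, np.abs(inertia(m_inv, positions)).max())
# -> 0.0, [0. 0. 0.], 0.0   (Eq. (6) holds)

# Adding the invisible body to a physical one with M3 >= m, M4 >= m*l1/l2 (Eq. (9))
# yields a distinct but dynamically equivalent physical body.
M_phys = np.array([1.0, 1.5, 0.8, 1.2])
M_equiv = M_phys + m_inv
print(np.isclose(M_phys.sum(), M_equiv.sum()),
      np.allclose(M_phys @ positions, M_equiv @ positions),
      np.allclose(inertia(M_phys, positions), inertia(M_equiv, positions)))   # all True
```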
We can make the foregoing construction more realistic by replacing the point masses with distributions of mass around the same centers, and by adding supporting material whose mass distribution is independent of $m$ to fill the interstices. In this way, we reach practically realizable designs for dynamically equivalent rigid bodies. 3 Quantum Grey Boxes The state of a system in quantum mechanics is specified by a density matrix $\rho$, which is required to be Hermitian and non-negative, with unit trace. Observables are represented by Hermitian operators $M$, and the expectation value of $M$ in the state described by $\rho$ is ${\rm Tr}\,\rho M$. Thus when a suite of measurements of the observables $M_{j}$ on a system yields results $v_{j}$, we learn $${\rm Tr\/}\rho M_{j}~{}=~{}v_{j}$$ (10) These results might not determine $\rho$ completely, and the issue arises of parameterizing the resulting ambiguity. (The measurements take us from a black box to a grey box.) Clearly, there is a strong family resemblance among this problem, the preceding one, and the color metamer problem. Following the same line of thought, we define a linear space of invisible density matrices consisting of Hermitian matrices $\rho^{\rm inv.}$ that obey the equations $$\displaystyle{\rm Tr\/}\rho^{\rm inv.}~{}$$ $$\displaystyle=$$ $$\displaystyle~{}0$$ $$\displaystyle{\rm Tr\/}\rho^{\rm inv.}M_{j}~{}$$ $$\displaystyle=$$ $$\displaystyle~{}0$$ (11) Invisible density matrices cannot be non-negative, so they do not describe physically realizable states. Basically, they contain negative probabilities. An extremely simple example may be helpful here, to ground the discussion. For a two-level system physical density matrices have the form $$\rho~{}=~{}\left(\begin{array}[]{cc}a&\beta\\ \beta^{*}&1-a\end{array}\right)$$ (12) where $0\leq a\leq 1$ is a real number and $\beta$ is a complex number, subject to the constraint $$a(1-a)-|\beta|^{2}\geq 0$$ (13) For measurement of $\sigma_{3}$, the invisible state conditions for the Hermitian matrix $M\equiv\left(\begin{array}[]{cc}r&\gamma\\ \gamma^{*}&s\end{array}\right)$ read $$\displaystyle{\rm Tr\/}M~{}$$ $$\displaystyle=$$ $$\displaystyle~{}r+s~{}=~{}0$$ $$\displaystyle{\rm Tr\/}\left(\begin{array}[]{cc}1&0\\ 0&-1\end{array}\right)M~{}$$ $$\displaystyle=$$ $$\displaystyle~{}r-s~{}=~{}0$$ (16) so $$M~{}=~{}\left(\begin{array}[]{cc}0&\gamma\\ \gamma^{*}&0\end{array}\right)~{}=~{}{\rm Re}\,\gamma\ \sigma_{1}-{\rm Im}\,\gamma\ \sigma_{2}$$ (17) Thus, we see that the space of invisible density matrices is spanned by a mixture of spin up and spin down in the $\hat{x}$ direction with equal and opposite probabilities, together with a mixture of spin up and spin down in the $\hat{y}$ direction, with equal and opposite probabilities. Suppose that we measure the expectation value of $\sigma_{3}$ in the state represented by $\rho$ to be $v$, i.e. $${\rm Tr\/}\,\left(\begin{array}[]{cc}1&0\\ 0&-1\end{array}\right)\left(\begin{array}[]{cc}a&\beta\\ \beta^{*}&1-a\end{array}\right)~{}=~{}2a-1~{}=~{}v$$ (18) This leaves $\beta$ undetermined. Evidently, that ambiguity corresponds to motion within the space of invisible states. But (noting $a=\frac{v+1}{2}$) the physical states must obey $$1-v^{2}\geq 4|\beta|^{2}$$ (19) Thus we can only make use of a portion of the invisible state space, whose extent depends on $v$. Negative probabilities as they appear in different contexts were the subject of a very entertaining presentation by Feynman, written up in [8].
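The two-level example can be checked directly. The sketch below is our own illustration: it finds the invisible matrices for the measurement suite $\{\sigma_{3}\}$ numerically, recovering the span of Eq. (17), and verifies the positivity bound of Eq. (19) that limits how much of the invisible space can actually be used.

```python
import numpy as np

sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

def hermitian_basis():
    """Orthogonal basis of 2x2 Hermitian matrices: I, sigma_x, sigma_y, sigma_z."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    return [np.eye(2, dtype=complex), sx, sy, sigma3]

# Invisible matrices for the suite {sigma3}: Tr X = 0 and Tr(X sigma3) = 0 (Eq. (11)).
invisible = [B for B in hermitian_basis()
             if abs(np.trace(B)) < 1e-12 and abs(np.trace(B @ sigma3)) < 1e-12]
print(len(invisible))          # 2: spanned by sigma_x and sigma_y, as in Eq. (17)

# A measured value v = Tr(rho sigma3) fixes a = (v+1)/2 but leaves beta free,
# subject to the positivity bound 1 - v^2 >= 4|beta|^2 (Eq. (19)).
v = 0.6
a = (v + 1) / 2
for beta in (0.3, 0.5):        # 0.3 satisfies the bound, 0.5 violates it
    rho = np.array([[a, beta], [np.conj(beta), 1 - a]])
    psd = np.all(np.linalg.eigvalsh(rho) >= -1e-12)
    print(beta, "positive semidefinite:", psd,
          "| bound satisfied:", 1 - v**2 >= 4 * abs(beta)**2)
```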
In the present context, negative probabilities offer the same sorts of mathematical convenience and conceptual clarity as do the invisible metamers and invisible bodies in theirs, and we can take over ideas from one problem to the others. We can construct distinct physically realizable density matrices that cannot be resolved by a given measurement suite, or we can compare the blind spots of different measurement suites, for example. A natural extension will bring in superdensity matrices [9] and time-dependent measurements. Then we will have a precise concept of invisible histories, which arise in any realistic measurement protocol. Acknowledgements: Thanks to Nathan Newman and Jordan Cotler for helpful comments. This work is supported by the U.S. Department of Energy under grant Contract Number DE-SC0012567, by the European Research Council under grant 742104, and by the Swedish Research Council under Contract No. 335-2014-7424. References [1] S. Deser and F. Wilczek, Non-Uniqueness of Gauge Field Potentials, Phys. Lett. 65B, 391 (1976). [2] P. A. M. Dirac, Quantised Singularities in the Electromagnetic Field, Proc. Royal Society A (London) 133 (821), 60 (1931). [3] Y. Aharonov and D. Bohm, Significance of Electromagnetic Potentials in Quantum Theory, Phys. Rev. 115 (3), 485–491 (1959). [4] B. Wandell, Foundations of Vision (Sinauer Associates, Sunderland, Mass.) (1995). [5] J. C. Maxwell, The Selected Papers of J. C. Maxwell, Papers 6, 7, 12, 13, 16, 21, 22, 42, 47 (Dover Publications, New York). [6] J. Koenderink, Color for the Sciences (MIT Press) (2011). [7] A. Borchert, N. Newman, B. Schenck, J. Weekes, F. Wilczek, Experiments in Color Vision Processing: Metamer Synthesis, Display Re-Mapping, and Graded Filtration (paper in preparation). [8] R. P. Feynman, Negative Probability, cds.cern.ch/record/154856/files/pre-27827.pdf (1984). [9] J. Cotler, C.-M. Jian, X.-L. Qi, F. Wilczek, Superdensity Operators for Spacetime Quantum Mechanics, Journal of High Energy Physics (9) 93 (2018).
JINR preprint E2-94-488 TO THE PROBLEM OF $1/N_{c}$ APPROXIMATION IN THE NAMBU-JONA-LASINIO MODEL D. Ebert Institut für Elementarteilchenphysik, Humboldt-Universität, Invaliden Str. 110, D-10115 Berlin, FRG M. Nagy Institute of Physics of Slovak Academy of Sciences, 842 28 Bratislava, Slovakia M. K. Volkov Joint Institute for Nuclear Research, 141980 Dubna, Russian Federation Abstract In this article, the gap equation for the constituent quark mass in the U(2)$\times$U(2) Nambu-Jona-Lasinio model in the $1/N_{c}$ approximation is investigated. It is shown that taking into account scalar isovector mesons plays an important role for the correct description of quark masses in this approximation. The role of the Ward identity in calculations of $1/N_{c}$ corrections to the meson vertex functions is briefly discussed. The NJL model in the leading $1/N_{c}$ approximation, the Hartree approximation, allows us to obtain a relatively complete picture of low-energy meson physics [1-5] ($N_{c}$ is the number of quark colors). Recently, however, several attempts have been undertaken to go beyond the leading $1/N_{c}$ approximation in the NJL model and to consider mesons not only in tree diagrams but also in loop diagrams [6-12]. Interesting results have been obtained in this direction for the description of the behaviour of the thermodynamical potential and of the bulk thermodynamical quantities in the vicinity of the critical temperature. It has been shown that mesonic degrees of freedom play the dominant role at low $T$, whereas the quark degrees of freedom are most relevant at high $T$. Thus, it seems very useful to continue these investigations and to study more carefully the $1/N_{c}$ approximation in the NJL model by using different methods. Here, we use perturbation theory to calculate $1/N_{c}$ corrections to the gap equation. We will show how to use perturbation theory correctly for the description of the constituent quark mass in the $1/N_{c}$ approximation. Our results are remarkably different from those obtained in a series of previous papers (see e.g. [7]). It will be shown that the inclusion of the scalar isovector mesons $a_{0}(980)$ plays an important role in the description of the $1/N_{c}$ approximation. We consider the NJL model for the $U(2)\times U(2)$ chirally symmetric case [1-2] $$L({\bar{q}},q)={\bar{q}}(i{\hat{\partial}}-m^{0})q+{G\over 2}[({\bar{q}}{\lambda}^{a}q)^{2}+({\bar{q}}i\gamma_{5}{\lambda}^{a}q)^{2}],$$ (1) where $q$ are the fields of the u and d quarks, $m^{0}$ is the current quark mass, $\lambda^{0}=1$ is the unit matrix and $\lambda^{a}$ = $\tau^{a}$ $(a=1,2,3)$ are the Pauli matrices.
After the introduction of meson fields by using the generating-functional technique [1-3] and performing the integration over quark fields in the functional integral, we arrive at the Lagrangian $$L^{\prime}({\tilde{\sigma}},\phi)=-{{\tilde{\sigma}}^{2}_{a}+\phi^{2}_{a}\over 2G}-i{\rm Tr}\ln S^{-1}(x-y),$$ (2) where ${\tilde{\sigma}}_{a}$ and $\phi_{a}$ are the scalar and pseudoscalar meson fields, respectively, ${\tilde{\sigma}}_{0}=\sigma_{0}-m+m_{0}~{}$, ${\tilde{\sigma}}_{a}=\sigma_{a}$ $(a=1,2,3)$, and $$S^{-1}(x,y)=[i{\hat{\partial}}_{x}-m+\sigma_{a}\lambda^{a}+i\gamma_{5}\lambda^{a}\phi_{a}]\,\delta^{4}(x-y).$$ (3) To obtain the $\sigma$-model, it is enough to consider the divergent quark loops depicted in Fig. 1 and to perform the renormalization of the meson fields [1-3]. As a result, we obtain a meson Lagrangian of the following type: $$L^{{}^{\prime\prime}}(\sigma,\phi)={1\over 4}{\rm Tr}\left\{(\partial_{\mu}{\bar{\sigma}})^{2}+(\partial_{\mu}{\bar{\phi}})^{2}+2g\left({m-m_{0}\over G}-8mI_{1}(m,\Lambda)\right){\bar{\sigma}}-\right.$$ $$-\left.g^{2}\left({1\over G}-8I_{1}(m,\Lambda)\right)({\bar{\sigma}}^{2}+{\bar{\phi}}^{2})-g^{2}\left[{\bar{\sigma}}^{2}-2{m\over g}{\bar{\sigma}}+{\bar{\phi}}^{2}\right]^{2}\right\}-$$ $$-i{\rm Tr}\ln\left\{1+{g\over i{\hat{\partial}}-m}[{\bar{\sigma}}+i\gamma_{5}{\bar{\phi}}]\right\},$$ (4) where $g=[4I_{2}(m,\Lambda)]^{-1/2}$, ${\bar{\sigma}}=\sigma^{a}\lambda_{a}$, ${\bar{\phi}}=\phi^{a}\lambda_{a}$, and $I_{1}(m,\Lambda)$ and $I_{2}(m,\Lambda)$ are divergent integrals ($\Lambda$ is the cut-off parameter) $$I_{n}(m,\Lambda)=-i{N_{c}\over(2\pi)^{4}}\int^{\Lambda}{d^{4}k\over(m^{2}-k^{2})^{n}}.$$ (5) From (5) we can see that the coupling constant $g^{2}$ is of order $1/N_{c}$. Recall that the coupling constant $G$ is also of order $1/N_{c}$. From the condition ${\delta L^{\prime\prime}(\sigma,\phi)\over\delta\sigma}|_{\sigma,~{}\phi=0}=0$ (absence of terms linear in $\sigma$ in $L^{\prime\prime}(\sigma,\phi)$) we obtain the gap equation $$m=m^{0}+8mGI_{1}(m,\Lambda)~{}.$$ (6) How does the gap equation change if we permit meson propagators inside the quark loops (the $1/N_{c}$ approximation)?
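Before turning to the $1/N_{c}$ corrections, it is useful to fix the Hartree-level numbers. The short sketch below is our own illustration; it assumes the standard closed forms of $I_{1}$ and $I_{2}$ for a sharp four-dimensional Euclidean cutoff (an assumption on the regularization, not stated in the text) and solves the gap equation (6) by fixed-point iteration with the parameter set quoted later in the paper ($m^{0}=3.3$ MeV, $\Lambda=1.2$ GeV, $G=5.4$ GeV$^{-2}$). It gives a constituent mass close to the quoted $m_{H}=280$ MeV and $g^{2}$ of order $2\pi$.

```python
import numpy as np

Nc = 3
m0, Lam, G = 0.0033, 1.2, 5.4          # GeV, GeV, GeV^-2

def I1(m):
    # Quadratically divergent integral, sharp 4D Euclidean cutoff (assumed)
    return Nc / (16 * np.pi**2) * (Lam**2 - m**2 * np.log(1 + Lam**2 / m**2))

def I2(m):
    # Logarithmically divergent integral with the same cutoff
    return Nc / (16 * np.pi**2) * (np.log(1 + Lam**2 / m**2) - Lam**2 / (Lam**2 + m**2))

# Gap equation (6): m = m0 + 8 m G I1(m), solved by fixed-point iteration.
m = 0.3
for _ in range(200):
    m = m0 + 8 * m * G * I1(m)

g2 = 1.0 / (4 * I2(m))                 # g = [4 I2]^(-1/2), defined below Eq. (4)
print(f"m_H ~ {1e3 * m:.0f} MeV, g^2 ~ {g2:.2f} (2*pi = {2*np.pi:.2f})")
```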
From Fig. 1 one can easily see that in this case, in addition to the tadpole 1a, there appear complementary terms (linear in $\sigma$) from the diagram 1c, which lead to additional terms in the gap equation (6) (see Fig. 2): $$m=m^{0}+8mGI_{1}(m,\Lambda)+\Delta=$$ $$=m^{0}+2G{iN_{c}\over(2\pi)^{4}}{\rm Tr}\int^{\Lambda}{d^{4}k\over{\hat{k}}-m}+2G{iN_{c}\over(2\pi)^{4}}{\rm Tr}\int^{\Lambda}d^{4}k{1\over{\hat{k}}-m}\Sigma(k){1\over{\hat{k}}-m}+...$$ (7) The last two terms in (7) can be written in the form of one tadpole with a modified quark mass: $$m=m^{0}+2G{iN_{c}\over(2\pi)^{4}}{\rm Tr}\int^{\Lambda}{d^{4}k\over{\hat{k}}-m-\Sigma(k)}~{},$$ (7a) where $\Sigma(k)$ is the quark self-energy operator $$\Sigma(k)=3\Sigma_{\pi}(k)+\Sigma_{\sigma_{0}}(k)+3\Sigma_{a_{0}}(k)~{},$$ (8) $$\Sigma_{\pi}(k)=i{g^{2}_{\pi}\over(2\pi)^{4}}\int^{\bar{\Lambda}}d^{4}q{{\hat{q}}-m\over(m^{2}-q^{2})(M^{2}_{\pi}-(k-q)^{2})},$$ (9) $$\Sigma_{\sigma_{i}}(k)=i{g^{2}\over(2\pi)^{4}}\int^{\bar{\Lambda}}d^{4}q{{\hat{q}}+m\over(m^{2}-q^{2})(M^{2}_{\sigma_{i}}-(k-q)^{2})}.$$ (10) Here $M_{\pi}$ and $M_{\sigma_{i}}$ are the masses of the pions and $\sigma$-particles $(\sigma_{i}=\sigma_{0},a^{0}_{0},a_{0}^{+},a_{0}^{-})$, respectively, and $g_{\pi}={m\over F_{\pi}}$, where $F_{\pi}=93$ MeV is the pion decay constant. (After accounting for $\pi-a_{1}$ transitions, the constants $g_{\sigma}$ and $g_{\pi}$ differ from each other [1b]; here $a_{1}$ is the axial-vector meson.) In the general case the cut-off parameters $\Lambda$ and $\bar{\Lambda}$ are not equal to each other. Here, we assume that $\Lambda$ = $\bar{\Lambda}$ = 1.2 GeV. The gap equation (7a) can be written in the form of a Schwinger-Dyson equation for the new quark mass ${\bar{m}}=m+\Sigma(m)$. For this purpose, we add the term $\Sigma({\bar{m}})$ to both sides of equation (7a) and write it in the form $${\bar{m}}=m^{0}+2G{iN_{c}\over(2\pi)^{4}}{\rm Tr}\int^{\Lambda}{d^{4}k\over{\hat{k}}-{\bar{m}}}+\Sigma({\bar{m}})=$$ $$=m^{0}+8G{\bar{m}}I_{1}({\bar{m}},\Lambda)+\Sigma({\bar{m}})~{}.$$ (11) From equation (11) we can find the correction $\delta m$ to the quark mass $m_{H}$ obtained in the Hartree approximation, after taking into account the first order of the $1/N_{c}$ expansion. To this end we write the mass $\bar{m}$ in the form $${\bar{m}}=m_{H}+\delta m$$ (12) and expand the second term on the r.h.s. of (11) in $\delta m$, keeping terms of first order in $1/N_{c}$: $$m_{H}+\delta m=m_{0}+(m_{H}+\delta m)8G\left[I_{1}(m_{H},\Lambda)+\delta m{\delta I_{1}\over\delta m}|_{m=m_{H}}\right]+\Sigma(m_{H})~{}.$$ (13) By using the formula $${\delta I_{1}(m,\Lambda)\over\delta m}=-2mI_{2}(m,\Lambda)=-{m\over 2g^{2}}$$ (14) and the gap equation in the Hartree approximation (see formula (6)) $$m_{H}=m_{0}+8Gm_{H}I_{1}(m_{H},\Lambda),$$ we find for $\delta m$ the following expression: $$\delta m=Z^{-1}\Sigma(m_{H}),$$ (15) where $$Z=16Gm_{H}^{2}I_{2}(m_{H},\Lambda)+{m_{0}\over m_{H}}=\left({2m_{H}\over g}\right)^{2}G+{m_{0}\over m_{H}}~{}.$$ (16) For the parameters used here [1b], $m_{H}=280$ MeV, $m_{0}=3.3$ MeV, $\Lambda=1.2$ GeV, $G=5.4$ GeV${}^{-2}$, and $g^{2}\approx 2\pi$, we get (note that the results obtained in the papers [7,8] correspond to the value $Z=1$)
$$Z^{-1}=3.6,~{}~{}~{}\delta m=3.6~{}\Sigma(m_{H},\Lambda).$$ Now we have to determine the operators $\Sigma_{\sigma_{i}}(p,\Lambda)$ and $\Sigma_{\pi}(p,\Lambda)$ at the point ${\hat{p}}=m_{H}$. One can easily evaluate the integrals in formulae (9) and (10) and obtain the following expressions: $$\Sigma_{\pi}(p,\Lambda)={g^{2}_{\pi}\over(4\pi)^{2}}\int_{0}^{1}dx~{}(m-x{\hat{p}})\left[\ln\left({\Lambda^{2}\over m^{2}}+1\right)+\ln{1+{\bar{b}}_{\pi}x+{\bar{c}}x^{2}\over 1+b_{\pi}x+cx^{2}}-\left(1+{m^{2}\over\Lambda^{2}}\right)^{-1}{1\over 1+{\bar{b}}_{\pi}x+{\bar{c}}x^{2}}\right]={g_{\pi}^{2}\over(4\pi)^{2}}\left[mC_{1}^{\pi}(p,\Lambda)-{\hat{p}}C_{2}^{\pi}(p,\Lambda)\right],$$ (17) $$\Sigma_{\sigma_{i}}(p,\Lambda)=-{g^{2}\over(4\pi)^{2}}\int_{0}^{1}dx~{}(m+x{\hat{p}})\left[\ln\left({\Lambda^{2}\over m^{2}}+1\right)+\ln{1+{\bar{b}}_{\sigma_{i}}x+{\bar{c}}x^{2}\over 1+b_{\sigma_{i}}x+cx^{2}}-\left(1+{m^{2}\over\Lambda^{2}}\right)^{-1}{1\over 1+{\bar{b}}_{\sigma_{i}}x+{\bar{c}}x^{2}}\right]=-{g^{2}\over(4\pi)^{2}}\left[mC_{1}^{\sigma_{i}}(p,\Lambda)+{\hat{p}}C_{2}^{\sigma_{i}}(p,\Lambda)\right],$$ (18) where $$b_{i}={M^{2}_{i}-m^{2}-p^{2}\over m^{2}},~{}~{}c={p^{2}\over m^{2}},~{}~{}{\bar{b}}_{i}={M^{2}_{i}-m^{2}-p^{2}\over a},~{}~{}{\bar{c}}={p^{2}\over a},~{}~{}a=m^{2}+\Lambda^{2}~{},$$ $$C_{1}^{i}=\ln\left({\Lambda^{2}\over m^{2}}+1\right)+\left(1+{{\bar{b}}_{i}\over 2{\bar{c}}}\right)\ln(1+{\bar{b}}_{i}+{\bar{c}})-\left(2+{b_{i}\over c}\right)\ln{M_{i}\over m}+\left(1-{{\bar{b}}^{2}_{i}\over 2{\bar{c}}}\right){\bar{I}}_{0}+\left({b^{2}_{i}\over 2c}-2\right)I_{0}~{},$$ (19) $$C_{2}^{i}=-{1\over 2}\left({{\bar{b}}_{i}\over{\bar{c}}}-{b_{i}\over c}\right)+{1\over 2}\ln\left({\Lambda^{2}\over m^{2}}+1\right)+{1\over 2}\left(1-{{\bar{b}}^{2}_{i}\over 2{\bar{c}}^{2}}\right)\ln(1+{\bar{b}}_{i}+{\bar{c}})-\left[1+{1\over c}\left(1-{b^{2}_{i}\over 2c}\right)\right]\ln{M_{i}\over m}-{{\bar{b}}_{i}\over 2{\bar{c}}}\left(1-{{\bar{b}}^{2}_{i}\over 2{\bar{c}}}\right){\bar{I}}_{0}+{b_{i}\over 2c}\left(2-{b^{2}_{i}\over 2c}\right)I_{0}~{},$$ (20) $$I_{0}=\int^{1}_{0}{dx\over 1+b_{i}x+cx^{2}},~{}~{}~{}~{}~{}{\bar{I}}_{0}=\int^{1}_{0}{dx\over 1+{\bar{b}}_{i}x+{\bar{c}}x^{2}}~{}.$$ The scalars and pions give contributions to the quark mass with opposite signs and strongly compensate each other. Therefore, it is important to take into account the contributions of all mesons corresponding to the symmetry group under consideration. In our case of the group U(2)$\times$U(2), the three pions are accompanied by four scalar mesons in the scalar sector (the scalar isoscalar $\sigma_{0}(700)$ and the three scalar isovectors $a_{0}(980)$). (The isoscalar partner of the pions appears only in the U(3)$\times$U(3) group, in the form of the $\eta$ meson; therefore, we do not consider it here.) The scalar mesons have the masses $m_{\sigma_{0}}=700$ MeV and $m_{a_{0}}=980$ MeV. Table 1 gives the coefficients $C^{i}_{1}$ and $C^{i}_{2}$ evaluated for all these mesons at $p^{2}=m^{2}$. Then, for $\Sigma(m,\Lambda)$ we obtain $$\Sigma(m,\Lambda)={m\over(4\pi)^{2}}\left[-g^{2}(1.48+3\times 1.13)+g_{\pi}^{2}(3\times 1.3)\right]={m\over(4\pi)^{2}}\left[-30.6+35.4\right],$$ $$\Sigma(m,\Lambda)\approx 0.03~{}m~{},~{}~{}~{}~{}~{}~{}\delta m\approx 0.11~{}m~{}.$$ As a result, the mass of the constituent quark increases by $11~{}\%$ and becomes equal to 310 MeV, which fully corresponds to the standard value.
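The numbers above can be reproduced with a few lines. The sketch below is our own illustration; it uses the numerical combinations of the coefficients $C^{i}_{1,2}$ quoted in the evaluation above (1.48 for $\sigma_{0}$, 1.13 for each $a_{0}$, 1.3 for each pion) together with the parameter set $m_{H}=280$ MeV, $m_{0}=3.3$ MeV, $G=5.4$ GeV$^{-2}$, $g^{2}\approx 2\pi$, $F_{\pi}=93$ MeV, and evaluates Eqs. (8), (15) and (16).

```python
import numpy as np

m_H, m0 = 0.280, 0.0033          # GeV
G, Fpi = 5.4, 0.093              # GeV^-2, GeV
g2 = 2 * np.pi                   # g^2 ~ 2*pi
gpi2 = (m_H / Fpi) ** 2          # g_pi = m / F_pi

# Sigma(m_H) from the evaluated coefficient combinations quoted in the text:
# sigma_0 contributes 1.48 and each of three a_0 contributes 1.13 (with -g^2),
# while each of three pions contributes 1.3 (with +g_pi^2).
Sigma = m_H / (4 * np.pi) ** 2 * (-g2 * (1.48 + 3 * 1.13) + gpi2 * (3 * 1.3))

# Eq. (16): Z = (2 m_H / g)^2 G + m0 / m_H, and Eq. (15): delta_m = Z^-1 Sigma.
Z = (4 * m_H**2 / g2) * G + m0 / m_H
dm = Sigma / Z

print(f"Z^-1 = {1/Z:.2f}, Sigma = {Sigma/m_H:.3f} m, delta_m = {dm/m_H:.2f} m")
# -> Z^-1 ~ 3.6, Sigma ~ 0.03 m, delta_m ~ 0.11 m
```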
If we consider only one scalar meson, $\sigma_{0}(700)$, the corrections increase rapidly, amounting to $60~{}\%$ $(\delta m=0.60~{}m)$, which does not correspond to the $1/N_{c}$ approximation. (If we use the model values for the masses of the scalar mesons, $m_{\sigma_{i}}^{2}=m_{\pi}^{2}+4m^{2}$, i.e. $m_{\sigma_{i}}\approx 580$ MeV, we get a negative value for $\delta m$; in that case $C_{1}^{\sigma_{i}}=1.3$ and $C_{2}^{\sigma_{i}}=0.55$.) These calculations show that, for correct estimates in the $1/N_{c}$ approximation, it is very important to take into account the contributions of all mesons from the scalar and pseudoscalar sectors. As an example of another approach to the estimation of the quark mass in the $1/N_{c}$ approximation within the NJL model we can cite the paper [7]. In our opinion, two incorrect steps were taken in that article. First, the additional contributions (in the $1/N_{c}$ approximation) from the leading tadpole term in the Schwinger-Dyson equation were not taken into account; this led to a lowered result which did not correspond to the true $1/N_{c}$ approximation. Second, the contribution of only one scalar isoscalar meson was considered instead of all four scalar mesons; this substantially increased their estimate. As a result of these two operations, the final $1/N_{c}$ corrections to the quark mass did not go beyond the limit of $20~{}\%$ of the Hartree approximation. One of the interesting tasks is the construction of a chirally symmetric perturbation theory for the $1/N_{c}$ expansion. Positive results in this direction were obtained by G. S. Guralnik and coauthors already in 1976 [12]. They showed that, in the $1/N_{c}$ approximation for the NJL model with one scalar and one pseudoscalar meson, the pion mass vanishes when the current quark mass vanishes. Therefore, the pion remains a Goldstone particle in this approximation as well. It is interesting to consider the changes of the meson coupling constants $g$ and $g_{\pi}$ in the $1/N_{c}$ approximation. As we show in the Appendix, the scalar meson coupling constant $g$ does not change in the $1/N_{c}$ approximation. A more complicated situation arises for the coupling constant $g_{\pi}$ and the Goldberger-Treiman identity. While this work was being completed, we learned that a very interesting paper had just appeared [13]. In that work, a chirally symmetric self-consistent $1/N_{c}$ approximation scheme for the NJL model was developed. The authors used the correct $1/N_{c}$ approximation for the gap equation and demonstrated explicitly that their scheme fulfills all the chiral symmetry theorems: the Goldstone theorem, the Goldberger-Treiman relation, and the conservation of the quark axial current. This paper is very close to ref. [12]. In contrast with our work, they considered the SU(2)$\times$SU(2) chirally symmetric Lagrangian with only one scalar isoscalar meson and the case of vanishing current quark mass. In conclusion, we would like to say that the papers [12-13] and this one give a full picture of the chirally symmetric $1/N_{c}$ approximation in the NJL model. One of the authors (MKV) would like to express his gratitude to Prof. J. Hüfner and Dr. S. Klevansky for useful discussions, and to the JSPS Program of Japan, the INTAS fund (grant No. 2915) and the Russian Foundation for Fundamental Research (grant No. 93-02-14411) for financial support. This work was also supported by DFG project 436 RUS 113.
APPENDIX The scalar vertex function and the Ward identity Let us show that the $1/N_{c}$ corrections to the scalar coupling constant $g$ are equal to zero. To this end we consider the diagrams depicted in Fig. 3. The scalar vertex function for the $\sigma_{0}$ meson in the $1/N_{c}$ approximation takes the form $$\Gamma^{(1/N_{c})}(p,p^{\prime}|q)=g_{\sigma}+\Gamma_{\sigma_{0}}^{b}(p,p^{\prime}|q)+\Gamma_{\sigma_{0}}^{(c+d)}(p,p^{\prime}|q)+3\Gamma_{a_{0}}^{b}(p,p^{\prime}|q)+3\Gamma_{a_{0}}^{(c+d)}(p,p^{\prime}|q)+3\Gamma_{\pi}^{b}(p,p^{\prime}|q)+3\Gamma_{\pi}^{(c+d)}(p,p^{\prime}|q).$$ (A.1) Now consider the case $q=0$, $p=p^{\prime}$. Then $$\Gamma_{\sigma_{0}}^{(b)}(p,p|0)=-i{g^{2}\over(2\pi)^{4}}\int{d^{4}k\over({\hat{k}}+{\hat{p}}-m)^{2}(M^{2}_{\sigma_{0}}-k^{2})},$$ (A.2) $$\Sigma_{\sigma_{0}}(p+k)=-i{g^{2}\over(2\pi)^{4}}\int{d^{4}k\over({\hat{k}}+{\hat{p}}-m)(M^{2}_{\sigma_{0}}-k^{2})},$$ (A.3) $$\Gamma^{(c+d)}_{\sigma_{0}}(p,p|0)={\Sigma_{\sigma_{0}}(p)-\Sigma_{\sigma_{0}}(m)\over{\hat{p}}-m}|_{{\hat{p}}=m}={\delta\Sigma_{\sigma_{0}}(p)\over\delta{\hat{p}}}|_{{\hat{p}}=m}=i{g^{2}\over(2\pi)^{4}}\int{d^{4}k\over({\hat{k}}+{\hat{p}}-m)^{2}(M^{2}_{\sigma_{0}}-k^{2})}=-\Gamma_{\sigma_{0}}^{(b)}(p,p|0).$$ (A.4) A similar situation holds for $\Gamma_{a_{0}}$ and $\Gamma_{\pi}$. As a result, all contributions of the diagrams depicted in Fig. 3b-d cancel each other, and we finally obtain $$\Gamma^{(1/N_{c})}(p,p|0)=g_{\sigma}~{}.$$ (A.5) References [1] M. K. Volkov - a) Ann. Phys. (N.Y.) 157 (1984) 282; b) Sov. J. Part. Nucl. 17 (1986) 186. [2] D. Ebert, H. Reinhardt - Nucl. Phys. 271 (1986) 188. [3] D. Ebert, H. Reinhardt, M. K. Volkov - Progr. Part. Nucl. Phys. 33 (1994) 1. [4] U. Vogl, W. Weise - Progr. Part. Nucl. Phys. 27 (1994) 1. [5] S. P. Klevansky - Rev. Mod. Phys. 64 (1992) 649. [6] A. Blotz, K. Goeke - Preprint RUB-TPII-28/92, 1992 (Phys. Rev. D). [7] Nan-Wei Cao, C. M. Shakin, Wei-Dong Sun - Phys. Rev. C 46 (1992) 2535. [8] E. Quack, S. P. Klevansky - Phys. Rev. C 49 (1994) 3283. [9] P. Zhuang, J. Hüfner, S. P. Klevansky - Nucl. Phys. A 576 (1994) 525. [10] P. Zhuang, J. Hüfner, S. P. Klevansky, H. Voss - Ann. Phys. (N.Y.) 234 (1994) 225. [11] P. Zhuang, J. Hüfner, S. P. Klevansky, L. Neise - Heidelberg Univ. Preprint HD-TVP-94-09. [12] C. Bender, F. Cooper, G. S. Guralnik - Ann. Phys. (N.Y.) 109 (1977) 165; F. Cooper, G. S. Guralnik, S. H. Kasdan - Phys. Rev. D 14 (1976) 1607; Neal J. Snyderman - Ph.D. Thesis, Brown University, May 1976 (unpublished). [13] V. Dmitrasinovic, H.-J. Schulze, R. Tegen, R. H. Lemmer - Ann. Phys. (N.Y.) 1994 (in press). Figure captions Fig. 1 The quadratically (1a, 1b) and logarithmically (1c, 1d) divergent quark loop diagrams in the NJL model. Fig. 2 The additional tadpole diagram in the $1/N_{c}$ approximation. $\Sigma$ is the self-energy part of the quark propagator with pion and scalar meson internal lines. Fig. 3 The scalar vertex diagrams in the $1/N_{c}$ approximation.
Multi-Resolution 3D Convolutional Neural Networks for Object Recognition Sambit Ghadai, Xian Lee, Aditya Balu, Soumik Sarkar, Adarsh Krishnamurthy Department of Mechanical Engineering, Iowa State University, Ames, Iowa, 50011, USA {sambitg|xylee|baditya|soumiks|adarsh}@iastate.edu Abstract Learning from 3D data is a fascinating idea that is well explored and studied in computer vision. It allows one to learn from very sparse LiDAR data and point-cloud data, as well as from 3D objects in the form of CAD models, surfaces, etc. Most of the approaches to learning from such data are limited to uniform 3D volume occupancy grids or octree representations. A major challenge in learning from 3D data is that one needs to define a proper resolution to represent it in a voxel grid, and this becomes a bottleneck for the learning algorithms. Specifically, a fine resolution is very important to capture key features of the object, but at the same time the data becomes sparser as the resolution becomes finer. There are numerous applications in computer vision where a multi-resolution representation is used instead of a uniform grid representation in order to make the applications memory efficient. Though such methods are difficult to learn from, they are much more efficient in representing 3D data. In this paper, we explore the challenges in learning from such data representations. In particular, we use a multi-level voxel representation where we define a coarse voxel grid that contains information about important voxels (boundary voxels) and multiple fine voxel grids corresponding to each significant voxel of the coarse grid. A multi-level voxel representation can capture important features in the 3D data in a memory-efficient way compared to an octree representation. Consequently, learning from a 3D object at high resolution, which is paramount in feature recognition, is made efficient. 1   Introduction Data encountered in real life are often three dimensional (3D) in nature. Previous works [1, 2] have shown that information extracted from the data in two dimensions (in the form of 2D images or 2.5D with a depth channel) is often sufficient for most traditional object detection problems. However, some problems require reasoning about the data in its raw format, e.g., point-cloud data obtained from a LiDAR [3], 3D object data used for rendering smooth 3D graphics [4], or engineering data used in design for manufacturing [5]. These problems may certainly be solvable in 2D space, yet there is a huge loss of information when converting data from 3D space to 2D space. Several works demonstrate learning from multiple views of an object [6, 7, 8, 9]. Though effective in many applications, such approaches do not provide a spatial understanding of the features, which makes it impossible to learn certain features of the object.
For example, even a simple task such as recognizing the volume of an object is not possible from multiple views, but it is possible using a spatial representation of the object. Furthermore, the spatial features of 3D data can be ambiguous unless augmented with additional information about the object, such as depth and normals, and many approaches make use of such augmentation. Thus, learning from 3D data in the spatial domain is essential for 3D object recognition. Learning from data using only fully connected neural networks is difficult due to the curse of dimensionality. Thus, in order to learn efficiently, sophisticated architectures such as convolutional neural networks (CNNs), which preserve spatial localization and learn features in a hierarchical fashion, are usually used [10, 11]. Since CNNs are more efficient and have better learning capabilities due to their shift invariance, there is a strong incentive to use such methods for higher-dimensional data. However, extending such methods to learning from 3D data again poses the challenge of dimensionality, due to the resolution of the voxel grid. 3D data is usually represented by overlaying a uniform grid over it to convert it from a continuous Euclidean representation to a discrete voxelized representation. This voxelized representation is then learnt by the CNN. Though such methods have great performance on standard datasets such as ModelNet10 and ModelNet40, there is the challenge of not being able to learn from data that exceeds a certain number of samples or resolution due to limitations of the computational hardware. Thus, intelligent ways to overcome the challenge of high voxel resolution, while maintaining the ability to learn effectively from higher-resolution and larger data, are of great interest. In this paper, we try to address this issue by taking a unique approach for representing the data and then learning from it using the proposed representations. We take a multi-resolution approach for representing the 3D data where we have multiple levels of resolution of the voxel grid (thus having a non-uniform grid representation). The first level represents very coarse features in the 3D data, and wherever there are boundaries or key features, we generate the next level of voxels to obtain finer-resolution occupancy information of the 3D data. We can further extend this idea to several levels of resolution to represent the data accurately and efficiently. The following are the specific contributions of this paper: 1. We use a novel multi-resolution voxel representation to efficiently represent the 3D data and develop an algorithm to learn from such a non-uniform voxel representation. 2. We develop a new algorithm to train a multi-level network with similar architectures for the coarser and finer levels of resolution, and achieve better performance on benchmark datasets, such as ModelNet10, compared to 3D data represented at a coarse resolution or at a dense resolution equivalent to the combined coarse and fine resolution. 3. We also compare with other related methods in terms of learning performance and the computation required for learning. The paper is arranged as follows. In section 2, we discuss a few significant works in the field of learning from 3D data with various data representations.
Section 3 describes the multi-resolution voxel representation of 3D data, which we generate using a GPU-accelerated algorithm so that the data is represented efficiently in a sparse manner. In section 4, we explain the Multi-Resolution Convolutional Neural Network (MRCNN), which learns features from the multi-resolution voxel representation. Finally, in section 5, we present preliminary results from evaluating the MRCNN on multi-resolution voxel data for classifying objects in the ModelNet10 dataset and discuss its effectiveness in learning from sparse data with moderate memory requirements. 2   Related Work There is a large body of related work; while we try to cover the most relevant studies, some may inevitably be missing. Deep learning from 3D data began with VoxNet [11], which used a 3D CNN and faced considerable challenges in achieving good accuracy on the ModelNet10 and ModelNet40 datasets. Subsequently, many works voxelized the 3D data and applied different kinds of architectures, such as variational convolutional autoencoders and deep convolutional generative adversarial networks [12, 13]. A common challenge in these works is the resolution: most papers were limited to $32^{3}$, and only recently, with increased compute power, have resolutions of $64^{3}$ or $128^{3}$ become feasible; going beyond that remains very difficult. More recent works such as PointNet [3] deal directly with point clouds. This marks a bifurcation in 3D learning between point clouds and CAD models, which cannot be faithfully represented as point clouds. Although PointNets sidestep the resolution problem for point clouds, learning from CAD models still depends on the voxel resolution. One of the approaches most closely related to ours is OctNet [4, 14], which learns from an octree-based voxel representation of the data and reaches an effective resolution of $256^{3}$ at a depth of 3. Unlike octrees, our multi-resolution data achieves the same effective resolution at a depth of 2 (a finer level 2 inside a coarser level 1). We now describe our multi-resolution representation. 3   Multi-resolution Voxelization In this section, we describe our GPU-accelerated algorithm for creating a multi-level voxelization of a B-rep model. We first construct a grid of voxels in the region occupied by the object and use a triangle-box intersection test to identify the boundary voxels. Once the boundary voxels are identified, we create an index array that maps each coarse-level boundary voxel to its memory location in the fine-level voxel grid. We then apply the same triangle-box intersection test, in parallel, to identify the fine-level voxels that intersect with the triangles of the B-rep model. Using this method on the GPU, a fine voxelization of the model with an effective voxel resolution of $1000^{3}$ can be generated; this resolution is sufficient to resolve the fine surface features of the model. 3.1   Coarse Level Voxelization To identify the boundary voxels of the coarse voxelization, the voxels that intersect with the triangles of the B-rep model are first identified.
Since a valid B-rep model does not contain any triangles in its interior, any voxel that intersects with a triangle is classified as a boundary voxel containing a part of the solid model's boundary surface. In our multi-level voxelization process, the boundary voxels are identified for two specific reasons. First, the boundary voxels need to be identified in order to generate the finer-level voxel grid inside them; this allows a higher voxelization resolution without exponentially increasing the total voxel count. In addition, once the boundary voxels are identified, the average surface normal of the triangles that intersect a voxel can be embedded into that voxel, which allows realistic lighting calculations, using only the voxelization, when generating a surface rendering of the model by ray casting. The process of identifying the boundary voxels was sped up by parallelizing the triangle-box intersection test on the GPU over the triangles. To accelerate this operation further, we first identify the voxels that contain the triangle's vertices so that we can cull the voxels that need not be tested for intersection with the triangle: once the bounds of the triangle are known, we do not need to perform the triangle-box intersection test on voxels that lie outside those bounds, as shown in Figure 1. The index of the voxel in which a given vertex lies is calculated from the position of the vertex relative to the axis-aligned bounding box (AABB) of the object using $$i=\left[N_{x}\,(x_{p}-x_{min})/(x_{max}-x_{min})\right],$$ (1) $$j=\left[N_{y}\,(y_{p}-y_{min})/(y_{max}-y_{min})\right],$$ (2) $$k=\left[N_{z}\,(z_{p}-z_{min})/(z_{max}-z_{min})\right],$$ (3) $$v_{array}=k\,N_{y}\,N_{x}+j\,N_{x}+i,$$ (4) where $N$ contains the dimensions of the voxel grid generated to encapsulate the model, $*_{min}$ and $*_{max}$ correspond to the minimum and maximum corners of the AABB, and $v_{array}$ is the index in the global array of the coarse voxelization. We then perform an intersection test between the triangle and all the voxels that lie within these bounds using the separating-axis test [15] on the GPU. The rendering-based voxelization has already classified each voxel as being inside or outside the model; hence, once a voxel has been identified as a boundary voxel, we can change its value in the voxelization to mark it as such. For illustration purposes, the boundary voxels are shown as a separate array in Figure 2; in practice, we use the same array but mark boundary voxels with a different integer, say $2$. The specific algorithm is outlined in Algorithm 1. One of the main challenges in parallelizing the above algorithm over the triangles of the object is that if two different triangles intersect the same voxel, a race condition can occur when storing the triangle indices in the voxel data structure. We overcome this potential race condition by using an atomic addition operation on the GPU to find a free memory location in which to store the triangle index; the specific code listing for a CUDA implementation is given in the Appendix, and an illustrative sketch of the index computation and buffered triangle binning is given below. We also need to choose an appropriate buffer size for storing all the triangles intersecting a voxel.
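As a concrete illustration of Eqs. (1)-(4) and of the buffered, race-free storage of triangle indices described above, the following is a minimal CPU-side sketch in Python; it is not the CUDA listing from the Appendix, and the array names, fixed buffer length, and NumPy types are our own illustrative choices. For brevity, the sketch bins each triangle only into the voxels containing its vertices, whereas the actual algorithm also runs the separating-axis test on every voxel within the triangle's bounds.

```python
import numpy as np

def voxel_index(p, bb_min, bb_max, dims):
    """Map a vertex p to its (i, j, k) voxel and flat array index, as in Eqs. (1)-(4)."""
    ijk = np.floor(dims * (p - bb_min) / (bb_max - bb_min)).astype(int)
    ijk = np.clip(ijk, 0, dims - 1)          # keep points on the max face inside the grid
    i, j, k = ijk
    v_array = k * dims[1] * dims[0] + j * dims[0] + i
    return (i, j, k), v_array

def bin_triangles(vertices, triangles, bb_min, bb_max, dims, buf_len=32):
    """Store, per voxel, the indices of triangles whose vertices fall inside it.
    A per-voxel counter plays the role of the GPU atomic add; overflow is reported
    so the caller can re-run with a larger buffer, as described in the text."""
    n_vox = int(np.prod(dims))
    tri_buffer = -np.ones((n_vox, buf_len), dtype=int)    # -1 marks an empty slot
    count = np.zeros(n_vox, dtype=int)
    overflow = False
    for t, tri in enumerate(triangles):
        for v in tri:                                      # one entry per vertex's voxel
            _, v_array = voxel_index(vertices[v], bb_min, bb_max, dims)
            slot = count[v_array]                          # atomicAdd(&count[v_array], 1) on the GPU
            count[v_array] += 1
            if slot < buf_len:
                tri_buffer[v_array, slot] = t
            else:
                overflow = True                            # warn; re-run with a larger buffer
    return tri_buffer, count, overflow
```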
If the number of triangles intersecting a voxel exceeds the buffer length, the code produces a warning and the classification is re-run with a larger buffer length based on the first computation. This reduces performance, but in practice a reasonably large buffer length avoids the problem in the models that were tested. Once all the boundary voxels are identified, we build an exclusive prefix-sum array [16] to keep track of the addresses of the fine-level voxelization. The prefix-sum array has the same size as the coarse-level voxelization (see Figure 2) and is referenced later when performing the fine-level voxelization. 3.2   Fine-Level Voxelization After classifying the voxel centers in the boundary voxels, the classification result, along with any additional per-voxel information (such as coordinates, surface normals, etc.), needs to be stored in a flat array for retrieval on the GPU. However, since the number of boundary voxels varies with the model and the coarse-level voxelization, the size of the fine-level voxelization is not constant. To keep track of the address locations of the boundary voxels in the fine-level array, we use the exclusive prefix sum over the boundary voxels of the coarse-level voxelization. In our implementation, every boundary voxel is subdivided into the same user-defined number of fine-level voxels, and hence, using the prefix-sum address array, we can directly access the memory location of its fine-level voxelization. An example of this operation in 2D is shown in Figure 2. After the fine-level voxels have been classified as inside or outside the B-rep solid model, the boundary voxels of the fine level can also be identified. Classifying the boundary voxels at the fine level uses the same procedure as at the coarse level, but it is faster: instead of testing all the triangles of the model, we only test the triangles already classified as intersecting the enclosing coarse-level voxel. The GPU kernel again performs the triangle-box intersection test; this operation is parallelized over all fine-level voxel boxes in all the boundary voxels simultaneously, as shown in Algorithm 2. Once the boundary voxels at the fine level have been identified, we store the average surface normal of all the triangles that intersect with each voxel; this normal is used while rendering the voxelization. We can also render the voxels directly as a wireframe to check for errors in the voxelization. We assign different colors to the fine- and coarse-level voxels, and also to inside and boundary voxels, as shown in Figure 2. The voxel resolution in the image is set lower to better differentiate between the coarse- and fine-level voxels. 4   Multi-Resolution Convolutional Neural Networks The Multi-Resolution Convolutional Neural Network (MRCNN) is mainly inspired by the network-in-network architecture and sparse convolutional neural network architectures. We developed a hierarchical architecture composed of two sub-networks: one that learns features from the finer level of resolution and one that learns from the coarse level of resolution.
Let $\theta_{1}$ be the set of weights of the coarse-level convolutional network and let $f(x,\theta_{1})$ be its predicted output for a given input. This provides a benchmark of the network's performance with some loss $\epsilon$ in the prediction. We then augment the network with an additional architecture with its own set of weights, $\theta_{2}$, which learns features from the finer level of resolution. The output of this fine-level network is embedded into the input volume occupancy grid at the boundary-voxel locations, in place of the raw boundary-voxel occupancy. Learning from the finer grid is therefore an augmentation of the coarse network rather than a full integration, which would be as computationally expensive as a sparse convolution. The final output of this multi-resolution network is denoted by $f^{\prime}(x,\theta_{1},\theta_{2})=f(\{x_{1}\;|\;x_{1_{j}}=g(x_{2_{j}},\theta_{2})\ \text{for}\ j\ \text{in}\ numBoundaryVoxels\},\theta_{1})$. The forward evaluation of the MRCNN is as follows: forall boundary voxels $i_{b},j_{b},k_{b}$ in parallel do   $v_{b}=Forward_{CNN_{L2}}(x_{2_{b}})$;   $x_{1}(i_{b},j_{b},k_{b})=v_{b}$ end forall;   $y_{pred}=Forward_{CNN_{L1}}(x_{1})$ (Algorithm 3: Forward Computation of MRCNN). The forward computations of the fine-level network are embedded in the input of the coarse-level network. This is achieved using the prefix-sum array created while storing the voxel information of the finer level, which locates the output of the fine-level network within the coarse-level input. This could be extended further to a vectorized embedding of the fine-level output, by embedding it in the activations of the first layer of the coarse-level network; in this study, we only explore the non-vectorized embedding. While the forward computation of the MRCNN may be straightforward, its back-propagation can be tricky. The main challenge is to link the two networks so that gradients can be passed from the coarse-level network to the fine-level network; without this link, the weights of the fine-level network would not be updated. The final loss $L$ between $y_{pred}$ and $y_{true}$ of the coarse-level network is computed using the categorical cross-entropy (any other loss function could also be used). This loss is back-propagated through the coarse-level network in the traditional way: the gradient of $y_{pred}$ is propagated through the intermediate layers of the network and finally to the input data to obtain their respective gradients. Let the gradient of the loss with respect to the coarse input be $dx_{1}$; using the prefix sum, we pick out the gradients at the outputs of the fine-level network and back-propagate them through that network to obtain its gradients. This process is given in the algorithm below: $dx_{1}=Backward_{CNN_{L1}}(x_{1},dy_{pred})$; forall boundary voxels $i_{b},j_{b},k_{b}$ in parallel do   $dv_{b}=dx_{1}(i_{b},j_{b},k_{b})$;   $dx_{2_{b}}=Backward_{CNN_{L2}}(x_{2_{b}},dv_{b})$ end forall (Algorithm 4: Backward Computation of MRCNN). It is also worth noting that since the same $CNN_{L2}$ is shared among all the boundary voxels, each parallel back-propagation adds its gradient to every intermediate layer; this can be understood as an atomic addition of the gradients from all the boundary voxels. A short sketch of this coupled forward and backward computation in an automatic-differentiation framework is given below.
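The following is a minimal sketch of this scheme in PyTorch, under our own simplifying assumptions: `fine_net` maps each fine-level occupancy patch to a single scalar, `boundary_idx` holds the coarse (batch, i, j, k) location of each boundary voxel (the role played by the prefix-sum addressing), and the class name, layer sizes, and resolutions are placeholders rather than the architecture used in the experiments. Because the embedding is an ordinary differentiable indexing operation, the framework's autograd reproduces the coupled backward pass of Algorithm 4, including the accumulation of gradients into the shared fine-level weights.

```python
import torch
import torch.nn as nn

class MRCNNSketch(nn.Module):
    """Illustrative two-level MRCNN: a shared fine-level CNN scores each boundary
    patch, the scores are written into the coarse occupancy grid, and the coarse
    CNN classifies the result. Names and layer sizes are placeholders."""
    def __init__(self, num_classes=10, fine_res=4, coarse_res=8):
        super().__init__()
        self.fine_net = nn.Sequential(            # one fine patch -> one scalar
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * fine_res ** 3, 1))
        self.coarse_net = nn.Sequential(          # coarse grid -> class scores
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * coarse_res ** 3, num_classes))

    def forward(self, coarse_occ, fine_patches, boundary_idx):
        # coarse_occ:   (B, 1, R, R, R) coarse occupancy grid
        # fine_patches: (M, 1, r, r, r) one patch per boundary voxel (all samples)
        # boundary_idx: (M, 4) columns = (batch, i, j, k) of each boundary voxel
        v = self.fine_net(fine_patches).squeeze(-1)   # (M,) one value per boundary voxel
        x1 = coarse_occ.clone()                       # keep the original grid intact
        b, i, j, k = boundary_idx.unbind(dim=1)
        x1[b, 0, i, j, k] = v                         # embed fine outputs (differentiable)
        return self.coarse_net(x1)

# Training uses the usual loop: loss = nn.CrossEntropyLoss()(model(...), labels); loss.backward()
# then back-propagates through both sub-networks, mirroring Algorithms 3 and 4.
```

In this sketch, all boundary patches are batched into a single fine-level forward pass, which plays the role of the parallel loop over boundary voxels in Algorithm 3.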
With these gradients, the network can be trained to update its weights $\theta_{1}$ and $\theta_{2}$ so that the loss $L$ of the final prediction $y_{pred}$ is minimized. The parameter update can be performed with any optimizer, such as SGD, Adam, or Adadelta; in this paper, we used SGD. 5   Experimental Results In this study, we evaluated our proposed method of learning from multi-resolution 3D data on Princeton's ModelNet10 dataset, which contains 3D CAD models from 10 categories. The 3D CAD models were voxelized at three different resolutions and the neural networks were trained to categorize the 3D models. Although some areas of this study need further exploration, preliminary results are shown here. A comparison was made between data with a naive coarse resolution of $8^{3}$, a naive dense resolution of $32^{3}$, and multi-resolution data with a coarse resolution of $8^{3}$ and a finer resolution of $4^{3}$, giving an effective resolution of $32^{3}$. A traditional CNN was trained on both the coarse-resolution $(8^{3})$ and the dense-resolution $(32^{3})$ data, while our MRCNN was trained on the multi-resolution data. All training was performed on a machine with an Intel Xeon CPU with 320 GB of RAM and an NVIDIA Quadro P40 GPU with 24 GB of RAM. The validation loss and validation accuracy of the three networks are shown in Figure 4. The loss of the CNN trained on coarse data $(8^{3})$ is much higher than that of the CNN trained on dense data $(32^{3})$, while the MRCNN trained on multi-resolution data has a loss between the two. Figure 6 compares the validation accuracies of the three CNNs. Once again, the CNN trained on coarse-resolution data has poor accuracy compared to the network trained on dense data. However, the MRCNN trained on multi-resolution data has a classification accuracy comparable to that of the CNN trained on dense data. This shows that, for 3D object recognition, a multi-resolution data representation can achieve object-classification performance similar to a dense-resolution representation. A more intriguing insight emerges when the memory requirements for training are analyzed. Figure 7 shows the GPU memory requirements while training the dense-resolution network and the MRCNN. An unoptimized version of the CNN trained on the dense-resolution data requires 16 GB of GPU memory, and an optimized version trained on the same dense data requires at least 5 GB. In contrast, our MRCNN trained on the multi-resolution data requires only 1 GB of memory while still achieving classification performance similar to the CNN trained on dense data. The benefit of the MRCNN is therefore substantial, since it allows networks to be trained on data with a higher effective resolution without compromising the performance of the learning process. 6   Conclusion In this paper, we explored a novel method for representing 3D data hierarchically using multi-resolution voxels and detailed the process of learning from such hierarchical data. This method can be extended to several other domains.
We also showed some preliminary results of the proposed method where our network performs better than networks trained on coarse-resolution data and performs almost as well as networks trained dense resolution data while keeping the memory requirements 5 times lower. Future work includes exploring our proposed method on different datasets where having high resolution data is more imperative and getting a bound for the network performance in terms of the coarse and fine resolutions. References [1] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912--1920, 2015. [2] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik G. Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In Proc. ICCV, 2015. [3] Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413, 2017. [4] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. arXiv preprint arXiv:1703.09438, 2017. [5] Sambit Ghadai, Aditya Balu, Soumik Sarkar, and Adarsh Krishnamurthy. Learning localized features in 3d cad models for manufacturability analysis of drilled holes. Computer Aided Geometric Design, 62:263 -- 275, 2018. [6] Jiaxin Li, Ben M Chen, and Gim Hee Lee. So-net: Self-organizing network for point cloud analysis. arXiv preprint arXiv:1803.04249, 2018. [7] Asako Kanezaki, Yasuyuki Matsushita, and Yoshifumi Nishida. Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. arXiv preprint arXiv:1603.06208, 2016. [8] Konstantinos Sfikas, Ioannis Pratikakis, and Theoharis Theoharis. Ensemble of panorama-based convolutional neural networks for 3d model classification and retrieval. Computers & Graphics, 2017. [9] Charles R Qi, Hao Su, Matthias Nießner, Angela Dai, Mengyuan Yan, and Leonidas J Guibas. Volumetric and multi-view cnns for object classification on 3d data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5648--5656, 2016. [10] Anastasia Ioannidou, Elisavet Chatzilari, Spiros Nikolopoulos, and Ioannis Kompatsiaris. Deep learning advances in computer vision with 3d data: A survey. ACM Computing Surveys (CSUR), 50(2):20, 2017. [11] D. Maturana and S. Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 922--928, Sept 2015. [12] André Brock, Theodore Lim, James M. Ritchie, and Nick Weston. Generative and discriminative voxel modeling with convolutional neural networks. CoRR, abs/1608.04236, 2016. [13] Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. CoRR, abs/1610.07584, 2016. [14] Gernot Riegler, Ali Osman Ulusoys, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. arXiv preprint arXiv:1611.05009, 2016. [15] Stefan Gottschalk, Ming C Lin, and Dinesh Manocha. OBBTree: A hierarchical structure for rapid interference detection. 
In Proceedings of the 23rd annual conference on Computer Graphics and Interactive Techniques, pages 171--180. ACM, 1996. [16] Guy E Blelloch. Prefix sums and their applications. Technical report, Carnegie Mellon University, 1990.
\title Sequence of families of lattice polarized $K3$ surfaces, modular forms and degrees of complex reflection groups \authorAtsuhira Nagano Abstract We introduce a sequence of families of lattice polarized $K3$ surfaces. This sequence is closely related to complex reflection groups of exceptional type. Namely, we obtain modular forms coming from the inverse correspondences of the period mappings attached to our sequence. We study a non-trivial relation between our modular forms and invariants of complex reflection groups. Especially, we consider a family concerned with the Shepherd-Todd group of No.34 based on arithmetic properties of lattices and algebro-geometric properties of the period mappings. 000Keywords: $K3$ surfaces ; Modular forms ; Complex reflection groups ; Compactifications defined by arrangements. 000Mathematics Subject Classification 2020: Primary 14J28 ; Secondary 11F11, 20F55, 32S22. Introduction In the 20th century, Brieskorn founded an interesting theory which connects finite real reflection groups, Klein singularities and families of rational surfaces (see [Br]). For example, according to this theory, a family of rational surfaces defined by the equation $$\displaystyle z^{2}=y^{3}+(\alpha_{2}x^{3}+\alpha_{8}x^{2}+\alpha_{14}x+\alpha_{20})y+(x^{5}+\alpha_{12}x^{3}+\alpha_{18}x^{2}+\alpha_{24}x+\alpha_{30})$$ (0.1) is characterized by the real reflection group $W(E_{8})$. Namely, the theory enables us to interpret the parameters $\alpha_{2},\alpha_{8},\alpha_{12},\alpha_{14},\alpha_{18},\alpha_{20},\alpha_{24}$ and $\alpha_{30}$ as the invariants of $W(E_{8})$ (see [Sl] Chapter IV or [H] Chapter 5). Many researchers have been attracted by this theory and they have tried to generalize it. Indeed, Arnold suggested that it is an interesting problem to obtain an analogous theory for finite complex reflection groups (see [A], p.20). There are many works for that problem based on various ideas, viewpoints and techniques. On the other hand, the author has studied modular forms derived from periods of $K3$ surfaces and realized a potential of those modular forms to be applied to researches for complex reflection groups. In this paper, we introduce a sequence of families of $K3$ surfaces whose period mappings are closely related to complex reflection groups. Let $U$ be the hyperbolic lattice of rank $2$. Let $A_{m}$ or $E_{m}$ be the root lattices of rank $m$. Then, the $K3$ lattice $L_{K3}$ is given by $U^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}$. Here, if a lattice $\Lambda$ with the intersection matrix $(c_{ij})$ is given, the lattice given by $(nc_{ij})$ is denoted by $\Lambda(n)$. The $2$-homology group of a $K3$ surface $S$ is isometric to $L_{K3}$. The Néron-Severi lattice, which is denoted by ${\rm NS}(S)$, is a sublattice of $H_{2}(S,\mathbb{Z})$ of signature $(1,\rho-1)$. The transcendental lattice ${\rm Tr}(S)$ is the orthogonal complement of ${\rm NS}(S)$ in $L_{K3}$. Then, ${\rm Tr}(S)$ is an even lattice of signature $(2,20-\rho)$. In this paper, we study a sequence of even lattices $${\bf A}_{3}\subset{\bf A}_{2}\subset{\bf A}_{1}\subset{\bf A}_{0}={\bf A},$$ where $$\displaystyle\begin{cases}&{\bf A}_{0}=U\oplus U\oplus A_{2}(-1)\oplus A_{1}(-1),\\ &{\bf A}_{1}=U\oplus U\oplus A_{2}(-1),\\ &{\bf A}_{2}=U\oplus U\oplus A_{1}(-1),\\ &{\bf A}_{3}=U\oplus\begin{pmatrix}2&1\\ 1&-2\end{pmatrix}.\end{cases}$$ (0.2) Set ${\bf M}_{j}={\bf A}_{j}^{\perp}$ in $L_{K3}$. 
Then, ${\bf A}_{j}$ is of type $(2,5-j)$ and ${\bf M}_{j}$ is of type $(1,14+j).$ We introduce a sequence of analytic sets $$\mathfrak{A}_{3}\subset\mathfrak{A}_{2}\subset\mathfrak{A}_{1}\subset\mathfrak{A}_{0}=\mathfrak{A}$$ with ${\rm dim}(\mathfrak{A}_{j})=7-j$ (for detail, see Lemma 2.1 and (4.1)). Then, we obtain an explicit family $$\displaystyle\varpi_{j}:\mathfrak{F}_{j}\rightarrow\mathfrak{A}_{j}\quad(j\in\{0,1,2,3\})$$ (0.3) of ${\bf M}_{j}$-polarized $K3$ surfaces. This roughly means that the Néron-Severi lattice of a generic member of $\mathfrak{F}_{j}$ is ${\bf M}_{j}.$ These families (0.3) constitute a sequence of $K3$ surfaces indicated in the following diagram: (0.8) Here, $i_{j}$ and $\tilde{i}_{j}$ are natural inclusion. The period domain for $\mathfrak{F}_{j}$ is given by a connected component $\mathcal{D}_{j}$ of $$\displaystyle\mathcal{D}_{{\bf M}_{j}}=\{[\xi]\in\mathbb{P}({\bf A}_{j}\otimes\mathbb{C})\hskip 2.84526pt|\hskip 2.84526pt{}^{t}\xi{\bf A}_{j}\xi=0,{}^{t}\xi{\bf A}_{j}\overline{\xi}>0\},$$ (0.9) which is $(5-j)$-dimensional. The main theme of the present paper is to show that there is an interesting and non-trivial relation between the sequence (0.8) and finite complex reflection groups of exceptional type. The first purpose of this paper is to study the period mapping of the family $\mathfrak{F}_{0}$. The inverse of the period mapping gives a pair of meromorphic modular forms on $\mathcal{D}_{0}$ (see Definition 5.1). We will obtain a system of generators of the ring of these modular forms by applying techniques for periods of $K3$ surfaces (Theorem 5.1 and 5.2). In short, we study a family of $K3$ surfaces defined by the equation $$\displaystyle z^{2}=y^{3}+(a_{0}x^{5}+a_{4}x^{4}+a_{8}x^{3})y+(a_{2}x^{7}+a_{6}x^{6}+a_{10}x^{5}+a_{14}x^{4})$$ (0.10) and we show that the parameters $a_{2},a_{4},a_{6},a_{8},a_{10}$ and $a_{14}$ with positive weight induce a system of generators of the ring of modular forms. Our modular forms are highly expected to have a closed relation with the complex reflection group of No.34 in the list of Shepherd-Todd [ST] (see also [LT] Appendix D), because three times the weights of the modular forms (namely, $6,12,18,24,30$ and $42$) are equal to the degrees of the group. This group has the maximal rank among finite complex reflection groups of exceptional type. This expectation is based on not only the above mentioned apparent similarity between the weights and the degrees, but also the following fact. There are exact descriptions of the period mappings for the subfamilies $\mathfrak{F}_{1},\mathfrak{F}_{2},\mathfrak{F}_{3}$ via invariants of complex reflection groups. Precisely, the period mappings for $\mathfrak{F}_{1}$ ($\mathfrak{F}_{2},\mathfrak{F}_{3}$, resp.) derive Hermitian (Siegel, Hilbert, resp.) modular forms with explicit expressions via the invariants of the complex reflection group No.33 (No.31, No.23, resp.) of rank $r_{j}$ and a system of appropriate theta functions (for detail, see Section 4.3; see also Remark 5.1). Letting $(w_{1}^{(j)},\ldots,w_{r}^{(j)})$ be the weights of the modular forms as in Table 1, then the degrees of the complex reflection groups are given by $(\kappa_{j}w_{1}^{(j)},\ldots,\kappa_{j}w_{r}^{(j)}).$ Here, the weights of the theta functions account for the integer $\kappa_{j}$. In these works for the families $\mathfrak{F}_{j}$ $(j\in\{1,2,3\})$, the Satake-Baily-Borel compactifications for bounded symmetric domains play big roles. 
However, in the case of $\mathfrak{F}_{0}$, the Satake-Baily-Borel compactifications are inadequate, because we need to consider modular forms defined on a complement of an arrangement of hyperplanes (see Section 3 and 5). So, instead of the Satake-Baily-Borel compactifications, we will consider the Looijenga compactifications constructed in [L2]. The Looijenga compactifications are coming from arithmetic arrangements of hyperplanes. We can regard them as interpolations between the Satake-Baily-Borel compactifications and toroidal compactifications. Their properties are essential for our construction of modular forms. By the way, there exists a double covering of every member of $\mathfrak{F}_{0}$, which is a $K3$ surface also. We obtain the family $$\overline{\varpi_{j}}:\mathfrak{G}_{j}\rightarrow\mathfrak{A}_{j}\quad(j\in\{0,1,2,3\})$$ whose members are such double coverings and we have the following diagram: (0.15) Here, $\varphi_{j}$ is a correspondence given by the double covering. The second purpose of this paper is to determine the transcendental lattice for $\mathfrak{G}_{0}.$ The family $\mathfrak{G}_{0}$ has interesting features. For example, it naturally contains the famous family of Kummer surfaces coming from principally polarized Abelian surfaces. Moreover, $\mathfrak{G}_{0}$ is a natural extension of the family studied by Matsumoto-Sasaki-Yoshida [MSY], whose periods are solutions of the hypergeometric equation of type $(3,6)$. In spite of interesting properties of $\mathfrak{G}_{j}$ $(j\in\{0,1,2,3\})$, it is not straightforward to determine the lattices for them. For example, if $j\in\{1,2,3\}$, the lattices for $\mathfrak{G}_{j}$ were determined via precise arguments or heavy calculations (for detail, see Section 6). In the present paper, we will determine the transcendental lattice for $\mathfrak{G}_{0},$ based on the result of $\mathfrak{G}_{1}$ and arithmetic properties of even lattices (Theorem 6.1). As a result, the transcendental lattices ${\bf B}_{j}$ for $\mathfrak{G}_{j}$ are given as follows. $$\displaystyle\begin{cases}&{\bf B}_{0}=U(2)\oplus U(2)\oplus\begin{pmatrix}-2&0&1\\ 0&-2&1\\ 1&1&-4\end{pmatrix},\\ &{\bf B}_{1}={\bf A}_{1}(2)=U(2)\oplus U(2)\oplus A_{2}(-2),\\ &{\bf B}_{2}={\bf A}_{2}(2)=U(2)\oplus U(2)\oplus A_{1}(-2),\\ &{\bf B}_{3}={\bf A}_{3}(2)=U(2)\oplus\begin{pmatrix}4&2\\ 2&-4\end{pmatrix}.\end{cases}$$ (0.16) Especially, we note that ${\bf B}_{0}$ is not just ${\bf A}_{0}(2).$ It is an interesting problem to describe our meromorphic modular forms via the invariants of the group No.34 and explicit special functions (like theta functions). Furthermore, it may be quite meaningful to understand why the period mappings for the sequence (0.8) are related with complex reflection groups. While the methods in this paper and preceding papers [CD], [Na1] and [NS2] are just based on algebro-gemetric properties of $K3$ surfaces and arithmetic properties of modular forms, the author does not know the fundamental reason why the complex reflection groups work effectively as in Table 1. The author expects that there exists an unrevealed principle underlying the relation between our sequence of the families and complex reflection groups. There are indications which support this expectation. For example, Sekiguchi [Se] studies Arnold’s problem based on methods of Frobenius potentials and he obtains a family of rational surfaces. 
Although his standpoint and methods are widely different from ours, a direct calculation shows that our $K3$ surface of (0.10) is related to Sekiguchi’s rational surface (see Remark 2.2). The author is hoping that a new principle will rationalize our families of $K3$ surfaces from the viewpoint of complex reflection groups in near future, as Brieskorn’s theory enables us to explain the essence of the family of the rational surfaces of (0.1). 1 Arithmetic arrangement of hyperplanes and Looijenga compactification Looijenga constructed compactifications for bounded symmetric domains of type $IV$ derived from arithmetic arrangements of hyperplanes. First, we will survey his result. For detail, see [L2]. Let $V$ be an $(n+2)$-dimensional vector space over $\mathbb{C}$ with a non-degenerated symmetric bilinear form $\varphi.$ Now, we suppose that $(V,\varphi)$ has been defined over $\mathbb{Q}$ and $\varphi$ is of signature $(2,n)$ for the $\mathbb{Q}$-structure. Then, the set $\{[v]\in V|\hskip 2.84526pt\varphi(v,v)=0,\varphi(v,\overline{v})>0\}$ has two connected components $\mathscr{D}$ and $\mathscr{D}_{-}$. We take $\mathscr{D}$ from $\{\mathscr{D},\mathscr{D}_{-}\}$. This is a bounded symmetric domain of type $IV$. For a linear subspace $L$ of $V$, we set $\mathscr{D}_{L}=\mathscr{D}\cap\mathbb{P}(L).$ The orthogonal group $O(\varphi)$ is an algebraic group over $\mathbb{Q}$. Let $\Gamma$ be an arithmetic subgroup of $O(\varphi)$. We set $X=\mathscr{D}/\Gamma$. If a hyperplane $H$ of $V$ is defined over $\mathbb{Q}$ and of signature $(2,n-1)$, it gives a hypersurface $\mathscr{D}_{H}\not=\phi.$ Suppose $\mathscr{H}$ is a $\Gamma$-invariant arrangement of hyperplanes satisfying this property. Such an arrangement is said to be arithmetic if it is given by a finite union of $\Gamma$-orbits. Set $$\mathscr{D}^{\circ}=\mathscr{D}-\bigcup_{H\in\mathscr{H}}\mathscr{D}_{H}.$$ Looijenga [L2] constructs a natural compactification of $X^{\circ}=\mathscr{D}^{\circ}/\Gamma$. Namely, the Looijenga compactification is given by $$\displaystyle\widehat{X^{\circ}}^{\bf L}:=\widehat{\mathscr{D}^{\circ}}^{\bf L}/\Gamma,$$ (1.1) where $$\displaystyle\widehat{\mathscr{D}^{\circ}}^{\bf L}=\mathscr{D}^{\circ}\sqcup\coprod_{L\in{\bf PO}(\mathscr{H}|_{\mathscr{D}})}\pi_{L}(\mathscr{D}^{\circ})\sqcup\coprod_{\sigma\in\Sigma(\mathscr{H})}\pi_{\sigma}(\mathscr{D}^{\circ}).$$ (1.2) The disjoint union of (1.2) admits an appropriate topology and $\mathscr{D}^{\circ}$ is a open and dense set in $\widehat{\mathscr{D}^{\circ}}^{\bf L}.$ We remark that the Looijenga compactification $\widehat{X^{\circ}}^{\bf L}$ coincides with the Satake-Baily-Borel compactification $\widehat{X}^{\bf SBB}$ when $\mathscr{H}=\phi.$ In the following, we will see the meaning of (1.1) and (1.2). Letting $L$ be a subspace of $V$, we have a natural projection $\pi_{L}:\mathbb{P}(V)-\mathbb{P}(L)\rightarrow\mathbb{P}(V/L)$. Let ${\bf PO}(\mathscr{H}|_{\mathscr{D}})$ be a set of subspaces $L$ of $V$ such that there exists $z\in L$ with $\varphi(z,\overline{z})>0$. We note that $L\in{\bf PO}(\mathscr{H}|_{\mathscr{D}})$ if and only if $\mathscr{D}_{L}\not=\phi.$ Also, we remark that ${\bf PO}(\mathscr{H}|_{\mathscr{D}})$ is a partially ordered set (see [L1] Section 2 and 3). For a $\mathbb{Q}$-isotropic line $I$ in $V$, $\varphi$ defines a bilinear form on the $n$-dimensional space $I^{\perp}/I$ of signature $(1,n-1)$. 
Taking the choice of $\mathscr{D}$ from $\{\mathscr{D},\mathscr{D}_{-}\}$ into account, we have an $n$-dimensional cone $C_{I}$ in $I^{\perp}/I$. Let $C_{I,+}(\subset I^{\perp}/I)$ be the convex hull of $(I^{\perp}/I)\cap\overline{C_{I}}$. If a member $H\in\mathscr{H}$ contains $I$, then it naturally determines a hyperplane $H_{I^{\perp}/I}$ of $I^{\perp}/I$ of signature $(1,n-2).$ The hyperplanes $H_{I^{\perp}/I}$ with $H_{I^{\perp}/I}\cap C_{I,+}\not=\phi$ give a decomposition $\Sigma(\mathscr{H})_{I}$ of the cone $C_{I,+}$ into locally rational cones. We set $\Sigma(\mathscr{H})=\bigcup_{I}\Sigma(\mathscr{H})_{I}$. For a cone $\sigma\in\Sigma(\mathscr{H})_{I}(\subset\Sigma(\mathscr{H})),$ we have the $\Sigma$-support space $V_{\sigma}(\subset I^{\perp})$. Namely, $V_{\sigma}$ contains $I$ and corresponds to the $\mathbb{C}$-span of the cone $\sigma(\subset I^{\perp}/I)$. Hence, $V_{\sigma}$ is given by the intersection of $I^{\perp}$ and the members $H$ of $\mathscr{H}$ such that $H\supset I$. We put $\pi_{\sigma}=\pi_{V_{\sigma}}$. Set $$\displaystyle\mathscr{D}^{\Sigma(\mathscr{H})}=\coprod_{\sigma\in\Sigma(\mathscr{H})}\pi_{\sigma}(\mathscr{D}).$$ (1.3) Then, $X^{\Sigma(\mathscr{H})}=\mathscr{D}^{\Sigma(\mathscr{H})}/\Gamma$ is a normal analytic space. We have a blowing up $\widetilde{X^{\circ}}\rightarrow X^{\Sigma(\mathscr{H})}$, which is coming from the connected components of intersections of members $H\in\mathscr{H}$. The Looijenga compactification $\widehat{X^{\circ}}^{\bf L}$ of (1.1) is equal to a blowing down $\widetilde{X^{\circ}}\rightarrow\widehat{X^{\circ}}^{\bf L}.$ Theorem 1.1. ([L2] Corollary 7.5) Suppose that every $\pi_{\sigma}(\mathscr{D})$ in (1.3) is not $(n-1)$-dimensional. Then, the algebra $$\bigoplus_{k\in\mathbb{Z}}H^{0}(\mathscr{D}^{\circ},\mathcal{O}(\mathscr{L}^{k}))^{\Gamma},$$ where $\mathscr{L}$ is the natural automorphic bundle over $\mathscr{D}$, is finitely generated with positive degree generators. Its Proj gives the Looijenga compactification $\widehat{X^{\circ}}^{\bf L}$ of (1.1). The boundary $\widehat{X^{\circ}}^{\bf L}-X^{\circ}$ is the strict transform of the boundary $X^{\Sigma(\mathscr{H})}-X$. Especially, $${\rm codim}\left(\widehat{X^{\circ}}^{\bf L}-X^{\circ}\right)\geq 2$$ holds. Let $\widehat{\mathscr{L}}$ be an ample line bundle on $\widehat{X^{\circ}}^{\bf L}$ such that $\widehat{\mathscr{L}}|_{X^{\circ}}=\mathscr{L}|_{X^{\circ}}$. It is shown in [L2] that every meromorphic $\Gamma$-invariant automorphic form whose poles are contained in $\mathscr{H}$ is corresponding to a meromorphic section $s$ of $\widehat{\mathscr{L}}$ such that $s|_{X^{\circ}}$ is holomorphic. 1.1 Lattice ${\bf A}$ and arrangement $\mathcal{H}$ We set $$\displaystyle{\bf A}={\bf A}_{0}=U\oplus U\oplus A_{2}(-1)\oplus A_{1}(-1).$$ (1.4) For this lattice ${\bf A}$, we set $$\displaystyle\Gamma=\tilde{O}({\bf A})\cap O^{+}({\bf A}).$$ (1.5) Here, $\tilde{O}({\bf A})$ is the stable orthogonal group: $\tilde{O}({\bf A})={\rm Ker}\left(O({\bf A})\rightarrow{\rm Aut}({\bf A}^{\vee}/{\bf A})\right)$, where ${\bf A}^{\vee}={\rm Hom}({\bf A},\mathbb{Z}).$ Also, $O^{+}({\bf A})$ is the subgroup of $O({\bf A})$ which preserves the connected component $\mathcal{D}$. The lattice ${\bf A}$ satisfies the Kneser conditions in the sense of Gritsenko-Hulek-Sankaran [GHS]. Therefore, we have the following result. Proposition 1.1. Let $\Delta({\bf A})$ be the set of vectors $v\in{\bf A}$ such that $(v\cdot v)=-2$. 
The group $\Gamma$ of (1.5) is generated by the reflections $\sigma_{\delta}:z\mapsto z+(z\cdot\delta)\delta$ for $\delta\in\Delta({\bf A})$, and ${\rm Char}(\Gamma)=\{{\rm id},{\rm det}\}$ holds. Here, we note that the intersection number of elements $v_{1}$ and $v_{2}$ of a lattice is often denoted by $(v_{1}\cdot v_{2})$ in this paper. Lemma 1.1. The group $\Gamma$ is isomorphic to the projective orthogonal group $PO^{+}({\bf A})$. Proof. Let $\{\alpha_{1},\alpha_{2},\alpha_{3}\}$ be a basis of $A_{2}(-1)\oplus A_{1}(-1)$ with $(\alpha_{j}\cdot\alpha_{j})=-2$ $(j\in\{1,2,3\})$, $(\alpha_{1}\cdot\alpha_{2})=1$ and $(\alpha_{k}\cdot\alpha_{3})=0$ $(k\in\{1,2\})$. Then, $y_{1}=\frac{1}{3}\alpha_{1}+\frac{2}{3}\alpha_{2}$, $y_{2}=\frac{2}{3}\alpha_{1}+\frac{1}{3}\alpha_{2}$ and $y_{3}=\frac{1}{2}\alpha_{3}$ generate the discriminant group ${\bf A}^{\vee}/{\bf A}$. It follows that $-id_{O^{+}({\bf A})}\not\in\Gamma$ and $O^{+}({\bf A})/\Gamma\simeq\mathbb{Z}/2\mathbb{Z}$. Hence, the assertion follows. ∎ We consider the case $V={\bf A}\otimes\mathbb{C}$. Let $\{e_{j},f_{j}\}$ ($j\in\{1,2\}$) be a basis of the $j$-th copy of $U$. Let $\{\alpha_{1},\alpha_{2},\alpha_{3}\}$ be the basis of $A_{2}(-1)\oplus A_{1}(-1)$ as in Lemma 1.1. A vector $v\in V$ is then written in the form $$v=\xi_{1}e_{1}+\xi_{2}f_{1}+\xi_{3}e_{2}+\xi_{4}f_{2}+\xi_{5}\alpha_{1}+\xi_{6}\alpha_{2}+\xi_{7}\alpha_{3}.$$ (1.6) Let us consider the hyperplane $H_{0}=\{\xi_{7}=0\}$ in $V$. For $\Gamma$ of (1.5), the $\Gamma$-orbit of $H_{0}$ gives an arithmetic arrangement $\mathcal{H}$. Lemma 1.2. The above arrangement $\mathcal{H}$ of hyperplanes satisfies the condition of Theorem 1.1. Proof. We will prove that all members of $\mathcal{H}$ contain a non-zero common subspace of the negative-definite vector space $\langle\alpha_{1},\alpha_{2},\alpha_{3}\rangle_{\mathbb{C}}$. This guarantees that our arrangement $\mathcal{H}$ satisfies the condition of Theorem 1.1, as in the argument of [L1] Section 6. The hyperplane $H_{0}=\{\xi_{7}=0\}$ of $V$ is spanned by $\{e_{1},f_{1},e_{2},f_{2},\alpha_{1},\alpha_{2}\}$. Using the notation of the proof of Lemma 1.1, ${\bf A}^{\vee}/{\bf A}$ is generated by $y_{1},y_{2}$ and $y_{3}$. We note that every $\gamma\in\Gamma$ fixes each $y_{j}\in{\bf A}^{\vee}/{\bf A}$. This implies that we can take a basis of the subspace $\gamma H_{0}$ extending $\{\alpha_{1},\alpha_{2}\}$. Therefore, every member of $\mathcal{H}$ contains the $2$-dimensional subspace $\langle\alpha_{1},\alpha_{2}\rangle_{\mathbb{C}}$ of $\langle\alpha_{1},\alpha_{2},\alpha_{3}\rangle_{\mathbb{C}}$. ∎ Remark 1.1. The condition of Theorem 1.1 is not always satisfied. For example, under the notation (1.6), let us take the hyperplane $H_{0}^{\prime}=\{\xi_{5}-\xi_{6}=0\}$. One can check that the arrangement $\mathcal{H}^{\prime}$ of the $\gamma H_{0}^{\prime}$ for $\gamma\in\Gamma$ does not satisfy the condition of Theorem 1.1. This condition is closely related to whether an arrangement is the zero set of a modular form or not. In fact, we can see that $\mathcal{H}$ is not the zero set of a modular form, whereas $\mathcal{H}^{\prime}$ is.
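As a small sanity check on Proposition 1.1 (an elementary verification of ours, using only the definition of $\tilde{O}({\bf A})$; the generation statement itself relies on the Kneser conditions cited above), each reflection $\sigma_{\delta}$ with $(\delta\cdot\delta)=-2$ is an involutive isometry of ${\bf A}$ lying in the stable orthogonal group:
$$\sigma_{\delta}(\sigma_{\delta}(z))=z+(z\cdot\delta)\bigl(2+(\delta\cdot\delta)\bigr)\delta=z,\qquad
(\sigma_{\delta}(z)\cdot\sigma_{\delta}(z))=(z\cdot z)+(z\cdot\delta)^{2}\bigl(2+(\delta\cdot\delta)\bigr)=(z\cdot z),$$
and for $x\in{\bf A}^{\vee}$ one has $(x\cdot\delta)\in\mathbb{Z}$, so $\sigma_{\delta}(x)-x=(x\cdot\delta)\delta\in{\bf A}$; hence $\sigma_{\delta}$ acts trivially on ${\bf A}^{\vee}/{\bf A}$ and $\sigma_{\delta}\in\tilde{O}({\bf A})$.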
For our symmetric space $\mathcal{D}=\mathcal{D}_{0}$, which is a connected component of $\mathcal{D}_{{\bf M}_{0}}$ of (0.9), set $$\displaystyle\mathcal{D}^{\circ}=\mathcal{D}-\bigcup_{H\in\mathcal{H}}\mathcal{D}_{H}.$$ (1.7) Due to Theorem 1.1 and Proposition 1.2, we have the following result for our arrangement $\mathcal{H}$. Proposition 1.2. $${\rm codim}\left(\widehat{\mathcal{D}^{\circ}/\Gamma}^{\bf L}-\mathcal{D}^{\circ}/\Gamma\right)\geq 2.$$ 2 Family $\mathfrak{F}_{0}$ of $K3$ surfaces with Picard number $15$ In the present paper, we will consider the Looijenga compactification coming from the arithmetic arrangement $\mathcal{H}$ in Section 1.1. In order to obtain an explicit model of the compactification, we will introduce a family of elliptic $K3$ surfaces whose transcendental lattice is ${\bf A}$ in (1.4) (see Theorem 2.1). Periods for $K3$ surfaces are very important in our argument. We remark that the period mapping for a family of $K3$ surfaces is essentially related to the arithmetic property of the transcendental lattice for a generic member of the family. For $a=(a_{0},a_{2},a_{4},a_{6},a_{8},a_{10},a_{14})\in\mathbb{C}^{7}-\{0\}=:\mathbb{C}_{a}$, we consider the hypersurface $S_{a}$ defined by an equation $$\displaystyle S_{a}:z^{2}=y^{3}+(a_{0}x^{5}+a_{4}x^{4}w^{4}+a_{8}x^{3}w^{8})y+(a_{2}x^{7}w^{2}+a_{6}x^{6}w^{6}+a_{10}x^{5}w^{10}+a_{14}x^{4}w^{14})$$ (2.1) of weight $30$ in the weighted projective space ${\rm Proj}(\mathbb{C}[x,y,z,w])=\mathbb{P}(4,10,15,1)$. We have a natural action of $\mathbb{C}^{*}$ on $\mathbb{P}(4,10,15,1)$ given by $(x,y,z,w)\mapsto(x,y,z,\lambda^{-1}w)$ and that on $\mathbb{C}_{a}$ given by $a\mapsto\lambda\cdot a=(\lambda^{k}a_{k})=(a_{0},\lambda^{2}a_{2},\lambda^{4}a_{4},\lambda^{6}a_{6},\lambda^{8}a_{8},\lambda^{10}a_{10},\lambda^{14}a_{14})$. By a direct observation as in Section 1 of [Na3], we have the following result. Lemma 2.1. Let $\mathfrak{A}_{0}=\mathfrak{A}$ be the set of parameters $a\in\mathbb{C}_{a}$ such that $S_{a}$ is a $K3$ surface. Then, $\mathfrak{A}_{0}$ is a subset of $\{a\in\mathbb{C}_{a}|\hskip 2.84526pta_{0}\not=0\}\cup\{a\in\mathbb{C}_{a}|\hskip 2.84526pta_{2}\not=0\}$ such that $$\mathbb{C}_{a}-\mathfrak{A}_{0}=\mathcal{C}^{\prime}\sqcup\mathcal{C}^{\prime\prime}$$ where $$\displaystyle\begin{cases}\mathcal{C}^{\prime}&=\{a\in\mathbb{C}_{a}|\hskip 2.84526pta_{0}\not=0,a_{10}=a_{12}=a_{18}=0\}\subset\{a_{0}\not=0\},\\ \mathcal{C}^{\prime\prime}&=\{a\in\mathbb{C}_{a}|\hskip 2.84526pta_{2}\not=0,a_{0}=a_{10}=a_{12}=a_{18}=0\}\subset\{a_{2}\not=0\}.\end{cases}$$ Remark 2.1. The surface $S_{a}$ is degenerated to a rational surface if $a_{0}=a_{2}=0.$ We have a family $$\varpi_{0}:\mathfrak{F}_{0}=\{S_{a}\text{ of }(\ref{SK3})|\hskip 2.84526pta\in\mathfrak{A}_{0}\}\rightarrow\mathfrak{A}_{0}$$ of elliptic $K3$ surfaces. 2.1 Singular fibres The Weierstrass equation (2.1) defines an elliptic surface $\pi_{a}:S_{a}\rightarrow\mathbb{P}^{1}(\mathbb{C}).$ For a generic point $a\in\mathfrak{A}_{0}$, we have singular fibres for $\pi_{a}$ of Kodaira type $$\displaystyle III^{*}+IV^{*}+7I_{1},$$ (2.2) as illustrated in Figure 1. Now, $\pi_{a}^{-1}(\infty)$ ($\pi_{a}^{-1}(0)$, resp.) is a singular fibre of Kodaira type $III^{*}$ ($IV^{*}$, resp.). Each gives an $E_{7}$-singularity and an $E_{6}$-singularity, respectively. 
Set $x_{0}=\frac{x}{w^{4}}$ and $$\displaystyle g_{2}^{\vee}(x_{0},a)=a_{0}x_{0}^{5}+a_{4}x_{0}^{4}+a_{8}x_{0}^{3},\quad g_{3}^{\vee}(x_{0},a)=a_{2}x_{0}^{7}+a_{6}x_{0}^{6}+a_{10}x_{0}^{5}+a_{14}x_{0}^{4}.$$ Let $r(a)$ be the resultant of $g_{2}^{\vee}(x_{0},a)$ and $g_{3}^{\vee}(x_{0},a)$ in $x_{0}$. Also, let $R(x_{0},a)$ be a polynomial in $x_{0}$ coming from the discriminant of the right hand side of (2.1) in $y$. So, the discriminant of $\frac{1}{x_{0}^{8}}R(x_{0},a)$ in $x_{0}$ is given by $r(a)^{3}d_{84}(a)$, where $d_{84}(a)$ can be calculated as a polynomial in $a$ of weight $84$. The following lemma can be proved by arguments of elliptic surfaces like [Na3] Section 1. Lemma 2.2. Except for the generic case (2.2), the types of the singular fibres for the elliptic surface $\pi_{a}:S_{a}\rightarrow\mathbb{P}^{1}(\mathbb{C})$ $(a\in\mathfrak{A}_{0})$ are given by the following. • If $a\in\mathfrak{A}_{0}$ satisfies $r(a)=0$, there is a new singular fibre of Kodaira type $II$ on the elliptic surface $S_{a}$. Such a singular fibre does not acquire any new singularities. • If $a\in\mathfrak{A}_{0}$ satisfies $d_{84}(a)=0,$ two of singular fibres of Kodaira type $I_{1}$ in (2.2) collapse into a singular fibre of type $I_{2}$: $$III^{*}+IV^{*}+I_{2}+5I_{1}$$ In this case, a new $A_{1}$-singularity appears on the $K3$ surface $S_{a}$. • If $a_{0}=0$, there are the singular fibres of Kodaira type $$II^{*}+IV^{*}+6I_{1}$$ on $S_{a}$. In this situation, the $E_{7}$-singularity of (2.2) terns into an $E_{8}$-singularity. • If $a_{14}=0$, there are the singular fibres of Kodaira type $$III^{*}+III^{*}+6I_{1}.$$ on $S_{a}$. The $E_{6}$-singularity of (2.2) terns into an $E_{7}$-singularity. 2.2 Local period mapping Set $$\displaystyle\tilde{\mathfrak{A}}=\mathfrak{A}-\{a\in\mathbb{C}_{a}\hskip 2.84526pt|\hskip 2.84526pta_{0}a_{14}d_{84}(a)=0\}.$$ (2.3) Let $F$ be a general fibre for the elliptic surface $\pi_{a}$ and $O$ be the zero section. Let $C_{1},\ldots,C_{7}$ ($D_{1},\ldots,D_{6}$, resp.) be nodal curves in the singular fibre of type $III^{*}$ ($IV^{*}$, resp.) indicated Figure 1. For $a\in\tilde{\mathfrak{A}}$, the lattice generated by $$\displaystyle F,O,C_{1},\ldots,C_{7},D_{1},\ldots,D_{6}$$ (2.4) is a sublattice of ${\rm NS}(S_{a})$ whose intersection matrix is $$\displaystyle{\bf M}={\bf M}_{0}=U\oplus E_{7}(-1)\oplus E_{6}(-1).$$ (2.5) Let $L_{K3}$ be the $K3$ lattice : $L_{K3}=II_{3,19}$. The orthogonal complement of ${\bf M}$ with respect to the unimodular lattice $L_{K3}$ is given by ${\bf A}$ of (1.4). We have an isometry $\psi:H_{2}(S_{a},\mathbb{Z})\rightarrow L_{K3}$ such that $$\displaystyle\psi(F)=\gamma_{8},\quad\psi(O)=\gamma_{9},\quad\psi(C_{j})=\gamma_{9+j},\quad\psi(D_{k})=\gamma_{16+k}\quad(j\in\{1,\ldots,7\},k\in\{1,\ldots,6\}).$$ Then, the sublattice $\langle\gamma_{8},\ldots,\gamma_{22}\rangle_{\mathbb{Z}}$ in $L_{K3}$ is isometric to the lattice ${\bf M}$ of (2.5). This is a primitive sublattice, because $|{\rm det}({\bf M})|=6$ is square-free. Hence, we can take $\gamma_{1},\ldots,\gamma_{7}\in L_{K3}$ such that $\{\gamma_{1},\ldots,\gamma_{7},\gamma_{8},\ldots,\gamma_{22}\}$ gives a system of basis of $L_{K3}$. Letting $\{\delta_{1},\ldots,\delta_{22}\}$ be the system of dual basis of $\{\gamma_{1},\ldots,\gamma_{22}\}$ with respect to the unimodular lattice $L_{K3}.$ Then, the intersection matrix of the sublattice $\langle\delta_{1},\ldots,\delta_{7}\rangle_{\mathbb{Z}}$ is equal to the intersection matrix of ${\bf A}$. Proposition 2.1. 
(The canonical form for $a_{0}\not=0$) If $a\in\mathfrak{A}\cap\{a_{0}\not=0\}$, (2.1) is transformed to the Weierstrass equation $$\displaystyle S(u):z^{2}=y^{3}+(x^{5}+u_{4}x^{4}w^{4}+u_{8}x^{3}w^{8})y+(u_{2}x^{7}w^{2}+u_{6}x^{6}w^{6}+u_{10}x^{5}w^{10}+u_{14}x^{4}w^{14}).$$ (2.6) Proof. By putting $$x\mapsto\frac{x}{a_{0}},\quad y\mapsto\frac{y}{a_{0}^{2}},\quad z\mapsto\frac{z}{a_{0}^{3}},$$ and $$u_{2}=\frac{a_{2}}{a_{0}},\quad u_{4}=a_{4},\quad u_{6}=a_{6},\quad u_{8}=a_{0}a_{8},\quad u_{10}=a_{0}a_{10},\quad u_{14}=a_{0}^{2}a_{14},$$ we obtain (2.6). ∎ Here, we put $u=(u_{k})=(u_{2},u_{4},u_{6},u_{8},u_{10},u_{14})\in\mathbb{C}^{6}-\{0\}=:\mathbb{C}_{u}.$ Let $\mathcal{U}^{*}$ be a subset of $\mathbb{C}_{u}$ of codimension $3$ such that $$\displaystyle\mathbb{C}_{u}-\mathcal{U}^{*}=\{u\in\mathbb{C}_{u}|\hskip 2.84526ptu_{10}=u_{12}=u_{18}=0\}.$$ (2.7) For $\lambda\in\mathbb{C}^{*}$ and $u=(u_{k})\in\mathcal{U}$, set $\lambda\cdot u=(\lambda^{k}u_{k})$. This action induces an isomorphism $\lambda:S(u)\rightarrow S(\lambda\cdot u)$. Letting $[u]=(u_{2}:u_{4}:u_{6}:u_{8}:u_{10}:u_{14})\in\mathbb{P}(2,4,6,8,10,14)$ be the point which is corresponding to $u\in\mathbb{C}_{u}$, set $\mathcal{U}=\{[u]\in\mathbb{P}(2,4,6,8,10,14)|\hskip 2.84526ptu\in\mathcal{U}^{*}\}.$ The above action of $\mathbb{C}^{*}$ on $\mathcal{U}^{*}$, we naturally defines the family $$\displaystyle\{S([u])\hskip 2.84526pt|\hskip 2.84526pt[u]\in\mathcal{U}\}\rightarrow\mathcal{U}.$$ (2.8) Definition 2.1. Let $\pi_{1}:S_{1}\rightarrow\mathbb{P}^{1}(\mathbb{C})$ and $\pi_{2}:S_{2}\rightarrow\mathbb{P}^{1}(\mathbb{C})$ be two elliptic surfaces. Suppose that there exist a biholomorphic mapping $f:S_{1}\rightarrow S_{2}$ and $\varphi\in{\rm Aut}(\mathbb{P}^{1}(\mathbb{C}))$ with $\varphi\circ\pi_{1}=\pi_{2}\circ f$. Then, these two elliptic surfaces are said to be isomorphic as elliptic surfaces. The canonical form (2.6) naturally gives an elliptic surface $\pi_{[u]}:S([u])\rightarrow\mathbb{P}^{1}(\mathbb{C})$. Lemma 2.3. Two elliptic surfaces $\pi_{[u_{1}]}:S([u_{1}])\rightarrow\mathbb{P}^{1}(\mathbb{C})$ and $\pi_{[u_{2}]}:S([u_{2}])\rightarrow\mathbb{P}^{1}(\mathbb{C})$ are isomorphic as elliptic surfaces if and only if $[u_{1}]=[u_{2}]\in\mathbb{P}(2,4,6,8,10,14)$. Proof. We can prove it by an argument which is similar to the proof of [Na3] Lemma 1.1. ∎ Let us take a generic point $a\in\tilde{\mathfrak{A}}$ of (2.3). Since $\tilde{\mathfrak{A}}\subset\mathfrak{A}\cap\{a_{0}\not=0\}$, we obtain the corresponding surface $S([u])$ for a parameter $[u]\in\mathbb{P}(2,4,6,8,10,14)={\rm Proj}(\mathbb{C}[u_{2},u_{4},u_{6},u_{8},u_{10},u_{14}])$, which is given by the canonical form (2.6). We set $$\mathcal{P}(\tilde{\mathfrak{A}})=\{[u]\in\mathbb{P}(2,4,6,8,10,14)|\hskip 2.84526pt\text{there exists }a\in\tilde{\mathfrak{A}}\text{ such that }S_{a}\text{ is identified with }S([u])\text{ of }(\ref{SK3Can})\}.$$ By an argument which is similar to [Na3] Section 1, using Lemma 2.3 also, we can obtain a local period mapping defined on sufficiently small neighborhood around $[u]$ in $\mathcal{P}(\tilde{\mathfrak{A}})$. 
By gluing the local period mappings, we obtain the period mapping $$\displaystyle\Phi_{1}:\mathcal{P}(\tilde{\mathfrak{A}})\rightarrow\mathcal{D}$$ (2.9) given by $$[u]\mapsto\Big{(}\int_{\psi^{-1}_{[u]}(\gamma_{1})}\omega_{[u]}:\cdots:\int_{\psi^{-1}_{[u]}(\gamma_{7})}\omega_{[u]}\Big{)},$$ where $\omega_{[u]}$ is the unique holomorphic $2$-form on $S([u])$ up to a constant factor and $$\psi_{[u]}:H_{2}(S([u]),\mathbb{Z})\rightarrow L_{K3}$$ is an appropriate isometry, called $S$-marking. We note that the period mapping (2.9) is a multivalued analytic mapping. We call the pair $(S([u]),\psi_{[u]})$ an $S$-marked $K3$ surface. By applying Torelli’s theorem to the above local period mappings, we can show the following theorem as in the proof of [Na3] Theorem 1.1 and Corollary 1.1. Theorem 2.1. For a generic point $a\in\tilde{\mathfrak{A}}$ of (2.3), the Picard number of $S_{a}$ is $15$. The intersection matrix of the Néron-Severi lattice ${\rm NS}(S_{a})$ (the transcendental lattice ${\rm Tr}(S_{a})$, resp.) is equal to ${\bf M}$ of (2.5) (${\bf A}$ of (1.4), resp.). 2.3 Double covering $K_{a}$ of $S_{a}$ The $K3$ surface $S_{a}$ of (2.1) is transformed to $$\displaystyle Z^{2}=Y^{3}+\Big{(}a_{4}+a_{0}X+\frac{a_{8}}{X}\Big{)}Y+\Big{(}a_{6}+a_{2}X+\frac{a_{10}}{X}+\frac{a_{14}}{X^{2}}\Big{)}$$ (2.10) by the birational transformation $$x\mapsto X,\quad y\mapsto X^{2}Y,\quad z\mapsto X^{3}Z.$$ We have a double covering $$\displaystyle K_{a}:Z^{2}=Y^{3}+\Big{(}a_{4}+a_{0}U^{2}+\frac{a_{8}}{U^{2}}\Big{)}Y+\Big{(}a_{6}+a_{2}U^{2}+\frac{a_{10}}{U^{2}}+\frac{a_{14}}{U^{4}}\Big{)}$$ (2.11) of (2.10). There is a Nikulin involution on $K$ given by $$\iota_{K_{a}}:(U,Y,Z)\mapsto(-U,Y,-Z).$$ This means that it satisfies $\iota_{K_{a}}^{*}\omega_{K}=\omega_{K}$, where $\omega_{K}$ is the unique holomorphic $2$-form up to a constant factor. We have a family $\overline{\varpi_{0}}:\mathfrak{G}_{0}\rightarrow\mathfrak{A}_{0}$ of $K3$ surfaces, where $\mathfrak{G}_{0}=\{K_{a}\text{ of }(\ref{KL})|\hskip 2.84526pta\in\mathfrak{A}_{0}\}$. In fact, this surface $K_{a}$ can be regarded as a natural generalization the Kummer surface for a principally polarized Abelian surface. Therefore, we call a member of $\mathfrak{G}_{0}$ a Kummer-like surface in this paper. More precisely, see Section 4. We remark that the double covering $\varphi_{0}$ in (0.15) is given by this $\iota_{K_{a}}$. Remark 2.2. The surface $K_{a}$ of (2.11) has another involution $$\jmath_{K_{a}}:(U,Y,Z)\mapsto(-U,Y,Z).$$ This is not a Nikulin involution. The minimal resolution of the quotient surface $K/\langle\jmath_{K_{a}}\rangle$ is given by the equation $$\Sigma:z^{\prime 2}=y^{\prime 3}+(a_{0}x^{\prime 3}+a_{4}x^{\prime 2}+a_{8}x^{\prime})y^{\prime}+(a_{2}x^{\prime 4}+a_{6}x^{\prime 3}+a_{10}x^{\prime 2}+a_{14}x^{\prime}).$$ This is a rational surface. The surface $\Sigma$ is very similar to a surface appearing in Sekiguchi’s recent work [Se], in which he studies algebraic Frobenius potentials, deformation of singularities and Arnold’s problem. Namely, he obtains the equation in the form $$z^{2}=f_{E_{7}(1)}:=y^{3}+(x^{3}+t_{2}x^{2}+t_{4}x)y+(t_{1}x^{4}+t_{3}x^{3}+t_{5}x^{2}+t_{7}x)+s_{3}y^{2}$$ of a rational surfaces. Putting $s_{3}=0$, we can see an apparent correspondence to our $\Sigma.$ The author is expecting that there is a non-trivial and unrevealed theory connecting our work of the moduli of $K3$ surfaces and Sekiguchi’s result of Frobenius potentials. 
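As a quick supporting computation for the two involutions above (ours, writing the holomorphic $2$-form of the Weierstrass model (2.11) as $\omega_{K}=dU\wedge dY/Z$ up to a constant factor), one sees directly why $\iota_{K_{a}}$ is a Nikulin involution while $\jmath_{K_{a}}$ is not:
$$\iota_{K_{a}}^{*}\omega_{K}=\frac{d(-U)\wedge dY}{-Z}=\frac{dU\wedge dY}{Z}=\omega_{K},\qquad
\jmath_{K_{a}}^{*}\omega_{K}=\frac{d(-U)\wedge dY}{Z}=-\omega_{K}.$$
Hence the quotient by $\jmath_{K_{a}}$ carries no holomorphic $2$-form, which is consistent with its minimal resolution $\Sigma$ being a rational surface.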
3 Moduli space of ${\bf M}$-polarized $K3$ surfaces In this section, letting ${\bf M}$ be the lattice of (2.5) of signature $(1,14)$, we consider the moduli space of ${\bf M}$-polarized $K3$ surfaces. An ${\bf M}$-polarized $K3$ surface is a pair $(S,j)$ of a $K3$ surface $S$ and a primitive embedding $j:{\bf M}\hookrightarrow{\rm NS}(S)$. Two M-polarized $K3$ surfaces $(S_{1},j_{1})$ and $(S_{2},j_{2})$ are said to be isomorphic if there exists an isomorphism $f:S_{1}\rightarrow S_{2}$ of $K3$ surfaces such that $j_{2}=f_{*}\circ j_{1}.$ In this paper, ${\rm NS}(S)$ is often regarded as a sublattice of the homology group $H_{2}(S,\mathbb{Z})$. We note that ${\rm NS}(S)$ is identified with the sublattice $H^{2}(S,\mathbb{Z})\cap H^{1,1}(S,\mathbb{R})$ of the cohomology group $H^{2}(S,\mathbb{Z})$ by the Poincaré duality. This is denoted by the same notation in the discussion below. Let $V(S)^{+}$ be the connected component of $V(S)=\{x\in H^{1,1}_{\mathbb{R}}(S)|\hskip 2.84526pt(x\cdot x)>0\}$ which contains the class of a Kähler form on $S$. Let $\Delta(S)^{+}$ be the subset of effective classes of $\Delta(S)=\{\delta\in{\rm NS}(S)|\hskip 2.84526pt(\delta\cdot\delta)=-2\}.$ Set $C(S)=\{x\in V(S)^{+}|\hskip 2.84526pt(x,\delta)\geq 0,\text{ for all }\delta\in\Delta(S)^{+}\}$. The subset $C(S)^{+}$ of $C(S)$, which is defined by the condition $(x\cdot\delta)>0$, is called the Kähler cone. We set ${\rm NS}(S)^{+}=C(S)\cap H^{2}(S,\mathbb{Z})$ and ${\rm NS}(S)^{++}=C(S)^{+}\cap H^{2}(S,\mathbb{Z}).$ Due to Theorem 2.1, we can take a point $\bar{a}\in\tilde{\mathfrak{A}}$ and an $S$-marking $\psi_{0}:H_{2}(S_{0},\mathbb{Z})\rightarrow L_{K3}$ such that $\psi_{0}^{-1}({\bf M})={\rm NS}(S_{0})$, where $S_{0}=S_{\bar{a}}$ is called a reference surface. Letting $\Delta({\bf M})=\{\delta\in{\bf M}|\hskip 2.84526pt(\delta\cdot\delta)=-2\}$, we set $\Delta({\bf M})^{+}=\{\delta\in\Delta({\bf M})|\hskip 2.84526pt\psi_{0}^{-1}(\delta)\in{\rm NS}(S_{0})\text{ gives an effective class}\}$. The set $V({\bf M})=\{y\in{\bf M}_{\mathbb{R}}|\hskip 2.84526pt(y\cdot y)>0\}$ has two connected components. We suppose the component $V({\bf M})^{+}$ contains $\psi_{0}(x)$ for $x\in V(S_{0})^{+}$. Set $C({\bf M})^{+}=\{y\in V({\bf M})^{+}|\hskip 2.84526pt(y\cdot\delta)>0,\text{ for all }\delta\in\Delta({\bf M})^{+}\}$. An ${\bf M}$-polarized $K3$ surface $(S,j)$ is called a pseudo-ample ${\bf M}$-polarized $K3$ surface if $j(C({\bf M})^{+})\cap{\rm NS}(S)^{+}\not=\phi.$ For a $K3$ surface $S$, let $\psi:H_{2}(S,\mathbb{Z})\rightarrow L_{K3}$ be an isometry of lattices with $\psi^{-1}({\bf M})\subset{\rm NS}(S)$. We call the pair $(S,\psi)$ of such $S$ and $\psi$ is called a marked $K3$ surface. If $(S,\psi^{-1}|_{\bf M})$ is a pseudo-ample ${\bf M}$-polarized $K3$ surface, then $(S,\psi)$ is called a pseudo-ample marked ${\bf M}$-polarized $K3$ surface. For two pseudo-ample marked ${\bf M}$-polarized $K3$ surfaces $(S_{1},\psi_{1})$ and $(S_{2},\psi_{2})$, we suppose $(S_{1},\psi_{1}^{-1}|_{\bf M})$ and $(S_{2},\psi_{2}^{-1}|_{\bf M})$ are isomorphic as ${\bf M}$-polarized $K3$ surfaces. Then, $(S_{1},\psi_{1})$ and $(S_{2},\psi_{2})$ are said to be isomorphic as pseudo-ample ${\bf M}$-polarized $K3$ surfaces. Also, if there is an isomorphism $f:S_{1}\rightarrow S_{2}$ such that $\psi_{1}=\psi_{2}\circ f_{*}$, we say $(S_{1},\psi_{1})$ and $(S_{2},\psi_{2})$ are isomorphic as pseudo-ample marked ${\bf M}$-polarized $K3$ surfaces. 
By gluing local moduli spaces of marked ${\bf M}$-polarized $K3$ surfaces, we have the fine moduli space $\mathcal{M}_{\bf M}$ of marked ${\bf M}$-polarized $K3$ surfaces. Then, we have the period mapping $$\displaystyle{\rm per}:\mathcal{M}_{\bf M}\rightarrow\mathcal{D}_{\bf M},$$ (3.1) where $\mathcal{D}_{\bf M}$ is given in (0.9). Let $\mathcal{M}_{\bf M}^{\rm pa}(\subset\mathcal{M}_{\bf M})$ be the set of isomorphism classes of pseudo-ample marked ${\bf M}$-polarized $K3$ surfaces. By restricting (3.1) to $\mathcal{M}_{\bf M}^{\rm pa}$, we have a surjective mapping $$\displaystyle{\rm per}^{\prime}:\mathcal{M}_{\bf M}^{\rm pa}\rightarrow\mathcal{D}_{\bf M}.$$ (3.2) The group $\Gamma({\bf M})=\{\sigma\in O(L_{K3})|\hskip 2.84526pt\sigma(m)=m\text{ for all }m\in{\bf M}\}$ acts on $\mathcal{M}_{\bf M}$ by $(S,\psi)\mapsto(S,\psi\circ\sigma).$ Then, $\mathcal{M}_{\bf M}^{\rm pa}/\Gamma({\bf M})$ gives the set of isomorphism classes of pseudo-ample ${\bf M}$-polarized $K3$ surfaces. Theorem 3.1. (Dolgachev [D], Section 3) The period mapping (3.2) induces the bijection $$\mathcal{M}_{\bf M}^{\rm pa}/\Gamma({\bf M})\simeq\mathcal{D}_{\bf M}/\tilde{O}({\bf A})=\mathcal{D}/\Gamma,$$ where $\Gamma$ is given in (1.5). Especially, $\mathcal{D}/\Gamma$ gives the set of isomorphism classes of pseudo-ample ${\bf M}$-polarized $K3$ surfaces. Let us take a reference surface $S_{0}=S_{\bar{a}}$ for $\bar{a}\in\mathfrak{A}$ with the divisors (2.4) and the $S$-marking $\psi_{0}:H_{2}(S_{0},\mathbb{Z})\rightarrow L_{K3}$ such that ${\rm NS}(S_{0})=\psi_{0}^{-1}({\bf M}).$ For a pseudo-ample marked ${\bf M}$-polarized $K3$ surface $(S,\psi)$, as in the proof of [Na3] Theorem 2.3, we can show that there is an isometry $\psi:H_{2}(S,\mathbb{Z})\rightarrow L_{K3}$ satisfying the following conditions: (i) $\psi^{-1}({\bf M})\subset{\rm NS}(S)$, (ii) $\psi^{-1}\circ\psi_{0}(F)$,$\psi^{-1}\circ\psi_{0}(O)$,$\psi^{-1}\circ\psi_{0}(C_{j})$ $(j\in\{1,\ldots,7\})$, $\psi^{-1}\circ\psi_{0}(D_{k})$ $(k\in\{1,\ldots,6\})$ are effective divisors, (iii) $\psi^{-1}\circ\psi_{0}(F)$ is a nef divisor. By an argument which is similar to the proof of [Na3] Lemma 2.1, we can prove the following lemma. Lemma 3.1. For any pseudo-ample marked ${\bf M}$-polarized $K3$ surface $(S,\psi)$, there exists $a\in\mathfrak{A}$ such that $(S,\psi)$ is given by the elliptic $K3$ surface $\pi_{a}:S_{a}\rightarrow\mathbb{P}^{1}(\mathbb{C})$ given by the Weierstrass equation (2.1). Especially, the divisor $\psi^{-1}\circ\psi_{0}(F)$, which is effective and nef, gives a general fibre for $\pi_{a}$. Let us take two pseudo-ample marked ${\bf M}$-polarized $K3$ surfaces given by elliptic $K3$ surfaces $\pi_{a}$ and $\pi_{a^{\prime}}$ in the sense of Lemma 3.1. If they are isomorphic, then the types of the singular fibres for $\pi_{a}$ coincide with those for $\pi_{a^{\prime}}$. According to Lemma 2.2, we have the following facts: • if $a\in\{a_{0}\not=0\}$, then $\pi_{a}^{-1}(\infty)$ is of Kodaira type $III^{*}$, • if $a\in\{a_{0}=0\}$, then $\pi_{a}^{-1}(\infty)$ is of Kodaira type $II^{*}$. Hence, $S_{a}$ for $a\in\{a_{0}\not=0\}$ is not isomorphic to $S_{a^{\prime}}$ for $a^{\prime}\in\{a_{0}=0\}$ as pseudo-ample ${\bf M}$-polarized $K3$ surfaces. If $a_{0}\not=0$, we have a canonical form given by (2.6). On the other hand, if $a_{0}=0$, we have the following result. Proposition 3.1.
(The canonical form for $a_{0}=0$) If $a\in\mathfrak{A}\cap\{a_{0}=0\}$, (2.1) is transformed to the Weierstrass equation $$\displaystyle S_{1}(t):z^{2}=y^{3}+(t_{4}x^{4}w^{4}+t_{10}x^{3}w^{10})y+(x^{7}+t_{6}x^{6}w^{6}+t_{12}x^{5}w^{12}+t_{18}x^{4}w^{18}).$$ (3.3) Proof. By putting $$x\mapsto\frac{x}{a_{2}},\quad y\mapsto\frac{y}{a_{2}^{2}},\quad z\mapsto\frac{z}{a_{2}^{3}},$$ and $$t_{4}=a_{4},\quad t_{6}=a_{6},\quad t_{10}=a_{2}a_{8},\quad t_{12}=a_{2}a_{10},\quad t_{18}=a_{2}^{2}a_{14},$$ we obtain (3.3). ∎ Now, we naturally obtain $S_{1}([t])$ for $[t]\in\mathbb{P}(4,6,10,12,18)={\rm Proj}(\mathbb{C}[t_{4},t_{6},t_{10},t_{12},t_{18}])$. Let $\mathcal{T}$ be a subset of $\mathbb{P}(4,6,10,12,18)$ such that $\mathbb{P}(4,6,10,12,18)-\mathcal{T}=\{t_{10}=t_{12}=t_{18}=0\}$. The above argument guarantees that the set of isomorphism classes of pseudo-ample ${\bf M}$-polarized $K3$ surfaces is given by the disjoint union $\mathcal{U}\sqcup\mathcal{T}.$ Therefore, we have the following result. Theorem 3.2. The period mapping in Theorem 3.1 has an explicit form $$\displaystyle\Phi:\mathcal{U}\sqcup\mathcal{T}\simeq\mathcal{D}/\Gamma,$$ (3.4) where $\Gamma$ is given in (1.5). Especially, the injection $\mathcal{P}(\tilde{\mathfrak{A}})\hookrightarrow\mathcal{D}/\Gamma$, which is induced by the visualized period mapping $\Phi_{1}$ of (2.9), is extended to (3.4). Let $\mathcal{H}$ be the arithmetic arrangement of hyperplanes defined in Section 1.1. By virtue of Lemma 2.2, $\mathcal{H}$ corresponds to pseudo-ample ${\bf M}$-polarized $K3$ surfaces given by the canonical form (3.3). So, from Theorem 3.2, the restriction of the period mapping $\Phi$ of (3.4) gives an isomorphism $$\displaystyle\Phi|_{\mathcal{T}}:\mathcal{T}\simeq\left(\bigcup_{H\in\mathcal{H}}\mathcal{D}_{H}\right)/\Gamma.$$ (3.5) We remark that (3.5) coincides with the period mapping of [Na3] Corollary 2.1. The detailed results of (3.5) will be summarized in Section 4. For $\mathcal{D}^{\circ}$ of (1.7), we have an isomorphism $$\displaystyle\Phi|_{\mathcal{U}}:\mathcal{U}\simeq\mathcal{D}^{\circ}/\Gamma.$$ (3.6) Since $\mathcal{P}(\tilde{\mathfrak{A}})\subset\mathcal{U}$, (3.6) is an extension of $\mathcal{P}(\tilde{\mathfrak{A}})\hookrightarrow\mathcal{D}/\Gamma$, which is derived from $\Phi_{1}$ of (2.9). By abuse of notation, this $\Phi|_{\mathcal{U}}$ will be denoted by $\Phi$ in Section 5. 4 Sequence of families of $K3$ surfaces and complex reflection groups Let $\mathfrak{A}_{1},\mathfrak{A}_{2},\mathfrak{A}_{3}$ be subvarieties of $\mathfrak{A}_{0}$ explicitly given by $$\displaystyle\begin{cases}&\mathfrak{A}_{1}=\{a\in\mathfrak{A}_{0}|\hskip 2.84526pta_{0}=0\},\\ &\mathfrak{A}_{2}=\{a\in\mathfrak{A}_{0}|\hskip 2.84526pta_{0}=a_{14}=0\},\\ &\mathfrak{A}_{3}=\{a\in\mathfrak{A}_{0}|\hskip 2.84526pta_{0}=a_{14}=\mathfrak{M}(a)=0\}.\end{cases}$$ (4.1) Here, $$\mathfrak{M}(a):=\left(a_{10}a_{2}+\frac{a_{4}^{3}}{27}-\frac{a_{6}^{2}}{4}\right)^{2}+\frac{1}{27}a_{4}(a_{4}a_{6}+6a_{2}a_{8})^{2}=0$$ coincides with the modular equation of the Humbert surface for the minimal discriminant which is studied in [NS1] Theorem 5.4 via an appropriate transformation. We have the subfamilies $$\varpi_{j}:\mathfrak{F}_{j}\rightarrow\mathfrak{A}_{j}\quad\quad(j\in\{1,2,3\})$$ of $\varpi_{0}:\mathfrak{F}_{0}=\{S_{a}|\hskip 2.84526pta\in\mathfrak{A}_{0}\}\rightarrow\mathfrak{A}_{0}$. They are indicated in the diagram (0.8). 
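As a small consistency check (ours, not from the paper), one can verify symbolically that $\mathfrak{M}(a)$ defined above is weighted homogeneous of weight $24$ when each coefficient $a_{k}$ is given weight $k$ — the grading suggested by the indices and consistent with the substitutions in Proposition 3.1 — so that the condition $\mathfrak{M}(a)=0$ defining $\mathfrak{A}_{3}$ is compatible with the weighted parameter space:

from sympy import symbols, expand, Rational

lam = symbols('lambda', positive=True)
a2, a4, a6, a8, a10 = symbols('a2 a4 a6 a8 a10')

def M_frak(a2, a4, a6, a8, a10):
    # the polynomial M(a) defined after (4.1) above
    return (a10*a2 + a4**3/27 - a6**2/4)**2 + Rational(1, 27)*a4*(a4*a6 + 6*a2*a8)**2

# rescale a_k -> lambda^k a_k and compare with lambda^24 * M(a)
scaled = M_frak(lam**2*a2, lam**4*a4, lam**6*a6, lam**8*a8, lam**10*a10)
assert expand(scaled - lam**24*M_frak(a2, a4, a6, a8, a10)) == 0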
Also, we need another subvariety $\mathfrak{A}_{1}^{\prime}=\{a\in\mathfrak{A}_{0}|\hskip 2.84526pta_{14}=0\}$ and another subfamily $\varpi^{\prime}_{1}:\mathfrak{F}_{1}^{\prime}\rightarrow\mathfrak{A}_{1}^{\prime}$ such that $\mathfrak{A}_{1}\cap\mathfrak{A}_{1}^{\prime}=\mathfrak{A}_{2}$. 4.1 Transcendental lattices for subfamilies of $\mathfrak{F}_{0}$ The above mentioned subfamilies of $\mathfrak{F}_{0}$ are studied in detail in [CD], [Na1], [CMS] and [Na3]. For each case, it is important to determine the transcendental lattice in order to study the moduli space of the corresponding lattice polarized $K3$ surfaces. By surveying the results of those papers, we have the following result. Proposition 4.1. The intersection matrices of a generic member of the subfamilies $\mathfrak{F}_{1},\mathfrak{F}_{1}^{\prime},\mathfrak{F}_{2},\mathfrak{F}_{3}$ of $\mathfrak{F}_{0}$ are given in Table 2. The lattices in (0.2) are based on Proposition 4.1 and Theorem 2.1. 4.2 Transcendental lattices for subfamilies of $\mathfrak{G}_{0}$ of Kummer-like surfaces In Section 2.3, we have the family $\overline{\varpi_{0}}:\mathfrak{G}_{0}=\{K_{a}|\hskip 2.84526pta\in\mathfrak{A}_{0}\}\rightarrow\mathfrak{A}_{0}$ of $K3$ surfaces. This family contains interesting and important families of algebraic $K3$ surfaces. For example, the subfamily $\mathfrak{G}_{2}$ over $\mathfrak{A}_{2}$ coincides with the well-known family of Kummer surfaces derived from principally polarized Abelian surfaces. Also, the subfamily $\mathfrak{G}_{1}^{\prime}$ over $\mathfrak{A}_{1}^{\prime}$ is the family of $K3$ surfaces given by the double covering of $\mathbb{P}^{2}(\mathbb{C})$ branched along six lines, studied by Matsumoto-Sasaki-Yoshida [MSY]. Proposition 4.2. The intersection matrices of a generic member of the subfamilies $\mathfrak{G}_{1},\mathfrak{G}_{1}^{\prime},\mathfrak{G}_{2},\mathfrak{G}_{3}$ of $\mathfrak{G}_{0}$ are given in Table 3. We are able to determine the transcendental lattices in Table 2 for the subfamilies of $\mathfrak{F}_{0}$ in a relatively simple way. However, the proof of Proposition 4.2 is much more complicated. We need a delicate argument for each case. For details, see the beginning of Section 6. Proposition 4.2 is necessary for the proof of Theorem 6.1, which is the main theorem of Section 6. The lattices in (0.16) are based on Proposition 4.2 and Theorem 6.1. 4.3 Relation between modular forms and invariants of complex reflection groups via theta functions For the subfamilies $\mathfrak{F}_{1},\mathfrak{F}_{2}$ and $\mathfrak{F}_{3}$, there is a non-trivial relationship between the period mappings for them and a complex reflection group of rank $r_{j}=6-j$. In each family of $K3$ surfaces in Table 1, we have the period mapping $$\displaystyle\Phi_{j}:\mathcal{P}_{j}\simeq\mathcal{D}_{j}/\Gamma_{j}$$ (4.2) where $\mathcal{P}_{j}$ is a Zariski open set in the weighted projective space whose weights are given in Table 1, $\mathcal{D}_{j}$ is a $(5-j)$-dimensional symmetric domain and $\Gamma_{j}$ is a subgroup of the orthogonal group of the lattice ${\bf A}_{j}$ in Table 2. We note that $\Phi_{j}$ of (4.2) can be obtained as a restriction of (3.4). Finite complex reflection groups are listed by Shephard-Todd [ST] (see also [LT]). Note that real reflection groups are contained in this list. The group No.33 (No.31, No.23, resp.) has the simplest structure among the complex reflection groups which are of exceptional type, not real reflection groups and of rank $5$ ($4$, $3$, resp.).
A complex reflection group of rank $r_{j}$ acts on the polynomial ring $\mathbb{C}[X_{1},\cdots,X_{r_{j}}]$. We can find generators of the ring of invariants for this action. Letting $(w_{1}^{(j)},\ldots,w_{r_{j}}^{(j)})$ $(j\in\{1,2,3\})$ be the set of weights given in Table 1, there is a system $\{g_{\kappa_{j}w_{1}}^{(j)}(X_{1},\cdots,X_{r_{j}}),\ldots,g_{\kappa_{j}w_{r_{j}}}^{(j)}(X_{1},\cdots,X_{r_{j}})\}$ of generators of the ring of invariants. Here, $\kappa_{j}\in\mathbb{Z}$ is the integer in Table 1 and $g^{(j)}_{\kappa_{j}w_{l}}$ is a polynomial of degree $\kappa_{j}w_{l}^{(j)}$. For example, if $j=3$, the invariants for the group No.23 are famous Klein’s icosahedral invariants introduced in [K]. In the preceding papers in Table 4, we have the explicit theta expressions of the inverse of $\Phi_{j}$ of (4.2) via appropriate systems of theta functions. For $j=1,2$, we have simple expressions of the above mentioned results. There exists a system $\{\vartheta_{1}^{(j)}(Z_{j}),\cdots,\vartheta_{r_{j}}^{(j)}(Z_{j})\}$ of theta functions $\mathcal{D}_{j}\ni Z_{j}\mapsto\vartheta_{\ell}^{(k)}(Z_{j})\in\mathbb{C}$ of weight $1/\kappa_{j}$ such that $$\displaystyle\mathcal{D}_{j}\ni Z_{j}\mapsto\left(g_{\kappa_{j}w_{1}}^{(j)}\left(\vartheta_{1}^{(j)}(Z_{j}),\cdots,\vartheta_{r_{j}}^{(j)}(Z_{j})\right):\cdots:g_{\kappa_{j}w_{r_{j}}}^{(j)}\left(\vartheta_{1}^{(j)}(Z_{j}),\cdots,\vartheta_{r_{j}}^{(j)}(Z_{j})\right)\right)\in\mathcal{P}_{j}$$ (4.3) gives a ratio of modular forms on $\mathcal{D}_{j}$ with respect to $\Gamma_{j}$ and it coincides with the inverse of the period mapping $\Phi_{j}$ of (4.2). • If $j=1$, the invariants for the group No.33 are given in [Bu] and the explicit form (4.3) is established in [NS2] Theorem 4.1, using the theta functions of [DK]. In this case, (4.3) is given by a ratio of Hermitian modular forms for the unitary group $U(2,2)$ concerned with the imaginary quadratic field of the simplest discriminant. • If $j=2$, using the invariants for the group No.31 and the theta functions given in [R] Section 4, one can obtain the expression (4.3) by combining the results [CD] Theorem 3.5 and [R] Section 4 (see also, [NS2] Section 4.1). In this case, (4.3) is given by a ratio of well-known Siegel modular forms of degree $2$. Also, refer to Remark 5.1. 5 Meromorphic modular forms Let $\mathcal{D}^{*}$ be the connected component of $\{\xi\in{\bf A}\otimes\mathbb{C}\hskip 2.84526pt|\hskip 2.84526pt(\xi\cdot\xi)=0,(\xi\cdot\overline{\xi})>0\}$ which projects to $\mathcal{D}$. For $\mathcal{D}^{\circ}$ of (1.7), let $(\mathcal{D}^{\circ})^{*}$ be a subset of $\mathcal{D}^{*}$ which projects to $\mathcal{D}^{\circ}$. Based on the fact stated in Theorem 1.1 below, we will use the following terminology. Definition 5.1. A holomorphic function $f:(\mathcal{D}^{\circ})^{*}\rightarrow\mathbb{C}$ given by $Z\mapsto f(Z)$ is called a meromorphic modular form of weight $k\in\mathbb{Z}$ and character $\chi\in{\rm Char}(\Gamma)$ with poles in $\mathcal{H}$, if $f$ satisfies (i) $f(\lambda Z)=\lambda^{-k}f(Z)\quad(\text{for all }\lambda\in\mathbb{C}^{*}),$ (ii) $f(\gamma Z)=\chi(\gamma)f(Z)\quad(\text{for all }\gamma\in\Gamma).$ The vector space of the meromorphic modular forms of weight $k\in\mathbb{Z}$ and $\chi\in{\rm Char}(\Gamma)$ with poles in $\mathcal{H}$ is denoted by $\mathcal{A}^{\circ}_{k}(\Gamma,\chi)$. 
Then, the ring of the meromorphic modular forms is given by $$\mathcal{A}^{\circ}(\Gamma)=\bigoplus_{k\in\mathbb{Z}}\bigoplus_{\chi\in{\rm Char}(\Gamma)}\mathcal{A}^{\circ}_{k}(\Gamma,\chi).$$ In this section, we will construct generators of this ring. Recalling Proposition 1.1, we will consider the cases of $\chi={\rm id}$ and $\chi={\rm det}$. The structure of the ring $\mathcal{A}^{\circ}(\Gamma)$ is determined by Theorems 5.1 and 5.2. Incidentally, in [HU] (see also [Na3]), period mappings for lattice polarized $K3$ surfaces and the canonical orbibundles on the Satake-Baily-Borel compactifications of symmetric spaces are used effectively to construct holomorphic modular forms. However, the Satake-Baily-Borel compactifications are not suited to our purpose, because we want to obtain not holomorphic but meromorphic modular forms. Accordingly, we will use the Looijenga compactification for $\mathcal{D}^{\circ}$ of (1.7) and $\Gamma$ of (1.5), instead of the Satake-Baily-Borel compactification. Since Proposition 1.2 and (2.7) hold, by Hartogs's extension theorem, the period mapping $\Phi$ of (3.6) is extended to the isomorphism $$\displaystyle\widehat{\Phi}:\widehat{\mathcal{U}}\simeq\widehat{\mathcal{D}^{\circ}/\Gamma}^{\bf L}$$ (5.1) between the weighted projective space $\widehat{\mathcal{U}}=\mathbb{P}(2,4,6,8,10,14)$ and the Looijenga compactification. Our construction of meromorphic modular forms is based on this period mapping. 5.1 Meromorphic modular forms of character ${\rm id}$ There exists the unique holomorphic $2$-form $\omega_{u}$ on $S(u)$ of (2.6) up to a constant factor. This is explicitly given by $\frac{dx_{0}\wedge dy_{0}}{z_{0}},$ where $x_{0}=\frac{x}{w^{4}},y_{0}=\frac{y}{w^{10}},z_{0}=\frac{z}{w^{15}}.$ The action of $\lambda\in\mathbb{C}^{*}$ given by $S(u)\rightarrow S(\lambda\cdot u)$, which defines the family (2.8), induces the relation $$\displaystyle\lambda^{*}\omega_{\lambda\cdot u}=\lambda^{-1}\omega_{u}.$$ (5.2) Theorem 5.1. The ring $\mathcal{A}^{\circ}(\Gamma,{\rm id})$ of meromorphic modular forms of character ${\rm id}$ is isomorphic to the polynomial ring $\mathbb{C}[u_{2},u_{4},u_{6},u_{8},u_{10},u_{14}].$ Here, a polynomial of weight $k$ defines a modular form of weight $k$. Proof. We have a principal $\mathbb{C}^{*}$-bundle ${\rm pr}:(\mathcal{D}^{\circ})^{*}\rightarrow\mathcal{D}^{\circ}$. The quotient space $Q=\mathcal{D}^{\circ}/\Gamma$ is identified with a Zariski open set $\mathcal{U}$ of the weighted projective space $\widehat{\mathcal{U}}=\mathbb{P}(2,4,6,8,10,14)$ via the period mapping.
Since ${\rm pr}$ is equivariant under the action of $\Gamma=\tilde{O}^{+}({\bf A}),$ we have a principal $\mathbb{C}^{*}$-bundle $\overline{{\rm pr}}:(\mathcal{D}^{\circ})^{*}/\Gamma\rightarrow Q.$ Let $\mathcal{O}_{Q}(1)$ be the line bundle over $Q$ associated with $\overline{{\rm pr}}$ and set $\mathcal{O}_{Q}(k)=\mathcal{O}_{Q}(1)^{\otimes k}.$ Recalling the definition of the associated bundle, we can regard a section of $\mathcal{O}_{Q}(k)$ as a holomorphic function $(\mathcal{D}^{\circ})^{*}\ni Z\mapsto s(Z)\in\mathbb{C}$ satisfying $$\displaystyle s(\lambda Z)=\lambda^{-k}s(Z),\quad s(\gamma Z)=s(Z),$$ (5.3) where $\lambda\in\mathbb{C}^{*}$ and $\gamma\in\Gamma.$ From (2.7), $\widehat{\mathcal{U}}-\mathcal{U}$ is an analytic subset such that ${\rm codim}(\widehat{\mathcal{U}}-\mathcal{U})\geq 2.$ So, via Hartogs's phenomenon, the inclusion $\iota_{\mathcal{U}}:\mathcal{U}\hookrightarrow\widehat{\mathcal{U}}$ induces the isomorphism $$\displaystyle\iota_{\mathcal{U}}^{*}:{\rm Pic}(\widehat{\mathcal{U}})\simeq{\rm Pic}(\mathcal{U}).$$ (5.4) Now, we have ${\rm Pic}(\widehat{\mathcal{U}})\simeq\mathbb{Z}$ and $$\displaystyle\bigoplus_{k\in\mathbb{Z}}H^{0}(\widehat{\mathcal{U}},\mathcal{O}_{\widehat{\mathcal{U}}}(k))=\mathbb{C}[u_{2},u_{4},u_{6},u_{8},u_{10},u_{14}],$$ (5.5) because $\widehat{\mathcal{U}}={\rm Proj}(\mathbb{C}[u_{2},u_{4},u_{6},u_{8},u_{10},u_{14}])$ is a weighted projective space. From (5.4) and (5.5), we have $$\displaystyle\bigoplus_{\mathcal{L}\in{\rm Pic}(Q)}H^{0}(Q,\mathcal{O}_{Q}(\mathcal{L}))\simeq\mathbb{C}[u_{2},u_{4},u_{6},u_{8},u_{10},u_{14}].$$ (5.6) Due to (5.2), the period mapping gives the following diagram: (5.11) From Definition 5.1, together with (5.3) and (5.6), we have the assertion. ∎ The integer $\kappa_{0}=3$ in Table 1 comes from this theorem. Namely, $(2\kappa_{0},4\kappa_{0},6\kappa_{0},8\kappa_{0},10\kappa_{0},14\kappa_{0})=(6,12,18,24,30,42)$ is equal to the degrees of the group No.34 (see [LT] Appendix D). 5.2 Meromorphic modular forms of character ${\rm det}$ We will study the orbifold $$\mathbb{O}=[(\mathcal{D}^{\circ})^{*}/(\Gamma\times\mathbb{C}^{*})]$$ in order to construct modular forms of character ${\rm det}$. First, let us look at the action of $\Gamma$ on $\mathcal{D}^{\circ}$ more closely. The action of $\Gamma$ on $\mathcal{D}^{\circ}$ is effective. We set $$\mathfrak{H}_{\mathcal{D}^{\circ}}=\bigcup_{g\in\Gamma}\{[Z]\in\mathcal{D}^{\circ}\hskip 2.84526pt|\hskip 2.84526ptg([Z])=[Z]\}.$$ Also, letting $\Gamma_{[Z]}$ be the stabilizer subgroup with respect to $[Z]\in\mathcal{D}^{\circ}$, we set $$\mathfrak{S}_{\mathcal{D}^{\circ}}=\{[Z]\in\mathcal{D}^{\circ}\hskip 2.84526pt|\hskip 2.84526pt\Gamma_{[Z]}\text{ is neither }\{{\rm id}_{\Gamma}\}\text{ nor }\{{\rm id}_{\Gamma},\sigma_{\delta}\}\text{ for }\delta\in\Delta({\bf A})\}.$$ According to Proposition 1.1, $\mathfrak{H}_{\mathcal{D}^{\circ}}$ is a countable union of reflection hypersurfaces and $\mathfrak{S}_{\mathcal{D}^{\circ}}$ is a countable union of analytic subsets of codimension at least $2$. Let $\mathfrak{H}_{Q}$ and $\mathfrak{S}_{Q}$ be the images of $\mathfrak{H}_{\mathcal{D}^{\circ}}$ and $\mathfrak{S}_{\mathcal{D}^{\circ}}$ by the projection $\mathcal{D}^{\circ}\rightarrow Q=\mathcal{D}^{\circ}/\Gamma$, respectively.
From Proposition 1.1, it follows that $$\displaystyle\Gamma^{\prime}:=\{\gamma\in\Gamma\hskip 2.84526pt|\hskip 2.84526pt\gamma\text{ is given by a product of an even number of reflections }\}=\{\gamma\in\Gamma\hskip 2.84526pt|\hskip 2.84526pt{\rm det}(\gamma)=1\}.$$ (5.12) We set $Q_{1}=\mathcal{D}^{\circ}/\Gamma^{\prime}$. The action of $\Gamma^{\prime}$ on $\mathcal{D}^{\circ}-\mathfrak{S}_{\mathcal{D}^{\circ}}$ is free. We define $\mathfrak{H}_{Q_{1}}$ and $\mathfrak{S}_{Q_{1}}$ in the natural way. Recall that the period mapping $\Phi$ gives an identification $\mathcal{U}\simeq Q$ (see (3.6)). Set $\mathfrak{H}_{\mathcal{U}}=\Phi^{-1}(\mathfrak{H}_{Q})$ and $\mathfrak{S}_{\mathcal{U}}=\Phi^{-1}(\mathfrak{S}_{Q})$. Then, $\mathfrak{H}_{\mathcal{U}}$ gives a divisor on the weighted projective space $\widehat{\mathcal{U}}$ and there exists a weighted homogeneous polynomial $\Delta_{\mathcal{U}}(u)\in\mathbb{C}[u_{2},u_{4},u_{6},u_{8},u_{10},u_{14}]$ such that $$\displaystyle\mathfrak{H}_{\mathcal{U}}=\{[u]\in\widehat{\mathcal{U}}=\mathbb{P}(2,4,6,8,10,14)|\hskip 2.84526pt\Delta_{\mathcal{U}}(u)=0\}.$$ (5.13) We have the double covering $\mathcal{U}_{1}$ of $\mathcal{U}-\mathfrak{S}_{\mathcal{U}}$ branched along $\mathfrak{H}_{\mathcal{U}}-\mathfrak{S}_{\mathcal{U}}$: $$\displaystyle\mathcal{U}_{1}=\{([u],s)\in(\mathcal{U}-\mathfrak{S}_{\mathcal{U}})\times\mathbb{C}|\hskip 2.84526pts^{2}=\Delta_{\mathcal{U}}(u)\}.$$ (5.14) We can obtain the lift $\Phi_{Q_{1}}:\mathcal{U}_{1}\rightarrow Q_{1}-\mathfrak{S}_{Q_{1}}$ of $\Phi|_{\mathcal{U}-\mathfrak{S}_{\mathcal{U}}}$ so that $\Phi_{Q_{1}}$ is equivariant under the action of $\Gamma/\Gamma^{\prime}\simeq\mathbb{Z}/2\mathbb{Z}.$ Also, $\Phi_{Q_{1}}$ is lifted to $\Phi_{\mathcal{D^{\circ}}}:\mathcal{U}_{\mathcal{D^{\circ}}}\rightarrow\mathcal{D^{\circ}}-\mathfrak{S}_{\mathcal{D^{\circ}}}$, which is equivariant under the action of $\Gamma$. We can consider the pull-back $\mathcal{U}_{(\mathcal{D}^{\circ})^{*}}\rightarrow\mathcal{U}_{\mathcal{D}^{\circ}}$ of the principal bundle $\mathcal{U}^{*}\rightarrow\mathcal{U}$ by the composition $\mathcal{U}_{\mathcal{D}^{\circ}}\rightarrow\mathcal{U}_{1}\rightarrow\mathcal{U}-\mathfrak{S}_{\mathcal{U}}\hookrightarrow\mathcal{U}.$ Then, we have the lifted period mapping $$\Phi_{(\mathcal{D}^{\circ})^{*}}:\mathcal{U}_{(\mathcal{D}^{\circ})^{*}}\simeq(\mathcal{D}^{\circ})^{*}-\mathfrak{S}_{(\mathcal{D}^{\circ})^{*}},$$ where $\mathfrak{S}_{(\mathcal{D}^{\circ})^{*}}$ is the preimage of $\mathfrak{S}_{\mathcal{D}^{\circ}}$ under the projection. Lemma 5.1. The lifted period mapping $\Phi_{(\mathcal{D}^{\circ})^{*}}$ induces an isomorphism $[\Phi_{(\mathcal{D}^{\circ})^{*}}]:[\mathcal{U}_{(\mathcal{D}^{\circ})^{*}}/(\mathbb{C}^{*}\times\Gamma)]\simeq[((\mathcal{D}^{\circ})^{*}-\mathfrak{S}_{\mathcal{D}^{\circ}})/(\mathbb{C}^{*}\times\Gamma)].$ Proof. The proof is similar to [Na3] Section 3.3. See also the diagram (5.23). (5.23) ∎ Proposition 5.1. The Picard group ${\rm Pic}(\mathbb{O})$ of the orbifold $\mathbb{O}$ is isomorphic to $\mathbb{Z}\oplus(\mathbb{Z}/2\mathbb{Z}).$ Proof. Set $\mathcal{V}_{1}=\{([u],s)\in\widehat{\mathcal{U}}\times\mathbb{C}|\hskip 2.84526pts^{2}=\Delta_{\mathcal{U}}(u)\}$. Then, $\mathcal{U}_{1}$ of (5.14) satisfies $\mathcal{U}_{1}\subset\mathcal{V}_{1}$ and ${\rm codim}(\mathcal{V}_{1}-\mathcal{U}_{1})\geq 2$, from (2.7). Recall that $\Phi$ in (5.23) is extended to the identification (5.1).
This $\widehat{\Phi}$ is lifted to $\widehat{\Phi}_{Q_{1}}:\mathcal{V}_{1}\simeq\widehat{Q}_{1}$, which is equivariant under the $(\mathbb{Z}/2\mathbb{Z})$-action. Here, $\widehat{Q}_{1}$ is a double covering of $\widehat{\mathcal{D}^{\circ}/\Gamma}^{\bf L}.$ We consider the orbifold $\mathbb{V}_{1}=[\mathcal{V}_{1}/(\mathbb{Z}/2\mathbb{Z})]$ with the structure morphism $p_{\mathbb{V}_{1}}:\mathbb{V}_{1}\rightarrow\widehat{\mathcal{U}}.$ Now, the Picard group ${\rm Pic}(\mathbb{V}_{1})$ is generated by $\mathcal{O}_{\mathbb{V}_{1}}(1):=p_{\mathbb{V}_{1}}^{*}\mathcal{O}_{\widehat{\mathcal{U}}}(1)$ and the generator $g$ of $\mathbb{Z}/2\mathbb{Z}$. We remark that this generator $g$ corresponds to ${\rm det}\in{\rm Char}(\Gamma)$ (see Proposition 1.1 and (5.12)). The divisor $\{\Delta_{\mathcal{U}}(u)=0\}$ corresponds to the reflection hypersurfaces for our lattice ${\bf A}$ via $\Phi$ in (5.23). Therefore, any element of ${\rm Pic}(\mathbb{V}_{1})$ is given by $\mathcal{O}_{\mathbb{V}_{1}}(1)^{\otimes k}\otimes g^{l}$ for $k\in\mathbb{Z}$ and $l\in\mathbb{Z}/2\mathbb{Z}$. By considering $\left[\widehat{\Phi}_{Q_{1}}\right]:\mathbb{V}_{1}\simeq\left[\widehat{Q}_{1}/(\mathbb{Z}/2\mathbb{Z})\right]$, we obtain ${\rm Pic}\left(\left[\widehat{Q}_{1}/(\mathbb{Z}/2\mathbb{Z})\right]\right)\simeq\mathbb{Z}\oplus(\mathbb{Z}/2\mathbb{Z}).$ Recall that the action of $\Gamma^{\prime}$ on $\mathcal{D}^{\circ}-\mathfrak{S}_{\mathcal{D}^{\circ}}$ is free. From the fact that ${\rm codim}(\widehat{Q}_{1}-Q_{1})\geq 2$, we have $$\displaystyle{\rm Pic}(\mathbb{O})={\rm Pic}([(\mathcal{D}^{\circ})^{*}/(\mathbb{C}^{*}\times\Gamma)])={\rm Pic}([Q_{1}/(\mathbb{Z}/2\mathbb{Z})])\simeq{\rm Pic}\left(\left[\widehat{Q}_{1}/(\mathbb{Z}/2\mathbb{Z})\right]\right)\simeq\mathbb{Z}\oplus(\mathbb{Z}/2\mathbb{Z}).$$ ∎ When we consider holomorphic sections of line bundles, analytic subsets of codimension at least $2$ do not affect the results due to Hartogs's phenomenon. So, from now on, we will often omit such analytic sets (like $\mathfrak{S}_{\mathcal{U}}$ or $\mathfrak{S}_{Q}$). Proposition 5.2. The weight of $\Delta_{\mathcal{U}}(u)$ is equal to $98.$ Proof. Since $\mathcal{U}$ is a Zariski open set in $\widehat{\mathcal{U}}=\mathbb{P}(2,4,6,8,10,14)$, the canonical bundle $\Omega_{\widehat{\mathcal{U}}}$ is calculated as $$\Omega_{\mathcal{U}}\simeq\mathcal{O}_{\mathcal{U}}(-2-4-6-8-10-14)=\mathcal{O}_{\mathcal{U}}(-44).$$ Letting $p_{1}$ be the double covering $\mathcal{U}_{1}\rightarrow\mathcal{U}$ branched along $\mathfrak{H}_{\mathcal{U}}$, we obtain the isomorphism $\Omega_{\mathcal{U}_{1}}\simeq p_{1}^{*}\Omega_{\mathcal{U}}\otimes\mathcal{O}_{\mathcal{U}_{1}}(\mathfrak{H}_{\mathcal{U}_{1}})$, by considering the holomorphic differential forms. So, we have $$\displaystyle\Omega_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}\simeq[p_{1}]^{*}\Omega_{\mathcal{U}}\otimes\mathcal{O}_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}(\mathfrak{H}_{\mathcal{U}_{1}})\simeq[p_{1}]^{*}\mathcal{O}_{\mathcal{U}}(-44)\otimes\mathcal{O}_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}(\mathfrak{H}_{\mathcal{U}_{1}}).$$ From the proof of Proposition 5.1, the orbifold $\mathbb{O}$ is equivalent to $[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]$. So, ${\rm Pic}([\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})])$ is isomorphic to $\mathbb{Z}\oplus(\mathbb{Z}/2\mathbb{Z})$. Let $d$ be the weight of $\Delta_{\mathcal{U}}(u)$.
Then, we have $[p_{1}]^{*}\mathcal{O}_{\mathcal{U}}(d)\simeq\mathcal{O}_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}(2\mathfrak{H}_{\mathcal{U}_{1}})$. This implies that the direct summand $\mathbb{Z}/2\mathbb{Z}$ of ${\rm Pic}([\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})])$ is generated by $[p_{1}]^{*}\mathcal{O}_{\mathcal{U}}(-d/2)\otimes\mathcal{O}_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}(\mathfrak{H}_{\mathcal{U}_{1}})$. Here, we do not need to worry about whether $-d/2$ is an integer, for all weights of $\widehat{\mathcal{U}}$ are even numbers. Since $\mathcal{D}^{\circ}$ is a Zariski open set in a quadric hypersurface in the projective space $\mathbb{P}^{6}(\mathbb{C})$, we apply the adjunction formula to $\mathcal{D}^{\circ}$ and obtain $\Omega_{\mathcal{D}^{\circ}}\simeq\mathcal{O}_{\mathcal{D}^{\circ}}(7-2)=\mathcal{O}_{\mathcal{D}^{\circ}}(5)$, where the weight is concordant with the $\mathbb{C}^{*}$-action indicated in (5.11). This implies that the canonical orbibundle $\Omega_{\mathbb{O}}$ is isomorphic to $\mathcal{O}_{\mathbb{O}}(5)\otimes{\rm det}$. By summarizing the above properties, we have $$\displaystyle\mathcal{O}_{\mathbb{O}}(5)\otimes{\rm det}\simeq\Omega_{\mathbb{O}}\simeq\Omega_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}$$ $$\displaystyle\simeq[p_{1}]^{*}\mathcal{O}_{\mathcal{U}}(-44)\otimes\mathcal{O}_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}(\mathfrak{H}_{\mathcal{U}_{1}})$$ $$\displaystyle\simeq[p_{1}]^{*}\mathcal{O}_{\mathcal{U}}\Big{(}-44+\frac{d}{2}\Big{)}\otimes\Big{(}[p_{1}]^{*}\mathcal{O}_{\mathcal{U}}\Big{(}-\frac{d}{2}\Big{)}\otimes\mathcal{O}_{[\mathcal{U}_{1}/(\mathbb{Z}/2\mathbb{Z})]}(\mathfrak{H}_{\mathcal{U}_{1}})\Big{)}$$ and we obtain $d=98$. ∎ Theorem 5.2. (1) There exist holomorphic functions $s_{7}$ of weight $7$ and $s_{42}$ of weight $42$ on $(\mathcal{D}^{\circ})^{*}$ such that $$s_{7}^{2}=u_{14},\quad\quad s_{42}^{2}=d_{84}(u),$$ where $d_{84}(u)$ is the polynomial studied in Lemma 2.2. (2) The holomorphic function $s_{49}=s_{7}s_{42}$ on $(\mathcal{D}^{\circ})^{*}$ gives a modular form of weight $49$ and character ${\rm det}$. Moreover, $\mathcal{A}^{\circ}(\Gamma,{\rm det})=s_{49}\mathcal{A}^{\circ}(\Gamma,{\rm id})$ holds. Proof. (1) By the argument in this section, the divisor $\{[u]\in\mathcal{U}|\hskip 2.84526pt\Delta_{\mathcal{U}}(u)=0\}$ corresponds to the union of reflection hyperplanes of $\{H_{\delta}|\hskip 2.84526pt\delta\in\Delta({\bf A})\}-\mathcal{H}$. Here, $H_{\delta}$ is a reflection hyperplane defined by $\delta\in\Delta({\bf A})$ and $\mathcal{H}$ is the arrangement of $\gamma H_{0}$ ($\gamma\in\Gamma$, $H_{0}=\{\xi_{7}=0\}$) as in Section 1.1. Recalling the observation of degenerations of our lattice polarized $K3$ surfaces in Lemma 2.2 and the meaning of the canonical form (2.6), we have $$\displaystyle\{[u]\in\mathcal{U}|\hskip 2.84526ptd_{84}(u)=0\}\cup\{[u]\in\mathcal{U}|\hskip 2.84526ptu_{14}=0\}\subset\{[u]\in\mathcal{U}|\hskip 2.84526pt\Delta_{\mathcal{U}}(u)=0\}.$$ (5.24) The inclusion (5.24) means that $u_{14}d_{84}(u)$ divides $\Delta_{\mathcal{U}}(u)$ in $\mathbb{C}[u_{2},u_{4},u_{6},u_{8},u_{10},u_{14}].$ On the other hand, Proposition 5.2 says that $\Delta_{\mathcal{U}}(u)$ is of weight $98$, which is exactly the weight $14+84$ of $u_{14}d_{84}(u)$.
Thus, we have the irreducible decomposition $$\displaystyle\Delta_{\mathcal{U}}(u)={\rm const}\cdot u_{14}d_{84}(u).$$ (5.25) Since the double covering $p_{1}:\mathcal{U}_{1}\rightarrow\mathcal{U}$ in the diagram (5.23) is branched along the divisor $\{[u]\in\mathcal{U}|\hskip 2.84526pt\Delta_{\mathcal{U}}(u)=0\}$, (5.25) implies that there is a holomorphic function $s_{7}$ ($s_{42}$, resp.) on $(\mathcal{D}^{\circ})^{*}$ satisfying $s_{7}^{2}=u_{14}$ ($s_{42}^{2}=d_{84}(u)$, resp.). (2) Recall that ${\rm det}\in{\rm Char}(\Gamma)$ is coming from the action of $\Gamma/\Gamma^{\prime}\simeq\mathbb{Z}/2\mathbb{Z}$ which defines the double covering $p_{1}:\mathcal{U}_{1}\rightarrow\mathcal{U}$. By the Definition 5.1 and the meaning of (5.14), every modular form of character ${\rm det}$ vanishes on the preimage of $\mathfrak{H}_{\mathcal{D}^{\circ}}$ by the canonical projection $(\mathcal{D}^{\circ})^{*}\rightarrow\mathcal{D}^{\circ}$. Since this $p_{1}$ is branched along the divisor $\{\Delta_{\mathcal{U}}(u)=0\},$ (5.24) and (5.25) show that every modular form of character ${\rm det}$ is given by a product of $s_{49}:=s_{7}s_{42}$ and a modular form of character ${\rm id}.$ ∎ Remark 5.1. This theorem supports the relation between our sequence of the families and complex reflection groups. The weight $42$ of $s_{42}$ is coming from the discriminant of the right hand side of (2.6). We note that $126=42\kappa_{0}$, where $\kappa_{0}=3$ as in Table 1, is equal to the number of reflections of order $2$ for the group No.34 (see [LT] Appendix D). Such a phenomenon occurs in each case for $\mathfrak{F}_{j}$ ($j\in\{1,2,3\}$). In the case for $j=1$ ($2,3$, resp.), there is a holomorphic function of weight $45$ ($30$, $15$, resp.) coming from the discriminant of the elliptic $K3$ surfaces of [Na3] ([CD], [Na1], resp.). Then, $45=45\kappa_{1}$ ($60=30\kappa_{2}$, $15=15\kappa_{3}$, resp.) is equal to the number of reflections of order $2$ for the group No.33 (No.31, No.23, resp.). Each of these holomorphic functions gives a factor of a modular form of a non-trivial character. Also, there are explicit expressions of them by the theta functions in Table 4. Remark 5.2. The Weierstrass equation of (2.1) is essential for our purpose. We have another expression of elliptic $K3$ surfaces with singular fibres of type (2.2) given by the equation $$\displaystyle z_{1}^{2}=y_{1}^{3}+(b_{1}x_{1}^{5}w_{1}+b_{4}x_{1}^{4}w_{1}^{4}+b_{7}x_{1}^{3}w_{1}^{7})y_{1}+(b_{0}x_{1}^{8}+b_{3}x_{1}^{7}w_{1}^{3}+b_{6}x_{1}^{6}w_{1}^{6}+b_{9}x_{1}^{5}w_{1}^{9})$$ (5.26) of weight $24$, where $(x:y:z:w)\in\mathbb{P}(3,8,12,1)$. However, the expression (5.26) is not appropriate to construct modular forms. Although (5.26) can be birationally transformed to the Weierstrass form (2.1), it is impossible to construct correct modular forms from the parameters in (5.26). It seems that we can obtain modular forms of weight $1,3,4,6,7,9$ on $(\mathcal{D}^{\circ})^{\prime}$ from (5.26), which is a complement of an arrangement corresponding to the condition $b_{0}=0.$ However, we can see that this arrangement does not satisfy the condition of Theorem 1.1. So, the expression (5.26) induces an erroneous use of the theory of the Looijenga compactifications. For example, it seems that we can obtain a modular form of weight $7$ coming from the parameter $b_{7}$. We can see that the zero set of this modular form coincides with the arrangement $\mathcal{H}$ in Lemma 1.2. 
However, as stated in [L2] (see also [L1] Section 6), if an arrangement gives the zero set of a modular form, then this does not satisfy the condition of Theorem 1.1. This contradicts to Lemma 1.2. Thus, our expression of (2.1) is suitable for the theory of the Looijenga compactifications and effective to construct modular forms. 6 Transcendental lattice for family $\mathfrak{G}_{0}$ of Kummer-like surfaces with Picard number $15$ In Section 2, we determined the lattice structure for the family $\mathfrak{F}_{0}$ via a natural consideration of singular fibres for elliptic surfaces. Also, in fact, the lattices for the subfamilies $\mathfrak{F}_{1},\mathfrak{F}_{2},\mathfrak{F}_{3}$ in Proposition 4.1 can be determined in a similar way. However, as for the family $\mathfrak{G}_{j}$ $(j\in\{1,2,3\})$ and $\mathfrak{G}_{1}^{\prime}$ of the Kummer-like surfaces, it is much harder to determine their lattices correctly. For example, • The family $\mathfrak{G}_{2}$ is the family of the Kummer surfaces for principally polarized Abelian surfaces. For a precise study for the lattice structure of the Kummer surfaces, Nikulin [Ni1] introduces a particular lattice which is called the Kummer lattice. Also, Morrison [Mo] studies an interesting viewpoint called the Shioda-Inose structure for $K3$ surfaces whose Picard numbers are greater than $17$. • In order to determine the transcendental lattice for the family $\mathfrak{G}_{1}^{\prime}$ of Kummer-like surfaces with Picard number $16$, Matsumoto-Sasaki-Yoshida [MSY] (see also [Y]) study hypergeometric integrals of type $(3,6)$ and calculate intersection numbers of the chambers coming from the configuration of six lines on $\mathbb{P}^{2}(\mathbb{C})$ by applying a delicate technique of twisted homologies. • For the transcendental lattice for the family $\mathfrak{G}_{1}$ of Kummer-like surfaces with Picard number $16$, Shiga and the author [NS3] have a geometric construction of $2$-cycles on a generic member of $\mathfrak{G}_{1}$ taking into account the fact that a generic member of $\mathfrak{G}_{1}$ is a double covering of that of $\mathfrak{F}_{1}$. This construction is also based on hard calculations of local monodromies for elliptic surfaces. In this section, we will determine the transcendental lattice for the family $\mathfrak{G}_{0}$. It is a non-trivial problem to determine it. If there were a double covering $S_{a}\rightarrow K_{a}$ for generic members of $\mathfrak{F}_{0}$ and $\mathfrak{G}_{0}$, we could calculate the lattice for $\mathfrak{G}_{0}$ from the lattice of $\mathfrak{F}_{0}$ in Theorem 2.1 via a technique of Nikulin [Ni2] Section 2. However, in practice, we can prove that there is no such a double covering for generic $a\in\mathfrak{A}_{0}$. This proof can be given in a similar way to the proof of [NS3] Theorem 5.2. Now, let us remark the fact that our family $\mathfrak{G}_{0}$ naturally contains the families $\mathfrak{G}_{1}$ and $\mathfrak{G}_{1}^{\prime}.$ We will determine the transcendental lattice for $\mathfrak{G}_{0}$ based on this fact. The main result in this section is indebted to heavy calculations for $\mathfrak{G}_{1}^{\prime}$ in [MSY] and those for $\mathfrak{G}_{1}$ in [NS3]. The following result for even lattices is necessary for our proof. Lemma 6.1. ([Mo], Corollary 2.10) Suppose $12\leq\rho\leq 20$. Let ${\bf T}$ be an even lattice of signature $(2,20-\rho).$ Then, the primitive embedding ${\bf T}\hookrightarrow L_{K3}$ is unique up to isometry. The following theorem is the main result of this section. 
Theorem 6.1. The transcendental lattice of a generic member of $\mathfrak{G}_{0}$ is given by the intersection matrix $U(2)\oplus U(2)\oplus\begin{pmatrix}-2&0&1\\ 0&-2&1\\ 1&1&-4\end{pmatrix}$. Proof. We identify the $K3$ lattice $L_{K3}=U^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}$ with the $2$-homology group of $K3$ surfaces. Let $e_{j},f_{j}$ $(j\in\{1,2,3\})$ be elements of $U^{\oplus 3}$ satisfying $(e_{j}\cdot e_{k})=(f_{j}\cdot f_{k})=0$ and $(e_{j}\cdot f_{k})=\delta_{j,k}$. Let $p_{j},q_{j}$ $(j\in\{1,\ldots,8\})$ be elements of $E_{8}(-1)^{\oplus 2}$ with the intersection numbers defined by the Dynkin diagram in Figure 2. Also, put $\nu_{j}=p_{j}+q_{j}$. By reference to [Ma], we put $$\displaystyle\begin{cases}&\lambda_{1}=-\nu_{5}+\nu_{7}+2(e_{1}+f_{1}),\quad\mu_{1}=-\nu_{4},\\ &\lambda_{2}=\nu_{7}+\nu_{8}+2(e_{1}+e_{2}+e_{3}+f_{3}),\quad\mu_{2}=\nu_{6}.\end{cases}$$ (6.1) Then, $\{\lambda_{1},\mu_{1},\lambda_{2},\mu_{2}\}$ gives a basis of the lattice $U(2)^{\oplus 2}$. This system defines a primitive embedding $U(2)^{\oplus 2}\hookrightarrow L_{K3}$. Extending the system (6.1), we have a primitive embedding of the transcendental lattice ${\bf B}_{1}=U(2)^{\oplus 2}\oplus A_{2}(-2)$ for $\mathfrak{G}_{1}$ into $L_{K3}$ given by the system $$\displaystyle{\bf B}_{1}=\langle\lambda_{1},\mu_{1},\lambda_{2},\mu_{2},\nu_{1},\nu_{2}\rangle_{\mathbb{Z}},$$ (6.2) where the transcendental lattice ${\bf B}_{2}=U(2)^{\oplus 2}\oplus A_{1}(-2)$ for $\mathfrak{G}_{2}$ is a primitive sublattice of $L_{K3}$ defined by the explicit system $$\displaystyle{\bf B}_{2}=\langle\lambda_{1},\mu_{1},\lambda_{2},\mu_{2},\nu_{1}\rangle_{\mathbb{Z}}.$$ (6.3) According to Lemma 6.1, this embedding is unique up to isometry. So, we can fix the embedding given by (6.2) and (6.3) without loss of generality. We remark that $\mathfrak{G}_{1}^{\prime}$ also contains $\mathfrak{G}_{2}$ as a subfamily. Hence, its transcendental lattice ${\bf B}_{1}^{\prime}=U(2)^{\oplus 2}\oplus A_{1}(-1)^{\oplus 2}$ should be an extension of the lattice ${\bf B}_{2}$ of (6.3). So, we have the following explicit basis: $$\displaystyle{\bf B}_{1}^{\prime}=\langle\lambda_{1},\mu_{1},\lambda_{2},\mu_{2},p_{1},q_{1}\rangle_{\mathbb{Z}}.$$ (6.4) We note that the expression (6.4) is guaranteed by the fact that the lattice ${\bf B}_{2}$ of (6.3) is invariant under the involution given by interchanging the two $A_{1}(-1)$ summands of the lattice ${\bf B_{1}^{\prime}}$ (see [MSY]; see also [Y] Chapter IX). Our family $\mathfrak{G}_{0}$ contains $\mathfrak{G}_{1}$ and $\mathfrak{G}_{1}^{\prime}$. So, the transcendental lattice for a generic member of $\mathfrak{G}_{0}$ is a primitive sublattice of $L_{K3}$ of rank $7$, given by a system which extends both (6.2) and (6.4). Such a lattice is given by the explicit system $\langle\lambda_{1},\mu_{1},\lambda_{2},\mu_{2},p_{1},q_{1},\nu_{2}\rangle_{\mathbb{Z}}$, whose intersection matrix is $U(2)^{\oplus 2}\oplus\begin{pmatrix}-2&0&1\\ 0&-2&1\\ 1&1&-4\end{pmatrix}.$ ∎ Since we have a double covering $K_{a}\rightarrow S_{a}$ for generic members of $\mathfrak{G}_{0}$ and $\mathfrak{F}_{0}$, we can verify the correctness of Theorem 6.1. Namely, when ${\rm Tr}(K_{a})$ is given, we can calculate the intersection matrix of ${\rm Tr}(S_{a})$.
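The final step of the verification carried out in the next paragraph — the isometry between the rescaled rank-$3$ Gram matrix and the rank-$3$ part of ${\bf A}$ — can also be checked mechanically. The following is a minimal sketch of ours (not part of the paper): it confirms that the two Gram matrices share the determinant $-6$ and finds an explicit unimodular change of basis by brute force over matrices with entries in $\{-1,0,1\}$ (the common $U^{\oplus 2}$ summands are omitted).

from itertools import product
from sympy import Matrix

# rescaled Gram matrix from the verification below, and the rank-3 part of A
G = Matrix([[-2, -2, 1], [-2, -4, 1], [1, 1, -2]])
A = Matrix([[-2, 1, 0], [1, -2, 0], [0, 0, -2]])

assert G.det() == A.det() == -6   # a necessary condition for an isometry

# search for U with det(U) = +-1 and U^T G U = A
for entries in product((-1, 0, 1), repeat=9):
    U = Matrix(3, 3, entries)
    if U.det() in (1, -1) and U.T * G * U == A:
        print("change of basis U =", U.tolist())
        break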
Letting $\Lambda=({\rm Tr}(K_{a})\otimes\mathbb{Q})\cap(U^{\oplus 3}\oplus\frac{1}{2}E_{8}(-2))$, where $E_{8}(-2)$ is the lattice generated by $\nu_{1},\ldots,\nu_{8}$, ${\rm Tr}(S_{a})$ is isometric to the lattice $\Lambda(2)$ (see [Ni2] Section 2). In our case, $\Lambda(2)$ is given by the direct sum of $U^{\oplus 2}$ and $\langle\nu_{1}/2,q_{1},\nu_{2}/2\rangle_{\mathbb{Z}}(2)$. This is isometric to $$U^{\oplus 2}\oplus\begin{pmatrix}-1&-1&1/2\\ -1&-2&1/2\\ 1/2&1/2&-1\end{pmatrix}(2)\simeq U^{\oplus 2}\oplus\begin{pmatrix}-2&-2&1\\ -2&-4&1\\ 1&1&-2\end{pmatrix}\simeq U^{\oplus 2}\oplus\begin{pmatrix}-2&1&0\\ 1&-2&0\\ 0&0&-2\end{pmatrix}={\bf A}.$$ This is consistent with Theorem 2.1. Acknowledgment The author would like to thank Professor Manabu Oura and Professor Jiro Sekiguchi for valuable suggestions from the viewpoint of complex reflection groups. This work is supported by JSPS Grant-in-Aid for Scientific Research (18K13383) and MEXT Leading Initiative for Excellent Young Researchers. References [A] V. I. Arnold, Arnold’s problems, Springer, 2005 [Br] E. Brieskorn, Singular elements of semi-simple algebraic groups, Actes Congrès Intern. Math. 2, 1970, 279-284 [Bu] H. Burkhardt, Untersuchungen aus dem Gebiete der hyperelliptischen Modulfunctionen Zweiter Theil, Math. Ann., 38, 1890, 161-224 [CD] A. Clingher and C. Doran, Lattice polarized $K3$ surfaces and Siegel modular forms, Adv. Math., 231, 2012, 172-212 [CMS] A. Clingher, A. Malmendier and T. Shaska, Six line configurations and string dualities, Commun. Math. Phys., 371 (1), 2019, 159-196 [DK] T. Dern and A. Krieg, Graded rings of Hermitian modular forms of degree $2$, Manuscripta Math., 110, 2003, 251-272 [D] I. Dolgachev, Mirror symmetry for lattice polarized $K3$ surfaces, J. Math. Sci., 81 (3), 1996, 2599-2630 [GHS] V. Gritsenko, K. Hulek and G. K. Sankaran, Abelianisation of orthogonal groups and the fundamental group of modular varieties, J. Alg., 322, 2009, 463-478 [HU] K. Hashimoto and K. Ueda, The ring of modular forms for the even unimodular lattice of signature $(2,10)$, Proc. Amer. Math. Soc., 2021, to appear [H] C. Hertling, Frobenius manifolds and moduli spaces for singularities, Cambridge Univ. Press, 2002 [K] F. Klein, Vorlesungen über das Ikosaeder und die Auflösung der Gleichungen vom fünften Grade, Teubner, 1884 [LT] G. I. Lehrer and D. E. Taylor, Unitary Reflection Groups, Australian Math. Soc., 2009 [L1] E. Looijenga, Compactifications defined by arrangements, I: The ball quotient case, Duke Math. J., 118 (1), 2003, 151-187 [L2] E. Looijenga, Compactifications defined by arrangements, II: Locally symmetric varieties of type $IV$, Duke Math. J., 119 (3), 2003, 527-588 [MSY] K. Matsumoto, T. Sasaki and M. Yoshida, The monodromy of the period map of a $4$-parameter family of $K3$ surfaces and the hypergeometric function of type $(3,6)$, Internat. J. Math., 3, 1992, 1-164 [Ma] S. Ma, On K3 surfaces which dominate Kummer surfaces, Proc. Amer. Math. Soc., 141, 2013, 131-137 [Mo] D. R. Morrison, On $K3$ surfaces with large Picard number, Invent. Math., 75, 1984, 105-121 [Mu] R. Müller, Hilbertsche Modulformen und Modulfunctionen zu $\mathbb{Q}(\sqrt{5})$, Arch. Math., 45, 1985, 239-251 [NS1] A. Nagano and H. Shiga, Modular map for the family of abelian surfaces via elliptic $K3$ surfaces, Math. Nachr., 288 (1), 2015, 89-114 [NS2] A. Nagano and H. Shiga, Geometric interpretation of Hermitian modular forms via Burkhardt invariants, preprint, 2020, arXiv:2004.08081 [NS3] A. Nagano and H.
Shiga, On Kummer-like surfaces attached to singularity and modular forms, preprint, 2020, arXiv:2012.11954 [Na1] A. Nagano, A theta expression of the Hilbert modular functions for $\sqrt{5}$ via the periods of $K3$ surfaces, Kyoto J. Math., 53 (4), 2013, 815-843 [Na2] A. Nagano, Double integrals on a weighted projective plane and the Hilbert modular functions for $\mathbb{Q}(\sqrt{5})$, Acta Arith., 167 (4), 2015, 327-345 [Na3] A. Nagano, Inverse period mappings of $K3$ surfaces and a construction of modular forms for a lattice with the Kneser conditions, J. Alg., 565, 2021, 33-63 [Ni1] V. V. Nikulin, On Kummer surfaces, Math. USSR. Izv., 9, 1975, 261-275 [Ni2] V. V. Nikulin, On rational maps between $K3$ surfaces, Constantin Carathéodory: an international tribute II, World Sci. Publ., 1991 [R] B. Runge, On Siegel modular forms part I, J. Reine. Angew. Math., 436, 1993, 57-85 [Se] J. Sekiguchi, The Construction Problem of Algebraic Potentials and Reflection Groups, preprint, 2021 [Sl] P. Slodowy, Simple singularities and simple algebraic groups, Springer Lec. Note 815, 1980 [ST] G. C. Shephard and J. A. Todd, Finite unitary reflection groups, Canad. J. Math., 6, 1954, 274-304 [Y] M. Yoshida, Hypergeometric Functions, My Love, Springer, 1997. Atsuhira Nagano Faculty of Mathematics and Physics Institute of Science and Engineering Kanazawa University Kakuma, Kanazawa, Ishikawa 920-1192, Japan (E-mail: atsuhira.nagano@gmail.com)
Relativistic kinematics beyond Special Relativity J.M. Carmona Departamento de Física Teórica, Universidad de Zaragoza, Zaragoza 50009, Spain    J.L. Cortés Departamento de Física Teórica, Universidad de Zaragoza, Zaragoza 50009, Spain    F. Mercati jcarmona@unizar.es, cortes@unizar.es, flavio.mercati@gmail.com Departamento de Física Teórica, Universidad de Zaragoza, Zaragoza 50009, Spain Abstract In the context of departures from Special Relativity written as a momentum power expansion in the inverse of an ultraviolet energy scale $M$, we derive the constraints that the relativity principle imposes between coefficients of a deformed (but rotational invariant) composition law, dispersion relation, and transformation laws, at first order in the power expansion. In particular, we find that, at that order, the consistency of a modification of the energy-momentum composition law fixes the modification in the dispersion relation. We therefore obtain the most generic modification of Special Relativity which is rotational invariant and preserves the relativity principle at leading order in $1/M$. I Introduction Corrections to Special Relativity (SR) coming from quantum gravity effects have been predicted and theoretically explored by string theory and quantum gravity research for more than a decade qugra . Experimental tests have also been carried out in search of residual effects at low energies of these high-energy violations of SR Mattingly:2005re ; Coleman:1998ti , placing strong constraints on Lorentz violating coefficients in the Standard Model Extension (SME) Colladay:1998fq , a generalization of the Standard Model that allows for violations of Lorentz and CPT symmetry in the framework of a local effective field theory (EFT). The SME, inspired by what happens in the context of string theory, provides a dynamical scenario which assumes that the Lorentz invariance violation (LIV) arises from spontaneous symmetry breaking in a more fundamental theory with Lorentz covariance. This implies that microcausality and the usual conservation laws of energy and momentum are expected to hold in the low energy effective theory. However, since the vacuum breaks Lorentz invariance, this EFT is formulated in a particular reference frame, and the relativity principle (equality of the dynamical laws derived from this EFT for different inertial observers) no longer holds. There are many experiments and observations which are sensitive to the existence of a preferred reference frame and that imply very strong constraints on LIV Kostelecky:2008ts . Doubly Special Relativity (DSR) emerged dsr as an alternative way to consider violations of Lorentz invariance. Lorentz symmetry is deformed instead of broken, in such a way that it is still possible to formulate observer-independent laws of physics. In this case it is much more difficult to find observable effects of the deviations from SR. The characteristic deformation of DSR is encoded in an energy-momentum dispersion relation which depends on a high-energy (or short-distance) scale that is invariant in the same sense that the speed of light is a relativistic invariant, together with new (deformed) Lorentz transformations which preserve the form of the dispersion relation.
In particular, since the presence of a new energy scale in the modified dispersion relation does not necessarily require deforming the rotations, DSR usually considers a deformation only in the transformation laws under boosts, which, owing to their noncompact character, offer less stringent bounds than the usual rotational invariance. (This is a heuristic argument that is explicitly seen in the constraints one gets for rotational and nonrotational invariant parameters in the SME, see e.g. Kostelecky:2008ts . To our knowledge there are no studies of this type in the case of a deformed symmetry, but considering a rotational invariant deformation is common practice in DSR theories and will prove to be an important algebraic simplification in the analysis presented here.) Then, the generators of the deformed Lorentz transformations still satisfy the ordinary Lorentz algebra, but are represented non-linearly in momentum space AmelinoCamelia:2002an . This non-linearity is a source of complication in the multi-particle sectors of DSR theories, since one can no longer define the total momentum of two particles as the sum of the individual momenta if it has to transform under boosts as the momentum of a single particle. This makes it necessary to introduce a deformed composition law which will depend on the new scale of the theory, and shows that the DSR framework goes beyond the EFT paradigm, where usual composition laws apply. The fact that DSR cannot be embedded in the SME makes it difficult to provide it with a dynamical formulation. Initially, only kinematic considerations in momentum space guided the explorations of DSR effects in quantum gravity phenomenology. In fact the space-time structure underlying the theory was suspected from the start to be rather non-trivial, for example by containing some fundamental noncommutativity KowalskiGlikman:2002jr . More recently, it became clear that nonlocal issues were an important ingredient of DSR models nonlocality . This is another indication that the DSR proposal goes beyond the EFT framework. Then, in Ref. relativelocality the concept of ‘relative locality’ was introduced in terms of a non-trivial geometry of momentum space. A spacetime with relative locality seems a natural candidate for a DSR spacetime, but the proposal in Ref. relativelocality was set forth in a rather general way, without implementing a relativity principle. In fact not every geometry of momentum space is compatible with the relativity principle. An attempt in this direction was presented in Ref. locandrelpple , where it was remarked that an additive conservation law describes an interaction which is local in spacetime, and non-linear corrections to this law cause the locality property to be lost for a general observer. This means that DSR cannot be local in the same sense as SR, and in fact the description of interactions that was given in Ref. relativelocality under the relative locality notion can be used for a DSR theory which is implemented in terms of the so-called auxiliary variables Judes:2002bw , which are non-linear mappings between a physical momentum (transforming according to deformed Lorentz boosts) and an auxiliary momentum which transforms linearly locandrelpple . However, the use of auxiliary variables is not enough to describe a generic DSR theory, as we will later clarify in the present paper. This makes the connection between DSR and the possible geometry of relative-locality momentum spaces more involved.
Since the latter primarily results in modifications of the on-shell relation and of the law of momentum conservation relativelocality , the problem can be more generally formulated as the connection between a theory which, while including a new high-energy scale, implements a relativity principle (with deformed Lorentz transformations) and the possible modifications of both the dispersion relation and the composition law when the theory is such that it reduces to SR at energies much lower than the new ultraviolet scale. This was the main viewpoint adopted in Ref. AmelinoCamelia:2011yi , where a ‘golden-rule’ for the ‘DSR-compatibility’ of a given geometry of momentum space (in fact, for the compatibility between the implementation of a relativity principle and a given modified dispersion relation and composition law) was derived. This golden-rule was found by considering two physical processes that cannot occur in a theory beyond SR endowed with a relativity principle: photons cannot decay into electron-positron pairs (since this reaction is forbidden in SR, it would imply the existence of a threshold depending on the ultraviolet scale, which is self-contradictory because the energy of the photon can be tuned above or below this observer-invariant threshold with an appropriate boost) and it must always be possible for a photon of any energy to produce electron-positron pairs in interactions with some sufficiently high-energy photons (if it were not so, this would imply again an energy threshold for the switch-off of this reaction, and then the same argument as before applies). However, Ref. AmelinoCamelia:2011yi expressed doubts as to whether the golden-rule that a modified dispersion relation and a composition law have to satisfy in order to be DSR-compatible was not only a necessary but also a sufficient condition. The main result of the present manuscript is a derivation of the constraints that the principle of relativity imposes on the dispersion relation and the composition law in a theory beyond SR. As in Ref. AmelinoCamelia:2011yi , we will work in a scenario of rotational invariant (polynomial) modifications to SR to the leading order in $1/M$, where $M$ is the scale of new physics. We will consider an implementation of deformed boost transformations such that the ordinary Lorentz algebra is still satisfied. This will allow us to obtain simple relations between the coefficients which parameterize the leading deviations from SR, to which presumably quantum gravity phenomenology will be most sensitive dsrfacts . In doing so, we will re-derive the golden-rule of Ref. AmelinoCamelia:2011yi in a completely different way and will establish that it is not a sufficient condition for the DSR-compatibility of a modified dispersion relation and composition law. The relations we will provide allow one to identify the most generic set of deviations from SR that are still compatible with the relativity principle, at first order in $1/M$. The outline of the paper is as follows. In Section II we will give the general forms of the modified dispersion relation and composition law for a two-particle system both in the case of $1+1$ dimensions, which we will treat separately because of its simplicity, and in the more relevant $3+1$ case. Then in Section III we will examine the constraints that the relativity principle imposes on them, deriving the set of relations that must exist between the coefficients in the dispersion relation and in the composition law for them to be compatible with a relativity principle.
We will also check these relations in specific examples. There are some points in common between this work and Ref. asr , where a systematic approach to departures from SR compatible with a relativity principle was also presented at leading order in Planckian effects, referring to that framework as Asymptotic Special Relativity (ASR). That previous work tried to be more general in the sense of including arbitrary functions of energy or momentum instead of simple polynomials in the corrections to SR, but in fact was more restrictive than the present analysis because it only considered the definition of auxiliary (energy and momentum) variables as a way to go beyond SR. That this is not sufficiently general will be clear in Section IV, which therefore establishes that a criticism of DSR that is often heard (that it is just SR rewritten in another set of variables) is unfounded. Finally, in Section V we will discuss how the present work can be generalized beyond the two-particle case and beyond the $M^{-1}$ order and give some concluding remarks. II Modified dispersion relation and composition law We will consider a departure from SR which can be expanded in powers of momenta and the inverse of an ultraviolet scale $M$. If one relates these departures to a quantum spacetime structure associated with quantum gravity fluctuations it is natural to identify the ultraviolet scale $M$ with the Planck mass, but more general cases could be considered. We will assume that all the energies are much smaller than the scale $M$, so that the dominant effect of the corrections to SR kinematics comes from the first order terms in the $1/M$ expansion. We will also restrict all the discussion to the case where there are no departures from rotational symmetry. We will consider the kinematic analysis of reactions with no more than two particles in the initial or final state. As we will comment at the end of the work, there does not seem to be any obstruction to extending the present analysis to reactions involving more than two particles in the initial or final state and to higher order terms in the $1/M$ expansion. There are two ingredients in the generalized kinematics: a modification of the energy-momentum relation of a particle (modified dispersion relation) and a modification of the composition law of two momenta (and the associated modification of the energy-momentum conservation law). II.1 $1+1$ dimensional case Let us start by considering the simpler case of a generalized kinematics in $1+1$ dimensions. In order to ease the comparison with a rotational invariant $3+1$ dimensional generalized kinematics, we will impose in the present case invariance under the parity transformation $p_{0}\to p_{0}$, $p_{1}\to-p_{1}$. With this restriction, the general form for the composition of two momenta $p$, $q$ is $$\left[p\oplus q\right]_{0}\,=\,p_{0}+q_{0}+\frac{\beta_{1}}{M}p_{0}q_{0}+\frac{\beta_{2}}{M}p_{1}q_{1}{\hskip 28.452756pt}\left[p\oplus q\right]_{1}\,=\,p_{1}+q_{1}+\frac{\gamma_{1}}{M}p_{0}q_{1}+\frac{\gamma_{2}}{M}p_{1}q_{0}$$ (1) where $\beta_{1}$, $\beta_{2}$, $\gamma_{1}$, $\gamma_{2}$ are dimensionless coefficients and we are implementing the condition $$p\oplus q|_{q=0}\,=\,p{\hskip 28.452756pt}p\oplus q|_{p=0}\,=\,q$$ (2) on the composition law.
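As a quick symbolic sanity check of the composition law (1) and the condition (2) — a sketch of our own, not part of the paper, written in sympy with $\epsilon\equiv 1/M$ — one can verify that (2) holds, that the composition is commutative at this order only when $\gamma_{1}=\gamma_{2}$, and that it is associative up to terms of order $1/M^{2}$:

from sympy import symbols, expand, factor

eps, b1, b2, g1, g2 = symbols('epsilon beta_1 beta_2 gamma_1 gamma_2')
p0, p1, q0, q1, r0, r1 = symbols('p_0 p_1 q_0 q_1 r_0 r_1')

def oplus(a, b):
    # composition law (1), with epsilon = 1/M
    return (a[0] + b[0] + eps*(b1*a[0]*b[0] + b2*a[1]*b[1]),
            a[1] + b[1] + eps*(g1*a[0]*b[1] + g2*a[1]*b[0]))

p, q, r = (p0, p1), (q0, q1), (r0, r1)

# the condition (2): composing with zero momentum gives back the original momentum
assert oplus(p, (0, 0)) == p and oplus((0, 0), q) == q

# commutativity at this order holds only when gamma_1 = gamma_2
print(factor(oplus(p, q)[1] - oplus(q, p)[1]))
# proportional to epsilon*(gamma_1 - gamma_2)*(p_0*q_1 - p_1*q_0)

# associativity holds automatically up to terms of order epsilon^2
assoc = [expand(x - y) for x, y in zip(oplus(oplus(p, q), r), oplus(p, oplus(q, r)))]
assert all(a.coeff(eps, 1) == 0 for a in assoc)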
The general form for the dispersion relation of a particle is given by $$C(p)\,=\,p_{0}^{2}-p_{1}^{2}+\frac{\alpha_{1}}{M}p_{0}^{3}+\frac{\alpha_{2}}{M% }p_{0}p_{1}^{2}\,=\,\mu^{2}\,.$$ (3) The generalization of the SR kinematics is parameterized by the six dimensionless coefficients $\beta_{1}$, $\beta_{2}$, $\gamma_{1}$, $\gamma_{2}$, $\alpha_{1}$, $\alpha_{2}$. We want to determine what are the conditions that these coefficients have to satisfy in order to have a generalized kinematics compatible with the relativity principle (absence of a preferred reference frame). II.2 $3+1$ dimensional case In $3+1$-dimensions the general form of the composition law of two momenta compatible with rotational invariance takes the form $$\left[p\oplus q\right]_{0}\,=\,p_{0}+q_{0}+\frac{\beta_{1}}{M}\,p_{0}q_{0}+% \frac{\beta_{2}}{M}\,\vec{p}\cdot\vec{q}{\hskip 28.452756pt}\left[p\oplus q% \right]_{i}\,=\,p_{i}+q_{i}+\frac{\gamma_{1}}{M}\,p_{0}q_{i}+\frac{\gamma_{2}}% {M}\,p_{i}q_{0}+\frac{\gamma_{3}}{M}\,\epsilon_{ijk}p_{j}q_{k}$$ (4) and the particle dispersion relation is given by $$C(p)\,=\,p_{0}^{2}-\vec{p}^{\,2}+\frac{\alpha_{1}}{M}\,p_{0}^{3}+\frac{\alpha_% {2}}{M}\,p_{0}\vec{p}^{\,2}\,=\,\mu^{2}\,.$$ (5) All the difference with the 1+1 dimensional case is that one has an additional (seventh) dimensionless coefficient ($\gamma_{3}$) in the composition law. Note that this term would be absent in a parity invariant generalized kinematics in $3+1$ dimensions: without it, $[p\oplus q]_{i}$ would go to $-[p\oplus q]_{i}$ under the transformation $p_{0}\to p_{0}$, $\vec{p}\to-\vec{p}$, $q_{0}\to q_{0}$, $\vec{q}\to-\vec{q}$. However, we will maintain this term in the $3+1$ dimensional case for a more general discussion. III Relativity principle Let us see how a relativity principle can be consistently implemented when one has a generalized composition law of momenta and a generalized dispersion relation. III.1 $1+1$ dimensional case In the $1+1$ dimensional case all one needs is the transformation under a boost (parameterized by $\xi_{1}$) of a two particle system with momenta $p$, $q$. The main ingredient in the implementation of the relativity principle is that the transformation in general does not act separately on the momenta of each particle. After the transformation the momenta of the particles will be $T^{L}_{q}(p)$, $T^{R}_{p}(q)$ with subindex $q$ on $T^{L}$ indicating that the transformed momentum of the particle that had a momentum $p$ can depend on the momentum of the other particle, and similarly for the subindex $p$ in $T^{R}$. The upper indexes $L$, $R$ indicate the possibility to have a different transformation laws (noncommutativity) for the two momenta. The condition (2) on the composition law allows us to derive the transformation under a boost of a one particle system $p\to T(p)$ from the transformation law of a two particle system $$T(p)\,=\,T^{L}_{0}(p)\,=\,T^{R}_{0}(p)\,,$$ (6) where the equality comes from the assumption that a one-particle system is equivalent to a two-particle system in which one of the particles has zero momentum. 
The general form of the boost transformation of a one particle system at order $1/M$ can be expressed in terms of three dimensionless parameters $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$ as $$\displaystyle\left[T(p)\right]_{0}$$ $$\displaystyle=$$ $$\displaystyle p_{0}+p_{1}\xi_{1}+\frac{\lambda_{1}}{M}\,p_{0}p_{1}\xi_{1}$$ $$\displaystyle\left[T(p)\right]_{1}$$ $$\displaystyle=$$ $$\displaystyle p_{1}+p_{0}\xi_{1}+\frac{\lambda_{2}}{M}\,p_{0}^{2}\xi_{1}+\frac% {\lambda_{1}+2\lambda_{2}+3\lambda_{3}}{M}p_{1}^{2}\xi_{1}$$ (7) where a choice of coefficients has been made in order to make easier the comparison of the $1+1$ and $3+1$ dimensional cases. The invariance of the dispersion relation (3) under a boost transformation requires that $$C(T(p))\,=\,C(p)\,.$$ (8) This fixes the dimensionless coefficients $\alpha_{1}$, $\alpha_{2}$ in the modified dispersion relation in terms of the three parameters $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$ of the boost transformation (7): $$\alpha_{1}\,=\,-2\,(\lambda_{1}+\lambda_{2}+2\lambda_{3}){\hskip 28.452756pt}% \alpha_{2}\,=\,2\,(\lambda_{1}+2\lambda_{2}+3\lambda_{3})\,.$$ (9) The general form of the boost transformation of a two particle system at order $1/M$ is given by $$T^{L}_{q}(p)\,=\,T(p)+{\bar{T}}^{L}_{q}(p){\hskip 28.452756pt}T^{R}_{p}(q)\,=% \,T(q)+{\bar{T}}^{R}_{p}(q)$$ (10) with $$\left[{\bar{T}}^{L}_{q}(p)\right]_{0}\,=\,\frac{\eta_{1}^{L}}{M}\,q_{0}p_{1}% \xi_{1}+\frac{\sigma_{1}^{L}}{M}\,q_{1}p_{0}\xi_{1}{\hskip 28.452756pt}\left[{% \bar{T}}^{L}_{q}(p)\right]_{1}\,=\,\frac{\sigma_{2}^{L}}{M}\,q_{0}p_{0}\xi_{1}% +\frac{\sigma_{3}^{L}}{M}\,q_{1}p_{1}\xi_{1}\,,$$ (11) $$\left[{\bar{T}}^{R}_{p}(q)\right]_{0}\,=\,\frac{\eta_{1}^{R}}{M}\,p_{0}q_{1}% \xi_{1}+\frac{\sigma_{1}^{R}}{M}\,p_{1}q_{0}\xi_{1}{\hskip 28.452756pt}\left[{% \bar{T}}^{R}_{p}(q)\right]_{1}\,=\,\frac{\sigma_{2}^{R}}{M}\,p_{0}q_{0}\xi_{1}% +\frac{\sigma_{3}^{R}}{M}\,p_{1}q_{1}\xi_{1}\,.$$ (12) But the invariance of the dispersion relation of each of the particles under a boost transformation requires that $$C\left({\bar{T}}^{L}_{q}(p)\right)\,=\,C\left({\bar{T}}^{R}_{p}(q)\right)\,=\,% 0\,.$$ (13) This implies that $$\sigma_{1}^{L}\,=\,\sigma_{3}^{L}\,=\,\sigma_{1}^{R}\,=\,\sigma_{3}^{R}\,=\,0{% \hskip 28.452756pt}\sigma_{2}^{L}\,=\,\eta_{1}^{L}{\hskip 28.452756pt}\sigma_{% 2}^{R}\,=\,\eta_{1}^{R}$$ (14) and then one has $$\displaystyle\left[{\bar{T}}^{L}_{q}(p)\right]_{0}$$ $$\displaystyle=$$ $$\displaystyle\frac{\eta_{1}^{L}}{M}\,q_{0}p_{1}\xi_{1}{\hskip 28.452756pt}% \left[{\bar{T}}^{L}_{q}(p)\right]_{1}\,=\,\frac{\eta_{1}^{L}}{M}\,q_{0}p_{0}% \xi_{1}$$ $$\displaystyle\left[{\bar{T}}^{R}_{p}(q)\right]_{0}$$ $$\displaystyle=$$ $$\displaystyle\frac{\eta_{1}^{R}}{M}\,p_{0}q_{1}\xi_{1}{\hskip 28.452756pt}% \left[{\bar{T}}^{R}_{p}(q)\right]_{1}\,=\,\frac{\eta_{1}^{R}}{M}\,p_{0}q_{0}% \xi_{1}$$ (15) so that finally one has only two new dimensionless parameters $\eta_{1}^{L}$, $\eta_{1}^{R}$ in the boost transformation of a two particle system. The last step in the implementation of the relativity principle is the requirement that $$T(p\oplus q)\,=\,T^{L}_{q}(p)\oplus T^{R}_{p}(q)\,.$$ (16) This guarantees the invariance under boosts of the energy-momentum conservation in the decay of one particle into two. It is a straightforward algebraic problem to determine the relations that Eq. 
(16) implies between the parameters of the boost transformations and the dimensionless coefficients in the composition law of momenta: $$\displaystyle\beta_{1}$$ $$\displaystyle=2\,(\lambda_{1}+\lambda_{2}+2\lambda_{3})$$ $$\displaystyle\beta_{2}$$ $$\displaystyle=-2\lambda_{3}-\eta_{1}^{L}-\eta_{1}^{R}$$ $$\displaystyle\gamma_{1}$$ $$\displaystyle=\lambda_{1}+2\lambda_{2}+2\lambda_{3}-\eta_{1}^{L}$$ $$\displaystyle\gamma_{2}$$ $$\displaystyle=\lambda_{1}+2\lambda_{2}+2\lambda_{3}-\eta_{1}^{R}\,.$$ (17) It is clear then that the boost transformation of the one- and two-particle system, determined by the $\lambda$ and $\eta$ coefficients, fix completely the momentum composition law ($\gamma$ and $\beta$ coefficients). DSR models, for which the deformed boost transformation of the one particle system is well defined, usually contain an ambiguity in the energy-momentum conservation law for particle processes. We see that, in the context of the $1/M$ expansion that we are dealing with, this ambiguity corresponds to the deformation in the boost of the two particle system ($\eta_{1}^{L}$ and $\eta_{1}^{R}$ coefficients) that one may consider. Once the transformation laws for the one particle and two particle systems are given, both the dispersion relation and the composition law are fixed. On the other hand, one could consider the composition law, which gives the conservation laws in an interacting system, as more fundamental than the transformation laws from a physical point of view. Then we see that, for a given composition law ($\beta$ and $\gamma$ coefficients) there is a one-parameter family of deformed transformation laws that can implement a relativity principle. Defining $\lambda=(\lambda_{1}+\lambda_{2})/2$, then the coefficients that define these transformation laws are $$\displaystyle\lambda_{1}$$ $$\displaystyle=\frac{3\beta_{1}}{4}+\frac{\beta_{2}}{2}-\frac{\gamma_{1}}{2}-% \frac{\gamma_{2}}{2}+\lambda\quad\quad\quad\quad\lambda_{2}$$ $$\displaystyle=-\frac{3\beta_{1}}{4}-\frac{\beta_{2}}{2}+\frac{\gamma_{1}}{2}+% \frac{\gamma_{2}}{2}+\lambda\quad\quad\quad\quad\lambda_{3}$$ $$\displaystyle=\frac{\beta_{1}}{4}-\lambda$$ $$\displaystyle\eta_{1}^{L}$$ $$\displaystyle=-\frac{\beta_{1}}{4}-\frac{\beta_{2}}{2}-\frac{\gamma_{1}}{2}+% \frac{\gamma_{2}}{2}+\lambda\quad\quad\quad\quad\eta_{1}^{R}$$ $$\displaystyle=-\frac{\beta_{1}}{4}-\frac{\beta_{2}}{2}+\frac{\gamma_{1}}{2}-% \frac{\gamma_{2}}{2}+\lambda$$ (18) When one combines these relations with the expressions (9) of the coefficients in the modified dispersion relation in terms of the parameters of the boost transformations one gets $$\alpha_{1}\,=\,-\beta_{1}{\hskip 56.905512pt}\alpha_{2}\,=\,\gamma_{1}+\gamma_% {2}-\beta_{2}\,.$$ (19) The relativity principle implemented through (8), (13), (16) fixes the modification in the dispersion relation in terms of the modification in the composition law. This remark has never been made in the DSR literature before and is one of the main results of our work. A consequence of the choice of a modified dispersion relation consistent with a modified composition law is the relation $$\alpha_{1}+\alpha_{2}+\beta_{1}+\beta_{2}-\gamma_{1}-\gamma_{2}\,=\,0\,.$$ (20) This is just the ‘golden-rule’ derived in Ref. 
AmelinoCamelia:2011yi as a consequence of the incompatibility of the relativity principle with the existence of an energy threshold for the decay of one particle into two (“no-photon-decay-switch-on constraint”) and with the existence of an energy threshold for the energy of one particle in the production of two particles by two particles (“no-pair-production-switch-off constraint”). In fact, in Ref. AmelinoCamelia:2011yi it is conjectured, based on a few examples, that the golden rule (20) is not only a necessary but also a sufficient condition for the compatibility of a departure from SR with the relativity principle. The systematic derivation of the modification in the dispersion relation induced by a modification in the composition law and the compatibility with the relativity principle gives an alternative derivation of the golden rule independent of considerations of thresholds in particular reactions. It also shows that the golden rule (20) is not a sufficient condition for a relativistic kinematics but just a combination of the two relations (19) that fix the modified dispersion relation compatible with a given composition law. III.2 Some examples We can check explicitly that the examples considered in Ref. AmelinoCamelia:2011yi to test the validity of the golden rule and its possible sufficiency are particular cases of the implementation of the relativity principle as discussed in this section. In the first example, known in the DSR literature as ‘DSR1’ Bruno:2001mw, one has a boost transformation (we are identifying $1/M$ with the invariant length $\ell$; this is just a convention, since all the relations are invariant under a simultaneous rescaling of all the dimensionless coefficients together with a corresponding rescaling of $M$) $$\left[T(p)\right]_{0}\,=\,p_{0}+p_{1}\xi_{1}{\hskip 28.452756pt}\left[T(p)\right]_{1}\,=\,p_{1}+p_{0}\xi_{1}+\frac{1}{M}p_{0}^{2}\xi_{1}+\frac{1}{2M}p_{1}^{2}\xi_{1}\,.$$ (21) This corresponds in our notation to $$\lambda_{1}\,=\,0{\hskip 28.452756pt}\lambda_{2}\,=\,1{\hskip 28.452756pt}\lambda_{3}\,=\,-\frac{1}{2}\,.$$ (22) The modified dispersion relation corresponds to $$C(p)\,=\,p_{0}^{2}-p_{1}^{2}+\frac{1}{M}p_{0}p_{1}^{2}\,,$$ (23) that is, $$\alpha_{1}\,=\,0{\hskip 28.452756pt}\alpha_{2}\,=\,1\,.$$ (24) The choices (22) and (24) are compatible with the relations (9). For the composition law one has $$\left[p\oplus q\right]_{0}\,=\,p_{0}+q_{0}+\frac{1}{M}p_{1}q_{1}{\hskip 28.452756pt}\left[p\oplus q\right]_{1}\,=\,p_{1}+q_{1}+\frac{1}{M}p_{0}q_{1}+\frac{1}{M}p_{1}q_{0}$$ (25) so that $$\beta_{1}\,=\,0{\hskip 28.452756pt}\beta_{2}\,=\,1{\hskip 28.452756pt}\gamma_{1}\,=\,1{\hskip 28.452756pt}\gamma_{2}\,=\,1\,.$$ (26) One can easily check that the relations (19) are satisfied, and then one has a modified dispersion relation compatible with the modification in the composition law. One can also check that the relations (17) between dimensionless coefficients in the composition law and parameters in the boost transformation are satisfied with $$\eta_{1}^{L}\,=\,\eta_{1}^{R}\,=\,0\,.$$ (27) Then one has in this case $${\bar{T}}^{L}_{q}(p)\,=\,{\bar{T}}^{R}_{p}(q)\,=\,0{\hskip 28.452756pt}T(p\oplus q)\,=\,T(p)\oplus T(q)$$ (28) and the boost transformation of the two particle system reduces to an independent boost transformation on each of the particles. The second example considered in Ref. 
AmelinoCamelia:2011yi corresponds to $$\left[T(p)\right]_{0}\,=\,p_{0}+p_{1}\xi_{1}-\frac{1}{M}p_{0}p_{1}\xi_{1}{\hskip 28.452756pt}\left[T(p)\right]_{1}\,=\,p_{1}+p_{0}\xi_{1}+\frac{1}{M}p_{0}^{2}\xi_{1}+\frac{1}{M}p_{1}^{2}\xi_{1}\,,$$ (29) and then $$\lambda_{1}\,=\,-1{\hskip 28.452756pt}\lambda_{2}\,=\,1{\hskip 28.452756pt}\lambda_{3}\,=\,0\,.$$ (30) The modified dispersion relation is $$C(p)\,=\,p_{0}^{2}-p_{1}^{2}+\frac{2}{M}p_{0}p_{1}^{2}\,,{\hskip 28.452756pt}\alpha_{1}\,=\,0{\hskip 28.452756pt}\alpha_{2}\,=\,2\,,$$ (31) and the modified composition law in this example is $$\left[p\oplus q\right]_{0}\,=\,p_{0}+q_{0}{\hskip 28.452756pt}\left[p\oplus q\right]_{1}\,=\,p_{1}+q_{1}+\frac{1}{M}p_{0}q_{1}+\frac{1}{M}p_{1}q_{0}\,,$$ (32) so that $$\beta_{1}\,=\,0{\hskip 28.452756pt}\beta_{2}\,=\,0{\hskip 28.452756pt}\gamma_{1}\,=\,1{\hskip 28.452756pt}\gamma_{2}\,=\,1\,.$$ (33) Also in this example the relations (19) are satisfied. The relations (17) lead also in this case to $$\eta_{1}^{L}\,=\,\eta_{1}^{R}\,=\,0{\hskip 28.452756pt}{\bar{T}}^{L}_{q}(p)\,=\,{\bar{T}}^{R}_{p}(q)\,=\,0{\hskip 28.452756pt}T(p\oplus q)\,=\,T(p)\oplus T(q)$$ (34) and then to a trivial boost transformation for the two particle system, as in the first example. The other two examples considered in Ref. AmelinoCamelia:2011yi are inspired by studies of the $\kappa$-Poincaré Hopf algebra. The third example has an additive composition law of energies as in the previous example, but the composition of momentum is noncommutative: $$\left[p\oplus q\right]_{0}\,=\,p_{0}+q_{0}{\hskip 28.452756pt}\left[p\oplus q\right]_{1}\,=\,p_{1}+q_{1}+\frac{1}{M}p_{0}q_{1}\,,$$ (35) so that $$\beta_{1}\,=\,0{\hskip 28.452756pt}\beta_{2}\,=\,0{\hskip 28.452756pt}\gamma_{1}\,=\,1{\hskip 28.452756pt}\gamma_{2}\,=\,0\,.$$ (36) The boost transformation acting on a particle is given by $$\left[T(p)\right]_{0}\,=\,p_{0}+p_{1}\xi_{1}{\hskip 28.452756pt}\left[T(p)\right]_{1}\,=\,p_{1}+p_{0}\xi_{1}+\frac{1}{M}p_{0}^{2}\xi_{1}+\frac{1}{2M}p_{1}^{2}\xi_{1}\,,$$ (37) and then $$\lambda_{1}\,=\,0{\hskip 28.452756pt}\lambda_{2}\,=\,1{\hskip 28.452756pt}\lambda_{3}\,=\,-\frac{1}{2}\,.$$ (38) The modified dispersion relation compatible with this boost transformation is $$C(p)\,=\,p_{0}^{2}-p_{1}^{2}+\frac{1}{M}p_{0}p_{1}^{2}\,,{\hskip 28.452756pt}\alpha_{1}\,=\,0{\hskip 28.452756pt}\alpha_{2}\,=\,1$$ (39) and once more these coefficients of the modified dispersion relation are compatible with the relativity principle constraints (19). From the equations (17), which give the modified composition law in terms of the boost transformations, we can see that the transformation of a two particle system is given in this case by $$\eta_{1}^{L}\,=\,0{\hskip 28.452756pt}\eta_{1}^{R}\,=\,1$$ (40) and then $$T(p\oplus q)\,=\,T(p)\oplus T^{R}_{p}(q){\hskip 28.452756pt}\left[{\bar{T}}^{R}_{p}(q)\right]_{0}\,=\,\frac{1}{M}p_{0}q_{1}\xi_{1}{\hskip 28.452756pt}\left[{\bar{T}}^{R}_{p}(q)\right]_{1}\,=\,\frac{1}{M}p_{0}q_{0}\xi_{1}\,,$$ (41) so that we have a nontrivial transformation of the two particle system. Note that in general one has from (17), $$\gamma_{1}-\gamma_{2}\,=\,\eta_{1}^{R}-\eta_{1}^{L}$$ (42) and then a noncommutativity ($\gamma_{1}\neq\gamma_{2}$) in the momentum composition law automatically implies a nontrivial boost transformation of the two particle system. The last example in Ref. 
AmelinoCamelia:2011yi corresponds to an unmodified dispersion relation ($\alpha_{1}=\alpha_{2}=0$) and an additive composition law of energies. The constraints from the relativity principle (19) require $\gamma_{1}=-\gamma_{2}$ so that one has $$\left[p\oplus q\right]_{0}\,=\,p_{0}+q_{0}{\hskip 28.452756pt}\left[p\oplus q\right]_{1}\,=\,p_{1}+q_{1}+\frac{1}{2M}p_{0}q_{1}-\frac{1}{2M}p_{1}q_{0}\,.$$ (43) The boost transformation of a particle state is given by $$\left[T(p)\right]_{0}\,=\,p_{0}+p_{1}\xi_{1}+\frac{1}{2M}p_{0}p_{1}\xi_{1}{\hskip 28.452756pt}\left[T(p)\right]_{1}\,=\,p_{1}+p_{0}\xi_{1}+\frac{1}{2M}p_{0}^{2}\xi_{1}$$ (44) so that $$\lambda_{1}\,=\,\frac{1}{2}{\hskip 28.452756pt}\lambda_{2}\,=\,\frac{1}{2}{\hskip 28.452756pt}\lambda_{3}\,=\,-\frac{1}{2}$$ (45) and $$\eta_{1}^{L}\,=\,0{\hskip 28.452756pt}\eta_{1}^{R}\,=\,1\,,$$ (46) which corresponds to the same boost transformation for the two particle system as in the previous example. The fact that in all four examples the golden rule (20) is satisfied is directly a consequence of the relativity principle present in all the examples. It is very easy to introduce a modification in the dispersion relation or the composition law in each of the four examples such that the golden rule (20) is still satisfied but not the two relations (19) between the coefficients in the dispersion relation and the coefficients in the composition law: in all these cases one would have a modification of the SR kinematics consistent with the golden rule but not with the relativity principle, signaling the presence of a preferred reference frame. III.3 $3+1$ dimensional case The general form of the transformation of one particle at order $1/M$, which now depends on the three parameters $\vec{\xi}$, is in this case $$\left[T(p)\right]_{0}\,=\,p_{0}+(\vec{p}\cdot\vec{\xi})+\frac{\lambda_{1}}{M}\,p_{0}(\vec{p}\cdot\vec{\xi}){\hskip 28.452756pt}\left[T(p)\right]_{i}\,=\,p_{i}+p_{0}\xi_{i}+\frac{\lambda_{2}}{M}\,p_{0}^{2}\xi_{i}+\frac{\lambda_{3}}{M}\,{\vec{p}}^{\,2}\xi_{i}+\frac{\lambda_{4}}{M}\,p_{i}({\vec{p}}\cdot{\vec{\xi}})+\frac{\lambda_{5}}{M}\,p_{0}\epsilon_{ijk}p_{j}\xi_{k}\,.$$ (47) In the case of $3+1$ dimensions one has an additional restriction for $T$ to correspond to a boost transformation. 
In order to reproduce the Lorentz algebra one has a condition on the composition of two transformations $T^{(1)}$, $T^{(2)}$ with parameters ${\vec{\xi}}^{\,(1)}$, ${\vec{\xi}}^{\,(2)}$: $$\left[T^{(2)}\left(T^{(1)}(p)\right)-T^{(1)}\left(T^{(2)}(p)\right)\right]_{0}\,=\,0{\hskip 28.452756pt}\left[T^{(2)}\left(T^{(1)}(p)\right)-T^{(1)}\left(T^{(2)}(p)\right)\right]_{i}\,=\,({\vec{p}}\cdot{\vec{\xi}}^{\,(1)})\,{\xi^{(2)}}_{i}-({\vec{p}}\cdot{\vec{\xi}}^{\,(2)})\,{\xi^{(1)}}_{i}\,.$$ (48) This requires that $$\lambda_{5}\,=\,0{\hskip 28.452756pt}\lambda_{4}\,=\,\lambda_{1}+2\lambda_{2}+2\lambda_{3}$$ (49) and the boost transformation of a one particle system is then $$\left[T(p)\right]_{0}\,=\,p_{0}+(\vec{p}\cdot\vec{\xi})+\frac{\lambda_{1}}{M}\,p_{0}(\vec{p}\cdot\vec{\xi}){\hskip 28.452756pt}\left[T(p)\right]_{i}\,=\,p_{i}+p_{0}\xi_{i}+\frac{\lambda_{2}}{M}\,p_{0}^{2}\xi_{i}+\frac{\lambda_{3}}{M}\,{\vec{p}}^{\,2}\xi_{i}+\frac{(\lambda_{1}+2\lambda_{2}+2\lambda_{3})}{M}\,p_{i}({\vec{p}}\cdot{\vec{\xi}})\,.$$ (50) The invariance of the modified dispersion relation (5) under boosts fixes the coefficients $\alpha_{1}$, $\alpha_{2}$ $$\alpha_{1}\,=\,-2\,(\lambda_{1}+\lambda_{2}+2\lambda_{3}){\hskip 28.452756pt}\alpha_{2}\,=\,2\,(\lambda_{1}+2\lambda_{2}+3\lambda_{3})$$ (51) which are exactly the expressions found in the $1+1$ dimensional case (this is the reason for the choice of coefficients in $1+1$ dimensions). For a transformation of a two particle system we have the same general structure as in the $1+1$ dimensional case, with $$\left[{\bar{T}}^{L}_{q}(p)\right]_{0}\,=\,\frac{\eta_{1}^{L}}{M}\,q_{0}({\vec{p}}\cdot{\vec{\xi}})+\frac{\sigma_{1}^{L}}{M}\,p_{0}({\vec{q}}\cdot{\vec{\xi}})+\frac{\eta_{2}^{L}}{M}\,({\vec{p}}\wedge{\vec{q}})\cdot{\vec{\xi}}{\hskip 28.452756pt}\left[{\bar{T}}^{L}_{q}(p)\right]_{i}\,=\,\frac{\eta_{3}^{L}}{M}\,q_{i}({\vec{p}}\cdot{\vec{\xi}})+\frac{\sigma_{2}^{L}}{M}\,p_{i}({\vec{q}}\cdot{\vec{\xi}})+\frac{\eta_{4}^{L}}{M}\,q_{0}\epsilon_{ijk}p_{j}\xi_{k}+\frac{\sigma_{3}^{L}}{M}\,({\vec{p}}\cdot{\vec{q}})\xi_{i}+\frac{\sigma_{4}^{L}}{M}\,p_{0}\epsilon_{ijk}q_{j}\xi_{k}+\frac{\sigma_{5}^{L}}{M}\,p_{0}q_{0}\xi_{i}$$ (52) and similar expressions for ${\bar{T}}^{R}_{p}(q)$. The invariance of the dispersion relation when we replace $p$ by $T^{L}_{q}(p)$ requires that $C({\bar{T}}^{L}_{q}(p))=0$. This implies that $$\sigma_{1}^{L}\,=\,\sigma_{2}^{L}\,=\,0{\hskip 28.452756pt}\sigma_{3}^{L}\,=\,-\eta_{3}^{L}{\hskip 28.452756pt}\sigma_{4}^{L}\,=\,\eta_{2}^{L}{\hskip 28.452756pt}\sigma_{5}^{L}\,=\,\eta_{1}^{L}\,,$$ (53) so that we have $$\left[{\bar{T}}^{L}_{q}(p)\right]_{0}\,=\,\frac{\eta_{1}^{L}}{M}\,q_{0}({\vec{p}}\cdot{\vec{\xi}})+\frac{\eta_{2}^{L}}{M}\,({\vec{p}}\wedge{\vec{q}})\cdot{\vec{\xi}}{\hskip 28.452756pt}\left[{\bar{T}}^{L}_{q}(p)\right]_{i}\,=\,\frac{\eta_{1}^{L}}{M}\,p_{0}q_{0}\xi_{i}+\frac{\eta_{2}^{L}}{M}\,p_{0}\epsilon_{ijk}q_{j}\xi_{k}+\frac{\eta_{3}^{L}}{M}\,\left[q_{i}({\vec{p}}\cdot{\vec{\xi}})-({\vec{p}}\cdot{\vec{q}})\xi_{i}\right]+\frac{\eta_{4}^{L}}{M}\,q_{0}\epsilon_{ijk}p_{j}\xi_{k}$$ (54) (and a similar expression for ${\bar{T}}^{R}_{p}(q)$) replacing (15) in $1+1$ dimensions. 
We have three new coefficients $\eta^{L}_{2}$, $\eta^{L}_{3}$, $\eta^{L}_{4}$ that multiply terms vanishing identically in $1+1$ dimensions (the 1+1 dimensional limit can be taken by putting all the $2$ and $3$ components of $\vec{p}$, $\vec{q}$ and $\vec{\xi}$ to zero). But there is still another condition to have a relativistic kinematics. The transformation $p\to T^{L}_{q}(p)$ has to be consistent with the Lorentz algebra. This last condition implies that $$\eta_{3}^{L}\,=\,\eta_{1}^{L}{\hskip 28.452756pt}\eta_{4}^{L}\,=\,-\eta_{2}^{L% }\,,$$ (55) so that finally we have just two additional coefficients ($\eta_{2}^{L}$, $\eta_{2}^{R}$) in the $3+1$ boost transformations of a two particle system $$\displaystyle\left[{\bar{T}}^{L}_{q}(p)\right]_{0}$$ $$\displaystyle=$$ $$\displaystyle\frac{\eta_{1}^{L}}{M}\,q_{0}({\vec{p}}\cdot{\vec{\xi}})+\frac{% \eta_{2}^{L}}{M}\,({\vec{p}}\wedge{\vec{q}})\cdot{\vec{\xi}}{\hskip 28.452756% pt}\left[{\bar{T}}^{R}_{p}(q)\right]_{0}=\frac{\eta_{1}^{R}}{M}\,p_{0}({\vec{q% }}\cdot{\vec{\xi}})-\frac{\eta_{2}^{R}}{M}\,({\vec{p}}\wedge{\vec{q}})\cdot{% \vec{\xi}}$$ $$\displaystyle\left[{\bar{T}}^{L}_{q}(p)\right]_{i}$$ $$\displaystyle=$$ $$\displaystyle\frac{\eta_{1}^{L}}{M}\,p_{0}q_{0}\xi_{i}+\frac{\eta_{2}^{L}}{M}% \,\left[p_{0}\epsilon_{ijk}q_{j}\xi_{k}-q_{0}\epsilon_{ijk}p_{j}\xi_{k}\right]% +\frac{\eta_{1}^{L}}{M}\,\left[q_{i}({\vec{p}}\cdot{\vec{\xi}})-({\vec{p}}% \cdot{\vec{q}})\xi_{i}\right]$$ $$\displaystyle\left[{\bar{T}}^{R}_{p}(q)\right]_{i}$$ $$\displaystyle=$$ $$\displaystyle\frac{\eta_{1}^{R}}{M}\,p_{0}q_{0}\xi_{i}-\frac{\eta_{2}^{R}}{M}% \,\left[p_{0}\epsilon_{ijk}q_{j}\xi_{k}-q_{0}\epsilon_{ijk}p_{j}\xi_{k}\right]% +\frac{\eta_{1}^{R}}{M}\,\left[p_{i}({\vec{q}}\cdot{\vec{\xi}})-({\vec{p}}% \cdot{\vec{q}})\xi_{i}\right]\,.$$ (56) The last step in the implementation of the relativity principle is to enforce the invariance under boosts of the energy-momentum conservation in the decay of one particle into two $$T(p\oplus q)\,=\,T^{L}_{q}(p)\oplus T^{R}_{p}(q)\,.$$ (57) A straightforward algebra (just more tedious than in the $1+1$ dimensional case) allows to determine the five dimensionless coefficients $\beta_{1}$, $\beta_{2}$, $\gamma_{1}$, $\gamma_{2}$, $\gamma_{3}$ of the $3+1$ dimensional composition laws (4) in terms of the parameters $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$, $\eta_{1}^{L}$, $\eta_{1}^{R}$, $\eta_{2}^{L}$, $\eta_{2}^{R}$ of the boost transformations (50), (56). The results are $$\displaystyle\beta_{1}$$ $$\displaystyle=2\,(\lambda_{1}+\lambda_{2}+2\lambda_{3})$$ $$\displaystyle\beta_{2}$$ $$\displaystyle=-2\lambda_{3}-\eta_{1}^{L}-\eta_{1}^{R}$$ $$\displaystyle\gamma_{1}$$ $$\displaystyle=\lambda_{1}+2\lambda_{2}+2\lambda_{3}-\eta_{1}^{L}$$ $$\displaystyle\gamma_{2}$$ $$\displaystyle=\lambda_{1}+2\lambda_{2}+2\lambda_{3}-\eta_{1}^{R}$$ $$\displaystyle\gamma_{3}=\eta_{2}^{L}-\eta_{2}^{R}$$ (58) which are the same results obtained in the $1+1$ dimensional case for $\beta_{1}$, $\beta_{2}$, $\gamma_{1}$, $\gamma_{2}$ in terms of $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$, $\eta_{1}^{L}$, $\eta_{1}^{R}$ and an additional expression for the coefficient $\gamma_{3}$ (which does not appear in $1+1$ dimensions) as a function of the new ($3+1$ dimensional) parameters in the boost transformations $\eta_{2}^{L}$, $\eta_{2}^{R}$. 
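Since the relations for the coefficients shared with the $1+1$ dimensional case are identical, they can be cross-checked in the $1+1$ dimensional truncation with a short symbolic computation. A minimal sketch, assuming the Python library SymPy (the names are ours), reproduces Eq. (9) from the invariance condition (8), Eq. (17) from the condition (16), and then the golden rule (20):

```python
import sympy as sp

p0, p1, q0, q1, xi, eps = sp.symbols('p0 p1 q0 q1 xi eps')   # eps = 1/M
l1, l2, l3 = sp.symbols('lambda1 lambda2 lambda3')
eL, eR = sp.symbols('eta1L eta1R')
a1, a2, b1, b2, g1, g2 = sp.symbols('alpha1 alpha2 beta1 beta2 gamma1 gamma2')

def C(P):            # dispersion function, Eq. (3)
    return P[0]**2 - P[1]**2 + a1*eps*P[0]**3 + a2*eps*P[0]*P[1]**2

def T(P):            # one-particle boost, Eq. (7)
    return (P[0] + P[1]*xi + l1*eps*P[0]*P[1]*xi,
            P[1] + P[0]*xi + l2*eps*P[0]**2*xi + (l1 + 2*l2 + 3*l3)*eps*P[1]**2*xi)

def oplus(P, Q):     # composition law, Eq. (1)
    return (P[0] + Q[0] + b1*eps*P[0]*Q[0] + b2*eps*P[1]*Q[1],
            P[1] + Q[1] + g1*eps*P[0]*Q[1] + g2*eps*P[1]*Q[0])

def TL(P, Q):        # T^L_Q(P), Eqs. (10) and (15)
    t = T(P)
    return (t[0] + eL*eps*Q[0]*P[1]*xi, t[1] + eL*eps*Q[0]*P[0]*xi)

def TR(Q, P):        # T^R_P(Q), Eqs. (10) and (15)
    t = T(Q)
    return (t[0] + eR*eps*P[0]*Q[1]*xi, t[1] + eR*eps*P[0]*Q[0]*xi)

def order_xi_eps(expr):
    # coefficient of the terms linear in the boost parameter and linear in 1/M
    return sp.expand(expr).coeff(xi, 1).coeff(eps, 1)

p, q = (p0, p1), (q0, q1)

# Invariance of the dispersion relation, Eq. (8), reproduces Eq. (9):
eq8 = sp.Poly(order_xi_eps(C(T(p)) - C(p)), p0, p1).coeffs()
sol9 = sp.solve(eq8, [a1, a2])
print(sol9)    # alpha1 = -2(l1 + l2 + 2 l3),  alpha2 = 2(l1 + 2 l2 + 3 l3)

# Invariance of energy-momentum conservation, Eq. (16), reproduces Eq. (17):
lhs, rhs = T(oplus(p, q)), oplus(TL(p, q), TR(q, p))
eq16 = []
for k in range(2):
    eq16 += sp.Poly(order_xi_eps(lhs[k] - rhs[k]), p0, p1, q0, q1).coeffs()
sol17 = sp.solve(eq16, [b1, b2, g1, g2])
print(sol17)   # beta1, beta2, gamma1, gamma2 as in Eq. (17)

# The golden rule, Eq. (20), then follows from Eqs. (9) and (17):
golden = a1 + a2 + b1 + b2 - g1 - g2
print(sp.simplify(golden.subs(sol9).subs(sol17)))   # 0
```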
Since the expressions for the coefficients $\alpha_{1}$, $\alpha_{2}$ in the dispersion relation as a function of the parameters $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$ in the one particle boost transformations are also the same as in $1+1$ dimensions one concludes that the derivation of the modified dispersion relation from a modified conservation law in $1+1$ dimensions is also valid in $3+1$ dimensions without any change. The only additional ingredient when going to $3+1$ dimensions is that there is a new additional source of noncommutativity in the momentum composition law ($\gamma_{3}\neq 0$) together with the one already present in $1+1$ dimensions ($\gamma_{1}-\gamma_{2}\neq 0$). As in the case of $1+1$ dimensions one sees that a noncommutative momentum composition law requires a nontrivial boost transformation of the two particle system (${\bar{T}}^{L}_{q}(p)\neq 0$ or ${\bar{T}}^{R}_{p}(q)\neq 0$). In this case, there is a two-parameter family of transformation laws for the one particle and two particle systems which are compatible with a given composition law in a relativistic theory. IV Generalized kinematics versus SR kinematics Lacking a complete consistent dynamical framework incorporating the previous discussion of the generalized relativistic kinematics, a question arises whether the generalized kinematics reduces to SR in a nontrivial (e.g. nonlinear) choice of energy-momentum variables. In fact it is not clear whether one has a freedom for such a choice of variables or if the variables used in the modified expression of the composition law have a physical dynamical content. Letting aside this crucial question one can formally analyze how the composition law of momenta varies when one makes a general change of energy-momentum variables at order $1/M$. In $1+1$ dimensions one can introduce new variables $P_{0}$, $P_{1}$ through $$p_{0}\,=\,P_{0}+\frac{\delta_{1}}{M}P_{0}^{2}+\frac{\delta_{2}}{M}P_{1}^{2}{% \hskip 28.452756pt}p_{1}\,=\,P_{1}+\frac{\delta_{3}}{M}P_{0}P_{1}\,.$$ (59) At the same time, the previous change of variables allows us to give an interpretation to the symbols $\left[P\oplus Q\right]_{0}$ and $\left[P\oplus Q\right]_{1}$. Since $(p\oplus q)$ is also a momentum, applying Eq. (59) to it gives $$\displaystyle\left[p\oplus q\right]_{0}$$ $$\displaystyle=$$ $$\displaystyle\left[P\oplus Q\right]_{0}+\frac{\delta_{1}}{M}\left[P\oplus Q% \right]_{0}^{2}+\frac{\delta_{2}}{M}\left[P\oplus Q\right]_{1}^{2}\,,$$ $$\displaystyle\left[p\oplus q\right]_{1}$$ $$\displaystyle=$$ $$\displaystyle\left[P\oplus Q\right]_{1}+\frac{\delta_{3}}{M}\left[P\oplus Q% \right]_{0}\left[P\oplus Q\right]_{1}\,.$$ (60) Then the composition law (1), when rewritten in terms of the new variables, takes the form $$\displaystyle\left[P\oplus Q\right]_{0}$$ $$\displaystyle=$$ $$\displaystyle P_{0}+Q_{0}+\frac{(\beta_{1}-2\delta_{1})}{M}P_{0}Q_{0}+\frac{(% \beta_{2}-2\delta_{2})}{M}P_{1}Q_{1}$$ $$\displaystyle\left[P\oplus Q\right]_{1}$$ $$\displaystyle=$$ $$\displaystyle P_{1}+Q_{1}+\frac{(\gamma_{1}-\delta_{3})}{M}P_{0}Q_{1}+\frac{(% \gamma_{2}-\delta_{3})}{M}P_{1}Q_{0}\,.$$ (61) In the case of a commutative momentum composition law ($\gamma_{1}=\gamma_{2}$) it is possible to choose new variables by $$\delta_{1}\,=\,\beta_{1}/2{\hskip 28.452756pt}\delta_{2}\,=\,\beta_{2}/2{% \hskip 28.452756pt}\delta_{3}\,=\,\gamma_{1}\,=\,\gamma_{2}$$ (62) so that the composition law of momenta is additive in the new variables. For the dispersion relation (3), Eq. 
(59) gives $$\mu^{2}=P_{0}^{2}-P_{1}^{2}+\frac{2\delta_{1}+\alpha_{1}}{M}P_{0}^{3}+\frac{2(% \delta_{2}-\delta_{3})+\alpha_{2}}{M}P_{0}P_{1}^{2}\,.$$ (63) For the new variables with an additive composition law, relations (62) allow us to write $$\mu^{2}=P_{0}^{2}-P_{1}^{2}+\frac{\beta_{1}+\alpha_{1}}{M}P_{0}^{3}+\frac{% \beta_{2}-\gamma_{1}-\gamma_{2}+\alpha_{2}}{M}P_{0}P_{1}^{2}\,,$$ (64) and then the dispersion relation takes the standard unmodified SR form as a consequence of the relativity principle Eq. (19). Therefore, for a commutative momentum composition law, it is possible to change variables to an “auxiliary energy-momentum” which behaves just as the SR energy-momentum. In this case, the possibility of a generalization of SR kinematics is based on the physical dynamical meaning of the choice of energy-momentum variables. However, if one has a noncommutative momentum composition ($\gamma_{1}\neq\gamma_{2}$), then the physical content of the generalized kinematics is manifest. The previous results can be extended to the $3+1$ dimensional case. We have $$p_{0}\,=\,P_{0}+\frac{\delta_{1}}{M}P_{0}^{2}+\frac{\delta_{2}}{M}{\vec{P}}^{% \,2}{\hskip 28.452756pt}p_{i}\,=\,P_{i}+\frac{\delta_{3}}{M}P_{0}P_{i}$$ (65) and $$\displaystyle\left[P\oplus Q\right]_{0}$$ $$\displaystyle=$$ $$\displaystyle P_{0}+Q_{0}+\frac{(\beta_{1}-2\delta_{1})}{M}P_{0}Q_{0}+\frac{(% \beta_{2}-2\delta_{2})}{M}({\vec{P}}\cdot{\vec{Q}})$$ $$\displaystyle\left[P\oplus Q\right]_{i}$$ $$\displaystyle=$$ $$\displaystyle P_{i}+Q_{i}+\frac{(\gamma_{1}-\delta_{3})}{M}P_{0}Q_{i}+\frac{(% \gamma_{2}-\delta_{3})}{M}P_{i}Q_{0}+\frac{\gamma_{3}}{M}\epsilon_{ijk}P_{j}Q_% {k}\,,$$ $$\displaystyle\mu^{2}$$ $$\displaystyle=$$ $$\displaystyle P_{0}^{2}-|\vec{P}|^{2}+\frac{2\delta_{1}+\alpha_{1}}{M}P_{0}^{3% }+\frac{2(\delta_{2}-\delta_{3})+\alpha_{2}}{M}P_{0}|\vec{P}|^{2}\,.$$ (66) In the case of a commutative composition of momenta ($\gamma_{1}-\gamma_{2}=\gamma_{3}=0$) the same choice of new variables as in $1+1$ dimensions (62), together with the relativity principle (19), leads to unmodified composition laws and dispersion relations so that the physical content of the generalization of the kinematics rests on the noncommutativity of the momentum composition or on the dynamical content of the choice of energy-momentum variables. The message of this section can be rephrased in the language of the geometry of momentum space as introduced in Refs. relativelocality : we have basically asked when a composition law can be described by a flat connection on a Minkowski momentum space, but in a generalized coordinate set. The generality of the answer is limited by the choice we have made throughout this paper to work at first order in $M^{-1}$. At this order one is not sensitive to the curvature of momentum space (which can enter only at order $M^{-2}$). There is still space for a torsion and nonmetricity of momentum space, which are first-order effects. Our result was, unsurprisingly, that our momentum space is Minkowski if the torsion is zero (in fact the torsion measures the noncommutativity of the composition law and $\gamma_{1}-\gamma_{2}=\gamma_{3}=0$ imply that it is zero). Since in this case the relativity principle (19) sends the dispersion relation in Eq. (66) to $\mu^{2}=P_{0}^{2}-|\vec{P}|^{2}$, we conclude that the case of a commutative composition law is compatible with a geometry of momentum space which is Minkowski with the metric connection. 
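The mechanism behind Eqs. (61)-(64) can also be checked mechanically: with the choice (62) the composition law becomes additive in the new variables, and the relativity-principle relations (19) then remove the deformation of the dispersion relation. A minimal sketch for the commutative $1+1$ dimensional case ($\gamma_{1}=\gamma_{2}=\gamma$), assuming the Python library SymPy (the names are ours), is:

```python
import sympy as sp

P0, P1, Q0, Q1, eps = sp.symbols('P0 P1 Q0 Q1 eps')          # eps = 1/M
a1, a2, b1, b2, g = sp.symbols('alpha1 alpha2 beta1 beta2 gamma')
d1, d2, d3 = sp.symbols('delta1 delta2 delta3')

def to_old(P):
    # change of variables of Eq. (59): new variables (P0, P1) -> old variables (p0, p1)
    return (P[0] + d1*eps*P[0]**2 + d2*eps*P[1]**2,
            P[1] + d3*eps*P[0]*P[1])

def oplus(P, Q):
    # commutative composition law, Eq. (1) with gamma1 = gamma2 = gamma
    return (P[0] + Q[0] + b1*eps*P[0]*Q[0] + b2*eps*P[1]*Q[1],
            P[1] + Q[1] + g*eps*(P[0]*Q[1] + P[1]*Q[0]))

def first_order(e):
    # truncate an expression at first order in eps = 1/M
    return sum(t for t in sp.expand(e).as_ordered_terms() if sp.degree(t, eps) <= 1)

choice = {d1: b1/2, d2: b2/2, d3: g}                          # the choice of Eq. (62)

# 1) With (62) the composition law is additive in the new variables, cf. Eq. (61):
old = oplus(to_old((P0, P1)), to_old((Q0, Q1)))
additive = to_old((P0 + Q0, P1 + Q1))
print([sp.simplify(first_order(old[k] - additive[k]).subs(choice)) for k in range(2)])  # [0, 0]

# 2) With (62) and the relations (19), alpha1 = -beta1 and alpha2 = 2*gamma - beta2,
#    the dispersion relation (63) reduces to the undeformed SR form of Eq. (64):
p = to_old((P0, P1))
C = p[0]**2 - p[1]**2 + a1*eps*p[0]**3 + a2*eps*p[0]*p[1]**2
C = first_order(C).subs(choice).subs({a1: -b1, a2: 2*g - b2})
print(sp.simplify(C - (P0**2 - P1**2)))                       # 0
```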
V Concluding remarks In this work we have studied the constraints that the relativity principle imposes on the dispersion relation and the composition law in a (rotational invariant) theory beyond SR. In particular, we have shown that the composition law fixes completely the dispersion relation but not the transformation laws, so that in $3+1$ dimensions there is a two-parameter family of transformation laws compatible with a given composition law. On the other hand, giving the transformation laws for the one particle and two particle systems fixes everything. It is easy to see that, at the order $1/M$ for which we have derived these results, there is no further freedom in the definition of the transformation laws and the composition laws for a system of more than two particles. The restrictions $$p\oplus q\oplus r|_{r=0}=p\oplus q\quad\quad p\oplus q\oplus r|_{q=0}=p\oplus r\quad\quad p\oplus q\oplus r|_{p=0}=q\oplus r$$ (67) fix completely the composition law of three momenta at order $1/M$ in terms of the composition law of two momenta: $$[p\oplus q\oplus r]_{0}\,=\,p_{0}+q_{0}+r_{0}+\frac{\beta_{1}}{M}(p_{0}q_{0}+p_{0}r_{0}+q_{0}r_{0})+\frac{\beta_{2}}{M}(p_{1}q_{1}+p_{1}r_{1}+q_{1}r_{1}){\hskip 28.452756pt}[p\oplus q\oplus r]_{1}\,=\,p_{1}+q_{1}+r_{1}+\frac{\gamma_{1}}{M}(p_{0}q_{1}+p_{0}r_{1}+q_{0}r_{1})+\frac{\gamma_{2}}{M}(p_{1}q_{0}+p_{1}r_{0}+q_{1}r_{0})$$ (68) and they also fix the implementation of the boost transformation in the three particle system in terms of the boost transformation of the two particle system $$\{p,q,r\}\to\{T_{q,r}^{(1)}(p),T_{p,r}^{(2)}(q),T_{p,q}^{(3)}(r)\}$$ (69) with $$T_{q,r}^{(1)}(p)=T(p)+\bar{T}_{q,r}^{(1)}(p)\quad\quad T_{p,r}^{(2)}(q)=T(q)+\bar{T}_{p,r}^{(2)}(q)\quad\quad T_{p,q}^{(3)}(r)=T(r)+\bar{T}_{p,q}^{(3)}(r)$$ (70) and $$\bar{T}_{q,r}^{(1)}(p)=\bar{T}_{q}^{L}(p)+\bar{T}_{r}^{L}(p)\quad\quad\bar{T}_{p,r}^{(2)}(q)=\bar{T}_{p}^{R}(q)+\bar{T}_{r}^{L}(q)\quad\quad\bar{T}_{p,q}^{(3)}(r)=\bar{T}_{p}^{R}(r)+\bar{T}_{q}^{R}(r).$$ (71) This result obviously extends to the $3+1$ dimensional case. In terms of the geometric interpretation of the “relative locality” proposal relativelocality, the above results are a consequence of the fact that all the novelty that can enter in three-particle vertices is contained in the possibility of a nonassociativity of the composition law, which encodes the curvature of the connection (its Riemann tensor). But at first order in $1/M$ there is no distinction between a flat and a curved connection, and therefore all the properties of three- (or several-) particle systems are encoded by those of the two-particle system. One can also try to generalize the discussion of the $1/M$ relativistic kinematics to the next order in the power expansion in $1/M$. The number of coefficients for the composition law, dispersion relation and transformation laws gets much higher, complicating the algebra, but there is no obstruction to the analysis. In this case one has a composition law for two particles and a new composition law for three particles which is no longer fixed by the two particle law. In the geometric picture one would say that the (unspecified) components of the curvature enter the composition law. It could be interesting to establish the correspondence between the present framework for relativistic kinematics based on a modification of the momentum composition law and a framework based on symmetry algebras generalizing the Poincaré symmetry algebra of SR. 
The derivation of boost transformations of multiparticle systems and modified energy-momentum conservation laws from a $\kappa$-Poincaré Hopf algebra presented in Ref. Gubitosi:2011ej should correspond to a particular case of the framework presented in this paper. A modification of the momentum composition (and conservation) law, requires a new implementation of translational symmetry and a generalization of the SR notion of spacetime. It is an open question how a physically meaningful spacetime should be introduced in a generalized relativistic kinematics. There are even doubts AmelinoCamelia:2012qw on the possible redundancy of such a notion. Finally, in the paper we gave some hints on the interpretation of some of our results in terms of the geometry of momentum space. It would be very interesting to translate all the consistency conditions we found for a dispersion relation and a composition law of momenta to be compatible with the relativity principle into conditions on the geometry of momentum space. The first task would be to calculate the relationship that holds, at first order in $1/M$, between our coefficients $\alpha,\beta,\gamma$ and the components of all the geometric tensors that define momentum space: the metric, the torsion and the nonmetricity (as we already remarked, the curvature is unspecified at that order). Then one would need to “mod out” by diffeomorphisms, or general coordinate transformations. Hopefully one should be able to end up with a simple characterization of the most generic geometry of momentum space that is compatible with the relativity principle. This issue as well as the study of new examples of generalized kinematics constructed from the general framework presented in this work is presently under investigation. Acknowledgments This work is supported by CICYT (grant FPA2009-09638) and DGIID-DGA (grant 2011-E24/2). References (1) D. Amati, M. Ciafaloni and G. Veneziano, Phys. Lett.  B 216, 41 (1989); V. A. Kostelecky and S. Samuel, Phys. Rev. D 39, 683 (1989); L. J. Garay, Int. J. Mod. Phys.  A 10, 145 (1995) [arXiv:gr-qc/9403008]; G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and S. Sarkar, Nature 393, 763 (1998) [arXiv:astro-ph/9712103]; R. Gambini and J. Pullin, Phys. Rev.  D 59, 124021 (1999) [arXiv:gr-qc/9809038]; N. Seiberg and E. Witten, JHEP 9909, 032 (1999) [arXiv:hep-th/9908142]; J. Alfaro, H. A. Morales-Tecotl and L. F. Urrutia, Phys. Rev. Lett.  84, 2318 (2000) [arXiv:gr-qc/9909079]; T. Yoneya, Prog. Theor. Phys.  103, 1081 (2000) [arXiv:hep-th/0004074]; G. Amelino-Camelia and T. Piran, Phys. Rev.  D 64, 036005 (2001) [arXiv:astro-ph/0008107]; T. Jacobson, S. Liberati and D. Mattingly, Phys. Rev.  D 66, 081302 (2002) [arXiv:hep-ph/0112207]; J. Collins, A. Perez, D. Sudarsky, L. Urrutia and H. Vucetich, Phys. Rev. Lett. 93, 191301 (2004) [arXiv:gr-qc/0403053]; T. Jacobson, S. Liberati and D. Mattingly, Annals Phys.  321, 150 (2006) [arXiv:astro-ph/0505267]; G. Amelino-Camelia, arXiv:0806.0339 [gr-qc]; A. Hagar, Studies in History and Philosophy of Modern Physics 40, 259 (2009); P. Horava, Phys. Rev.  D 79, 084008 (2009) [arXiv:0901.3775 [hep-th]]. (2) D. Mattingly, Living Rev. Rel.  8, 5 (2005) [gr-qc/0502097]. (3) S. R. Coleman and S. L. Glashow, Phys. Rev. D 59, 116008 (1999) [hep-ph/9812418]. (4) D. Colladay and V. A. Kostelecky, Phys. Rev. D 58, 116002 (1998) [hep-ph/9809521]. (5) See V. A. Kostelecky and N. Russell, Rev. Mod. Phys.  83 (2011) 11 [arXiv:0801.0287 [hep-ph]] and references therein. (6) G. 
Amelino-Camelia, Int. J. Mod. Phys.  D 11, 35 (2002) [arXiv:gr-qc/0012051]; Phys. Lett.  B 510, 255 (2001) [arXiv:hep-th/0012238]; Int. J. Mod. Phys.  D 11, 1643 (2002) [arXiv:gr-qc/0210063]; J. Magueijo and L. Smolin, Phys. Rev. Lett. 88, 190403 (2002) [arXiv:hep-th/0112090]. (7) G. Amelino-Camelia, D. Benedetti, F. D’Andrea and A. Procaccini, Class. Quant. Grav.  20, 5353 (2003) [arXiv:hep-th/0201245]. (8) J. Kowalski-Glikman and S. Nowak, Int. J. Mod. Phys.  D 12, 299 (2003) [arXiv:hep-th/0204245]. (9) S. Hossenfelder, Phys. Rev. Lett.  104, 140402 (2010) [arXiv:1004.0418 [hep-ph]]; L. Smolin, Gen. Rel. Grav.  43, 3671 (2011) [arXiv:1004.0664 [gr-qc]]; U. Jacob, F. Mercati, G. Amelino-Camelia and T. Piran, Phys. Rev. D 82, 084021 (2010) [arXiv:1004.0575 [astro-ph.HE]]. S. Hossenfelder, arXiv:1005.0535 [gr-qc]; G. Amelino-Camelia, M. Matassa, F. Mercati and G. Rosati, Phys. Rev. Lett.  106, 071301 (2011) [arXiv:1006.2126 [gr-qc]]; S. Hossenfelder, arXiv:1006.4587 [gr-qc]. (10) G. Amelino-Camelia, L. Freidel, J. Kowalski-Glikman and L. Smolin, Phys. Rev. D 84, 084010 (2011) [arXiv:1101.0931 [hep-th]]; G. Amelino-Camelia, L. Freidel, J. Kowalski-Glikman and L. Smolin, Gen. Rel. Grav.  43, 2547 (2011) [Int. J. Mod. Phys. D 20, 2867 (2011)] [arXiv:1106.0313 [hep-th]]. (11) J. M. Carmona, J. L. Cortes, D. Mazon and F. Mercati, Phys. Rev. D 84, 085010 (2011) [arXiv:1107.0939 [hep-th]]. (12) S. Judes and M. Visser, Phys. Rev. D 68, 045001 (2003) [gr-qc/0205067]. (13) G. Amelino-Camelia, Phys. Rev. D 85, 084034 (2012) [arXiv:1110.5081 [hep-th]]. (14) G. Amelino-Camelia, Symmetry 2, 230 (2010) [arXiv:1003.3942 [gr-qc]]. (15) J. M. Carmona, J. L. Cortes and D. Mazon, Phys. Rev. D 82, 085012 (2010) [arXiv:1007.3190 [gr-qc]]. (16) N. R. Bruno, G. Amelino-Camelia and J. Kowalski-Glikman, Phys. Lett. B 522, 133 (2001) [hep-th/0107039]. (17) G. Gubitosi and F. Mercati, [arXiv:1106.5710 [gr-qc]]. (18) G. Amelino-Camelia, [arXiv:1205.1636 [gr-qc]].
OUTER ACTIONS OF A DISCRETE AMENABLE GROUP ON APPROXIMATELY FINITE DIMENSIONAL FACTORS I, General Theory Yoshikazu Katayama and Masamichi Takesaki Department of Mathematics, Osaka Ky$\hat{\text{o}}$iku University Osaka, Japan Department of Mathematics, University of California, Los Angeles, California 90095-1555 To each factor ${\eusm M}$, we associate an invariant ${{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M})$, to be called the intrinsic modular obstruction, as a cohomological invariant which lives in the “third” cohomology group: $$\displaystyle{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}({\text{\rm Out}}({\eusm M})\times{\mathbb{R}},{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},{\eusm U}({\eusm C})),{\eusm U}({\eusm C}))$$ where $\{{\eusm C},{\mathbb{R}},{\theta}\}$ is the flow of weights on ${\eusm M}$. If ${\alpha}$ is an outer action of a countable discrete group $G$ on ${\eusm M}$, then its modulus ${\text{\rm mod}}({\alpha})\in{\text{\rm Hom}}(G,{\text{\rm Aut}}_{\theta}({\eusm C}))$, $N={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))$ and the pull back $${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})={\alpha}^{*}({{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M}))\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,{\eusm U}({\eusm C}))$$ to be called the modular obstruction of ${\alpha}$, are invariants of the outer conjugacy class of the outer action ${\alpha}$. We prove that if the factor ${\eusm M}$ is approximately finite dimensional and $G$ is amenable, then these invariants uniquely determine the outer conjugacy class of ${\alpha}$, and that every invariant occurs as the invariant of an outer action ${\alpha}$ of $G$ on ${\eusm M}$. In the case that ${\eusm M}$ is a factor of type III${}_{\lambda}$, $0<{\lambda}\leq 1$, the modular obstruction group ${{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,{\eusm U}({\eusm C}))$ and the modular obstruction ${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})$ take simpler forms. These, together with examples, will be discussed in the forthcoming paper, [KtT2]. ††Support: This research is supported in part by NSF Grants DMS-9801324 and DMS-0100883, and also by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research (C), 14540206, 2002. §0. Introduction With the successful completion of the cocycle conjugacy classification of amenable discrete group actions on AFD factors by many hands over more than two decades, [C3, J1, JT, O, ST1, ST2, KwST, KtST1], it is only natural to consider the outer conjugacy classification of amenable discrete group outer actions on AFD factors. In fact, the work on this program has already been started by the pioneering works of Connes, [Cnn 3, 4, 6], Jones [J1] and Ocneanu [Ocn]. In this article, we complete the outer conjugacy classification of discrete amenable group outer actions on AFD factors. The cases of type I, II${}_{1}$ and II${}_{\infty}$ (the latter with an additional technical assumption) were already completed by Jones, [J1], and Ocneanu, [Ocn], so the case of type III will be mainly considered, although the technical assumption placed in the work of Ocneanu [Ocn] on the case of type II${}_{\infty}$ must also be removed. As in the case of the cocycle conjugacy classification, we first associate invariants which are intrinsic to any factor ${\eusm M}$: the flow of weights, the modulus, the characteristic square and the modular obstruction ${{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M})$. 
Then the outer conjugacy invariants are given by the pull back of these intrinsic quantities of the factor by the outer action. To be more precise, let ${\eusm M}$ be a separable factor. Associated with ${\eusm M}$ is the characteristic square: $${\begin{CD}111\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\mathbb{T}}@>{}>{}>{\eusm U}({\eusm C})@>{\partial}>{}>{\text{\rm B}% }_{\theta}^{1}@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\eusm U}({\eusm M})@>{}>{}>{\widetilde{{\eusm U}}}({\eusm M})@>{% \widetilde{\partial}}>{}>{\text{\rm Z}}_{\theta}^{1}@>{}>{}>1\\ @V{}V{{\text{\rm Ad}}}V@V{}V{\widetilde{\text{\rm Ad}}}V@V{}V{}V\\ 1@>{}>{}>{\text{\rm Int}}({\eusm M})@>{}>{}>{{\text{\rm Cnt}}_{\text{\rm r}}}(% {\eusm M})@>{\dot{\partial}}>{}>{\text{\rm H}}_{\theta}^{1}@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 111\end{CD}}$$ which is equivariant under ${\text{\rm Aut}}({\eusm M})\times{\mathbb{R}}$. The middle vertical exact sequence is the source of the intrinsic invariant: $$\Theta({\eusm M})\in{\Lambda}_{{\text{\rm mod}}\times{\theta}}({\text{\rm Aut}% }({\eusm M})\times{\mathbb{R}},{{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}),{% \eusm U}({\eusm C})).$$ To avoid heavy notations and to see the essential mechanism governing the above exact characteristic square, let us consider the situation that a group $H$ equipped with a distinguished pair of normal subgroups $M\subset L\subset H$ which acts on the ergodic flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$, i.e., the action ${\alpha}$ of $H$ on ${\eusm C}$ commutes with the flow ${\theta}$. Assume that the normal subgroup $L$ does not act on ${\eusm C}$, i.e., $L\subset{\text{\rm Ker}}({\alpha})$, so that the action ${\alpha}$ factors through the both quotient groups $Q=H/L$ and $G=H/M$. In the case that $H={\text{\rm Aut}}({\eusm M})$, the groups $L$ and $M$ stand for ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$ and ${\text{\rm Int}}({\eusm M})$, therefore $Q={{\text{\rm Out}}_{\tau,{\theta}}}({\eusm M})$ and $G={\text{\rm Out}}({\eusm M})$. Let ${\widetilde{H}},{\widetilde{G}}$ and ${\widetilde{Q}}$ denote respectively the product groups $H\times{\mathbb{R}},G\times{\mathbb{R}}$ and $Q\times{\mathbb{R}}$. We denote the unitary group ${\eusm U}({\eusm C})$ simply by $A$ for the simplicity. In the case that $H={\text{\rm Aut}}({\eusm M})$, then we require appropriate Borelness for mappings. But $Q$ can fail to have a reasonable Borel structure, so we treat $Q$ as a discrete group. On the product group ${\widetilde{Q}}=Q\times{\mathbb{R}}$, we consider the product Borel structure as well as the product topology. In this circumstance, we will see that each characteristic cocycle $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$ gives rise to an ${\widetilde{H}}$-equivariant exact square: $$\eightpoint\begin{CD}111\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\mathbb{T}}@>{}>{}>A@>{{\partial}}>{}>{\text{\rm B}}@>{}>{}>1\\ @V{}V{}V@V{i}V{}V@V{}V{}V\\ 1@>{}>{}>U=E^{\theta}@>{}>{}>E@>{{\partial}_{\theta}}>{}>{\text{\rm Z}}@>{}>{}% >1\\ @V{}V{}V@V{j}V{}V@V{}V{}V\\ 1@>{}>{}>K@>{}>{}>L@>{\dot{\partial}}>{}>{\text{\rm H}}@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 111\\ \end{CD}$$ with $E=A\times_{\mu}L$. The subgroup $K$ of $L$ is normal in $H$ and depends on the characteristic invariant $\chi=[{\lambda},\mu]\in{\Lambda}_{\alpha}({\widetilde{H}},L,A)$. We denote it by $K(\chi)$ or $K({\lambda},\mu)$ to indicate the dependence of $K$ on $\chi$ or $({\lambda},\mu)$. 
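As a finite toy illustration of the twisted product $E=A\times_{\mu}L$ (a drastically simplified analogue, not the von Neumann algebraic construction used in this paper), one can build the extension of $L={\mathbb{Z}}/2$ by $A={\mathbb{Z}}/2$ determined by the nontrivial $2$-cocycle $\mu(l,l^{\prime})=l\,l^{\prime}$, with $L$ acting trivially on $A$. A minimal sketch, assuming Python (all names are ours), is:

```python
from itertools import product

# Toy model (ours): A = Z/2 and L = Z/2, both written additively, L acting trivially on A,
# mu(l, l') = l*l' mod 2 a normalized 2-cocycle on L with values in A.
def mu(l, lp):
    return (l * lp) % 2

def mult(x, y):
    # twisted product on E = A x_mu L: (a, l)(a', l') = (a + a' + mu(l, l'), l + l')
    (a, l), (ap, lp) = x, y
    return ((a + ap + mu(l, lp)) % 2, (l + lp) % 2)

E = list(product(range(2), range(2)))

# The 2-cocycle identity for mu is exactly associativity of the twisted product.
assert all(mult(mult(x, y), z) == mult(x, mult(y, z)) for x in E for y in E for z in E)

# The element (0, 1) has order four, so E is cyclic of order 4 and the extension is non-split.
x, powers = (0, 1), [(0, 1)]
while powers[-1] != (0, 0):
    powers.append(mult(powers[-1], x))
print(len(powers))   # 4
```

The associativity assertion is exactly the $2$-cocycle condition for $\mu$, and the element of order four shows that this particular extension does not split.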
We then define a subgroup ${\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)$ of ${\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$ to be the subgroup consisting of those $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$ such that $M\subset K({\lambda},\mu)$ and ${\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$ to be $${\Lambda}_{\alpha}({\widetilde{H}},L,M,A)=\{\chi\in{\Lambda}_{\alpha}({% \widetilde{H}},L,A):M\subset K(\chi)\}.$$ In order to study the outer conjugacy class of an outer action ${\alpha}$ of $G$ on a factor ${\eusm M}$, we need to fix a cross-section ${\mathfrak{s}}:Q\mapsto G$ of the quotient map $\pi:G\mapsto Q$ with kernel $N=L/M={\text{\rm Ker}}(\pi)$ and also to restrict the group of $A$-valued 3-cocycles on ${\widetilde{Q}}$ to the group ${{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ of standard cocycles and a smaller coboundary group: $${{\text{\rm B}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)={\partial}_{% \widetilde{Q}}({{\text{\rm B}}_{\alpha}^{3}}(Q,A))$$ and to form the quotient group: $${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)={{\text{\rm Z}% }_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)/{{\text{\rm B}}_{{\alpha},% \text{\rm s}}^{3}}({\widetilde{Q}},A).$$ The cross-section ${\mathfrak{s}}$ gives rise to a link between the group ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ and the group ${\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$ of equivariant homomorphisms which in turn allows us to define the fiber product: $${{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}% },N,A)={{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{% \mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}).$$ We then show that this group falls in the modified Huesbshmann - Jones - Ratcliffe exact sequence which sits next to the Huebshmann - Jones - Ratcliffe exact sequence: $$\begin{CD}11\\ @V{}V{}V@V{}V{}V\\ {\text{\rm H}}^{1}(Q,{\mathbb{T}})@>{{\text{\rm inf}}}>{}>{\text{\rm H}}^{1}(G% ,{\mathbb{T}})\\ @V{{\text{\rm inf}}}V{}V@V{{\text{\rm inf}}}V{}V\\ {\text{\rm Hom}}(H,{\mathbb{T}})@={\text{\rm Hom}}(H,{\mathbb{T}})\\ @V{{\text{\rm res}}}V{}V@V{{\text{\rm res}}}V{}V\\ \end{CD}$$ $$\begin{CD}{\text{\rm Hom}}_{H}(L,{\mathbb{T}})@={\text{\rm Hom}}_{H}(L,{% \mathbb{T}})\\ @V{{\partial}}V{}V@V{{\partial}}V{}V\\ {\text{\rm H}}^{2}(Q,{\mathbb{T}})@>{{\text{\rm inf}}}>{}>{\text{\rm H}}^{2}(G% ,{\mathbb{T}})\\ @V{{\text{\rm inf}}}V{}V@V{{\text{\rm inf}}}V{}V\\ {\text{\rm H}}^{2}(H,{\mathbb{T}})@={\text{\rm H}}^{2}(H,{\mathbb{T}})\\ @V{{\text{\rm Res}}}V{}V@V{{\text{\rm res}}}V{}V\\ {\Lambda}_{\alpha}({\widetilde{H}},L,M,A)@>{{\text{\rm res}}}>{}>{\Lambda}(H,M% ,{\mathbb{T}})\\ @V{{\delta}}V{}V@V{{{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}}V{}V\\ {{\text{\rm H}}_{\alpha}^{\text{\rm out}}}(G,N,A)@>{{\partial}}>{}>{{\text{\rm H% }}^{3}}(G,{\mathbb{T}})\\ @V{{\text{\rm Inf}}}V{}V@V{{\text{\rm inf}}}V{}V\\ {\text{\rm H}}^{3}(H,{\mathbb{T}})@={\text{\rm H}}^{3}(H,{\mathbb{T}})\\ \end{CD}$$ An action ${\alpha}$ of $H$ on a factor ${\eusm M}$ with $M={\alpha}{{}^{-1}}({\text{\rm Int}}({\eusm M}))$ and $L={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))$ gives rise naturally to the modular characteristic invariant ${\chi_{\text{\rm m}}}({\alpha})\in{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$ and $${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})={\delta}({\chi_{\text{\rm m}}}({% \alpha}))\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G% 
\times{\mathbb{R}},N,A)={{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}).$$ The cohomology element ${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})$ will be called the modular obstruction of the outer action ${\alpha}$ of $G$. In the original setting, $H={\text{\rm Aut}}({\eusm M})$, the corresponding ${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})$ will be denoted by ${{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M})$ and called the intrinsic modular obstruction of ${\eusm M}$. In this article, we will prove the following outer conjugacy classification: Theorem i) If ${\alpha}$ is an outer action of a group $G$ on a factor ${\eusm M}$, then the pair of the modulus ${\text{\rm mod}}({\alpha})\in{\text{\rm Hom}}(G,{\text{\rm Aut}}_{\theta}({\eusm C}))$ of ${\alpha}$ and the modular obstruction: $${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A)$$ is an outer conjugacy invariant of ${\alpha}$, with $N={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))$. ii) If $G$ is a countable discrete amenable group and ${\eusm M}$ is an approximately finite dimensional factor, then the pair $({{\text{\rm Ob}}_{\text{\rm m}}}({\alpha}),{\text{\rm mod}}({\alpha}))$ is a complete invariant for the outer conjugacy class of ${\alpha}$. iii) With a countable discrete amenable group $G$ and an AFD factor ${\eusm M}$ fixed, every triplet occurs as the invariant of an outer action of $G$ on ${\eusm M}$. Contrary to the case of the cocycle conjugacy classification, the outer conjugacy classification of outer actions of a countable discrete amenable group on an AFD factor will be carried out by a unified approach, without splitting into cases based on the type of the base factor. Indeed, the theory is very much cohomological and therefore algebraic. Nevertheless, our classification does not fall within the traditional classification doctrine of Mackey; we will instead follow the strategy proposed in an earlier work of Katayama - Sutherland - Takesaki, [KtST1]. Namely, we first introduce a standard Borel structure on the space of outer actions of a countable discrete group $G$ on a separable factor ${\eusm M}$ and associate the invariants functorially in a Borel fashion. Most of the mathematical work of this article was carried out during the authors’ stay at the Department of Mathematics, University of Rome “La Sapienza”, in the spring of 2000, while the second named author stayed there for the entire academic year 1999/2000. We would like to record here our gratitude to Professor S. Doplicher and his colleagues in Rome for the hospitality extended to the authors. The first named author also would like to express his thanks to the National Science Foundation for supporting his visit to Rome from Osaka in Japan. §1. Preliminaries and Notations Let ${\eusm M}$ be a separable factor and $G$ a separable locally compact group. We mean by an outer action ${\alpha}$ of $G$ on ${\eusm M}$ a Borel map from $G$ into the group ${\text{\rm Aut}}({\eusm M})$ of automorphisms of ${\eusm M}$ such that $$\displaystyle{\alpha}_{g}\circ{\alpha}_{h}\equiv{\alpha}_{gh}\quad{\text{\rm mod}}\ {\text{\rm Int}}({\eusm M}),\quad g,h\in G,$$ (1.1) where ${\text{\rm Int}}({\eusm M})$ means the group of inner automorphisms. 
If in addition the following holds $$\displaystyle{\alpha}_{g}\not\equiv{\text{\rm id}}\quad{\text{\rm mod}}\ {\text{\rm Int}}({\eusm M})\quad\text{unless }g=1,$$ then the outer action ${\alpha}$ is called free. Remark. One should not confuse an outer action with a free action of $G$ on ${\eusm M}$. A free action of a discrete group $G$ is by definition a homomorphism ${\alpha}:g\in G\mapsto{\alpha}_{g}\in{\text{\rm Aut}}({\eusm M})$ such that ${\alpha}_{g}\notin{\text{\rm Int}}({\eusm M})$ for $g\neq 1$. There is no good definition for the freeness of an action ${\alpha}$ of a continuous group $G$. One might, however, take the triviality ${\eusm M}^{\prime}\cap{\eusm M}\rtimes_{\alpha}G={\mathbb{C}}$ of the relative commutant of the original factor ${\eusm M}$ in the crossed product as the definition of the freeness of ${\alpha}$; this triviality is an easy consequence of the freeness of ${\alpha}$ in the discrete case. Let $\{{\widetilde{{\eusm M}}},{\mathbb{R}},{\theta},\tau\}$ be the non-commutative flow of weights on ${\eusm M}$ in the sense of Falcone - Takesaki, [FT1], and $\{{\eusm C},{\mathbb{R}},{\theta}\}$ be the Connes - Takesaki flow of weights, [CT], so that ${\eusm C}$ is the center of ${\widetilde{{\eusm M}}}$ and the flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ is the restriction of the non-commutative flow of weights. The von Neumann algebra ${\widetilde{{\eusm M}}}$ is generated by ${\eusm M}$ together with the one parameter unitary groups $\{{\varphi}^{it}:{\varphi}\in{\mathfrak{W}}_{0}({\eusm M}),t\in{\mathbb{R}}\}$, where ${\mathfrak{W}}_{0}({\eusm M})$ means the set of all faithful semi-finite normal weights on ${\eusm M}$ and the $\{{\varphi}^{it}\}$’s are related by the Connes cocycle derivatives: $${\varphi}^{it}{\psi}^{-it}=(D{\varphi}:D{\psi})_{t},\quad{\varphi},{\psi}\in{\mathfrak{W}}_{0}({\eusm M}),\ t\in{\mathbb{R}}.$$ (1.2) The non-commutative flow ${\theta}$ then acts on ${\widetilde{{\eusm M}}}$ by $$\begin{aligned}&\displaystyle{\theta}_{s}(x)=x,\quad x\in{\eusm M};\\ &\displaystyle{\theta}_{s}({\varphi}^{it})=(e^{-s}{\varphi})^{it},\quad{\varphi}\in{\mathfrak{W}}_{0}({\eusm M}),\end{aligned}\qquad s,t\in{\mathbb{R}}.$$ (1.3) Associated with the non-commutative flow of weights is the extended unitary group ${\widetilde{{\eusm U}}}({\eusm M})=\{u\in{\eusm U}({\widetilde{{\eusm M}}}):u{\eusm M}u^{*}={\eusm M}\}$. Each $u\in{\widetilde{{\eusm U}}}({\eusm M})$ gives rise to an automorphism ${\widetilde{\text{\rm Ad}}}(u)={\text{\rm Ad}}(u)|_{\eusm M}$ of ${\eusm M}$. The set of such automorphisms will be denoted by ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$; it is a normal subgroup of ${\text{\rm Aut}}({\eusm M})$. An important property of the non-commutative flow of weights is the following identification of the relative commutant of ${\eusm M}$ in ${\widetilde{{\eusm M}}}$: $${\eusm M}^{\prime}\cap{\widetilde{{\eusm M}}}={\eusm C}.$$ A continuous one parameter family $\{u_{s}\in{\eusm U}({\widetilde{{\eusm M}}}):s\in{\mathbb{R}}\}$ is called a ${\theta}$-one cocycle if $$u_{s+t}=u_{s}{\theta}_{s}(u_{t}),\quad s,t\in{\mathbb{R}}.$$ The set of all ${\theta}$-one cocycles in ${\eusm C}$ forms a group relative to the pointwise product in ${\eusm C}$ and is denoted by ${\text{\rm Z}}_{\theta}^{1}$. 
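The cocycle identity itself can be illustrated in a purely commutative toy model (ours, and not a substitute for the non-commutative flow of weights): take for ${\eusm C}$ the bounded functions on ${\mathbb{R}}$ with the translation flow $({\theta}_{s}f)(x)=f(x+s)$; then for any modulus-one function $v$ the family $u_{s}={\theta}_{s}(v)\,\overline{v}$ satisfies $u_{s+t}=u_{s}{\theta}_{s}(u_{t})$. A short numerical sketch, assuming the Python library NumPy, is:

```python
import numpy as np

# Toy commutative model (ours): C = bounded functions on R, flow (theta_s f)(x) = f(x + s).
def theta(s, f):
    return lambda x: f(x + s)

# A "unitary" element of C: a modulus-one function v(x) = exp(i*phi(x)).
phi = lambda x: np.sin(1.3 * x) + 0.2 * x**2
v = lambda x: np.exp(1j * phi(x))

# Its coboundary u_s = theta_s(v) * conj(v).
def u(s):
    return lambda x: theta(s, v)(x) * np.conj(v(x))

# Check the theta-one-cocycle identity u_{s+t} = u_s * theta_s(u_t) pointwise on a grid.
x = np.linspace(-3.0, 3.0, 101)
s, t = 0.7, -1.9
lhs = u(s + t)(x)
rhs = u(s)(x) * theta(s, u(t))(x)
print(np.max(np.abs(lhs - rhs)))   # ~1e-16: every coboundary is a cocycle
```

In the factor setting the relevant point, recalled below, is the converse: every ${\theta}$-one cocycle arises in this way from some $v\in{\eusm U}({\widetilde{{\eusm M}}})$.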
The action ${\theta}$ on ${\widetilde{{\eusm M}}}$ is known to be stable in the sense that every ${\theta}$-one cocycle $\{u_{s}\}$ is a coboundary, i.e., there exists $v\in{\eusm U}({\widetilde{{\eusm M}}})$ such that $$u_{s}={\theta}_{s}(v)v^{*}=({\partial}v)_{s},\quad s\in{\mathbb{R}}.$$ The set $\{{\partial}v:v\in{\eusm U}({\eusm C})\}$ of coboundaries is a subgroup of ${\text{\rm Z}}_{\theta}^{1}$ and is denoted by ${\text{\rm B}}_{\theta}^{1}$. The quotient group ${\text{\rm H}}_{\theta}^{1}={\text{\rm Z}}_{\theta}^{1}/{\text{\rm B}}_{\theta}^{1}$ is an abelian group, the first cohomology group of the flow of weights. The elements of the extended unitary group ${\widetilde{{\eusm U}}}({\eusm M})$ are then characterized by the fact that for $u\in{\eusm U}({\widetilde{{\eusm M}}})$: $$u\in{\widetilde{{\eusm U}}}({\eusm M})\quad\Leftrightarrow\quad({\partial}u)_{t}\in{\eusm C},\ t\in{\mathbb{R}}.$$ Therefore the map ${\partial}:v\in{\widetilde{{\eusm U}}}({\eusm M})\mapsto{\partial}v\in{\text{\rm Z}}_{\theta}^{1}$ is surjective. An important fact about this map is that the exact sequence $$1\ \longrightarrow\ {\eusm U}({\eusm M})\ \longrightarrow\ {\widetilde{{\eusm U}}}({\eusm M})\ \overset{\partial}\to{\longrightarrow}\ {\text{\rm Z}}_{\theta}^{1}\ \longrightarrow\ 1$$ splits equivariantly as soon as a faithful semi-finite normal weight ${\varphi}$ is fixed, i.e., to each faithful semi-finite normal weight ${\varphi}$ there corresponds a homomorphism $b_{\varphi}:c\in{\text{\rm Z}}_{\theta}^{1}\mapsto b_{\varphi}(c)\in{\widetilde{{\eusm U}}}({\eusm M})$ such that $$\displaystyle{\widetilde{\text{\rm Ad}}}(b_{\varphi}(c))={\sigma}_{c}^{\varphi}\quad\text{if}\ {\varphi}\ \text{is dominant},c\in{\text{\rm Z}}_{\theta}^{1};$$ (1.4) $$\displaystyle b_{{\alpha}({\varphi})}(c)={\alpha}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}b_{\varphi}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}{{}^{-1}}(c),\quad c\in{\text{\rm Z}}_{\theta}^{1},\ {\alpha}\in{\text{\rm Aut}}({\eusm M});$$ $$\displaystyle(D{\varphi}:D{\psi})_{c}=b_{\psi}(c)b_{\varphi}(c^{*}),\quad{\varphi},{\psi}\in{\mathfrak{W}}_{0}({\eusm M}).$$ This was proven by Falcone - Takesaki [FT2] among other things.${}^{1}$ (${}^{1}$The coboundary operation in [FT2] was defined differently, as ${\partial}u_{s}=u{\theta}_{s}(u^{*})$; so in our case the map $b_{\varphi}:c\in{{\text{\rm Z}}_{\theta}^{1}}\mapsto b_{\varphi}(c)\in{\widetilde{{\eusm U}}}({\eusm M})$ behaves as described here.) The group ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$ of “extended modular” automorphisms is a normal subgroup of ${\text{\rm Aut}}({\eusm M})$, but it is not closed in the case of type III${}_{0}$. Nevertheless it is a Borel subgroup, so that its inverse image $N({\alpha})={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))$, denoted simply by $N$ in the case that ${\alpha}$ is fixed, is a normal Borel subgroup of the original group $G$. The quotient group $Q=G/N$ therefore cannot be expected to be a good topological group in general unless $G$ is discrete, and we consider mainly discrete groups. Other than the definition of the invariants of ${\alpha}$ we do not have any substantial result on continuous groups anyway at the moment. Interested readers are challenged to go further in the direction of the cocycle conjugacy problem for one parameter automorphism groups, clearly the very first step toward continuous group actions on a factor.
§2. Modified Huebschmann - Jones - Ratcliffe Exact Sequence We recall the Huebschmann - Jones - Ratcliffe exact sequence, [Hb, J1, Rc]: $$\begin{aligned} \displaystyle 1\longrightarrow&\displaystyle\text{\rm H}^{1}(Q,A)\overset\pi^{*}\to{\longrightarrow}\text{\rm H}^{1}(G,A)\longrightarrow\text{\rm H}^{1}(N,A)^{G}\longrightarrow\\ &\displaystyle\longrightarrow\text{\rm H}^{2}(Q,A)\longrightarrow{\text{\rm H}}^{2}(G,A)\longrightarrow{\Lambda}(G,N,A)\overset{\delta}\to{\longrightarrow}\text{\rm H}^{3}(Q,A)\overset\pi^{*}\to{\longrightarrow}\text{\rm H}^{3}(G,A),\end{aligned}$$ (2.1) where either i) $G$ is a separable locally compact group acting on a separable abelian von Neumann algebra ${\eusm C}$ with $A={\eusm U}({\eusm C})$ and $N$ a Borel normal subgroup, or ii) $G$ is a discrete group and $N$ a normal subgroup. We need the second case because the automorphism group ${\text{\rm Aut}}({\eusm M})$ of a separable factor ${\eusm M}$ and the normal subgroup ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$ will be taken as the groups $G$ and $N$. If ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$ is not closed, as in the case of an AFD factor ${\eusm M}$, then the quotient group ${\text{\rm Aut}}({\eusm M})/{{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$ does not have a good topological property beyond the discrete group structure. We are interested in the exactness at ${\text{\rm H}}_{\alpha}^{3}(Q,A)$. In particular, we need an explicit construction of $[{\lambda},\mu]\in{\Lambda}(G,N,A)$ such that ${\delta}[{\lambda},\mu]=[c]$ for those $c\in{\text{\rm Z}}_{\alpha}^{3}(Q,A)$ with $\pi^{*}(c)\in{\text{\rm B}}_{\alpha}^{3}(G,A)$, in terms of a cochain $\mu\in{\text{\rm C}}^{2}(G,A)$ with $\pi^{*}(c)={\partial}_{G}^{\alpha}(\mu)$. In the situation where Polish topologies are available on $G$, $N$ and $Q$, we assume that all cocycles and cochains are Borel and, where appropriate, that equalities are understood modulo null sets relative to the relevant measures. This kind of restriction requires us to nail down several objects explicitly rather than to rely on the mere existence of the required objects through abstract machinery.
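Throughout the cohomological computations below we use the usual inhomogeneous, multiplicative conventions; as a minimal reminder (consistent with the computations in the proof of Lemma 2.1 and with the formula for ${\partial}_{\widetilde{Q}}f$ appearing in Lemma 2.4), the coboundary of a $2$-cochain $\mu\in{\text{\rm C}}_{\alpha}^{2}(G,A)$ is $$({\partial}_{G}\mu)(g,h,k)={\alpha}_{g}(\mu(h,k))\,\mu(gh,k){{}^{-1}}\mu(g,hk)\,\mu(g,h){{}^{-1}},\qquad g,h,k\in G,$$ and, $A$ being abelian, the order of the factors is immaterial.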
Given a cocycle $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}(G,N,A)$, we have a $G$-equivariant exact sequence: $$\begin{CD}E:1@>{}>{}>A@>{i_{A}}>{}>E=A\times_{\mu}N@>{j_{E}}>{\underset{{% \mathfrak{s}}_{\!\scriptscriptstyle E}}\to{\longleftarrow}}>N@>{}>{}>1\end{CD}$$ along with a cross-section ${{\mathfrak{s}}_{\!\scriptscriptstyle E}}$ such that $$\displaystyle{{\mathfrak{s}}_{\!\scriptscriptstyle E}}(m){{\mathfrak{s}}_{\!% \scriptscriptstyle E}}(n)=\mu(m,n){{\mathfrak{s}}_{\!\scriptscriptstyle E}}(mn% ),\quad m,n\in N;$$ $$\displaystyle{\alpha}_{g}({{\mathfrak{s}}_{\!\scriptscriptstyle E}}(g{{}^{-1}}% mg))={\lambda}(m,g){{\mathfrak{s}}_{\!\scriptscriptstyle E}}(m),\text{or equivalently}$$ $$\displaystyle{\alpha}_{g}({{\mathfrak{s}}_{\!\scriptscriptstyle E}}(m))={% \lambda}(gmg{{}^{-1}},g){{\mathfrak{s}}_{\!\scriptscriptstyle E}}(gmg{{}^{-1}}% ),\quad g\in G.$$ Choose a cross-section ${{\mathfrak{s}}_{\pi}}$ of $\pi$: $$\begin{CD}1@>{}>{}>N@>{}>{}>G@>{\pi}>{\underset{{\mathfrak{s}}_{\pi}}\to{% \longleftarrow}}>Q@>{}>{}>1,\end{CD}$$ which generates the cocycle ${{\mathfrak{n}}_{N}}\in{\text{\rm Z}}(Q,N)$: $${{\mathfrak{s}}_{\pi}}(p){{\mathfrak{s}}_{\pi}}(q)={{\mathfrak{n}}_{N}}(p,q){{% \mathfrak{s}}_{\pi}}(pq),\quad p,q\in Q.$$ Then we get the associated three cocycle $c_{E}\in{\text{\rm Z}}_{\alpha}^{3}(Q,A)$ given by the following: $$\displaystyle c_{E}$$ $$\displaystyle(p,q,r)=({\partial}_{Q}({{\mathfrak{s}}_{\!\scriptscriptstyle E}}% {\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{{\mathfrak{n}}_{N}}))(p,q,r)$$ $$\displaystyle={\alpha}_{{{\mathfrak{s}}_{\pi}}(p)}({{\mathfrak{s}}_{\!% \scriptscriptstyle E}}({{\mathfrak{n}}_{N}}(q,r))){{\mathfrak{s}}_{\!% \scriptscriptstyle E}}({{\mathfrak{n}}_{N}}(p,qr))({{\mathfrak{s}}_{\!% \scriptscriptstyle E}}({{\mathfrak{n}}_{N}}(p,q)){{\mathfrak{s}}_{\!% \scriptscriptstyle E}}(pq,r))){{}^{-1}},$$ which is expressed in terms of $({\lambda},\mu)$ and ${{\mathfrak{n}}_{N}}$ directly: $$\displaystyle c^{{\lambda},\mu}$$ $$\displaystyle(p,q,r)={\lambda}({{\mathfrak{s}}_{\pi}}(p){{\mathfrak{n}}_{N}}(q% ,r){{\mathfrak{s}}_{\pi}}(p){{}^{-1}},{{\mathfrak{s}}_{\pi}}(p))$$ 2.22.22.2 $$\displaystyle\hskip 72.27pt\times\mu({{\mathfrak{s}}_{\pi}}(p){{\mathfrak{n}}_% {N}}(q,r){{\mathfrak{s}}_{\pi}}(p){{}^{-1}},{{\mathfrak{n}}_{N}}(p,qr))$$ $$\displaystyle\hskip 72.27pt\times\mu({{\mathfrak{n}}_{N}}(p,q),{{\mathfrak{n}}% _{N}}(p,qr)){{}^{-1}}.$$ This can be shown by a direct computation from the definition, which we leave to the reader. We denote the cohomology class $[c_{E}]\in{\text{\rm H}}_{\alpha}^{3}(Q,A)$ of $c_{E}$ by ${\delta}([{\lambda},\mu])$, which does not depends on the choices of the cross-sections ${{\mathfrak{s}}_{\pi}}$ and ${{\mathfrak{s}}_{\!\scriptscriptstyle E}}$ but only on the cohomology class of $({\lambda},\mu)$. Lemma 2.1 The image ${\delta}({\Lambda}(G,N,A))$ in ${{\text{\rm H}}^{3}}(Q,A)$ consists of precisely those $[\xi]\in{{\text{\rm H}}_{\alpha}^{3}}(Q,A)$ such that $\pi^{*}([\xi])=1$ in ${{\text{\rm H}}_{\alpha}^{3}}(G,A)$. 
More precisely if a cochain $\mu\in{\text{\rm C}}_{\alpha}^{2}(G,A)$ gives ${{\partial}_{G}}\mu=\pi^{*}(\xi)$, then $$\displaystyle{\lambda}(m,g)=\mu(g,g{{}^{-1}}mg)\mu(m,g){{}^{-1}},\quad m\in N,% g\in G.$$ 2.32.32.3 together with the restriction $(i_{N})_{*}(\mu)$ gives an element of ${\text{\rm Z}}(G,N,A)$ such that $[\xi]={\delta}[{\lambda},\mu]$ where $i_{N}$ is the embedding of $N$ into $G$, i.e., $$\begin{CD}1@>{}>{}>N@>{i_{N}}>{}>G@>{\pi}>{}>Q@>{}>{}>1.\end{CD}$$ The cochain $f\in{{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ given by $$f(p,q)=\mu({{\mathfrak{s}}_{\pi}}(p),{{\mathfrak{s}}_{\pi}}(q))\mu({{\mathfrak% {n}}_{N}}(p,q),{{\mathfrak{s}}_{\pi}}(pq)){{}^{-1}}\in A,$$ 2.42.42.4 relates the original cocycle $\xi\in{{\text{\rm Z}}_{\alpha}^{3}}(Q,A)$ and the new cocycle $c^{{\lambda},\mu}$ in the following way: $$\xi=({\partial}_{Q}f)c^{{\lambda},\mu}.$$ Demonstration Proof First we construct a $G$-equivariant exact sequence: $$\begin{CD}E:\quad 1@>{}>{}>A@>{}>{}>E@>{}>{}>N@>{}>{}>1\end{CD}$$ from the data $\pi^{*}(\xi)={\partial}_{G}(\mu)\in{\text{\rm B}}_{a}^{3}(G,A)$ with $\mu\in{\text{\rm C}}_{\alpha}^{2}(G,A)$. Let $B=A^{Q}$ be the abelian group of all $A$-valued, (Borel if applicable), functions on $Q$ on which $Q$ acts by: $$({\alpha}_{p}(b))(q)={\alpha}_{p}(b(qp)),\quad p,q\in Q,\ b\in B=A^{Q}.$$ Viewing $A$ as the subgroup of $B$ consisting of all constant functions, we get an exact sequence: $$\begin{CD}1@>{}>{}>A@>{i}>{}>B@>{j}>{}>C@>{}>{}>1.\end{CD}$$ By the cocycle identity, we have $$\displaystyle\xi(q,r,s)={{\alpha}_{p}^{-1}}\bigg{(}\xi(pq,r,s)\xi(p,qr,s){{}^{% -1}}\xi(p,q,rs)\xi(p,q,r){{}^{-1}}\bigg{)}$$ gives the following with $\eta(p,q,r)={{\alpha}_{p}^{-1}}(\xi(p,q,r))$ viewed as an element of ${\text{\rm C}}^{2}(Q,B)$ as a function of $p$: $$\displaystyle\xi(q,r,s)$$ $$\displaystyle={\alpha}_{q}(\eta(pq,r,s))\eta(p,qr,s){{}^{-1}}\eta(p,q,rs)\eta(% p,q,r){{}^{-1}}$$ $$\displaystyle=\text{Constant in}\ p.$$ Hence $j_{*}({\partial}_{Q}\eta)={\partial}_{Q}(j_{*}(\eta))=1$, so that $j_{*}(\eta)\in{\text{\rm Z}}^{2}(Q,C)$ and therefore we get an exact sequence based on $j_{*}(\eta{{}^{-1}})$: $$\begin{CD}1@>{}>{}>C@>{}>{}>D@>{{\sigma}}>{\underset{\mathfrak{s}}_{\sigma}\to% {\longleftarrow}}>Q@>{}>{}>1,\end{CD}$$ with $D=C\rtimes_{{\alpha},j_{*}(\eta{{}^{-1}})}Q$ and a cross-section ${\mathfrak{s}}_{\sigma}:p\in Q\mapsto(1,p)\in D$ such that $${\mathfrak{s}}_{\sigma}(p){\mathfrak{s}}_{\sigma}(q)=j(\eta(p,q){{}^{-1}}){% \mathfrak{s}}_{\sigma}(pq),\quad p,q\in Q.$$ With $\mu\in{\text{\rm C}}^{2}(G,A)$ such that $\pi^{*}(\xi)={{\partial}_{G}}\mu$, we get $$\displaystyle\xi(\pi(g),$$ $$\displaystyle\pi(h),\pi(k))={\alpha}_{g}(\mu(h,k))\mu(gh,k){{}^{-1}}\mu(g,hk)% \mu(g,h){{}^{-1}}$$ $$\displaystyle={\alpha}_{g}(\eta(p\pi(g),\pi(h),\pi(k)))\eta(p,\pi(gh),\pi(k)){% {}^{-1}}$$ $$\displaystyle\hskip 36.135pt\times\eta(p,\pi(g),\pi(hk))\eta(p,\pi(g),\pi(h)){% {}^{-1}}.$$ Thus, ${\zeta}=i_{*}(\mu)\pi^{*}(\eta{{}^{-1}})\in{\text{\rm Z}}^{2}(G,B)$, which allows us to create an exact sequence: $$\begin{CD}1@>{}>{}>B@>{}>{}>F@>{\tilde{\sigma}}>{\underset{\mathfrak{s}}_{% \tilde{\sigma}}\to{\longleftarrow}}>G@>{}>{}>1\end{CD}$$ with $F=B\rtimes_{{\alpha},{\zeta}}G$ and the cross-section ${\mathfrak{s}}_{\tilde{\sigma}}$, given by ${\mathfrak{s}}_{\tilde{\sigma}}(g)=(1,g),g\in G,$ such that $${\mathfrak{s}}_{{\tilde{\sigma}}}(p){\mathfrak{s}}_{\tilde{\sigma}}(q)=j(\zeta% (p,q)){\mathfrak{s}}_{\tilde{\sigma}}(pq).$$ Since $j_{*}({\zeta})=j_{*}(\eta{{}^{-1}})$, the next diagram is 
commutative: $$\begin{CD}1@>{}>{}>B@>{}>{}>F@>{{\tilde{\sigma}}}>{}>G@>{}>{}>1\\ @V{j}V{}V@V{(j\times\pi)}V{}V@V{\pi}V{}V\\ 1@>{}>{}>C@>{}>{}>D@>{{\sigma}}>{}>Q@>{}>{}>1\end{CD}$$ With $E={\text{\rm Ker}}(j\times\pi)$, we get the expanded commutative diagram: $$\begin{CD}111\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>A@>{}>{}>E@>{}>{}>N@>{}>{}>1\\ @V{i}V{}V@V{}V{}V@V{i_{N}}V{}V\\ \end{CD}$$ $$\begin{CD}1@>{}>{}>B@>{}>{}>F@>{\tilde{\sigma}}>{\underset{\mathfrak{s}}_{% \tilde{\sigma}}\to{\longleftarrow}}>G@>{}>{}>1\\ @V{j}V{}V@V{(j\times\pi)}V{}V@V{\pi}V{\uparrow{{\mathfrak{s}}_{\pi}}}V\\ 1@>{}>{}>C@>{}>{}>D@>{{\sigma}}>{\underset{\mathfrak{s}}_{\sigma}\to{% \longleftarrow}}>Q@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 111\\ \end{CD}$$ The construction of the above diagram came equipped with cross-sections, ${\mathfrak{s}}_{\tilde{\sigma}}$ and ${\mathfrak{s}}_{\sigma}$ such that $${\mathfrak{s}}_{\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}\pi=(j% \times\pi){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\mathfrak{s}}_{% \tilde{\sigma}}.$$ We now look at the extension $E$, which is the kernel ${\text{\rm Ker}}(j\times\pi)$. An element $(b,g)\in F$ belongs to $E$ if and only if $j(b)=1$ and $\pi(g)=1$; if and only if $(b,g)\in A\times N$. For $m,n\in N$ we have $${\mathfrak{s}}_{\tilde{\sigma}}(m){\mathfrak{s}}_{\tilde{\sigma}}(n)=\mu(m,n)% \eta(\pi(m),\pi(n)){{}^{-1}}{\mathfrak{s}}_{\tilde{\sigma}}(mn)=\mu(m,n){% \mathfrak{s}}_{\tilde{\sigma}}(mn),$$ so that we get $$E=A\times_{\mu}N.$$ As $E$ is a normal subgroup of $F$, each ${\mathfrak{s}}_{\tilde{\sigma}}(g),g\in G,$ normalizes $E$. Since the value of the two-cocycle ${\zeta}$ belongs to $B$ each element of which commutes with $A$ and ${\mathfrak{s}}_{\tilde{\sigma}}(N)$, the restriction of ${\alpha}_{g}={\text{\rm Ad}}({\mathfrak{s}}_{\tilde{\sigma}}(g))$ to $E$ gives rise to an honest action of $G$ which is consistent with the original action of $G$ on $A$. Thus we obtain a $G$-equivariant exact sequence: $$\begin{CD}E:\quad 1@>{}>{}>A@>{}>{}>E@>{{\tilde{\sigma}}|_{E}}>{\underset{% \mathfrak{s}}_{\tilde{\sigma}}|_{N}\to{\longleftarrow}}>N@>{}>{}>1.\end{CD}$$ Now we compare the original $[\xi]$ and $[c_{E}]$ in ${\text{\rm H}}^{3}(Q,A)$. The cross-section ${\mathfrak{s}}_{\tilde{\sigma}}$ takes $N$ into $E$ so that its restriction ${{\mathfrak{s}}_{\tilde{\sigma}}}|_{N}$ is a cross-section for ${\tilde{\sigma}}|_{E}$. 
The associated three cocycle $c_{E}\in{\text{\rm Z}}^{3}(Q,A)$ is obtained by: $$\displaystyle c_{E}$$ $$\displaystyle(p,q,r)={\partial}_{Q}({\mathfrak{s}}_{\tilde{\sigma}}{\lower-1.2% 9pt\hbox{{$\scriptscriptstyle\circ$}}}{{\mathfrak{n}}_{N}})(p,q,r).$$ Consider the map ${\mathfrak{s}}={\mathfrak{s}}_{\tilde{\sigma}}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{{\mathfrak{s}}_{\pi}}:Q\mapsto E$ and compute $$\displaystyle({\partial}_{Q}$$ $$\displaystyle{\mathfrak{s}})(p,q)={\mathfrak{s}}_{\tilde{\sigma}}({{\mathfrak{% s}}_{\pi}}(p)){{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{s}}_{\pi}}(q)){{% \mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{s}}_{\pi}}(pq)){{}^{-1}}$$ $$\displaystyle=\mu({{\mathfrak{s}}_{\pi}}(p),{{\mathfrak{s}}_{\pi}}(q))\eta(p,q% ){{}^{-1}}{{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{s}}_{\pi}}(p){{% \mathfrak{s}}_{\pi}}(q)){{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{s}}_{\pi% }}(pq)){{}^{-1}}$$ $$\displaystyle=\mu({{\mathfrak{s}}_{\pi}}(p),{{\mathfrak{s}}_{\pi}}(q))\eta(p,q% ){{}^{-1}}{{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{n}}_{N}}(p,q){{% \mathfrak{s}}_{\pi}}(pq)){{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{s}}_{% \pi}}(pq)){{}^{-1}}$$ $$\displaystyle=\mu({{\mathfrak{s}}_{\pi}}(p),{{\mathfrak{s}}_{\pi}}(q))\eta(p,q% ){{}^{-1}}\mu({{\mathfrak{n}}_{N}}(p,q),{{\mathfrak{s}}_{\pi}}(pq)){{}^{-1}}% \eta(\pi({{\mathfrak{n}}_{N}}(p,q)),pq)$$ $$\displaystyle\hskip 36.135pt\times{{\mathfrak{s}}_{\tilde{\sigma}}}({{% \mathfrak{n}}_{N}}(p,q)){{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{s}}_{\pi% }}(pq)){{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{s}}_{\pi}}(pq)){{}^{-1}}$$ $$\displaystyle=\mu({{\mathfrak{s}}_{\pi}}(p),{{\mathfrak{s}}_{\pi}}(q))\mu({{% \mathfrak{n}}_{N}}(p,q),{{\mathfrak{s}}_{\pi}}(pq)){{}^{-1}}\eta(p,q){{}^{-1}}% {{\mathfrak{s}}_{\tilde{\sigma}}}({{\mathfrak{n}}_{N}}(p,q)).$$ Thus with $$f(p,q)=\mu({{\mathfrak{s}}_{\pi}}(p),{{\mathfrak{s}}_{\pi}}(q))\mu({{\mathfrak% {n}}_{N}}(p,q),{{\mathfrak{s}}_{\pi}}(pq)){{}^{-1}}\in A,$$ 2.42.42.4 we get $$\displaystyle 1$$ $$\displaystyle=({\partial}_{Q}{{\partial}_{Q}}{\mathfrak{s}})(p,q,r)={\partial}% _{Q}f(p,q,r){\partial}_{Q}\eta(p,q,r){{}^{-1}}{\partial}_{Q}({{\mathfrak{s}}_{% \tilde{\sigma}}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{{\mathfrak{n% }}_{N}})(p,q,r)$$ $$\displaystyle={\partial}_{Q}f(p,q,r)\xi(p,q,r){{}^{-1}}c_{E}(p,q,r),$$ so that $$[\xi]=[c_{E}]\in{\delta}({\Lambda}(G,N,A)).$$ Now we compute the associated characteristic cocycle $({\lambda},\mu)\in{\text{\rm Z}}(G,N,A)$: $$\displaystyle{\lambda}(m,g)$$ $$\displaystyle{{\mathfrak{s}}_{\tilde{\sigma}}}(m)={\alpha}_{g}({{\mathfrak{s}}% _{\tilde{\sigma}}}(g{{}^{-1}}mg))={{\mathfrak{s}}_{\tilde{\sigma}}}(g){{% \mathfrak{s}}_{\tilde{\sigma}}}(g{{}^{-1}}mg){{\mathfrak{s}}_{\tilde{\sigma}}}% (g){{}^{-1}}$$ $$\displaystyle=\mu(g,g{{}^{-1}}mg)\eta(\pi(g),\pi(g{{}^{-1}}mg)){{}^{-1}}{{% \mathfrak{s}}_{\tilde{\sigma}}}(mg){{\mathfrak{s}}_{\tilde{\sigma}}}(g){{}^{-1}}$$ $$\displaystyle=\mu(g,g{{}^{-1}}mg)\mu(m,g){{}^{-1}}\eta(\pi(m),\pi(g)){{% \mathfrak{s}}_{\tilde{\sigma}}}(m){{\mathfrak{s}}_{\tilde{\sigma}}}(g){{% \mathfrak{s}}_{\tilde{\sigma}}}(g){{}^{-1}}$$ $$\displaystyle=\mu(g,g{{}^{-1}}mg)\mu(m,g){{}^{-1}}{{\mathfrak{s}}_{\tilde{% \sigma}}}(m),$$ which proves (2.3). As we will need only the construction of a $G$-equivariant short exact sequence from the cochain $\mu\in{\text{\rm C}}_{\alpha}^{2}(G,A)$ with $\pi^{*}(\xi)={{\partial}_{G}}\mu$, we leave the proof for the converse to the reader. It is a direct computation. 
$\heartsuit$ In the sequel, the group $G$ appears as the quotient group of another group $H$ by a normal subgroup $M$, i.e., $G=H/M$. Let ${\pi\!_{\scriptscriptstyle G}}$ be the quotient map ${\pi\!_{\scriptscriptstyle G}}:H\mapsto G$. Set $L=\pi{{}^{-1}}(N)$ and $${\widetilde{H}}=H\times{\mathbb{R}},\quad{\widetilde{G}}=G\times{\mathbb{R}},% \quad\text{and }\quad{\widetilde{Q}}=Q\times{\mathbb{R}}.$$ 2.52.52.5 Whenever an action ${\alpha}$ of ${\widetilde{H}}$ on a group $E$ is given, we denote the restriction of ${\alpha}$ to ${\mathbb{R}}$ by ${\theta}$. When an action ${\alpha}$ of the group $H$ is given and the cross-sections ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}\!:g\in G\mapsto{{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(g)\in H$ for ${\pi\!_{\scriptscriptstyle G}},\ {\mathfrak{s}}\!:p\in Q\mapsto{\mathfrak{s}}(% p)\in G$ for the quotient map $\pi\!:g\in G\mapsto\pi(g)=gN\in Q$ and ${\dot{\mathfrak{s}}}\!:p\in Q\mapsto{\dot{\mathfrak{s}}}(p)={{\mathfrak{s}}\!_% {\scriptscriptstyle H}}({\mathfrak{s}}(p))\in H$ for the map ${\dot{\pi}}=\pi{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\pi\!_{% \scriptscriptstyle G}}$ are specified, we use the abbreviated notations: $${\alpha}_{g}={\alpha}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)},\quad g% \in G;\quad{\alpha}_{p}={\alpha}_{{\dot{\mathfrak{s}}}(p)},\quad p\in Q,$$ which satisfy: $$\displaystyle{\alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \alpha}_{h}$$ $$\displaystyle={\alpha}_{{{\mathfrak{n}}_{M}}(g,h)}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{gh},\quad g,h\in G;$$ 2.62.62.6 $$\displaystyle{\alpha}_{p}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \alpha}_{q}$$ $$\displaystyle={\alpha}_{{{\mathfrak{n}}_{L}}(p,q)}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{pq},\quad p,q\in Q,$$ where $$\displaystyle{{\mathfrak{n}}_{M}}(g,h)$$ $$\displaystyle={{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g){{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(h){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(gh){{}^{-1}% }\in M,\quad g,h\in G;$$ 2.72.72.7 $$\displaystyle{{\mathfrak{n}}_{L}}(p,q)$$ $$\displaystyle={\dot{\mathfrak{s}}}(p){\dot{\mathfrak{s}}}(q){\dot{\mathfrak{s}% }}(pq){{}^{-1}}\in L,\quad p,q\in Q.$$ We examine the last half of the HJR-exact sequence: $$\begin{CD}{{\text{\rm H}}_{\alpha}^{2}}({\widetilde{H}},A)@>{{\text{\rm res}}}% >{}>{\Lambda}_{\alpha}({\widetilde{H}},L,A)@>{{\delta}_{\scriptscriptstyle% \text{HJR}}}>{}>{{\text{\rm H}}_{\alpha}^{3}}({\widetilde{Q}},A)@>{{\text{\rm inf% }}}>{}>{{\text{\rm H}}_{\alpha}^{3}}({\widetilde{H}},A).\end{CD}$$ First we show: Lemma 2.2 For each $\mu^{\prime}\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$, there is an element $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$ such that $\mu^{\prime}$ and $\mu$ are cohomologous and $\mu$ satisfies the condition: $$\mu(\tilde{h},\tilde{k})=\mu_{H}(h,k){\alpha}_{h}(d_{\mu}(s;k)),\quad\tilde{h}% =(h,s),\tilde{k}=(k,t)\in{\widetilde{H}}=H\times{\mathbb{R}},$$ 2.82.82.8 where $$\displaystyle\mu_{H}\in{{\text{\rm Z}}_{\alpha}^{2}}$$ $$\displaystyle(H,A),\quad d(\ \cdot\ ;h)\in{{\text{\rm Z}}_{\theta}^{1}}({% \mathbb{R}},A);$$ 2.92.92.9 $$\displaystyle{\theta}_{s}(\mu_{H}(h,k))$$ $$\displaystyle\mu_{H}(h,k)^{*}=d(s;h){\alpha}_{h}(d(s;k))d(s;hk)^{*};$$ equivalently $${\partial}_{\theta}{\mu\!_{\scriptscriptstyle H}}={\partial}_{H}d.^{\prime}$$ 2.92.92.9 Demonstration Proof We recall ${\text{\rm H}}_{\theta}^{2}({\mathbb{R}},A)=\{1\}$. 
So we may and do assume that $\mu^{\prime}(s,t)=1,s,t\in{\mathbb{R}}$. Consider the group extension: $$\begin{CD}1@>{}>{}>A@>{i}>{}>F=A\times_{\mu^{\prime}}{\widetilde{H}}@>{j}>{}>{% \widetilde{H}}@>{}>{}>1.\end{CD}$$ The assumption on the restriction $\mu^{\prime}|_{{\mathbb{R}}\times{\mathbb{R}}}$ allows us to find a one parameter subgroup $\{u(s)\!:s\in{\mathbb{R}}\}$ of $F$ with $j(u(s))=s,s\in{\mathbb{R}}$. Choose a cross-section ${\mathfrak{s}}_{j}^{\prime}\!:h\in H\mapsto{\mathfrak{s}}_{j}^{\prime}(h)\in F$ of the map $j$ such that $${\mathfrak{s}}_{j}^{\prime}(h){\mathfrak{s}}_{j}^{\prime}(k)=\mu^{\prime}(h,k)% {\mathfrak{s}}_{j}^{\prime}(hk),\quad h,k\in H.$$ Now set $${{\mathfrak{s}}_{j}}(h,s)={\mathfrak{s}}_{j}^{\prime}(h)u(s),\quad(h,s)\in{% \widetilde{H}}.$$ Now we compute the associated 2-cocycle $\mu\!:$ $$\displaystyle\mu(h,s;k,t)$$ $$\displaystyle={{\mathfrak{s}}_{j}}(h,s){{\mathfrak{s}}_{j}}(k,t){{\mathfrak{s}% }_{j}}(hk,s+t){{}^{-1}}$$ $$\displaystyle={\mathfrak{s}}_{j}^{\prime}(h)u(s){\mathfrak{s}}_{j}^{\prime}(k)% u(t)\{{\mathfrak{s}}_{j}^{\prime}(hk)u(s+t)\}{{}^{-1}}$$ $$\displaystyle={\mathfrak{s}}_{j}^{\prime}(h)\mu^{\prime}(s;k){\mathfrak{s}}_{j% }^{\prime}(k,s)u(t)\{{\mathfrak{s}}_{j}^{\prime}(hk)u(s+t)\}{{}^{-1}}$$ $$\displaystyle={\mathfrak{s}}_{j}^{\prime}(h)\mu^{\prime}(s;k)\mu^{\prime}(k;s)% {{}^{-1}}{\mathfrak{s}}_{j}^{\prime}(k)u(s)u(t)\{{\mathfrak{s}}_{j}^{\prime}(% hk)u(s+t)\}{{}^{-1}}$$ $$\displaystyle={\alpha}_{h}\Big{(}\mu^{\prime}(s;k)\mu^{\prime}(k;s){{}^{-1}}% \Big{)}{\mathfrak{s}}_{j}^{\prime}(h){\mathfrak{s}}_{j}^{\prime}(k){\mathfrak{% s}}_{j}^{\prime}(hk){{}^{-1}}$$ $$\displaystyle={\alpha}_{h}\Big{(}\mu^{\prime}(s;k)\mu^{\prime}(k;a){{}^{-1}}% \Big{)}\mu^{\prime}(h;k)$$ for each $(h,s),(k,t)\in{\widetilde{H}}$. 
Setting $$\mu_{H}=\mu^{\prime}|_{H}\quad\text{and}\quad d(s;h)=\mu^{\prime}(s;h)\mu^{% \prime}(h;s)^{*}$$ we obtain the first formula and also $$d(s+t;h)=d(s;h){\theta}_{s}(d(t;h)),\quad s,t\in{\mathbb{R}},h\in H.$$ We next check the second identity which follows from the cocycle identity for $\mu$ as seen below: $$\displaystyle 1$$ $$\displaystyle={\alpha}_{\tilde{g}}(\mu(\tilde{h},\tilde{k}))\mu(\tilde{g}% \tilde{h},\tilde{k})^{*}\mu(\tilde{g},\tilde{h}\tilde{k})\mu(\tilde{g},\tilde{% h})^{*},$$ $$\displaystyle\hskip 108.405pt\tilde{g}=(g,s),\tilde{h}=(h,t),\tilde{k}=(k,u)% \in{\widetilde{H}},$$ $$\displaystyle={\alpha}_{\tilde{g}}\Big{(}{\alpha}_{h}(d_{\mu}(s;k)){\mu\!_{% \scriptscriptstyle H}}(h,k)\Big{)}{\alpha}_{gh}(d_{\mu}(s+t;k)^{*})$$ $$\displaystyle\hskip 36.135pt\times{\mu\!_{\scriptscriptstyle H}}(gh,k)^{*}{% \alpha}_{g}(d_{\mu}(s;hk)){\mu\!_{\scriptscriptstyle H}}(g,hk){\alpha}_{g}(d_{% \mu}(s;h)^{*}){\mu\!_{\scriptscriptstyle H}}(g,h)^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}({\alpha}_{h}(d_{\mu}(s;k)){\mu\!% _{\scriptscriptstyle H}}(h,k)){\alpha}_{h}(d_{\mu}(s+t;k)^{*})\Big{)}{\mu\!_{% \scriptscriptstyle H}}(gh,k)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(d_{\mu}(s;hk)){\mu\!_{% \scriptscriptstyle H}}(g,hk){\alpha}_{g}(d_{\mu}(s;h)^{*}){\mu\!_{% \scriptscriptstyle H}}(g,h)^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}({\alpha}_{h}(d_{\mu}(s;k)){\mu\!% _{\scriptscriptstyle H}}(h,k)){\alpha}_{h}(d_{\mu}(s;k)^{*}{\theta}_{s}(d_{\mu% }(t;k)^{*}))\Big{)}{\mu\!_{\scriptscriptstyle H}}(gh,k)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(d_{\mu}(s;hk)){\mu\!_{% \scriptscriptstyle H}}(g,hk){\alpha}_{g}(d_{\mu}(s;h)^{*}){\mu\!_{% \scriptscriptstyle H}}(g,h)^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}({\mu\!_{\scriptscriptstyle H}}(h% ,k)){\alpha}_{h}(d_{\mu}(s;k))^{*}\Big{)}{\mu\!_{\scriptscriptstyle H}}(gh,k)^% {*}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(d_{\mu}(s;hk)){\mu\!_{% \scriptscriptstyle H}}(g,hk){\alpha}_{g}(d_{\mu}(s;h)^{*}){\mu\!_{% \scriptscriptstyle H}}(g,h)^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}({\mu\!_{\scriptscriptstyle H}}(h% ,k)){\mu\!_{\scriptscriptstyle H}}(h,k)^{*}{\alpha}_{h}(d_{\mu}(s;k)^{*})\Big{% )}{\alpha}_{g}({\mu\!_{\scriptscriptstyle H}}(h,k)){\mu\!_{\scriptscriptstyle H% }}(gh,k)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(d_{\mu}(s;hk)){\mu\!_{% \scriptscriptstyle H}}(g,hk){\alpha}_{g}(d_{\mu}(s;h)^{*}){\mu\!_{% \scriptscriptstyle H}}(g,h)^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}({\mu\!_{\scriptscriptstyle H}}(h% ,k)){\mu\!_{\scriptscriptstyle H}}(h,k)^{*}{\alpha}_{h}(d_{\mu}(s;k)^{*})d_{% \mu}(s;hk)d_{\mu}(s;h)^{*}\Big{)};$$ $$\displaystyle\hskip 14.454pt{\theta}_{s}({\mu\!_{\scriptscriptstyle H}}(h,k)){% \mu\!_{\scriptscriptstyle H}}(h,k)^{*}=d_{\mu}(s;h){\alpha}_{h}(d(s;k))d(s;hk)% ^{*}.$$ This proves the lemma. $\heartsuit$ Definition 2.3. A cocycle $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$ of the form (2.8) will be called standard and $d_{\mu}$ and $\mu_{H}$ in (1) will be called naturally the ${\mathbb{R}}$-part and the $H$-part of the cocycle $\mu$. 
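In particular, if the standard cocycle $\mu$ is in addition normalized, i.e., $\mu(\tilde{h},1)=\mu(1,\tilde{k})=1$, then its two parts are recovered from $\mu$ itself by restriction: $$\mu_{H}(h,k)=\mu\big{(}(h,0),(k,0)\big{)},\qquad d_{\mu}(s;h)=\mu\big{(}(1,s),(h,0)\big{)},\qquad h,k\in H,\ s\in{\mathbb{R}},$$ since $d_{\mu}(0;k)=1$ for any cocycle $d_{\mu}(\ \cdot\ ;k)\in{{\text{\rm Z}}_{\theta}^{1}}({\mathbb{R}},A)$.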
Lemma 2.4 i) If $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$ is standard, then the $({\lambda}_{\mu},\mu)={\text{\rm res}}(\mu)\in{\text{\rm Z}}_{\alpha}({% \widetilde{H}},L,A)$ is given by the following: $$\displaystyle{\lambda}_{\mu}(m;\tilde{g})$$ $$\displaystyle={\alpha}_{g}(d_{\mu}(s;g{{}^{-1}}mg)){\mu\!_{\scriptscriptstyle H% }}(g,g{{}^{-1}}mg)\mu_{H}(m;g)^{*},$$ 2.102.102.10 $$\displaystyle\hskip 108.405pt\tilde{g}=(g,s)\in{\widetilde{H}},\ m\in L.$$ ii) If $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$, then $c=c_{{\lambda},\mu}={\delta}_{\scriptscriptstyle\text{\rm HJR}}({\lambda},\mu)$ is given by: $$\begin{aligned} \displaystyle c({\tilde{p}},{\tilde{q}},{\tilde{r}})&% \displaystyle={\alpha}_{p}\Big{(}{\lambda}({{\mathfrak{n}}_{L}}(q,r);s)\Big{)}% {\lambda}({\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r)){\dot{\mathfrak{s}}% }(p){{}^{-1}},{\dot{\mathfrak{s}}}(p))\\ &\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_% {L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}},{{\mathfrak{n}}_{L}}(p,qr))\\ &\displaystyle\hskip 36.135pt\times\Big{\{}\mu({{\mathfrak{n}}_{L}}(p,q),{{% \mathfrak{n}}_{L}}(pq,r))\Big{\}}^{*}\end{aligned}^{\prime}$$ 2.22.22.2 for each triplet ${\tilde{p}}=(p,s),{\tilde{q}}=(q,t),{\tilde{r}}=(r,u)\in{\widetilde{Q}}$. iii) If $({\lambda},\mu)=({\lambda}_{\mu},\mu)={\text{\rm res}}(\mu)$ with $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{Q}},A)$ standard, then the $3$-cocycle $$c=c_{\mu}={{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}({\lambda}_{\mu},\mu)$$ is cobounded by $f\in{{\text{\rm C}}_{\alpha}^{2}}({\widetilde{Q}},A)$ given by: $$\displaystyle f({\tilde{p}},{\tilde{q}})$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{\mathfrak{s}}}({% \tilde{q}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak{s}}}({\tilde{p}}% {\tilde{q}}))$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}(p),s;{\dot{\mathfrak{s}}}(q),t)^{*}\mu(% {{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak{s}}}(pq),s+t)$$ $$\displaystyle={\alpha}_{p}(d_{\mu}(s;{\dot{\mathfrak{s}}}(q))^{*}){\mu\!_{% \scriptscriptstyle H}}({\dot{\mathfrak{s}}}(p),{\dot{\mathfrak{s}}}(q))^{*}{% \mu\!_{\scriptscriptstyle H}}({{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak{s}}}(% pq))\in A,$$ where ${\dot{\mathfrak{s}}}$ is a cross-section of the quotient homomorphism $\dot{\pi}\colon\ H\mapsto Q=H/L$. 
Demonstration Proof i) The 2-cocyle $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$ gives rise to the following commutative diagram of exact sequences equipped with cross-sections ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ and ${{\mathfrak{s}}_{j}}$: $$\begin{CD}1@>{}>{}>A@>{}>{}>F=A\times_{\mu}{\widetilde{H}}@>{{\tilde{j}}}>{% \underset\tilde{\mathfrak{s}}_{H}\to{\longleftarrow}}>{\widetilde{H}}@>{}>{}>1% \\ \Big{\|}@A{\bigcup}A{}A@A{\bigcup}A{}A\\ 1@>{}>{}>A@>{}>{}>E=A\times_{\mu}L@>{j}>{\underset{{\mathfrak{s}}_{j}}\to{% \longleftarrow}}>L@>{}>{}>1\end{CD}$$ The action ${\alpha}$ of ${\widetilde{H}}$ on $E$ is given by ${\alpha}_{\tilde{g}}={\text{\rm Ad}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}% (\tilde{g}))|_{E},\tilde{g}=(g,s)\in H,$ viewing $E$ as a submodule of $F$, where the cross-section ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ is given by $${{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)=(1,g)\in F=A\times_{\mu}{% \widetilde{H}},\quad g\in H.$$ The action ${\theta}$ of ${\mathbb{R}}$ on $E$ is given by: $${\theta}_{s}(a,m)=({\theta}_{s}(a){\lambda}_{\mu}(m;s),m),\quad s\in{\mathbb{R% }},(a,m)\in E=A\times_{\mu}L.$$ Hence ${\theta}_{s}({{\mathfrak{s}}_{j}}(m))={\lambda}_{\mu}(m;s){{\mathfrak{s}}_{j}}% (m),m\in L,s\in{\mathbb{R}}$. Now the cocycle $${\text{\rm res}}(\mu)=({\lambda}_{\mu},\mu)\in{\text{\rm Z}}({\widetilde{H}},L% ,A)$$ is given by $$\displaystyle{\lambda}_{\mu}(m,\tilde{g})$$ $$\displaystyle{{\mathfrak{s}}_{j}}(m)={\alpha}_{g}({{\mathfrak{s}}_{j}}(\tilde{% g}{{}^{-1}}m\tilde{g}))={{\mathfrak{s}}\!_{\scriptscriptstyle H}}(\tilde{g}){{% \mathfrak{s}}_{j}}(\tilde{g}{{}^{-1}}m\tilde{g}){{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(\tilde{g}){{}^{-1}}$$ $$\displaystyle={{\mathfrak{s}}\!_{\scriptscriptstyle H}}(\tilde{g}){{\mathfrak{% s}}\!_{\scriptscriptstyle H}}(\tilde{g}{{}^{-1}}m\tilde{g}){{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(\tilde{g}){{}^{-1}}=\mu(\tilde{g},\tilde{g}{{}^{-1}}m% \tilde{g}){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(m\tilde{g}){{\mathfrak{s}}% \!_{\scriptscriptstyle H}}(\tilde{g}){{}^{-1}}$$ $$\displaystyle=\mu(\tilde{g},\tilde{g}{{}^{-1}}m\tilde{g})\mu(m,\tilde{g}){{}^{% -1}}{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(m){{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(\tilde{g}){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(% \tilde{g}){{}^{-1}}$$ $$\displaystyle=\mu(\tilde{g},\tilde{g}{{}^{-1}}m\tilde{g})\mu(m,\tilde{g}){{}^{% -1}}{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(m);$$ $$\displaystyle{\lambda}_{\mu}(m,\tilde{g})=\mu(\tilde{g},\tilde{g}{{}^{-1}}m% \tilde{g})\mu(m,\tilde{g}){{}^{-1}}$$ for $m\in L,\tilde{g}=(g,s)\in{\widetilde{H}}$. As $\mu$ is standard, we get further simplification: $$\displaystyle{\lambda}_{\mu}(m;g,s)$$ $$\displaystyle={\alpha}_{g}(d(s;g{{}^{-1}}mg)){\mu\!_{\scriptscriptstyle H}}(g,% g{{}^{-1}}mg){\mu\!_{\scriptscriptstyle H}}(m,g)^{*},\ (g,s)\in{\widetilde{H}}% ,m\in L.$$ ii) Now suppose $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$. 
The associated $A$-valued 3-cocycle $c=c_{{\lambda},\mu}$ is given by (2.2) and the formula (2.2${}^{\prime}$) follows from (2.2) and the cocycle identity, see [ST2: (1.7), page 411] for ${\lambda}$: $$\displaystyle{\lambda}({\dot{\mathfrak{s}}}({\tilde{p}})$$ $$\displaystyle{{\mathfrak{n}}_{L}}({\tilde{q}},{\tilde{r}}){\dot{\mathfrak{s}}}% ({\tilde{p}}){{}^{-1}};{\dot{\mathfrak{s}}}({\tilde{p}}))={\lambda}({\dot{% \mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{% \dot{\mathfrak{s}}}(p),s)$$ $$\displaystyle={\alpha}_{{\dot{\mathfrak{s}}}(p)}({\lambda}({{\mathfrak{n}}_{L}% }(q,r));s)){\lambda}({\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{% \mathfrak{s}}}(p){{}^{-1}};{\dot{\mathfrak{s}}}(p))$$ for ${\tilde{p}}=(p,s),{\tilde{q}}=(q,t),{\tilde{r}}=(r,u)\in{\widetilde{Q}}$ because ${\dot{\mathfrak{s}}}({\tilde{p}})=({\dot{\mathfrak{s}}}(p),s)$. iii) Now assume $({\lambda},\mu)=({\lambda}_{\mu},\mu)$ with $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$ standard. First we compute the associated cocycle $c=c_{\mu}$: $$\displaystyle c_{\mu}$$ $$\displaystyle({\tilde{p}},{\tilde{q}},{\tilde{r}})={\alpha}_{p}({\lambda}_{\mu% }({{\mathfrak{n}}_{L}}(q,r);s)){\lambda}_{\mu}({\dot{\mathfrak{s}}}(p){{% \mathfrak{n}}_{L}}(q,r)){\dot{\mathfrak{s}}}(p){{}^{-1}},{\dot{\mathfrak{s}}}(% p))$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{% L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}},{{\mathfrak{n}}_{L}}(p,qr))$$ $$\displaystyle\hskip 36.135pt\times\Big{\{}\mu({{\mathfrak{n}}_{L}}(p,q),{{% \mathfrak{n}}_{L}}(pq,r))\Big{\}}^{*}$$ $$\displaystyle={\alpha}_{p}\Big{(}\mu(s;{{\mathfrak{n}}_{L}}(q,r))\mu({{% \mathfrak{n}}_{L}}(q,r);s)^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}(p);{\dot{\mathfrak{% s}}}(p){{}^{-1}}{\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{% \mathfrak{s}}}(p){{}^{-1}}{\dot{\mathfrak{s}}}(p))$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{% L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{\dot{\mathfrak{s}}}(p))^{*}$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{% L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{{\mathfrak{n}}_{L}}(p,qr))$$ $$\displaystyle\hskip 36.135pt\times\Big{\{}\mu({{\mathfrak{n}}_{L}}(p,q);{{% \mathfrak{n}}_{L}}(pq,r))\Big{\}}^{*}$$ $$\displaystyle={\alpha}_{p}\Big{(}d_{\mu}(s;{{\mathfrak{n}}_{L}}(q,r))\Big{)}{% \mu\!_{\scriptscriptstyle H}}({\dot{\mathfrak{s}}}(p);{{\mathfrak{n}}_{L}}(q,r))$$ $$\displaystyle\hskip 36.135pt\times{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{% \dot{\mathfrak{s}}}(p))^{*}$$ $$\displaystyle\hskip 36.135pt\times{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{{% \mathfrak{n}}_{L}}(p,qr))$$ $$\displaystyle\hskip 36.135pt\times\Big{\{}{\mu\!_{\scriptscriptstyle H}}({{% \mathfrak{n}}_{L}}(p,q);{{\mathfrak{n}}_{L}}(pq,r))\Big{\}}^{*}.$$ We now compute the coboundary of $f$: $$\displaystyle{\partial}_{\widetilde{Q}}f({\tilde{p}}$$ $$\displaystyle,{\tilde{q}},{\tilde{r}})={\alpha}_{\tilde{p}}(f({\tilde{q}},{% \tilde{r}}))f({\tilde{p}},{\tilde{q}}{\tilde{r}})\{f({\tilde{p}},{\tilde{q}})f% ({\tilde{p}}{\tilde{q}},{\tilde{r}})\}^{*}$$ $$\displaystyle={\alpha}_{{\tilde{p}}}\Big{(}\mu({\dot{\mathfrak{s}}}({\tilde{q}% });{\dot{\mathfrak{s}}}({\tilde{r}}))^{*}\mu({{\mathfrak{n}}_{L}}(q,r);{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\Big{)}$$ 
$$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,qr);{% \dot{\mathfrak{s}}}(pqr))$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}\mu({\dot{\mathfrak{s}}}({\tilde{p}}% );{\dot{\mathfrak{s}}}({\tilde{q}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}))$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{% q}});{\dot{\mathfrak{s}}}({\tilde{r}}))^{*}\mu({{\mathfrak{n}}_{L}}(pq,r);{% \dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))\Big{\}}^{*}$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{\mathfrak{s}}}({% \tilde{q}}){\dot{\mathfrak{s}}}({\tilde{r}}))\mu({\dot{\mathfrak{s}}}({\tilde{% p}}),{\dot{\mathfrak{s}}}({\tilde{q}}))^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}% }){\dot{\mathfrak{s}}}({\tilde{q}});{\dot{\mathfrak{s}}}({\tilde{r}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}});{{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}\mu({\dot{\mathfrak{s}}}({\tilde{p}}% );{{\mathfrak{n}}_{L}}(q,r))\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{\mathfrak{n% }}_{L}}(q,r);{\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\Big{\}}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,qr);{% \dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}\mu({\dot{\mathfrak{s}}}({\tilde{p}}% );{\dot{\mathfrak{s}}}({\tilde{q}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}))$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}(pq);{\dot{\mathfrak% {s}}}(r))^{*}\mu({{\mathfrak{n}}_{L}}(pq,r);{\dot{\mathfrak{s}}}(pqr))\Big{\}}% ^{*}$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}({\tilde{p}});{{\mathfrak{n}}_{L}}(q,r){% \dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\mu({\dot{\mathfrak{s}}}({\tilde{p% }});{\dot{\mathfrak{s}}}({\tilde{q}}))^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}}% ){\dot{\mathfrak{s}}}({\tilde{q}});{\dot{\mathfrak{s}}}({\tilde{r}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}});{{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}\mu({\dot{\mathfrak{s}}}({\tilde{p}}% );{{\mathfrak{n}}_{L}}(q,r))\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{\mathfrak{n% }}_{L}}(q,r);{\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\Big{\}}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,qr);{% \dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}\mu({\dot{\mathfrak{s}}}({\tilde{p}}% );{\dot{\mathfrak{s}}}({\tilde{q}}))\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{% q}});{\dot{\mathfrak{s}}}({\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(pq,r);{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))^{*}\Big{\}}$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}({\tilde{p}}){\dot{\mathfrak{s}}}({% \tilde{q}});{\dot{\mathfrak{s}}}({\tilde{r}}))^{*}\Big{\{}\mu({\dot{\mathfrak{% s}}}({\tilde{p}});{{\mathfrak{n}}_{L}}(q,r))\mu({\dot{\mathfrak{s}}}({\tilde{p% 
}}){{\mathfrak{n}}_{L}}(q,r);{\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\Big% {\}}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,qr);{% \dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}))^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}% }{\tilde{q}});{\dot{\mathfrak{s}}}({\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(pq,r)% ;{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))^{*}\Big{\}}.$$ We compute some terms below: $$\displaystyle\mu({\dot{\mathfrak{s}}}({\tilde{p}})$$ $$\displaystyle{{\mathfrak{n}}_{L}}(q,r);{\dot{\mathfrak{s}}}({\tilde{q}}{\tilde% {r}}))=\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{\mathfrak{n}}_{L}}(q,r){\dot{% \mathfrak{s}}}({\tilde{p}}){{}^{-1}}{\dot{\mathfrak{s}}}({\tilde{p}});{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{\mathfrak{n}}_{L}}(q,r){% \dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}};{\dot{\mathfrak{s}}}({\tilde{p}}))^{*}$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}};{\dot{% \mathfrak{s}}}({\tilde{p}}){\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\mu({% \dot{\mathfrak{s}}}({\tilde{p}});{\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{\mathfrak{n}}_{L}}(q,r){% \dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}};{\dot{\mathfrak{s}}}({\tilde{p}}))^{% *}\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{\mathfrak{s}}}({\tilde{q}}{% \tilde{r}}))$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}};{{\mathfrak{% n}}_{L}}(p,qr){\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))$$ $$\displaystyle=\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{\mathfrak{n}}_{L}}(q,r){% \dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}};{\dot{\mathfrak{s}}}({\tilde{p}}))^{% *}\mu({\dot{\mathfrak{s}}}({\tilde{p}});{\dot{\mathfrak{s}}}({\tilde{q}}{% \tilde{r}}))\mu({{\mathfrak{n}}_{L}}(p,qr);{\dot{\mathfrak{s}}}({\tilde{p}}{% \tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}};{{\mathfrak{% n}}_{L}}(p,qr))$$ $$\displaystyle\hskip 36.135pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}}{{\mathfrak{n% }}_{L}}(q,r);{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}));$$ $$\displaystyle\mu({\dot{\mathfrak{s}}}({\tilde{p}})$$ $$\displaystyle{\dot{\mathfrak{s}}}({\tilde{q}});{\dot{\mathfrak{s}}}({\tilde{r}% }))=\mu({{\mathfrak{n}}_{L}}(p,q){\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}});% {\dot{\mathfrak{s}}}({\tilde{r}}))$$ $$\displaystyle=\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak{s}}}({\tilde{p}}{% \tilde{q}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak{s}}}({\tilde{p}}% {\tilde{q}}){\dot{\mathfrak{s}}}({\tilde{r}}))\mu({\dot{\mathfrak{s}}}({\tilde% {p}}{\tilde{q}});{\dot{\mathfrak{s}}}({\tilde{r}}))$$ $$\displaystyle=\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak{s}}}({\tilde{p}}{% \tilde{q}}))^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}});{\dot{% \mathfrak{s}}}({\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(p,q);{{\mathfrak{n}}_{L}}% (pq,r){\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))$$ 
$$\displaystyle=\mu({{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak{s}}}({\tilde{p}}{% \tilde{q}}))^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}});{\dot{% \mathfrak{s}}}({\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(pq,r);{\dot{\mathfrak{s}}% }({\tilde{p}}{\tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle\hskip 36.135pt\times\mu({{\mathfrak{n}}_{L}}(p,q);{{\mathfrak{n}% }_{L}}(pq,r))\mu({{\mathfrak{n}}_{L}}(p,q){{\mathfrak{n}}_{L}}(pq,r);{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}})).$$ We then substitute the above expression in the original calculation: $$\displaystyle{\partial}_{\widetilde{Q}}f$$ $$\displaystyle({\tilde{p}},{\tilde{q}},{\tilde{r}})=\mu({\dot{\mathfrak{s}}}({% \tilde{p}}){\dot{\mathfrak{s}}}({\tilde{q}}),{\dot{\mathfrak{s}}}({\tilde{r}})% )^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}}),{{\mathfrak{n}}_{L}}(q,r))$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r),{\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\mu({\dot% {\mathfrak{s}}}({\tilde{p}}),{\dot{\mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}% \mu({{\mathfrak{n}}_{L}}(p,qr),{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{% \tilde{r}}))$$ $$\displaystyle\hskip 14.454pt\times\mu({{\mathfrak{n}}_{L}}(p,q),{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}))^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}% }{\tilde{q}}),{\dot{\mathfrak{s}}}({\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(pq,r)% ,{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle=\mu({{\mathfrak{n}}_{L}}(p,q),{\dot{\mathfrak{s}}}({\tilde{p}}{% \tilde{q}}))\mu({\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}),{\dot{\mathfrak{s% }}}({\tilde{r}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\mu({{\mathfrak{n}}_{L}}(pq,r),{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(p,q% ),{{\mathfrak{n}}_{L}}(pq,r))^{*}$$ $$\displaystyle\hskip 14.454pt\times\mu({{\mathfrak{n}}_{L}}(p,q){{\mathfrak{n}}% _{L}}(pq,r),{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}),{{% \mathfrak{n}}_{L}}(q,r))\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{\mathfrak{n}}_{% L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}})^{-1},{\dot{\mathfrak{s}}}({\tilde{p}% }))$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}),{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(p,qr),{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}},{{\mathfrak{% n}}_{L}}(p,qr))$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}}{{\mathfrak{n% }}_{L}}(q,r),{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}),{\dot{% \mathfrak{s}}}({\tilde{q}}{\tilde{r}}))^{*}\mu({{\mathfrak{n}}_{L}}(p,qr),{% \dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))$$ $$\displaystyle\hskip 14.454pt\times\mu({{\mathfrak{n}}_{L}}(p,q),{\dot{% \mathfrak{s}}}({\tilde{p}}{\tilde{q}}))^{*}\mu({\dot{\mathfrak{s}}}({\tilde{p}% }{\tilde{q}}),{\dot{\mathfrak{s}}}({\tilde{r}}))\mu({{\mathfrak{n}}_{L}}(pq,r)% ,{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}}{\tilde{r}}))^{*}$$ $$\displaystyle=\mu({{\mathfrak{n}}_{L}}(p,q),{{\mathfrak{n}}_{L}}(pq,r))^{*}\mu% ({\dot{\mathfrak{s}}}({\tilde{p}}),{{\mathfrak{n}}_{L}}(q,r))$$ $$\displaystyle\hskip 
14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}},{\dot{% \mathfrak{s}}}({\tilde{p}}))$$ $$\displaystyle\hskip 14.454pt\times\mu({\dot{\mathfrak{s}}}({\tilde{p}}){{% \mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}({\tilde{p}}){{}^{-1}},{{\mathfrak{% n}}_{L}}(p,qr))$$ $$\displaystyle={\alpha}_{p}\Big{(}d_{\mu}(s;{{\mathfrak{n}}_{L}}(q,r))\Big{)}{% \mu\!_{\scriptscriptstyle H}}({\dot{\mathfrak{s}}}(p);{{\mathfrak{n}}_{L}}(q,r))$$ $$\displaystyle\hskip 14.454pt\times{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{% \dot{\mathfrak{s}}}(p))^{*}$$ $$\displaystyle\hskip 14.454pt\times{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{{% \mathfrak{n}}_{L}}(p,qr))$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}{\mu\!_{\scriptscriptstyle H}}({{% \mathfrak{n}}_{L}}(p,q);{{\mathfrak{n}}_{L}}(pq,r))\Big{\}}^{*}$$ $$\displaystyle=c_{\mu}({\tilde{p}},{\tilde{q}},{\tilde{r}})$$ for each triplet ${\tilde{p}}=(p,s),{\tilde{q}},{\tilde{r}}\in{\widetilde{Q}}$. This completes the proof. $\heartsuit$ Lemma 2.5 i) Every cohomology class $[c]\in{{\text{\rm H}}_{\alpha}^{3}}({\widetilde{Q}},A)$ can be represented by a cocycle $c$ of the form: $$\displaystyle c({\tilde{p}}$$ $$\displaystyle,{\tilde{q}},{\tilde{r}})={\alpha}_{p}(d_{c}(s;q,r))c_{Q}(p,q,r),$$ 2.112.112.11 $$\displaystyle\hskip 36.135pt{\tilde{p}}=(p,s),{\tilde{q}}=(q,t),{\tilde{r}}=(r% ,u)\in{\widetilde{Q}},$$ where $c_{Q}\in{{\text{\rm Z}}_{\alpha}^{3}}(Q,A)$ and $d_{c}(\cdot,q,r)\in{{\text{\rm Z}}_{\theta}^{1}}$. ii) Given a function $d:{\mathbb{R}}\times Q^{2}\mapsto A$ and $c_{Q}\in{{\text{\rm Z}}_{\alpha}^{3}}(Q,A)$, the function $c$ given by: $$c({\tilde{p}},{\tilde{q}},{\tilde{r}})={\alpha}_{p}(d(s;q,r))c_{Q}(p,q,r)$$ is an element of ${{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}},A)$ if and only if iii) For a cocycle $c\in{\text{\rm Z}}_{\alpha}^{3}({\widetilde{Q}},A)$ of the form (2.11) the following are equivalent: Demonstration Proof The assertion (i) follows from the fact that the additive real line ${\mathbb{R}}$ has trivial second and third cohomologies. Every 3-cocyle we encounter in this paper will be of this form without perturbation anyway. So we omit the proof. ii) This follows directly from the cocycle identity for $c$. We omit the detail. iii) This equivalence again follows from a direct easy computation. $\heartsuit$ Definition 2.6. A cocycle $c\in{{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}},{\mathbb{R}})$ of the form (2.11) will be called standard. We will concentrate on the subgroup ${{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ of all standard cocyles in ${{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}},A)$. The index “s” stands for “standard”. 
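In the notation of Lemma 2.5 ii), a direct computation entirely parallel to (2.9) shows that the function $c({\tilde{p}},{\tilde{q}},{\tilde{r}})={\alpha}_{p}(d(s;q,r))c_{Q}(p,q,r)$ belongs to ${{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}},A)$ provided $$d(\ \cdot\ ;q,r)\in{{\text{\rm Z}}_{\theta}^{1}}({\mathbb{R}},A),\ q,r\in Q;\qquad c_{Q}\in{{\text{\rm Z}}_{\alpha}^{3}}(Q,A);$$ $${\theta}_{s}(c_{Q}(p,q,r))c_{Q}(p,q,r)^{*}={\alpha}_{p}(d(s;q,r))\,d(s;pq,r)^{*}\,d(s;p,qr)\,d(s;p,q)^{*},$$ i.e., ${\partial}_{\theta}c_{Q}={\partial}_{Q}d$.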
We then set $${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)={{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)/{\partial}_{{\widetilde{Q}}}({{\text{\rm C}}_{\alpha}^{2}}(Q,A)).$$ The coboundary group ${{\text{\rm B}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)={\partial}_{{\widetilde{Q}}}({{\text{\rm C}}_{\alpha}^{2}}(Q,A))$ is a subgroup of the usual third coboundary group ${{\text{\rm B}}_{\alpha}^{3}}({\widetilde{Q}},A)={\partial}_{\widetilde{Q}}({{\text{\rm C}}_{\alpha}^{2}}({\widetilde{Q}},A))$, so that we have a natural surjective homomorphism: $$\begin{CD}{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)@>{}>{}>{{\text{\rm H}}_{\alpha}^{3}}({\widetilde{Q}},A).\end{CD}$$ The fixed cross-section ${\mathfrak{s}}\!:Q\mapsto G$ allows us to consider the fiber product $${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$$ consisting of those pairs $([c],\nu)\in{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)\times{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$ such that $$[d_{c}(\ \cdot\ ;q,r)]=\nu({{\mathfrak{n}}_{N}}(q,r))\quad\text{in }\ {{\text{\rm H}}_{\theta}^{1}},\quad q,r\in Q.$$ The group ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$ will be denoted by ${{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A)$ for short. The suffix “${\mathfrak{s}}$” is placed to indicate that this fiber product depends heavily on the cocycle ${{\mathfrak{n}}_{N}}$ and hence on the cross-section ${\mathfrak{s}}$. As mentioned earlier, the invariant for outer actions of $G$ must respect the cross-section ${\mathfrak{s}}$, because a change in the cross-section results in an alteration of the outer conjugacy class. Before stating the main theorem of the section, we still need some preparation. Theorem 2.7 Suppose that $\{{\eusm C},{\mathbb{R}},{\theta}\}$ is an ergodic flow and that ${\alpha}\!:g\in H\mapsto{\alpha}_{g}\in{\text{\rm Aut}}_{\theta}({\eusm C})$ is a homomorphism into the group ${\text{\rm Aut}}_{\theta}({\eusm C})$ of automorphisms of ${\eusm C}$ commuting with ${\theta}$. Assume the following: Set ${\widetilde{H}}=H\times{\mathbb{R}},{\widetilde{G}}=G\times{\mathbb{R}}$ and ${\widetilde{Q}}=Q\times{\mathbb{R}}$. Let $A$ denote the unitary group ${\eusm U}({\eusm C})$ of ${\eusm C}.$ Under the above setting, there is a natural exact sequence which sits next to the Huebschmann - Jones - Ratcliffe exact sequence: $$\begin{CD}{{\text{\rm H}}^{2}}(H,{\mathbb{T}})@={{\text{\rm H}}^{2}}(H,{\mathbb{T}})\\ @V{{\text{\rm Res}}}V{}V@V{{\text{\rm res}}}V{}V\\ {\Lambda}({\widetilde{H}},L,M,A)@>{}>{}>{\Lambda}_{\alpha}(H,M,{\mathbb{T}})\\ @V{{\delta}}V{}V@V{{{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}}V{}V\\ {{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A)@>{{\partial}}>{}>{{\text{\rm H}}^{3}}(G,{\mathbb{T}})\\ @V{{\text{\rm Inf}}}V{}V@V{{\text{\rm inf}}}V{}V\\ {{\text{\rm H}}^{3}}(H,{\mathbb{T}})@={{\text{\rm H}}^{3}}(H,{\mathbb{T}})\end{CD}$$ (2.13) We need some preparation.
Lemma 2.8 To each characteristic cocycle $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$, there corresponds uniquely an ${\widetilde{H}}$-equivariant exact square: $$\begin{CD}111\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\mathbb{T}}@>{}>{}>A@>{{\partial}_{\theta}}>{}>{\text{\rm B}}@>{}>{}>1\\ @V{}V{}V@V{i}V{}V@V{}V{}V\\ 1@>{}>{}>U=E^{\theta}@>{}>{}>E@>{{\widetilde{\partial}}_{\theta}}>{}>{\text{\rm Z}}@>{}>{}>1\\ @V{}V{}V@V{j}V{}V@V{{\pi\!_{\scriptscriptstyle{\text{\rm Z}}}}}V{}V\\ 1@>{}>{}>K@>{}>{}>L@>{\dot{\partial}}>{}>{\text{\rm H}}@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 111\end{CD}$$ (2.14) with $E=A\times_{\mu}L$. Demonstration Proof The cocycle $({\lambda},\mu)$ gives an ${\widetilde{H}}$-equivariant exact sequence: $$\begin{CD}E:1@>{}>{}>A@>{i}>{}>E=A\times_{\mu}L@>{j}>{}>L@>{}>{}>1.\end{CD}$$ With $U=E^{\theta}$, the fixed point subgroup of $E$ under the action ${\theta}$ of ${\mathbb{R}}$, we set $$\displaystyle\quad{\text{\rm B}}=A/{\mathbb{T}}\cong{{\text{\rm B}}_{\theta}^{1}}({\mathbb{R}},A);\quad{\text{\rm Z}}=E/U;$$ $$\displaystyle K=K(E)=j(U)\cong U/{\mathbb{T}};\quad{\text{\rm H}}={\text{\rm Z}}/{\text{\rm B}}.$$ As the real line ${\mathbb{R}}$ does not act on the group $L$, we have $$j({\theta}_{s}(x)x{{}^{-1}})=j(x)j(x){{}^{-1}}=1,\quad x\in E,\ s\in{\mathbb{R}};\qquad{\theta}_{s}(x)x{{}^{-1}}=({\partial}_{\theta}x)_{s}\in A,$$ and $a:s\in{\mathbb{R}}\mapsto a_{s}=({\partial}_{\theta}x)_{s}\in A$ is a cocycle, a member of ${{\text{\rm Z}}_{\theta}^{1}}({\mathbb{R}},A)$. Thus ${\text{\rm Z}}\subset{{\text{\rm Z}}_{\theta}^{1}}({\mathbb{R}},A)$ and naturally ${\text{\rm H}}\subset{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)$. The map ${\partial}_{\theta}$ can be viewed either as the quotient map $E\mapsto{\text{\rm Z}}$ or as the coboundary map described above. Now it is clear that the groups ${\mathbb{T}},A,\cdots,{\text{\rm H}}$ form the commutative exact square (2.14) on which ${\widetilde{H}}$ acts. $\heartsuit$ We will denote the subgroup $K$ of $L$ in (2.14) by $K({\lambda},\mu)$ or $K(\chi)$ to indicate the dependence of $K$ on the cocycle $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$ or on the characteristic invariant $\chi=[{\lambda},\mu]\in{\Lambda}_{\alpha}({\widetilde{H}},L,A)$. We then define the subgroups: $$\displaystyle{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)=\{({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A):K({\lambda},\mu)\supset M\};$$ (2.15) $$\displaystyle{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)=\{\chi\in{\Lambda}_{\alpha}({\widetilde{H}},L,A):\ K(\chi)\supset M\}.$$ A cocycle $({\lambda},\mu)\in{\text{\rm Z}}({\widetilde{H}},L,A)$ belongs to the subgroup ${\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)$ if and only if the cocycle $({\lambda},\mu)$ satisfies the conditions: $$({\lambda}|_{M\times{\widetilde{H}}},\mu|_{M\times M})\in{\text{\rm Z}}({\widetilde{H}},M,{\mathbb{T}});\quad{\lambda}(m;s)=1,\quad s\in{\mathbb{R}},m\in M.$$ Let ${\text{\rm pr}}_{H}:{\widetilde{H}}\mapsto H$ be the projection map from ${\widetilde{H}}=H\times{\mathbb{R}}$ to the $H$-component and $i_{A,{\mathbb{T}}}:{\mathbb{T}}\mapsto A$ be the canonical embedding of ${\mathbb{T}}$ into $A$. Finally, let $i_{L,M}:M\mapsto L$ be the embedding of $M$ into $L$.
Then we have naturally: $$\begin{CD}{\Lambda}_{\alpha}({\widetilde{H}},L,A)@>{i^{*}_{L,M}}>{}>{\Lambda}({\widetilde{H}},M,A)\\ \Big{\|}\\ {\Lambda}(H,M,{\mathbb{T}})@>{{\text{\rm pr}}_{H}^{*}}>{}>{\Lambda}({\widetilde{H}},M,{\mathbb{T}})@>{(i_{A,{\mathbb{T}}})_{*}}>{}>{\Lambda}({\widetilde{H}},M,A)\end{CD}$$ In terms of these maps, we can restate the subgroup ${\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$ in the following way: $${\Lambda}_{\alpha}({\widetilde{H}},L,M,A)=(i_{L,M}^{*}){{}^{-1}}\big{(}(i_{A,{\mathbb{T}}})_{*}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm pr}}_{H}^{*}({\Lambda}(H,M,{\mathbb{T}}))\big{)}.$$ The above maps also generate the following chain: $$\begin{CD}{{\text{\rm H}}^{2}}(H,{\mathbb{T}})@>{{\text{\rm pr}}_{H}^{*}}>{}>{{\text{\rm H}}^{2}}({\widetilde{H}},{\mathbb{T}})@>{(i_{A,{\mathbb{T}}})_{*}}>{}>{{\text{\rm H}}_{\alpha}^{2}}({\widetilde{H}},A)@>{{\text{\rm res}}}>{}>{\Lambda}({\widetilde{H}},L,A)\end{CD}$$ and the range of the composed map $${\text{\rm Res}}={\text{\rm res}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}(i_{A,{\mathbb{T}}})_{*}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm pr}}_{H}^{*}:{{\text{\rm H}}^{2}}(H,{\mathbb{T}})\mapsto{\Lambda}_{\alpha}({\widetilde{H}},L,A)$$ is contained in the group ${\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$ defined above, which gives rise to the map: $$\begin{CD}{{\text{\rm H}}^{2}}(H,{\mathbb{T}})@>{{\text{\rm Res}}}>{}>{\Lambda}_{\alpha}({\widetilde{H}},L,M,A).\end{CD}$$ Coming back to the original situation where $H={\text{\rm Aut}}({\eusm M}),M={\text{\rm Int}}({\eusm M})$ and $L={{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$, we know that each element of ${\text{\rm Res}}({{\text{\rm H}}^{2}}(H,{\mathbb{T}}))$ gives a perturbation of the action of ${\text{\rm Aut}}({\eusm M})$ on ${\eusm M}$ differing only by ${\text{\rm Int}}({\eusm M})$. Hence we must be concerned with the quotient group $${\Lambda}_{\alpha}({\widetilde{H}},L,M,A)/{\text{\rm Res}}({{\text{\rm H}}^{2}}(H,{\mathbb{T}})).$$ The map ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}={\delta}$ in the Huebschmann - Jones - Ratcliffe exact sequence: $$\begin{aligned} \displaystyle 1\longrightarrow&\displaystyle\text{\rm H}^{1}({\widetilde{Q}},A)\overset\pi^{*}\to{\longrightarrow}\text{\rm H}^{1}({\widetilde{H}},A)\longrightarrow\text{\rm H}^{1}(L,A)^{\widetilde{H}}\longrightarrow\\ &\displaystyle\longrightarrow\text{\rm H}^{2}({\widetilde{Q}},A)\longrightarrow{\text{\rm H}}^{2}({\widetilde{H}},A)\longrightarrow{\Lambda}({\widetilde{H}},L,A)\overset{\delta}\to{\longrightarrow}\text{\rm H}^{3}({\widetilde{Q}},A)\overset\pi^{*}\to{\longrightarrow}\text{\rm H}^{3}({\widetilde{H}},A),\end{aligned}$$ to be abbreviated the HJR-exact sequence, [Hb, J1, Rc], gives a natural map ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}{\!:\ }\ {\Lambda}_{\alpha}({\widetilde{H}},L,M,A)\mapsto{{\text{\rm H}}_{\alpha}^{3}}({\widetilde{Q}},A)$.
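Returning to the map ${\text{\rm Res}}$ introduced above: applying Lemma 2.4 i) to the standard cocycle ${\text{\rm pr}}_{H}^{*}(\rho)\!:(\tilde{h},\tilde{k})\mapsto\rho(h,k)$, whose ${\mathbb{R}}$-part is trivial, the map ${\text{\rm Res}}$ should be given explicitly on a class $[\rho]\in{{\text{\rm H}}^{2}}(H,{\mathbb{T}})$ by $${\text{\rm Res}}([\rho])=[({\lambda}_{\rho},\rho|_{L\times L})],\qquad{\lambda}_{\rho}(m;(g,s))=\rho(g,g{{}^{-1}}mg)\,\rho(m,g)^{*},\quad m\in L,\ (g,s)\in{\widetilde{H}}.$$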
The map ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}$ will be called the HJR map and the modified HJR map ${\delta}$ relevant to our discussion will be constructed along with the other two maps: $$\displaystyle{\partial}:$$ $$\displaystyle{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G% \times{\mathbb{R}},N,A)\longrightarrow{{\text{\rm H}}^{3}}(G,{\mathbb{T}});$$ $$\displaystyle{\text{\rm Inf}}:$$ $$\displaystyle{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G% \times{\mathbb{R}},N,A)\longrightarrow{{\text{\rm H}}^{3}}(H,{\mathbb{T}}).$$ Construction of the modified HJR-map ${\delta}$: First we fix a cocycle $({\lambda},\mu)\in{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$ and consider the corresponding crossed extension $E$: $$\begin{CD}1@>{}>{}>A@>{i}>{}>E@>{j}>{}>L@>{}>{}>1.\end{CD}$$ As $M\subset K({\lambda},\mu)$, with $V=M\times_{\mu}{\mathbb{T}}$ and $F=E/V$ we have an ${\widetilde{H}}$-equivariant exact square: $$\begin{CD}111\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\mathbb{T}}@>{}>{}>A@>{{\partial}_{\theta}}>{}>{\text{\rm B}}@>{}>{}% >1\\ @V{}V{}V@V{i}V{}V@V{}V{}V\\ 1@>{}>{}>V@>{}>{}>E@>{}>{}>F@>{}>{}>1\\ @V{}V{}V@V{j}V{\big{\uparrow}{{\mathfrak{s}}_{j}}}V@V{\pi_{N}}V{}V\\ 1@>{}>{}>M@>{}>{}>L@>{{\pi\!_{\scriptscriptstyle G}}|_{L}}>{\underset{{% \mathfrak{s}}_{G}}\to{\longleftarrow}}>N@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 111\end{CD}$$ As $M\subset K({\lambda},\mu)$, we get a $G$-equivariant homomorphism $\nu_{\chi}:N\mapsto{\text{\rm H}}\subset{{\text{\rm H}}_{\theta}^{1}}({\mathbb% {R}},A)$, where $G=H/M$, i.e., $\nu_{\chi}\in{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}}% ,A)).$ Lemma 2.9 Fix $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)$. i) For the cross-section ${{\mathfrak{s}}_{j}}\!:m\in L\mapsto(1,m)\in E=A\times_{\mu}L$ of the map $j\!:E\mapsto L$ associated with the cocycle $({\lambda},\mu)$, the cocycle $c=c^{{\lambda},\mu}$ given by (2.2${}^{\prime}$) is standard with $$\displaystyle d_{c}(s;q,r)$$ $$\displaystyle={\lambda}({{\mathfrak{n}}_{L}}(q,r);s),\quad q,r\in Q,s\in{% \mathbb{R}};$$ $$\displaystyle c_{Q}(p,q,r)$$ $$\displaystyle={\lambda}({\dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{% \mathfrak{s}}}(p){{}^{-1}};{\dot{\mathfrak{s}}}(p))\mu({\dot{\mathfrak{s}}}(p)% {{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}},{{\mathfrak{n}}_{L}}% (p,qr))$$ $$\displaystyle\hskip 72.27pt\times\{\mu({{\mathfrak{n}}_{L}}(p,q),{{\mathfrak{n% }}_{L}}(pq,r)\}^{*},\quad p,q,r\in Q.$$ ii) $$\displaystyle([c_{\chi}],\nu_{\chi})\in{{\text{\rm H}}_{{\alpha},\text{\rm s}}% ^{3}}({\widetilde{Q}},A)*_{\text{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{% \theta}^{1}}({\mathbb{R}},A));$$ Demonstration Proof i) This is obvious from the formula (2.2${}^{\prime}$). 
ii) The cross-section ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ gives an $M$-valued 2-cocycle: $${{\mathfrak{n}}_{M}}(g,h)={{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g){{% \mathfrak{s}}\!_{\scriptscriptstyle H}}(h){{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(gh){{}^{-1}}\in M,\quad g,h\in G,$$ which allows us to relate ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\mathfrak{n}}_{N}}(q,r))$ and ${{\mathfrak{n}}_{L}}(q,r)$: $$\displaystyle{\pi\!_{\scriptscriptstyle G}}({{\mathfrak{n}}_{L}}(q,r))$$ $$\displaystyle={\pi\!_{\scriptscriptstyle G}}\Big{(}\dot{\mathfrak{s}}(q)\dot{% \mathfrak{s}}(r)\dot{\mathfrak{s}}(qr){{}^{-1}}\Big{)}$$ $$\displaystyle={\mathfrak{s}}(q){\mathfrak{s}}(r){\mathfrak{s}}(qr){{}^{-1}}={{% \mathfrak{n}}_{N}}(q,r).$$ Hence ${{\mathfrak{n}}_{L}}(q,r)\equiv{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \mathfrak{n}}_{N}}(q,r))\ {\text{\rm mod}}\ M$. As for each $m\in M,\ell\in L$ and $s\in{\mathbb{R}}$ we have $$\displaystyle{\lambda}(m\ell;s)$$ $$\displaystyle={\theta}_{s}(\mu(m;\ell)^{*})\mu(m;\ell){\lambda}(m;s){\lambda}(% \ell;s)$$ $$\displaystyle={\theta}_{s}(\mu(m;\ell)^{*})\mu(m;\ell){\lambda}(\ell;s),$$ we get $$[{\lambda}(m\ell;\ \cdot)]=[{\lambda}(\ell;\ \cdot)]\quad\text{in}\ {{\text{% \rm H}}_{\theta}^{1}}({\mathbb{R}},A)\quad\text{for every}\ m\in M,\ell\in L.$$ Thus $[{\lambda}({{\mathfrak{n}}_{L}}(q,r);\ \cdot)]=\nu_{\chi}({{\mathfrak{n}}_{N}}% (q,r))\in{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)$, which precisely means that $([c^{{\lambda},\mu}],\nu)\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{% \rm out}}}(G\times{\mathbb{R}},N,A)$. $\heartsuit$ Thus we obtain an element: $${\delta}(\chi)=([c^{{\lambda},\mu}],\nu_{\chi})\in{{\text{\rm H}}_{{\alpha},{% \mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A),$$ and therefore the map $${\delta}:{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)\mapsto{{\text{\rm H}}_{{% \alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A).$$ We will call ${\delta}$ the modified HJR-map. To distinguish this modified HJR map from the original HJR map, we denote the original one by ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}$ and the modified one simply by ${\delta}$. Note that the map ${\delta}$ does not depend on the choice of the section ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}\,{\!:\ }G\longmapsto H,$ but depends on the choice of the section ${\mathfrak{s}}\,{\!:\ }Q\longmapsto G.$ We now begin the proof of Theorem 2.7. 
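As an elementary consistency check on the definition of ${\delta}$, consider the trivial characteristic cocycle $({\lambda},\mu)=(1,1)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)$: the formulas of Lemma 2.9 give $$d_{c}(s;q,r)={\lambda}({{\mathfrak{n}}_{L}}(q,r);s)=1,\qquad c_{Q}(p,q,r)=1,\qquad\nu_{\chi}=1,$$ so that ${\delta}(\chi)=([1],1)$ is trivial, consistent with the inclusion $\text{\rm Im}({\text{\rm Res}})\subset{\text{\rm Ker}}({\delta})$ established first below.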
$\boldsymbol{{\text{\rm Ker}}({\delta})=\text{Im}({\text{\rm Res}})}$: First assume that $$\mu\in{{\text{\rm Z}}^{2}}(H,{\mathbb{T}})\quad\text{and}\quad\chi={\text{\rm Res% }}([\mu])\in{\Lambda}({\widetilde{H}},L,M,A).$$ Then we have a commutative diagram of exact sequences: $$\begin{CD}1@>{}>{}>{\mathbb{T}}@>{}>{}>F={\mathbb{T}}\times_{\mu}H@>{j_{H}}>{% \underset{\mathfrak{s}}\!_{F}\to{\longleftarrow}}>H@>{}>{}>1\\ @V{}V{}V@V{\bigcap}V{}V\Big{\|}\\ 1@>{}>{}>A@>{}>{}>\widetilde{F}=A\times_{\mu}H@>{{\tilde{j}}}>{\underset\tilde% {\mathfrak{s}}\!_{F}\to{\longleftarrow}}>H@>{}>{}>1\\ \Big{\|}@A{\bigcup}A{}A@A{\bigcup}A{}A\\ 1@>{}>{}>A@>{}>{}>E=A\times_{\mu}L@>{j}>{\underset{{\mathfrak{s}}_{j}}\to{% \longleftarrow}}>L@>{}>{}>1\end{CD}$$ The action ${\alpha}$ of $H$ on $E$ is given by ${\alpha}_{h}={\text{\rm Ad}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}(h))|_{E% },h\in H,$ viewing $E$ as a submodule of $\tilde{F}$, where the cross-section ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ is given by $${{\mathfrak{s}}\!_{\scriptscriptstyle H}}(h)=(1,h)\in\tilde{F}=A\times_{\mu}H,% \quad h\in H.$$ The action ${\theta}$ of ${\mathbb{R}}$ on $E$ is given by: $${\theta}_{s}(a,m)=({\theta}_{s}(a),m),\quad s\in{\mathbb{R}},(a,m)\in E=A% \times_{\mu}L.$$ Hence ${\theta}_{s}({{\mathfrak{s}}_{j}}(m))={{\mathfrak{s}}_{j}}(m),m\in L,s\in{% \mathbb{R}}$. Now ${\text{\rm res}}(\mu)=({\lambda}_{\mu},\mu)\in{\text{\rm Z}}({\widetilde{H}},L% ,A)$ is given by (2.10). As $\mu$ takes values in ${\mathbb{T}}$, we have $\mu={\mu\!_{\scriptscriptstyle H}}$, i.e., $d_{\mu}=1$. Consequently, ${\lambda}_{\mu}(m;s)=1,m\in L,s\in{\mathbb{R}}$ which entails $${\lambda}_{\mu}({{\mathfrak{n}}_{L}}(p,q);s)=1,p,q\in Q=H/L,s\in{\mathbb{R}}.$$ By Lemma 2.4.(iii), the associated 3-cocyle $c_{\mu}=c_{{\lambda}_{\mu},\mu}\in{{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}% },A)$ is co-bounded by $f$ of the form: $$f(p,q)=\mu({\dot{\mathfrak{s}}}(p),{\dot{\mathfrak{s}}}(q))^{*}\mu({{\mathfrak% {n}}_{L}}(p,q),{\dot{\mathfrak{s}}}(pq))\in{\mathbb{T}}.$$ This shows that ${\text{\rm Im (Res)}}\subset{\text{\rm Ker}}({\delta})$. We are now moving to show the reversed inclusion: $\text{\rm Im (Res)}\supset{\text{\rm Ker}}({\delta})$. We first compare the original HJR-exact sequence and our modified HJR sequence. To this end, we recall that the cohomology group ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ is obtained as the quotient group of a subgroup ${{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ of ${{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}},A)$ by a subgroup ${{\text{\rm B}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ of ${{\text{\rm B}}_{\alpha}^{3}}({\widetilde{Q}},A)$. Thus we have a natural map: ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)\mapsto{{\text{% \rm H}}_{\alpha}^{3}}({\widetilde{Q}},A)$. 
Consequently, the above HJR-exact sequence applied to our context yields the following commutative diagram: $$\eightpoint\begin{CD}{\text{\rm H}}_{\alpha}^{2}({\widetilde{H}},A)@>{{\text{\rm res}}}>{}>{\Lambda}_{\alpha}({\widetilde{H}},L,A)@>{{{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}}>{}>{\text{\rm H}}_{\alpha}^{3}({\widetilde{Q}},A)@>{{\text{\rm inf}}}>{}>{\text{\rm H}}_{\alpha}^{3}({\widetilde{H}},A)\\ @A{}A{(i_{A,{\mathbb{T}}})_{*}}A@A{}A{}A@A{}A{}A\\ {{\text{\rm H}}^{2}}(H,{\mathbb{T}})@>{{\text{\rm Res}}}>{}>{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)@>{{\delta}}>{}>{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)\end{CD}$$ Suppose $\chi=[{\lambda},\mu]\in{\text{\rm Ker}}({\delta})\subset{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$. Then $$1={\delta}(\chi)=([c_{\chi}],\nu_{\chi})\quad\text{ in }\quad{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}).$$ The above assumption also means ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}(\chi)=1$. Hence the HJR-exact sequence guarantees that the 2-cocycle $\mu$ on $L$ can be extended to ${\widetilde{H}}$ as an $A$-valued 2-cocycle over ${\widetilde{H}}$, which we denote by $\mu$ again, so that ${\lambda}={\lambda}_{\mu}$. To proceed further, we need the following: Lemma 2.10 If a 2-cocycle $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$ is standard, and if $$\chi={\text{\rm res}}([\mu])=[{\lambda}_{\mu},\mu]\in{\Lambda}_{\alpha}({\widetilde{H}},L,A)$$ generates the trivial element $\nu_{\chi}=1$ of ${\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A))$, then there exists a standard ${\tilde{\mu}}\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$, cohomologous to $\mu$, such that $${\lambda}_{\tilde{\mu}}(m;s)=1,\quad m\in L,\ s\in{\mathbb{R}}.$$ Demonstration Proof Let ${\pi\!_{\scriptscriptstyle{\text{\rm Z}}}}$ be the quotient map: $c\in{\text{\rm Z}}\mapsto[c]\in{\text{\rm H}}={\text{\rm Z}}/{\text{\rm B}}$. The condition $\nu_{\chi}(n)=1,n\in N,$ implies that $$\displaystyle\pi_{\text{\rm Z}}({{\partial}_{\theta}}({{\mathfrak{s}}_{j}}(m)))$$ $$\displaystyle={{\dot{\partial}}_{\theta}}(m)=\nu_{\chi}({\pi\!_{\scriptscriptstyle G}}(m))=1\in{{\text{\rm H}}_{\theta}^{1}},\quad m\in L;$$ $$\displaystyle{{\partial}_{\theta}}({{\mathfrak{s}}_{j}}(m))\in{{\text{\rm B}}_{\theta}^{1}},$$ so that for each $m\in L$ there exists $a(m)\in A$ such that $$\displaystyle{\lambda}(m;s)={\theta}_{s}({{\mathfrak{s}}_{j}}(m)){{\mathfrak{s}}_{j}}(m){{}^{-1}}={{\partial}_{\theta}}({{\mathfrak{s}}_{j}}(m))_{s}={\theta}_{s}(a(m))a(m)^{*}.$$ Extending the function $a:L\mapsto A$ to the entire ${\widetilde{H}}$ in such a way that $$a(g,s)=a(g),\quad(g,s)\in{\widetilde{H}}=H\times{\mathbb{R}},$$ we define a new 2-cocycle: $${\tilde{\mu}}(\tilde{g},\tilde{h})=a(g)^{*}{\alpha}_{\tilde{g}}(a(h)^{*})\mu(\tilde{g},\tilde{h})a(gh),\quad\tilde{g},\tilde{h}\in{\widetilde{H}},$$ where $g$ and $h$ are the $H$-components of $\tilde{g}$ and $\tilde{h}$ respectively.
We then examine if ${\tilde{\mu}}$ remains standard: $$\displaystyle{\tilde{\mu}}(g,s$$ $$\displaystyle;h,t)=a(g)^{*}{\theta}_{s}({\alpha}_{g}(a(h)^{*}){\alpha}_{g}(d_{% \mu}(s;h)){\mu\!_{\scriptscriptstyle H}}(g,h)a(gh)$$ $$\displaystyle={\alpha}_{g}({\theta}_{s}(a(h))^{*}a(h)){\alpha}_{g}(d_{\mu}(s;h% ))a(g)^{*}{\alpha}_{g}(a(h)^{*}){\mu\!_{\scriptscriptstyle H}}(g,h)a(gh)$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}(a(h))^{*}a(h))d_{\mu}(s;h)\Big{)% }a(g)^{*}{\alpha}_{g}(a(h)^{*}){\mu\!_{\scriptscriptstyle H}}(g,h)a(gh).$$ Therefore with $$\displaystyle d_{\tilde{\mu}}(s;h)$$ $$\displaystyle={\theta}_{s}(a(h)^{*})a(h)d_{\mu}(s;h),\quad s\in{\mathbb{R}},h% \in H;$$ $$\displaystyle{\tilde{\mu}}(g,h)$$ $$\displaystyle=a(g)^{*}{\alpha}_{g}(a(h)^{*}){\mu\!_{\scriptscriptstyle H}}(g,h% )a(gh),\quad g,h\in H,$$ we confirm that ${\tilde{\mu}}$ is standard. Now the corresponding characteristic cocycle have the form: $$\displaystyle{\lambda}_{\tilde{\mu}}(m$$ $$\displaystyle;g,s)={\tilde{\mu}}(g,s;g{{}^{-1}}mg)){\tilde{\mu}}(m;g,s)^{*}$$ $$\displaystyle={\alpha}_{g}(d_{\tilde{\mu}}(s;g{{}^{-1}}mg)){\tilde{\mu}}_{H}(g% ;g{{}^{-1}}mg){\tilde{\mu}}_{H}(m;g)^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}(a(g{{}^{-1}}mg)^{*})a(g{{}^{-1}}% mg)d_{\mu}(s;g{{}^{-1}}mg)\Big{)}$$ $$\displaystyle\hskip 36.135pt\times a(g)^{*}{\alpha}_{g}(a(g{{}^{-1}}mg)^{*})a(% mg)a(m)a(g)a(mg)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\mu\!_{\scriptscriptstyle H}}(g;g{{}^{-1}}% mg){\mu\!_{\scriptscriptstyle H}}(m;g)^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}(a(g{{}^{-1}}mg)^{*})d_{\mu}(s;g{% {}^{-1}}mg)\Big{)}$$ $$\displaystyle\hskip 36.135pt\times a(m){\mu\!_{\scriptscriptstyle H}}(g;g{{}^{% -1}}mg){\mu\!_{\scriptscriptstyle H}}(m;g)^{*}$$ $$\displaystyle={\alpha}_{g,s}(a(g{{}^{-1}}mg)^{*})a(m){\lambda}_{\mu}(m;g,s).$$ With $g=1$, we get $$\displaystyle{\lambda}_{\tilde{\mu}}(m;s)={\theta}_{s}(a(m)^{*})a(m){\lambda}(% m;s)=1,\quad m\in L,g\in H,s\in{\mathbb{R}}.$$ This completes the proof. $\heartsuit$ So we replace the original characteristic cocycle $({\lambda},\mu)$ by the modified one $({\lambda}_{\tilde{\mu}},{\tilde{\mu}})$ by Lemma 2.10 so that $${\lambda}={\lambda}_{\mu}\quad\text{and}\quad d_{\mu}(s;m)=1,\quad m\in L,s\in% {\mathbb{R}},$$ and $\mu\in{{\text{\rm Z}}_{\alpha}^{2}}({\widetilde{H}},A)$ is standard. 
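Note that this replacement does not change the characteristic invariant: by the computation above and the definition of ${\tilde{\mu}}$, $$\displaystyle{\lambda}_{\tilde{\mu}}(m;g,s)={\alpha}_{g,s}(a(g{{}^{-1}}mg)^{*})a(m){\lambda}_{\mu}(m;g,s),\qquad{\tilde{\mu}}(m,n)=a(m)^{*}{\alpha}_{m}(a(n)^{*})\mu(m,n)a(mn),\quad m,n\in L,$$ so that, with the coboundary convention for characteristic cocycles used in this paper, $({\lambda}_{\tilde{\mu}},{\tilde{\mu}})$ and $({\lambda}_{\mu},\mu)$ differ only by the coboundary determined by the map $a$ and define the same class $\chi$ in ${\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$.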
Now we use the fact that the HJR map ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}$ pushes $({\lambda}_{\mu},\mu)$ to $c_{\mu}\in{{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)% \subset{{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}},A)$ which is cobounded by $f$ of Lemma 2.4 (iii): $$f({\tilde{p}},{\tilde{q}})={\alpha}_{p}(d_{\mu}(s;{\dot{\mathfrak{s}}}(q))^{*}% ){\mu\!_{\scriptscriptstyle H}}({\dot{\mathfrak{s}}}(p),{\dot{\mathfrak{s}}}(q% ))^{*}{\mu\!_{\scriptscriptstyle H}}({{\mathfrak{n}}_{L}}(p,q);{\dot{\mathfrak% {s}}}(pq))\in A.$$ We examine ${\partial}_{\theta}(f|_{Q})$ by making use of the relation between $d_{\mu}$ and ${\mu\!_{\scriptscriptstyle H}}$ in the formula (2.9): $$\displaystyle{\theta}_{s}(f(q,r))$$ $$\displaystyle f(q,r)^{*}$$ $$\displaystyle={\theta}_{s}\Big{(}{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(q),{\dot{\mathfrak{s}}}(r))^{*}{\mu\!_{\scriptscriptstyle H}}({% {\mathfrak{n}}_{L}}(q,r);{\dot{\mathfrak{s}}}(qr))\Big{)}$$ $$\displaystyle\hskip 36.135pt\times\{{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(q),{\dot{\mathfrak{s}}}(r))^{*}{\mu\!_{\scriptscriptstyle H}}({% {\mathfrak{n}}_{L}}(q,r);{\dot{\mathfrak{s}}}(qr))\}^{*}$$ $$\displaystyle={\theta}_{s}\Big{(}{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(q),{\dot{\mathfrak{s}}}(r))^{*}\Big{)}{\mu\!_{% \scriptscriptstyle H}}({\dot{\mathfrak{s}}}(q),{\dot{\mathfrak{s}}}(r))$$ $$\displaystyle\hskip 36.135pt\times{\theta}_{s}\Big{(}{\mu\!_{% \scriptscriptstyle H}}({{\mathfrak{n}}_{L}}(q,r);{\dot{\mathfrak{s}}}(qr)\Big{% )}{\mu\!_{\scriptscriptstyle H}}({{\mathfrak{n}}_{L}}(q,r);{\dot{\mathfrak{s}}% }(qr))^{*}$$ $$\displaystyle=d_{\mu}(s;{\dot{\mathfrak{s}}}(q)){\alpha}_{q}(d_{\mu}(s;{\dot{% \mathfrak{s}}}(r)))d_{\mu}(s;{\dot{\mathfrak{s}}}(q){\dot{\mathfrak{s}}}(r))^{*}$$ $$\displaystyle\hskip 36.135pt\times\{d_{\mu}(s;{{\mathfrak{n}}_{L}}(q,r))d_{\mu% }(s;{\dot{\mathfrak{s}}}(qr))d_{\mu}(s;{{\mathfrak{n}}_{L}}(q,r){\dot{% \mathfrak{s}}}(qr))^{*}\}^{*}$$ $$\displaystyle=d_{\mu}(s;{\dot{\mathfrak{s}}}(q)){\alpha}_{q}(d_{\mu}(s;{\dot{% \mathfrak{s}}}(r)))\{d_{\mu}(s;{{\mathfrak{n}}_{L}}(q,r))(d_{\mu}(s;{\dot{% \mathfrak{s}}}(qr)))\}^{*}.$$ Next we compare this with ${{\partial}_{Q}}f$ computed in the proof of Lemma 2.4. (iii). 
Substituting $p,q,r$ in place of ${\tilde{p}},{\tilde{q}},{\tilde{r}}$ in the last expression of ${\partial}_{\widetilde{Q}}f$, we obtain $$\displaystyle({{\partial}_{Q}}f)(p$$ $$\displaystyle,q,r)$$ $$\displaystyle={\mu\!_{\scriptscriptstyle H}}({\dot{\mathfrak{s}}}(p);{{% \mathfrak{n}}_{L}}(q,r)){\mu\!_{\scriptscriptstyle H}}({\dot{\mathfrak{s}}}(p)% {{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{\dot{\mathfrak{s}}}% (p))^{*}$$ $$\displaystyle\hskip 21.681pt\times{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{{% \mathfrak{n}}_{L}}(p,qr))\Big{\{}{\mu\!_{\scriptscriptstyle H}}({{\mathfrak{n}% }_{L}}(p,q);{{\mathfrak{n}}_{L}}(pq,r))\Big{\}}^{*}.$$ Combining the above two coboundary calculations, we obtain: $$\displaystyle({\partial}_{\widetilde{Q}}(f|_{Q}))({\tilde{p}}$$ $$\displaystyle,{\tilde{q}},{\tilde{r}})$$ $$\displaystyle={\alpha}_{p}({\theta}_{s}(f(q,r))f(q,r)^{*})({{\partial}_{Q}}(f|% _{Q}))(p,q,r)$$ $$\displaystyle={\alpha}_{p}\Big{(}d_{\mu}(s;{\dot{\mathfrak{s}}}(q)){\alpha}_{q% }(d_{\mu}(s;{\dot{\mathfrak{s}}}(r)))$$ $$\displaystyle\hskip 36.135pt\times\{d_{\mu}(s;{{\mathfrak{n}}_{L}}(q,r))d_{\mu% }(s;{\dot{\mathfrak{s}}}(qr))\}^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(p);{{\mathfrak{n}}_{L}}(q,r)){\mu\!_{\scriptscriptstyle H}}({% \dot{\mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}% };{\dot{\mathfrak{s}}}(p))^{*}$$ $$\displaystyle\hskip 36.135pt\times{\mu\!_{\scriptscriptstyle H}}({\dot{% \mathfrak{s}}}(p){{\mathfrak{n}}_{L}}(q,r){\dot{\mathfrak{s}}}(p){{}^{-1}};{{% \mathfrak{n}}_{L}}(p,qr))$$ $$\displaystyle\hskip 36.135pt\times\Big{\{}{\mu\!_{\scriptscriptstyle H}}({{% \mathfrak{n}}_{L}}(p,q);{{\mathfrak{n}}_{L}}(pq,r))\Big{\}}^{*}.$$ Comparing this with $c_{\mu}$, we conclude $$\displaystyle({\partial}_{\widetilde{Q}}(f|_{Q}))({\tilde{p}}$$ $$\displaystyle,{\tilde{q}},{\tilde{r}})={\alpha}_{p}\Big{(}d_{\mu}(s;{\dot{% \mathfrak{s}}}(q)){\alpha}_{q}(d_{\mu}(s;{\dot{\mathfrak{s}}}(r)))d_{\mu}(s;{% \dot{\mathfrak{s}}}(qr))^{*}\Big{)}c_{\mu}({\tilde{p}},{\tilde{q}},{\tilde{r}}).$$ Now we use the assumption that ${\delta}({\lambda},\mu)=c_{\mu}\in{{\text{\rm B}}_{{\alpha},\text{\rm s}}^{3}}% ({\widetilde{Q}},A)$, which means the existence of a new cochain $\xi\in{\text{\rm C}}_{\alpha}^{2}(Q,A)$ such that $$\displaystyle c_{\mu}({\tilde{p}}$$ $$\displaystyle,{\tilde{q}},{\tilde{r}})={\alpha}_{\tilde{p}}(\xi(q,r))\xi(p,qr)% \{\xi(p,q)\xi(pq,r)\}^{*}.$$ Therefore, we get $$\displaystyle{\alpha}_{\tilde{p}}(f($$ $$\displaystyle q,r))f(p,qr)\{f(p,q)f(pq,r)\}^{*}$$ $$\displaystyle={\alpha}_{p}\Big{(}d_{\mu}(s;{\dot{\mathfrak{s}}}(q)){\alpha}_{q% }(d_{\mu}(s;{\dot{\mathfrak{s}}}(r)))d_{\mu}(s;{\dot{\mathfrak{s}}}(qr))^{*}% \Big{)}$$ $$\displaystyle\hskip 14.454pt\times{\alpha}_{\tilde{p}}(\xi(q,r))\xi(p,qr)\{\xi% (p,q)\xi(pq,r)\}^{*},$$ equivalently $$\displaystyle{\alpha}_{\tilde{p}}((\xi^{*}f)($$ $$\displaystyle q,r))(\xi^{*}f)(p,qr)\{(\xi^{*}f)(p,q)(\xi^{*}f)(pq,r)\}^{*}$$ $$\displaystyle={\alpha}_{p}\Big{(}d_{\mu}(s;{\dot{\mathfrak{s}}}(q)){\alpha}_{q% }(d_{\mu}(s;{\dot{\mathfrak{s}}}(r)))d_{\mu}(s;{\dot{\mathfrak{s}}}(qr))^{*}% \Big{)}.$$ Setting $s=0$, we obtain ${{\partial}_{Q}}(\xi^{*}f|_{Q})=1$. 
With $p=1$, we get $$\displaystyle{\theta}_{s}((\xi^{*}f)($$ $$\displaystyle q,r))(\xi^{*}f)(q,r))^{*}$$ 2.162.162.16 $$\displaystyle=d_{\mu}(s;{\dot{\mathfrak{s}}}(q)){\alpha}_{q}(d_{\mu}(s;{\dot{% \mathfrak{s}}}(r)))d_{\mu}(s;{\dot{\mathfrak{s}}}(qr))^{*}.$$ We now use the formula (2.9), which states that $d_{\mu}$ gives rise to an element $[d_{\mu}]\in{\text{\rm Z}}_{\alpha}^{1}(H,{{\text{\rm H}}_{\theta}^{1}})$. The assumption $\nu_{\chi}=1$ entails that the cocycle $[d_{\mu}]$ factors through $Q$, i.e., there exists a map $a\colon\ (m,h)\in L\times H\mapsto a(m,h)\in A$ such that $$\displaystyle d_{\mu}(s;mh)$$ $$\displaystyle={\theta}_{s}(a(m,h))a(m,h)^{*}d_{\mu}(s;h),\quad m\in L,h\in H.$$ 2.172.172.17 We write $H$ in term of the cross-section ${\dot{\mathfrak{s}}}$ and the cocycle ${{\mathfrak{n}}_{L}}$: $H=L\rtimes_{{\mathfrak{n}}_{L}}Q$. Writing $g={\text{\rm m}}_{L}(g){\dot{\mathfrak{s}}}({\dot{\pi}}(g)),h\in H,$ with $$b(g)=a({\text{\rm m}}_{L}(g),{\dot{\mathfrak{s}}}({\dot{\pi}}(g))\in A,$$ 2.182.182.18 we obtain $$\displaystyle d_{\mu}(s$$ $$\displaystyle;g)={\theta}_{s}(b(g))b(g)^{*}d_{\mu}(s;{\dot{\mathfrak{s}}}({% \dot{\pi}}(g))),\quad g\in H.$$ 2.192.192.19 Then the right hand side of the formula (2.9) becomes: $$\displaystyle d_{\mu}(s$$ $$\displaystyle;g){\alpha}_{g}(d_{\mu}(s;h))d_{\mu}(s;gh)^{*}$$ $$\displaystyle={\theta}_{s}(b(g))b(g)^{*}d_{\mu}(s;{\dot{\mathfrak{s}}}({\dot{% \pi}}(g))){\alpha}_{g}\Big{(}{\theta}_{s}(b(h))b(h)^{*}d_{\mu}(s;{\dot{% \mathfrak{s}}}({\dot{\pi}}(h)))\Big{)}$$ $$\displaystyle\hskip 36.135pt\times\Big{(}{\theta}_{s}(b(gh))b(gh)^{*}d_{\mu}(s% ;{\dot{\mathfrak{s}}}({\dot{\pi}}(gh)))\Big{)}^{*}$$ $$\displaystyle={\theta}_{s}(b(g))b(g)^{*}{\alpha}_{g}\Big{(}{\theta}_{s}(b(h))b% (h)^{*}\Big{)}\Big{(}{\theta}_{s}(b(gh)^{*})b(gh)\Big{)}$$ $$\displaystyle\hskip 36.135pt\times{\theta}_{s}\Big{(}(\xi^{*}f)({\dot{\pi}}(g)% ,{\dot{\pi}}(h))\Big{)}(\xi^{*}f)({\dot{\pi}}(g),{\dot{\pi}}(h)))^{*}$$ $$\displaystyle={\theta}_{s}\Big{(}b(g){\alpha}_{g}(b(h))(\xi^{*}f)({\dot{\pi}}(% g),{\dot{\pi}}(h))b(gh)^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times\Big{(}b(g){\alpha}_{g}(b(h))(\xi^{*}f)({% \dot{\pi}}(g),{\dot{\pi}}(h))b(gh)^{*}\Big{)}^{*}.$$ Equating this to the left hand side of (2.9), we get $$\left.\begin{aligned} \displaystyle{\theta}_{s}({\mu\!_{\scriptscriptstyle H}}% (g&\displaystyle,h)){\mu\!_{\scriptscriptstyle H}}(g,h)^{*}\\ &\displaystyle={\theta}_{s}\Big{(}b(g){\alpha}_{g}(b(h))(\xi^{*}f)({\dot{\pi}}% (g),{\dot{\pi}}(h))b(gh)^{*}\Big{)}\\ &\displaystyle\hskip 36.135pt\times\Big{(}b(g){\alpha}_{g}(b(h))(\xi^{*}f)({% \dot{\pi}}(g),{\dot{\pi}}(h))b(gh)^{*}\Big{)}^{*}\end{aligned}\right\}\quad g,% h\in H.$$ Hence $\mu_{0}={\dot{\pi}}^{*}(\xi f^{*})({\partial}_{H}b^{*}){\mu\!_{% \scriptscriptstyle H}}\in{{\text{\rm Z}}^{2}}(H,{\mathbb{T}})$. Finally we compare $${\text{\rm Res}}(\mu_{0})=({\lambda}_{\mu_{0}},\mu_{0}|_{L})$$ and $({\lambda}_{\mu},\mu)$. First, we compare the $\mu$-components of the characteristic cocycle and obtain $$\mu_{0}(m,n)=b(m)^{*}b(n)^{*}b(mn){\mu\!_{\scriptscriptstyle H}}(m,n),\quad m,% n\in L,$$ since $(\xi f^{*})({\dot{\pi}}(m),{\dot{\pi}}(n))=1$. 
Second, we also get $$\displaystyle{\lambda}_{\mu_{0}}$$ $$\displaystyle(m;g,s)={\alpha}_{g}(d_{\mu_{0}}(s;g{{}^{-1}}mg))\mu_{0}(g;g{{}^{% -1}}mg)\mu_{0}(m;g)^{*}$$ $$\displaystyle=\mu_{0}(g;g{{}^{-1}}mg)\mu_{0}(m;g)^{*}$$ $$\displaystyle=({\partial}_{H}b^{*})(g;g{{}^{-1}}mg){\mu\!_{\scriptscriptstyle H% }}(g;g{{}^{-1}}mg)({\partial}_{H}b^{*})(m;g)^{*}{\mu\!_{\scriptscriptstyle H}}% (m;g)^{*}$$ $$\displaystyle=b(g)^{*}{\alpha}_{g}(b(g{{}^{-1}}mg)^{*})b(mg)b(m)b(g)b(mg)^{*}{% \mu\!_{\scriptscriptstyle H}}(g;g{{}^{-1}}mg){\mu\!_{\scriptscriptstyle H}}(m;% g)^{*}$$ $$\displaystyle={\alpha}_{g}(b(g{{}^{-1}}mg)^{*})b(m){\mu\!_{\scriptscriptstyle H% }}(g;g{{}^{-1}}mg){\mu\!_{\scriptscriptstyle H}}(m;g)^{*}$$ $$\displaystyle={\alpha}_{g}(b(g{{}^{-1}}mg)^{*})b(m){\lambda}_{\mu}(m;g,s).$$ Therefore we conclude that $${\text{\rm Res}}([\mu_{0}])=[{\lambda}_{\mu},\mu]=\chi\in{\Lambda}_{\alpha}({% \widetilde{H}},L,M,A).$$ This completes the proof of ${\text{\rm Ker}}({\delta})\subset\text{\rm Im}({\text{\rm Res}})$ and so ${\text{\rm Ker}}({\delta})=\text{\rm Im}({\text{\rm Res}})$. Lemma 2.11 There is a natural commutative diagram of exact sequences: $$\eightpoint\begin{CD}{\Lambda}_{\widetilde{{\alpha}}}({\widetilde{H}},L,M,A)@>% {{\delta}}>{}>{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*% _{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R% }},A)))@>{{\text{\rm Inf}}={\text{\rm inf}}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\partial}}>{}>{{\text{\rm H}}_{\alpha}^{3}}(H,{% \mathbb{T}})\\ @V{}V{i^{*}_{L,M}}V@V{{\partial}}V{}V\Big{\|}\\ {\Lambda}(H,M,{\mathbb{T}})@>{{{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}}% >{}>{{\text{\rm H}}^{3}}(G,{\mathbb{T}})@>{{\text{\rm inf}}}>{}>{{\text{\rm H}% }^{3}}(H,{\mathbb{T}})\end{CD}$$ Demonstration Proof Map $\boldsymbol{{\partial}}\!:$ Fix a cross-section ${{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}\!:{{\text{\rm H}}_{% \theta}^{1}}({\mathbb{R}},A)\mapsto{{\text{\rm Z}}_{\theta}^{1}}({\mathbb{R}},A)$ and set $${\zeta}_{\nu}(s;n)=({{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu(% n))_{s}\in A,\quad s\in{\mathbb{R}},n\in N,\ \nu\in{\text{\rm Hom}}_{G}(N,{{% \text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)).$$ Choose $([c],\nu)\in{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{% \mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}}% ,A)))$ so that $${\partial}_{2}[c]=\nu\cup{\mathfrak{n}}_{\mathfrak{s}},$$ i.e., $$[d(\ \cdot\ ;q,r)]=\nu({\mathfrak{n}}_{\mathfrak{s}}(q,r))\qquad\text{in}\quad% {{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A),\quad q,r\in Q.$$ Hence there exists $f\in{{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ such that $$d_{c}(s;q,r)={\theta}_{s}(f(q,r))f(q,r)^{*}{\zeta}_{\nu}(s;{\mathfrak{n}}(q,r)% ),\quad q,r\in Q,s\in{\mathbb{R}},$$ 2.202.202.20 and therefore, $$\displaystyle c({\tilde{p}},{\tilde{q}},{\tilde{r}})$$ $$\displaystyle=c_{Q}(p,q,r){\alpha}_{p}\Big{(}{\theta}_{s}(f(q,r))f(q,r)^{*}{% \zeta}_{\nu}(s;{\mathfrak{n}}(q,r))\Big{)},$$ 2.212.212.21 $$\displaystyle\hskip 108.405pt{\tilde{p}}=(p,s),{\tilde{q}},{\tilde{r}}\in{% \widetilde{Q}}.$$ The necessary condition for $c\in{{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ in (2.12) gives the following: $$\displaystyle{\theta}_{s}(c_{Q}$$ $$\displaystyle(p,q,r))c_{Q}(p,q,r)^{*}$$ 2.222.222.22 $$\displaystyle={\alpha}_{p}\Big{(}{\theta}_{s}(f(q,r))f(q,r)^{*}{\zeta}_{\nu}(s% ;{\mathfrak{n}}(q,r))\Big{)}$$ $$\displaystyle\hskip 
36.135pt\times{\theta}_{s}(f(p,qr))f(p,qr)^{*}{\zeta}_{\nu% }(s;{\mathfrak{n}}(p,qr))$$ $$\displaystyle\hskip 36.135pt\times\{{\theta}_{s}(f(p,q))f(p,q)^{*}{\zeta}_{\nu% }(s;{\mathfrak{n}}(p,q))$$ $$\displaystyle\hskip 36.135pt\times{\theta}_{s}(f(pq,r))f(pq,r)^{*}{\zeta}_{\nu% }(s;{\mathfrak{n}}(pq,r))\}^{*}$$ for each $p,q,r\in Q$ and $s\in{\mathbb{R}}$. Now we are going to consider the pull back ${\tilde{\pi}}^{*}(c)\in{{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{G}},A)$ with $${\tilde{\pi}}(g,s)=(\pi(g),s),\quad\tilde{g}=(g,s)\in{\widetilde{G}}=G\times{% \mathbb{R}},$$ But we first check the pull back ${\tilde{\pi}}^{*}(\nu\cup{{\mathfrak{n}}_{N}})\in{{\text{\rm Z}}_{\alpha}^{2}}% (G,{{\text{\rm H}}_{\theta}^{1}})$. To this end, with $\text{m}_{N}(g)=g{\mathfrak{s}}(\pi(g)){{}^{-1}}\in N$ and ${\text{\rm n}}_{N}(g)={\mathfrak{s}}(\pi(g))g{{}^{-1}}\in N$, we observe first $$\displaystyle{{\mathfrak{n}}_{N}}(\pi(g)$$ $$\displaystyle,\pi(h))={\mathfrak{s}}(\pi(g)){\mathfrak{s}}(\pi(h)){\mathfrak{s% }}(\pi(gh)){{}^{-1}},\quad g,h\in G,$$ $$\displaystyle={\text{\rm n}}_{N}(g)g{\text{\rm n}}_{N}(h)h\{{\text{\rm n}}_{N}% (gh)gh\}{{}^{-1}}$$ $$\displaystyle={\text{\rm n}}_{N}(g)g{\text{\rm n}}_{N}(h)g{{}^{-1}}{\text{\rm n% }}_{N}(gh){{}^{-1}};$$ and that $$\begin{aligned} \displaystyle\nu({{\mathfrak{n}}_{N}}(&\displaystyle\pi(g),\pi% (h)))\\ &\displaystyle=\nu({\text{\rm n}}_{N}(g)){\alpha}_{g}(\nu({\text{\rm n}}_{N}(h% ))\nu({\text{\rm n}}_{N}(gh)){{}^{-1}}\ \text{in}\ {{\text{\rm H}}_{\theta}^{1% }},\end{aligned}\quad g,h\in G.$$ Hence we can choose $a(g,h)\in A,g,h\in G,$ such that $$\displaystyle{\zeta}_{\nu}(s;{{\mathfrak{n}}_{N}}(\pi(g),\pi(h)))$$ $$\displaystyle={\theta}_{s}(a(g,h))a(g,h)^{*}{\zeta}_{\nu}(s;{\text{\rm n}}_{N}% (g))$$ 2.232.232.23 $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}({\zeta}_{\nu}(s;{\text{\rm n}}_% {N}(h))){\zeta}_{\nu}(s;{\text{\rm n}}_{N}(gh))^{*}$$ We apply now this to the pull back of the above (2.22) and obtain for each $g,h,k\in G$: $$\displaystyle{\theta}_{s}$$ $$\displaystyle(c_{Q}(\pi(g),\pi(h),\pi(k)))c_{Q}(\pi(g),\pi(h),\pi(k))^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}(\pi^{*}(f)(h,k))\pi^{*}(f)(h,k)^% {*}{\theta}_{s}(a(h,k))a(h,k)^{*}{\zeta}_{\nu}(s;{\text{\rm n}}_{N}(h))$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{h}({\zeta}_{\nu}(s;{\text{\rm n}}_% {N}(k))){\zeta}_{\nu}(s;{\text{\rm n}}_{N}(hk))^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times{\theta}_{s}(\pi^{*}(f)(g,hk))\pi^{*}(f)(g,% hk)^{*}{\theta}_{s}(a(g,hk))a(g,hk)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\zeta}_{\nu}(s;{\text{\rm n}}_{N}(g)){% \alpha}_{g}({\zeta}_{\nu}(s;{\text{\rm n}}_{N}(hk))){\zeta}_{\nu}(s;{\text{\rm n% }}_{N}(ghk))^{*}$$ $$\displaystyle\hskip 36.135pt\times\Big{\{}{\theta}_{s}(\pi^{*}(f)(g,h))\pi^{*}% (f)(g,h)^{*}{\theta}_{s}(a(g,h))a(g,h)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\zeta}_{\nu}(s;{\text{\rm n}}_{N}(g)){% \alpha}_{g}({\zeta}_{\nu}(s;{\text{\rm n}}_{N}(h))){\zeta}_{\nu}(s;{\text{\rm n% }}_{N}(gh))^{*}$$ $$\displaystyle\hskip 36.135pt\times{\theta}_{s}(\pi^{*}(f)(gh,k))\pi^{*}(f)(gh,% k)^{*}{\theta}_{s}(a(gh,k))a(gh,k)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\zeta}_{\nu}(s;{\text{\rm n}}_{N}(gh)){% \alpha}_{gh}({\zeta}_{\nu}(s;{\text{\rm n}}_{N}(k))){\zeta}_{\nu}(s;{\text{\rm n% }}_{N}(ghk))^{*}\Big{\}}^{*}$$ $$\displaystyle={\alpha}_{g}\Big{(}{\theta}_{s}(\pi^{*}(f)(h,k))\pi^{*}(f)(h,k)^% {*}{\theta}_{s}(a(h,k))a(h,k)^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times{\theta}_{s}(\pi^{*}(f)(g,hk))\pi^{*}(f)(g,% 
hk)^{*}{\theta}_{s}(a(g,hk))a(g,hk)^{*})$$ $$\displaystyle\hskip 36.135pt\times\Big{\{}{\theta}_{s}(\pi^{*}(f)(g,h))\pi^{*}% (f)(g,h)^{*}{\theta}_{s}(a(g,h))a(g,h)^{*})$$ $$\displaystyle\hskip 36.135pt\times{\theta}_{s}\Big{(}\pi^{*}(f)(gh,k)\Big{)}% \pi^{*}(f)(gh,k)^{*}{\theta}_{s}(a(gh,k))a(gh,k)^{*}\Big{\}}^{*}$$ and hence, for each each $g,h,k\in G$, $$\displaystyle{\theta}_{s}\Big{(}c_{Q}($$ $$\displaystyle\pi(g),\pi(h),\pi(k)){\partial}_{G}(\pi^{*}(f)^{*}a^{*})(g,h,k)% \Big{)}$$ $$\displaystyle=c_{Q}(\pi(g),\pi(h),\pi(k)){\partial}_{G}(\pi^{*}(f)^{*}a^{*})(g% ,h,k).$$ The ergodicity of the flow ${\theta}$ yields that $$\pi^{*}(c_{Q}){\partial}_{G}(\pi^{*}(f)a)^{*}\in{{\text{\rm Z}}^{3}}(G,{% \mathbb{T}}).$$ Now we change the cocycle $c$ to $c^{\prime}$ within the cohomology class, i.e., $c^{\prime}=({\partial}_{\widetilde{Q}}b)c$ with $b\in{{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ which gives: $$\displaystyle d^{\prime}(s;q,r)={\theta}_{s}(b(q,r))b(q,r)^{*}d(s;q,r),\quad s% \in{\mathbb{R}},q,r\in Q.$$ We also change the cross-section ${{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}:{{\text{\rm H}}_{\theta}% ^{1}}\mapsto{{\text{\rm Z}}_{\theta}^{1}}$ to ${\mathfrak{s}}_{\text{\rm Z}}^{\prime}:{{\text{\rm H}}_{\theta}^{1}}\mapsto{{% \text{\rm Z}}_{\theta}^{1}}$. Then there exists a map $n\in N\mapsto e(n)\in A$ such that $${\zeta}^{\prime}_{\nu}(s;n)={\mathfrak{s}}_{\text{\rm Z}}^{\prime}(\nu)_{s}={% \theta}_{s}(e(n))e(n)^{*}{\zeta}_{\nu}(s;n),\quad n\in N.$$ Thus we obtain, for each $s\in{\mathbb{R}},g,h\in G$, $$\displaystyle d^{\prime}$$ $$\displaystyle(s;q,r)={\theta}_{s}\Big{(}(f(q,r)e({{\mathfrak{n}}_{N}}(q,r))^{*% }b(q,r)\Big{)}$$ $$\displaystyle\hskip 36.135ptf(q,r)^{*}b(q,r)^{*}e({{\mathfrak{n}}_{N}}(q,r)){% \zeta}_{\nu}^{\prime}(s;{{\mathfrak{n}}_{N}}(q,r));$$ $$\displaystyle{\zeta}_{\nu}^{\prime}$$ $$\displaystyle(s;{{\mathfrak{n}}_{N}}(\pi(g),\pi(h)))$$ $$\displaystyle={\theta}_{s}(e({{\mathfrak{n}}_{N}}(\pi(g),\pi(h))))e({{% \mathfrak{n}}_{N}}(\pi(g),\pi(h)))^{*}{\zeta}_{\nu}(s;{{\mathfrak{n}}_{N}}(\pi% (g),\pi(h)))$$ $$\displaystyle={\theta}_{s}\Big{(}e({{\mathfrak{n}}_{N}}(\pi(g),\pi(h)))a(g,h)e% ({\text{\rm n}}_{N}(g))^{*}{\alpha}_{g}(\text{e}(m(h))^{*})e({\text{\rm n}}_{N% }(gh))\Big{)}$$ $$\displaystyle\hskip 36.135pt\times e({{\mathfrak{n}}_{N}}(\pi(g),\pi(h)))^{*}a% (g,h)^{*}e({\text{\rm n}}_{N}(g)^{*})$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(e({\text{\rm n}}_{N}(h))^{*})e(% {\text{\rm n}}_{N}(gh))$$ $$\displaystyle\hskip 36.135pt\times{\zeta}_{\nu}^{\prime}(s;{\text{\rm n}}_{N}(% g)){\alpha}_{g}({\zeta}_{\nu}^{\prime}(s;{\text{\rm n}}_{N}(h))){\zeta}_{\nu}^% {\prime}(s;{\text{\rm n}}_{N}(gh))^{*};$$ $$\displaystyle c_{Q}^{\prime}$$ $$\displaystyle(p,q,r)=({\partial}_{\widetilde{Q}}b)(p,q,r)c_{Q}(p,q,r)$$ $$\displaystyle=({\partial}_{Q}b)(p,q,r)c_{Q}(p,q,r),\quad p,q,r\in Q.$$ Therefore, the cochains $f$ and $a$ are transformed to the following $f^{\prime}$ and $a^{\prime}$: $$\displaystyle f^{\prime}(p,q)$$ $$\displaystyle=f(p,q)e({{\mathfrak{n}}_{N}}(p,q))^{*}b(p,q),\quad p,q\in Q;$$ $$\displaystyle a^{\prime}(g,h)$$ $$\displaystyle=e({{\mathfrak{n}}_{N}}(\pi(g),\pi(h)))a(g,h)e({\text{\rm n}}_{N}% (g))^{*}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(e({\text{\rm n}}_{N}(h))^{*})e(% {\text{\rm n}}_{N}(gh)),\quad g,h\in G.$$ Thus we get $$\displaystyle\pi^{*}(c_{Q}^{\prime}){\partial}_{G}(\pi^{*}(f^{\prime})a^{% \prime})^{*}$$ $$\displaystyle=\pi^{*}(c_{Q})\pi^{*}({\partial}_{Q}b){\partial}_{G}\Big{(}\pi^{% 
*}(fb)\pi^{*}(e{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{{\mathfrak{n}% }_{N}})a\Big{)}^{*}$$ $$\displaystyle\hskip 72.27pt\times{\partial}_{G}(\pi^{*}(e{\lower-1.29pt\hbox{{% $\scriptscriptstyle\circ$}}}{{\mathfrak{n}}_{N}})){\partial}_{G}^{2}(e{\lower-% 1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm n}}_{N})^{*})$$ $$\displaystyle=\pi^{*}(c_{Q}){\partial}_{G}(\pi^{*}(f)a)^{*}.$$ Finally, in the choice of $f$ and $a$ we have pecisely the ambiguity of ${{\text{\rm C}}^{2}}(Q,{\mathbb{T}})$ and ${{\text{\rm C}}^{2}}(G,{\mathbb{T}})$ which result the change on $\pi^{*}(c_{Q}){\partial}_{G}(\pi^{*}(f)a)^{*}\in{{\text{\rm Z}}^{3}}(G,{% \mathbb{T}})$ by ${{\text{\rm B}}^{3}}(G,{\mathbb{T}})$. Thus we have a well-defined homomorphism $${\partial}:{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{% \mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}}% ,A))\mapsto{{\text{\rm H}}^{3}}(G,{\mathbb{T}}).$$ which depends on the choice of the section ${\mathfrak{s}}\,{\!:\ }Q\longmapsto G.$ Now fix $\chi=[{\lambda},\mu]\in{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$ and set $$([c^{{\lambda},\mu}],\nu_{\chi})={\delta}(\chi)\in{{\text{\rm H}}_{{\alpha},% \text{\rm s}}^{3}}({\widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{% \text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)).$$ Associated with $({\lambda},\mu)$ is an ${\widetilde{H}}$-equivariant exact square: $$\begin{CD}111\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\mathbb{T}}@>{}>{}>A@>{}>{}>{\text{\rm B}}@>{}>{}>1\\ @V{}V{}V@V{i}V{}V@V{}V{}V\\ 1@>{}>{}>V@>{}>{}>E@>{}>{}>F@>{}>{}>1\\ @V{}V{}V@V{j}V{\big{\uparrow}{{\mathfrak{s}}_{j}}}V@V{}V{}V\\ 1@>{}>{}>M@>{}>{}>L@>{}>{}>N@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 111\end{CD}$$ The far left column exact sequence corresponds to the restriction $$({\lambda},\mu)|_{M\times H}=i^{*}_{L,M}({\lambda},\mu)\in{\Lambda}(H,M,{% \mathbb{T}}).$$ To go further, we need the following: Sublemma 2.12 In the above context, we have $${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}(i^{*}_{L,M}(\chi))={\partial}(% {\delta}(\chi))\in{{\text{\rm H}}^{3}}(G,{\mathbb{T}}),\quad\chi\in{\Lambda}_{% \alpha}({\widetilde{H}},L,M,A).$$ Demonstration Proof First we arrange the cross-sections in the following way: $$\displaystyle\hskip 36.135pt{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(n{% \mathfrak{s}}(p))={{\mathfrak{s}}\!_{\scriptscriptstyle H}}(n){{\mathfrak{s}}% \!_{\scriptscriptstyle H}}({\mathfrak{s}}(p)),\quad n\in N,p\in Q;$$ $$\displaystyle{{\mathfrak{s}}_{j}}(m{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(n% ))={{\mathfrak{s}}_{j}}(m){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(n)),\quad m\in M,n\in N.$$ We further arrange the cross-sections ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ on $N$, ${{\mathfrak{s}}_{\dot{\partial}}}$ and ${{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}$ on H, so that they satisfy the following composition rules: $$\begin{CD}E@>{{\partial}_{\theta}}>{}>{\text{\rm Z}}\\ @V{j}V{\big{\uparrow}{{\mathfrak{s}}_{j}}}V@V{}V{\big{\uparrow}{{\mathfrak{s}}% _{\!\scriptscriptstyle{\text{\rm Z}}}}}V\\ L@>{\dot{\partial}}>{\underset{{\mathfrak{s}}_{\dot{\partial}}}\to{% \longleftarrow}}>{\text{\rm H}}\\ @V{{\pi\!_{\scriptscriptstyle G}}}V{\big{\uparrow}{{\mathfrak{s}}\!_{% \scriptscriptstyle H}}}V@A{\nu}A{\big{\downarrow}{\mathfrak{s}}_{\nu}}A\\ N@=N\end{CD}\hskip 21.681pt\begin{aligned} &\displaystyle{{\mathfrak{s}}_{\!% \scriptscriptstyle{\text{\rm Z}}}}={\partial}_{\theta}{\lower-1.29pt\hbox{{$% 
\scriptscriptstyle\circ$}}}{{\mathfrak{s}}_{j}}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{{\mathfrak{s}}_{\dot{\partial}}}:{\text{\rm H}}% \mapsto{\text{\rm Z}};\\ &\displaystyle{\partial}_{\theta}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ% $}}}{{\mathfrak{s}}_{j}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{{% \mathfrak{s}}\!_{\scriptscriptstyle H}}={\partial}_{\theta}{\lower-1.29pt\hbox% {{$\scriptscriptstyle\circ$}}}{{\mathfrak{s}}_{j}}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{{\mathfrak{s}}_{\dot{\partial}}}{\lower-1.29pt% \hbox{{$\scriptscriptstyle\circ$}}}\nu={{\mathfrak{s}}_{\!\scriptscriptstyle{% \text{\rm Z}}}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}\nu:N\mapsto{% \text{\rm Z}};\\ &\displaystyle{{\partial}_{\theta}}{\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{{\mathfrak{s}}_{j}}={{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z% }}}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}\nu{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\pi\!_{\scriptscriptstyle G}}:L\mapsto{\text{\rm Z% }}.\end{aligned}$$ As ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}\neq{\mathfrak{s}}_{\dot{\partial}}{% \lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}\nu$ if $M\neq K({\lambda},\mu)$, i.e., if $\nu$ is not injective, the second composition rule needs to be justified. For each $n\in N$, set $m={\mathfrak{s}}_{\dot{\partial}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ% $}}}\nu(n){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(n){{}^{-1}}\in M$ so that $m{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(n)={\mathfrak{s}}_{\dot{\partial}}(% \nu(n))$. Then we have $$\displaystyle{{\mathfrak{s}}_{j}}({\mathfrak{s}}_{\dot{\partial}}$$ $$\displaystyle(\nu(n)))={{\mathfrak{s}}_{j}}(m){{\mathfrak{s}}_{j}}({{\mathfrak% {s}}\!_{\scriptscriptstyle H}}(n));$$ $$\displaystyle{{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu(n))$$ $$\displaystyle={{\partial}_{\theta}}({{\mathfrak{s}}_{j}}({\mathfrak{s}}_{\dot{% \partial}}(\nu(n)))={{\partial}_{\theta}}({{\mathfrak{s}}_{j}}(m){{\mathfrak{s% }}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}(n)))$$ $$\displaystyle={{\partial}_{\theta}}({{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(n))),$$ which justfies the second composition rule. For each $\ell\in L$, we write $$\ell={{\text{\rm m}}_{M}}(\ell){{\mathfrak{s}}_{\dot{\partial}}}(\dot{\partial% }(\ell))={{\text{\rm m}}_{M}}(\ell){{\mathfrak{s}}_{\dot{\partial}}}(\nu({\pi% \!_{\scriptscriptstyle G}}(\ell)))$$ and obtain $$\displaystyle{{\mathfrak{s}}_{j}}(\ell)$$ $$\displaystyle={{\mathfrak{s}}_{j}}({{\text{\rm m}}_{M}}(\ell)){{\mathfrak{s}}_% {j}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{{\mathfrak{s}}_{\dot{% \partial}}}(\nu({\pi\!_{\scriptscriptstyle G}}(\ell))),\quad\ell\in L;$$ $$\displaystyle{\partial}_{\theta}({{\mathfrak{s}}_{j}}(\ell))$$ $$\displaystyle={\partial}_{\theta}({{\mathfrak{s}}_{j}}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{{\mathfrak{s}}_{\dot{\partial}}}(\nu({\pi\!_{% \scriptscriptstyle G}}(\ell)))={{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z% }}}}(\nu({\pi\!_{\scriptscriptstyle G}}(\ell))).$$ Each $g\in G$ is uniquely written in the form: $$g={{\text{\rm m}}_{N}}(g){\mathfrak{s}}(\pi(g)),\quad g\in G,$$ with ${{\text{\rm m}}_{N}}(g)\in N$. 
Therefore we have $${{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)={{\mathfrak{s}}\!_{% \scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g)){\dot{\mathfrak{s}}}(\pi(g)),g% \in G,{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g))\in L.$$ Then the product $gh$ of each pair $g,h\in G$ gives: $$\displaystyle{{\text{\rm m}}_{N}}(gh)$$ $$\displaystyle{\mathfrak{s}}(\pi(gh))=gh$$ $$\displaystyle={{\text{\rm m}}_{N}}(g){\mathfrak{s}}(\pi(g)){{\text{\rm m}}_{N}% }(h){\mathfrak{s}}(\pi(h))$$ $$\displaystyle={{\text{\rm m}}_{N}}(g){\mathfrak{s}}(\pi(g)){{\text{\rm m}}_{N}% }(h){\mathfrak{s}}(\pi(g)){{}^{-1}}{\mathfrak{s}}(\pi(g)){\mathfrak{s}}(\pi(h))$$ $$\displaystyle={{\text{\rm m}}_{N}}(g){\mathfrak{s}}(\pi(g)){{\text{\rm m}}_{N}% }(h){\mathfrak{s}}(\pi(g)){{}^{-1}}{{\mathfrak{n}}_{N}}(\pi(g),\pi(h)){% \mathfrak{s}}(\pi(gh));$$ $$\displaystyle 1$$ $$\displaystyle={{\text{\rm m}}_{N}}(g){\mathfrak{s}}(\pi(g)){{\text{\rm m}}_{N}% }(h){\mathfrak{s}}(\pi(g)){{}^{-1}}{{\mathfrak{n}}_{N}}(\pi(g),\pi(h)){{\text{% \rm m}}_{N}}(gh){{}^{-1}}.$$ We observe the following relation between the cocycles ${{\mathfrak{n}}_{L}}$ and ${{\mathfrak{n}}_{N}}$. $$\displaystyle{\pi\!_{\scriptscriptstyle G}}({{\mathfrak{n}}_{L}}(p$$ $$\displaystyle,q))={\pi\!_{\scriptscriptstyle G}}({\dot{\mathfrak{s}}}(p){\dot{% \mathfrak{s}}}(q){\dot{\mathfrak{s}}}(pq)){{}^{-1}})$$ $$\displaystyle={\mathfrak{s}}(p){\mathfrak{s}}(q){\mathfrak{s}}(pq){{}^{-1}}={{% \mathfrak{n}}_{N}}(p,q),\quad p,q\in Q.$$ We then further compute: $$\displaystyle{{\mathfrak{n}}_{M}}(g,h)$$ $$\displaystyle={{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g){{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(h){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(gh){{}^{-1}% },\quad g,h\in G,$$ $$\displaystyle={{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g% ){\mathfrak{s}}(\pi((g))){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m% }}_{N}}(h){\mathfrak{s}}(\pi((h)))$$ $$\displaystyle\hskip 36.135pt\times{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(gh){\mathfrak{s}}(\pi((gh))){{}^{-1}}$$ $$\displaystyle={{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g% )){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({\mathfrak{s}}(\pi((g)))){{% \mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(h)){{\mathfrak{s}% }\!_{\scriptscriptstyle H}}({\mathfrak{s}}(\pi(h))$$ $$\displaystyle\hskip 36.135pt\times\{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(% {{\text{\rm m}}_{N}}(gh)){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({\mathfrak{% s}}(\pi(gh)))\}{{}^{-1}}$$ $$\displaystyle={{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g% )){\dot{\mathfrak{s}}}(\pi(g))){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(h)){\dot{\mathfrak{s}}}(\pi(g)){{}^{-1}}{\dot{\mathfrak{s}}% }(\pi(g)){\dot{\mathfrak{s}}}(\pi(h))$$ $$\displaystyle\hskip 36.135pt\times\{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(% {{\text{\rm m}}_{N}}(gh)){\dot{\mathfrak{s}}}(gh)))\}{{}^{-1}}$$ $$\displaystyle={{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g% )){\dot{\mathfrak{s}}}(\pi(g))){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(h)){\dot{\mathfrak{s}}}(\pi(g)){{}^{-1}}{{\mathfrak{n}}_{L}% }(\pi(g)),\pi(h))$$ $$\displaystyle\hskip 36.135pt\times\{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(% {{\text{\rm m}}_{N}}(gh))\}{{}^{-1}}.$$ We now take the cross-section ${{\mathfrak{s}}_{j}}$ and choose $b(g,h)\in A$ so that the following computation is valid: $$\displaystyle{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{M}}(g$$ 
$$\displaystyle,h))={{\mathfrak{s}}_{j}}\Big{(}{{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(g){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m% }}_{N}}(h)){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g){{}^{-1}}$$ $$\displaystyle\hskip 36.135pt\times{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(g)){{\mathfrak{n}}_{L}}(\pi(g),\pi(h)){{\mathfrak{s}}\!_{% \scriptscriptstyle H}}({{\text{\rm m}}_{N}}(gh)){{}^{-1}}\Big{)}$$ $$\displaystyle=b(g,h){\alpha}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)}({{% \mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{% N}}(h)))){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(g)))$$ $$\displaystyle\hskip 72.27pt\times{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{L}}(\pi% (g),\pi(h))){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(gh))){{}^{-1}}$$ $$\displaystyle=b(g,h){\alpha}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(g)){\dot{\mathfrak{s}}}(\pi(g))}({{\mathfrak{s}}_{j}}({{% \mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(h)))){{\mathfrak{% s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g)))$$ $$\displaystyle\hskip 72.27pt\times{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{L}}(\pi% (g),\pi(h))){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(gh))){{}^{-1}}$$ $$\displaystyle=b(g,h){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H% }}({{\text{\rm m}}_{N}}(g))){\alpha}_{{\dot{\mathfrak{s}}}(\pi(g))}({{% \mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{% N}}(h))))$$ $$\displaystyle\hskip 72.27pt\times{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{L}}(\pi% (g),\pi(h))){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(gh))){{}^{-1}}.$$ We summerlize this here for later use: $$\displaystyle{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{M}}(g$$ $$\displaystyle,h))=b(g,h){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{% \scriptscriptstyle H}}({{\text{\rm m}}_{N}}(g))){\alpha}_{{\dot{\mathfrak{s}}}% (\pi(g))}({{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(h))))$$ 2.242.242.24 $$\displaystyle\hskip 72.27pt\times{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{L}}(\pi% (g),\pi(h))){{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{% \text{\rm m}}_{N}}(gh))){{}^{-1}}$$ We then apply the coboundary operator ${{\partial}_{\theta}}$ to the both side to obtain: $$\displaystyle 1$$ $$\displaystyle={{\partial}_{\theta}}(b(g,h)){{\partial}_{\theta}}({{\mathfrak{s% }}_{j}}({{\text{\rm m}}_{N}}(g)){{\partial}_{\theta}}({\alpha}_{{\dot{% \mathfrak{s}}}(\pi(g))}({{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{% \scriptscriptstyle H}}({{\text{\rm m}}_{N}}(h))))$$ $$\displaystyle\hskip 36.135pt\times{{\partial}_{\theta}}({{\mathfrak{s}}_{j}}({% {\mathfrak{n}}_{L}}(\pi(g),\pi(h)))){{\partial}_{\theta}}({{\mathfrak{s}}_{j}}% ({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{\rm m}}_{N}}(gh))){{}^{-1}})$$ and use the compostion rules among cross-sections to drive: $$\displaystyle{{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu($$ $$\displaystyle{{\mathfrak{n}}_{N}}(\pi(g),\pi(h)))={{\partial}_{\theta}}(b(g,h)% {{}^{-1}}){{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{% \rm m}}_{N}}(g){{}^{-1}})$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}({{\mathfrak{s}}_{\!% \scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{\rm m}}_{N}}(h)){{}^{-1}})){{% 
\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{\rm m}}_{N}}(% gh)))).$$ Since ${{\text{\rm n}}_{N}}(g)={{\text{\rm m}}_{N}}(g){{}^{-1}},g\in G$, we have $$\displaystyle{{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu($$ $$\displaystyle{{\mathfrak{n}}_{N}}(\pi(g),\pi(h)))={{\partial}_{\theta}}(b(g,h)% {{}^{-1}}){{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{% \rm n}}_{N}}(g))$$ $$\displaystyle\hskip 72.27pt\times{\alpha}_{g}({{\mathfrak{s}}_{\!% \scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{\rm n}}_{N}}(h)))){{\mathfrak{s% }}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{\rm n}}_{N}}(gh)){{}^{-1}% })),$$ equivalently $$\displaystyle{\zeta}_{\nu}(s;{{\mathfrak{n}}_{N}}($$ $$\displaystyle\pi(g),\pi(h))={\theta}_{s}(b(g,h){{}^{-1}})b(g,h){\zeta}_{\nu}(s% ;{{\text{\rm n}}_{N}}(g))$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}({\zeta}_{\nu}(s;{{\text{\rm n}}% _{N}}(h))){\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}(gh))^{*},\quad g,h\in G.$$ Therefore the elements $b(g,h){{}^{-1}}\in A$ serves as $a(g,h)$ of (2.23) in the construction of ${\partial}({\delta}(\chi))$. With $u(g)={{\mathfrak{s}}_{j}}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}({{\text{% \rm m}}_{N}}(g)))\in E,g\in G,$ and $w(g,h)={{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{L}}(\pi(g),\pi(h))),$ we apply the coboundary operation ${{\partial}_{G}}$ to the both side of (2.24) relative to the outer action ${\alpha}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}}$ of $G$ on $E$ to obtain: $$\displaystyle c^{{\lambda},\mu}_{G}(g$$ $$\displaystyle,h,k)={\alpha}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)}\big% {(}{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{M}}(h,k))\big{)}{{\mathfrak{s}}_{j}}(% {{\mathfrak{n}}_{M}}(g,hk))$$ $$\displaystyle\hskip 36.135pt\times\{{{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{M}}(% g,h)){{\mathfrak{s}}_{j}}({{\mathfrak{n}}_{M}}(gh,k))\}{{}^{-1}}$$ $$\displaystyle=({{\partial}_{G}}a)(g,h,k){{}^{-1}}{\alpha}_{{{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(g)}\Big{(}u(h){\alpha}_{{\dot{\mathfrak{s}}}(\pi(h))}(u% (k))w(h,k)u(hk){{}^{-1}}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times u(g){\alpha}_{{\dot{\mathfrak{s}}}(\pi(g))}% (u(hk))w(g,hk)u(ghk){{}^{-1}}$$ $$\displaystyle\hskip 36.135pt\times\{u(g){\alpha}_{{\dot{\mathfrak{s}}}(\pi(g))% }(u(h))w(g,h)u(gh){{}^{-1}}u(gh)$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{{\dot{\mathfrak{s}}}(\pi(gh))}(u(k% ))w(gh,k)u(ghk){{}^{-1}}\}{{}^{-1}}$$ $$\displaystyle=({{\partial}_{G}}a)(g,h,k){{}^{-1}}$$ $$\displaystyle\hskip 36.135pt\times{\text{\rm Ad}}(u(g)){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{{\dot{\mathfrak{s}}}(\pi(g))}\Big{(}u(h){% \alpha}_{{\dot{\mathfrak{s}}}(\pi(h))}(u(k))w(h,k)u(hk){{}^{-1}}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times u(g){\alpha}_{{\dot{\mathfrak{s}}}(\pi(g))}% (u(hk))w(g,hk)$$ $$\displaystyle\hskip 36.135pt\times\{u(g){\alpha}_{{\dot{\mathfrak{s}}}(\pi(g))% }(u(h))w(g,h)u(gh){{}^{-1}}u(gh)$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{{\dot{\mathfrak{s}}}(\pi(gh))}(u(k% ))w(gh,k)\}{{}^{-1}}$$ $$\displaystyle=({{\partial}_{G}}a)(g,h,k){{}^{-1}}{\alpha}_{{\dot{\mathfrak{s}}% }(\pi(g))}\Big{(}{\alpha}_{{\dot{\mathfrak{s}}}(\pi(h))}(u(k))w(h,k)\Big{)}$$ $$\displaystyle\hskip 36.135pt\times w(g,hk)\{w(g,h){\alpha}_{{\dot{\mathfrak{s}% }}(\pi(gh))}(u(k))w(gh,k)\}{{}^{-1}}$$ $$\displaystyle=({{\partial}_{G}}a)(g,h,k){{}^{-1}}w(g,h){\alpha}_{{\dot{% \mathfrak{s}}}(\pi(gh))}(u(k)))$$ $$\displaystyle\hskip 36.135pt\times w(g,h){{}^{-1}}{\alpha}_{{\dot{\mathfrak{s}% }}(\pi(g))}(w(h,k))$$ 
$$\displaystyle\hskip 36.135pt\times w(g,hk)\{w(g,h){\alpha}_{{\dot{\mathfrak{s}% }}(\pi(gh))}(u(k))w(gh,k)\}{{}^{-1}}$$ $$\displaystyle=({{\partial}_{G}}a)(g,h,k){{}^{-1}}{\alpha}_{{\dot{\mathfrak{s}}% }(\pi(g))}(w(h,k))w(g,hk)\{w(g,h)w(gh,k)\}{{}^{-1}}$$ $$\displaystyle=({{\partial}_{G}}a)(g,h,k){{}^{-1}}c^{{\lambda},\mu}_{Q}(\pi(g),% \pi(h),\pi(k)).$$ Since the cochain $f\in{{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ of (2.20) is taken to be 1 in our case, the above computation shows that the third cohomology class: $$[{{\partial}_{G}}(a{{}^{-1}})\pi^{*}(c^{{\lambda},\mu}_{Q})]\in{{\text{\rm H}}% ^{3}}(G,{\mathbb{T}})$$ is indeed the 3-cohomology class associated with the far left column $H$-equivariant exact sequence of the exact square before the lemma: $$\begin{CD}1@>{}>{}>{\mathbb{T}}@>{}>{}>V@>{}>{}>M@>{}>{}>1\end{CD}$$ which corresponds to the characteristice invariant $i^{*}_{L,M}(\chi)\in{\Lambda}(H,M,{\mathbb{T}})$. $\heartsuit$ $\boldsymbol{\text{\rm Im}({\delta})\subset{\text{\rm Ker}}({\text{\rm inf}}{% \lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\partial})}$ As seen above, we have ${\partial}({\delta}(\chi))={{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}(i^{% *}_{L,M}(\chi))\in{{\text{\rm H}}^{3}}(G,{\mathbb{T}})$. Hence we conclude $$1={\text{\rm inf}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{{\delta}_{% \scriptscriptstyle{\text{\rm HJR}}}}(i^{*}_{L,M}(\chi))={\text{\rm inf}}{% \lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\partial}({\delta}(\chi)),% \quad\chi\in{\Lambda}_{\alpha}(H,L,M,A).$$ $\boldsymbol{\text{\rm Im}({\delta})\supset{\text{\rm Ker}}({\text{\rm inf}}{% \lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\partial})}$ First, we compare our sequence with HJR-exact sequence: $$\eightpoint\begin{CD}{\Lambda}_{\alpha}({\widetilde{H}},L,A)@>{{{\delta}_{% \scriptscriptstyle{\text{\rm HJR}}}}}>{}>{{\text{\rm H}}_{\alpha}^{3}}({% \widetilde{Q}},A)@>{{\text{\rm inf}}}>{}>{{\text{\rm H}}_{\alpha}^{3}}({% \widetilde{H}},A)\\ @A{}A{}A@A{}A{}A@A{{i_{A,{\mathbb{T}}}}_{*}}A{}A\\ {\Lambda}_{\alpha}({\widetilde{H}},L,M,A)@>{{\delta}}>{}>{{\text{\rm H}}_{{% \alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}_% {G}(N,{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A))@>{{\text{\rm Inf}}}>{}>{{% \text{\rm H}}^{3}}(H,{\mathbb{T}})\end{CD}$$ Now suppose that ${\text{\rm Inf}}([c],\nu)=1$ in ${{\text{\rm H}}^{3}}(H,{\mathbb{T}})$. The 3-cocycle $c\in{{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ is naturally an element of ${{\text{\rm Z}}_{\alpha}^{3}}({\widetilde{Q}},A)$. We denote this element by $\tilde{c}$ and its cohomology class $[\tilde{c}]\in{{\text{\rm H}}_{\alpha}^{3}}({\widetilde{Q}},A)$. 
First, the image ${\partial}([c],\nu)\in{{\text{\rm H}}^{3}}(G,{\mathbb{T}})$ is obtained as the class of ${{\partial}_{G}}(\pi^{*}(f)a{{}^{-1}})\pi^{*}(c_{Q})$ where $f\in{{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ and $a\in{{\text{\rm C}}_{\alpha}^{2}}(G,A)$ are obtained subject to the following conditions: $$\displaystyle d_{c}(s$$ $$\displaystyle;q,r)={\theta}_{s}(f(q,r))f(q,r)^{*}{\zeta}_{\nu}(s;{{\mathfrak{n% }}_{N}}(q,r)),\quad s\in{\mathbb{R}},q,r\in Q;$$ 2.252.252.25 $$\displaystyle{\zeta}_{\nu}(s$$ $$\displaystyle;{\mathfrak{n}}(\pi(g),\pi(h))={\theta}_{s}(a(g,h))a(g,h)^{*}{% \zeta}_{\nu}(s;{{\text{\rm n}}_{N}}(g)),\quad g,h\in G,$$ $$\displaystyle\hskip 72.27pt\times{\alpha}_{g}({\zeta}_{\nu}(s;{{\text{\rm n}}_% {N}}(h))){\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}(gh))^{*},$$ where ${\zeta}_{\nu}(s;n)={{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu(n% ))_{s},n\in N,s\in{\mathbb{R}}$. The image ${\text{\rm Inf}}([c],\nu)$ is obtained as the class of $$\pi_{H}^{*}({{\partial}_{G}}(\pi^{*}(f)a){{}^{-1}}\pi^{*}(c_{Q}))={\partial}_{% H}({\dot{\pi}}^{*}(f)\pi_{H}^{*}(a)){{}^{-1}}{\dot{\pi}}^{*}(c_{Q})\in{{\text{% \rm Z}}^{3}}(H,{\mathbb{T}}).$$ The assumption that ${\text{\rm Inf}}([c],\nu)=1$ means that ${\partial}_{H}({\dot{\pi}}^{*}(f)\pi_{H}^{*}(a)){{}^{-1}}{\dot{\pi}}^{*}(c_{Q}% )\in{{\text{\rm B}}^{3}}(H,{\mathbb{T}})$, i.e., there exists $b\in{{\text{\rm C}}^{2}}(H,{\mathbb{T}})$ such that $$\displaystyle{\partial}_{H}({\dot{\pi}}^{*}(f)\pi_{H}^{*}(a)){{}^{-1}}{\dot{% \pi}}^{*}(c_{Q})={{\partial}_{H}}b$$ Hence for each triple $g,h,k\in H$ we have $$\displaystyle c_{Q}({\dot{\pi}}(g)$$ $$\displaystyle,{\dot{\pi}}(h),{\dot{\pi}}(k))={\alpha}_{g}\Big{(}b(h,k)f({\dot{% \pi}}(h),{\dot{\pi}}(k))a({\pi\!_{\scriptscriptstyle G}}(h),{\pi\!_{% \scriptscriptstyle G}}(k))\Big{)}$$ $$\displaystyle\hskip 14.454pt\times b(g,hk)f({\dot{\pi}}(g),{\dot{\pi}}(hk))a({% \pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{\scriptscriptstyle G}}(hk))$$ $$\displaystyle\hskip 14.454pt\times\Big{\{}b(g,h)f({\dot{\pi}}(g),{\dot{\pi}}(k% ))a({\pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{\scriptscriptstyle G}}(k))$$ $$\displaystyle\hskip 14.454pt\times b(gh,k)f({\dot{\pi}}(gh),{\dot{\pi}}(k))a({% \pi\!_{\scriptscriptstyle G}}(gh),{\pi\!_{\scriptscriptstyle G}}(k))\Big{\}}^{% *}.$$ With $u(g,h)=a({\pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{\scriptscriptstyle G}}(h))b% (g,h)f({\dot{\pi}}(g),{\dot{\pi}}(h))$, we get $${\dot{\pi}}^{*}(c_{Q})={{\partial}_{H}}u,$$ and $$\displaystyle c({\dot{\pi}}(\tilde{g})$$ $$\displaystyle,{\dot{\pi}}(\tilde{h}),{\dot{\pi}}(\tilde{k}))={\alpha}_{g}(d_{c% }(s;{\dot{\pi}}(h),{\dot{\pi}}(k)))c_{Q}({\dot{\pi}}(g),{\dot{\pi}}(h),{\dot{% \pi}}(k))$$ $$\displaystyle={\alpha}_{g}(d_{c}(s;{\dot{\pi}}(h),{\dot{\pi}}(k))){\alpha}_{g}% \Big{(}u(h,k)\Big{)}u(g,hk)\{u(g,h)u(gh,k)\}^{*}.$$ The identities (2.25) yields the following computations, for each $g,h,k\in H$, $$\displaystyle d_{c}(s$$ $$\displaystyle;{\dot{\pi}}(h),{\dot{\pi}}(k))={\theta}_{s}(f({\dot{\pi}}(h),{% \dot{\pi}}(k)))f({\dot{\pi}}(h),{\dot{\pi}}(k))^{*}{\zeta}_{\nu}(s;{\mathfrak{% n}}({\dot{\pi}}(h),{\dot{\pi}}(k)));$$ $$\displaystyle{\zeta}_{\nu}(s$$ $$\displaystyle;{{\mathfrak{n}}_{N}}({\dot{\pi}}(g),{\dot{\pi}}(h))={\theta}_{s}% (a({\pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{\scriptscriptstyle G}}(h)))a({\pi% \!_{\scriptscriptstyle G}}(g),{\pi\!_{\scriptscriptstyle G}}(h))^{*}$$ $$\displaystyle\hskip 36.135pt\times z_{\nu}(s;{{\text{\rm n}}_{N}}({\pi\!_{% \scriptscriptstyle G}}(g))){\alpha}_{g}({\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}({% 
\pi\!_{\scriptscriptstyle G}}(h)))){\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}({\pi\!% _{\scriptscriptstyle G}}(gh)))^{*};$$ $$\displaystyle d_{c}(s$$ $$\displaystyle;{\dot{\pi}}(h),{\dot{\pi}}(k))={\theta}_{s}(f({\dot{\pi}}(h),{% \dot{\pi}}(k)))f({\dot{\pi}}(h),{\dot{\pi}}(k))^{*}$$ $$\displaystyle\hskip 14.454pt\times{\theta}_{s}(a({\pi\!_{\scriptscriptstyle G}% }(h),{\pi\!_{\scriptscriptstyle G}}(k)))a({\pi\!_{\scriptscriptstyle G}}(h),{% \pi\!_{\scriptscriptstyle G}}(k))^{*}$$ $$\displaystyle\hskip 36.135pt\times z_{\nu}(s;{{\text{\rm n}}_{N}}({\pi\!_{% \scriptscriptstyle G}}(h))){\alpha}_{h}({\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}({% \pi\!_{\scriptscriptstyle G}}(k)))){\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}({\pi\!% _{\scriptscriptstyle G}}(hk)))^{*}.$$ With $v(s;g)={\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}({\pi\!_{\scriptscriptstyle G}}(g)))$, we get $$\displaystyle d_{c}(s$$ $$\displaystyle;{\dot{\pi}}(h),{\dot{\pi}}(k))={\theta}_{s}(u(h,k))u(h,k)^{*}v(s% ;g){\alpha}_{g}(v(s;h))v(s;gh)^{*}$$ $$\displaystyle=({{\partial}_{\theta}}(u(h,k)))_{s}({{\partial}_{H}}v)(s;h,k)$$ Substituting this to the above computation of ${\dot{\pi}}^{*}(c)$ and setting $$w(g,s;h,t)=u(g,h){\alpha}_{g}(v(s;h)^{*}),$$ we obtain: $$\displaystyle({\dot{\pi}}^{*}c)(\tilde{g}$$ $$\displaystyle,\tilde{h},\tilde{k})={\alpha}_{g}\Big{(}{\theta}_{s}(u(h,k))u(h,% k)^{*}v(s;h){\alpha}_{h}(v(s;k))v(s;hk)^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(u(h,k))u(g,hk)\{u(g,h)u(gh,k)\}% ^{*};$$ $$\displaystyle({\partial}_{{\widetilde{H}}}w)(\tilde{g}$$ $$\displaystyle,\tilde{h},\tilde{r})={\alpha}_{\tilde{g}}(w(\tilde{h};\tilde{k})% )w(\tilde{h};\tilde{h}\tilde{k})\{w(\tilde{g};\tilde{h})w(\tilde{g}\tilde{h};% \tilde{k})\}^{*}$$ $$\displaystyle={\alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \theta}_{s}\Big{(}u(h,k){\alpha}_{h}(v(t;k))^{*}))\Big{)}$$ $$\displaystyle\hskip 36.135pt\times u(g,hk){\alpha}_{g}(v(s;hk)^{*})$$ $$\displaystyle\hskip 36.135pt\times\{u(g,h){\alpha}_{g}(v(s;h))^{*}u(gh,k){% \alpha}_{gh}(v(s+t;k)^{*})\}^{*}$$ $$\displaystyle={\alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \theta}_{s}\Big{(}u(h,k){\alpha}_{h}(v(t;k))^{*}))\Big{)}$$ $$\displaystyle\hskip 36.135pt\times u(g,hk){\alpha}_{g}(v(s;hk)^{*})\Big{\{}u(g% ,h){\alpha}_{g}(v(s;h)^{*})u(gh,k)$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{gh}\big{(}v(s;k)^{*}{\theta}_{s}(v% (t;k)^{*})\big{)}\Big{\}}^{*}$$ $$\displaystyle={\alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}% \Big{(}{\theta}_{s}(u(h,k))u(h,k)^{*}v(s,h){\alpha}_{h}(v(s,k))v(s;hk)^{*}\Big% {)}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}(u(h,k))u(g,hk)\{u(g,h)u(gh,k)\}% ^{*}$$ $$\displaystyle=({\dot{\pi}}^{*}c)(\tilde{g},\tilde{h},\tilde{k}).$$ Therefore, we conclude $${\dot{\pi}}^{*}(c)={\partial}_{\widetilde{H}}w.$$ 2.262.262.26 Hence the element $({\lambda},\mu)$ given by $$\displaystyle{\lambda}(\ell$$ $$\displaystyle;g,s)=w(g,s;g{{}^{-1}}\ell g)w(\ell;g,s)^{*},\quad\ell\in L,g\in H% ,s\in{\mathbb{R}};$$ $$\displaystyle\mu(m,n)=w(m,n),\quad m,n\in L,$$ is a characteristice cocycle in ${\text{\rm Z}}_{\widetilde{{\alpha}}}({\widetilde{H}},L,A)$. 
In terms of the original $a,b$ and $f$, we get $$\displaystyle{\lambda}(m$$ $$\displaystyle;g,s)=a({\pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{% \scriptscriptstyle G}}(g{{}^{-1}}mg))b(g,g{{}^{-1}}mg)f({\dot{\pi}}(g),{\dot{% \pi}}(g{{}^{-1}}mg))$$ 2.272.272.27 $$\displaystyle\hskip 36.135pt\times{\alpha}_{g}({\zeta}_{\nu}(s;{{\text{\rm n}}% _{N}}({\pi\!_{\scriptscriptstyle G}}(g{{}^{-1}}mg)))^{*})a({\pi\!_{% \scriptscriptstyle G}}(m),{\pi\!_{\scriptscriptstyle G}}(g))^{*}$$ $$\displaystyle\hskip 36.135pt\times b(m,g)^{*}f({\dot{\pi}}(m),{\dot{\pi}}(g))^% {*};$$ $$\displaystyle\mu(m$$ $$\displaystyle;n)=a({\pi\!_{\scriptscriptstyle G}}(m),{\pi\!_{% \scriptscriptstyle G}}(n))b(m,n).$$ Now observe that if $m,n\in M$, then both ${\lambda}$ and $\mu$ takes values in ${\mathbb{T}}$, so that $\chi=[{\lambda},\mu]\in{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$. We are now going to compare the new cocycle $c^{{\lambda},\mu}$ and the original $c$ in the next lemma to complete the proof of Lemma 2.12 and therefore Theorem 2.7: Lemma 2.13 The cochain $W\in{{\text{\rm C}}_{\alpha}^{2}}({\widetilde{Q}},A)$ defined by $$W({\tilde{p}},{\tilde{q}})=w({\dot{\mathfrak{s}}}({\tilde{p}}),{\dot{\mathfrak% {s}}}({\tilde{q}}))w({{\mathfrak{n}}_{L}}(p,q),{\dot{\mathfrak{s}}}({\tilde{p}% }{\tilde{q}}))^{*},\quad{\tilde{p}}=(p,s),{\tilde{q}}=(q,t)\in{\widetilde{Q}},$$ falls in ${{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ and its coboundary ${\partial}_{\widetilde{Q}}W$ bridges the difference between $(c^{{\lambda},\mu},\nu_{\chi})$ and the original $(c,\nu)$, i.e., $([c],\nu)={\delta}(\chi)$. Therefore $${\text{\rm Ker}}({\text{\rm Inf}})\subset\text{\rm Im}({\delta}).$$ Demonstration Proof First we observe that for any pair ${\tilde{p}}=(p,s),{\tilde{q}}=(q,t)\in{\widetilde{Q}}$ $$\displaystyle W({\tilde{p}},{\tilde{q}})$$ $$\displaystyle=w({\dot{\mathfrak{s}}}({\tilde{p}}),{\dot{\mathfrak{s}}}({\tilde% {q}}))w({{\mathfrak{n}}_{L}}(p,q),{\dot{\mathfrak{s}}}({\tilde{p}}{\tilde{q}})% )^{*}$$ $$\displaystyle=u({\dot{\mathfrak{s}}}({\tilde{p}}),{\dot{\mathfrak{s}}}({\tilde% {q}})){\alpha}_{p}(v(s;{\dot{\mathfrak{s}}}(q))u({{\mathfrak{n}}_{L}}(p,q),{% \dot{\mathfrak{s}}}(pq))^{*}$$ $$\displaystyle=a({\mathfrak{s}}(p),{\mathfrak{s}}(q))b({\dot{\mathfrak{s}}}(p),% {\dot{\mathfrak{s}}}(q))f(p,q){\alpha}_{p}({\zeta}_{\nu}(s;{{\text{\rm n}}_{N}% }({\mathfrak{s}}(q)))$$ $$\displaystyle\hskip 36.135pt\times a({\mathfrak{n}}(p,q),{\mathfrak{s}}(pq))^{% *}b({{\mathfrak{n}}_{L}}(p,q),{\dot{\mathfrak{s}}}(pq))^{*}f(p,q)^{*}$$ $$\displaystyle=a({\mathfrak{s}}(p),{\mathfrak{s}}(q))b({\dot{\mathfrak{s}}}(p),% {\dot{\mathfrak{s}}}(q))f(p,q)\quad(\text{as }{{\text{\rm n}}_{N}}({\mathfrak{% s}}(q))=1)$$ $$\displaystyle\hskip 36.135pt\times a({\mathfrak{n}}(p,q),{\mathfrak{s}}(pq))^{% *}b({{\mathfrak{n}}_{L}}(p,q),{\dot{\mathfrak{s}}}(pq))^{*}f(p,q)^{*}$$ $$\displaystyle=W(p,q).$$ Thus $W$ is constant on ${\mathbb{R}}$-variables, so that it belongs to ${{\text{\rm C}}_{\alpha}^{2}}(Q,A)$. 
By Lemma 2.1, we have $$c=({\partial}_{\widetilde{Q}}W)c^{{\lambda},\mu}$$ and therefore $$c^{{\lambda},\mu}\equiv c\quad{\text{\rm mod}}\ {{\text{\rm B}}_{{\alpha},% \text{\rm s}}^{3}}({\widetilde{Q}},A),\ \text{i.e.,}\quad[c^{{\lambda},\mu}]=[% c]\quad\text{in}\ {{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}}% ,A).$$ Setting $g=1$ in (2.27), we obtain for each $m\in L$ $$\displaystyle{\lambda}(m;s)$$ $$\displaystyle=a(1,{\pi\!_{\scriptscriptstyle G}}(m))b(1,m)f(1,{\dot{\pi}}(m))$$ $$\displaystyle\hskip 36.135pt\times{\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}({\pi\!_% {\scriptscriptstyle G}}(m)))^{*}a({\pi\!_{\scriptscriptstyle G}}(m),1)^{*}$$ $$\displaystyle\hskip 36.135pt\times b(m,1)^{*}f({\dot{\pi}}(m),1)^{*}$$ $$\displaystyle={\zeta}_{\nu}(s;{{\text{\rm n}}_{N}}({\pi\!_{\scriptscriptstyle G% }}(m)))^{*}={\zeta}_{\nu}(s;{\pi\!_{\scriptscriptstyle G}}(m){{}^{-1}}))^{*}$$ since the cochains $a,b$ and $f$ can be chosen such a way that whenever $1$ appears in the arguments they take value $1$. As $${\zeta}_{\nu}(\ \cdot;{\pi\!_{\scriptscriptstyle G}}(m){{}^{-1}})^{*}\equiv{% \zeta}_{\nu}(\ \cdot;{\pi\!_{\scriptscriptstyle G}}(m))\quad{\text{\rm mod}}\ % {{\text{\rm B}}_{\theta}^{1}},$$ i.e., $\nu({\pi\!_{\scriptscriptstyle G}}(m){{}^{-1}}){{}^{-1}}=\nu({\pi\!_{% \scriptscriptstyle G}}(m)),m\in L,$ we conclude that $\nu=\nu_{[{\lambda},\mu]}$. Therefore we conclude that $([c],\nu)={\delta}([{\lambda},\mu])$. This completes the proof of the inclustion ${\text{\rm Ker}}({\text{\rm Inf}})\subset\text{\rm Im}({\delta})$. $\heartsuit$ Lemma 2.14 Let $A$ denote the unitary group ${\eusm U}({\eusm C})$ of an abelian separable von Neumann algebra ${\eusm C}$ or the torus group ${\mathbb{T}}$. Let ${\alpha}$ be an action of a countable discrete group $G$ on ${\eusm C}$. To each $c\in{\text{\rm Z}}_{\alpha}^{3}(G,A)$, there corresponds a countable group $H=H(c)$ and a normal subgroup $M=M(c)$ such that: Demonstration Proof First extend the coefficient group $A$ to the unitary group $$B={\eusm U}({\eusm C}{\overline{\otimes}}\ell^{\infty}(G))$$ on which $G$ acts by ${\alpha}\otimes\rho$ with $\rho$ the right translation action of $G$ on $\ell^{\infty}(G)$, which will be denoted by ${\alpha}$ again whenever it will not cause any confusion, and obtain an exact sequence: $$\begin{CD}1@>{}>{}>A@>{i}>{}>B@>{j}>{\underset{{\mathfrak{s}}_{j}}\to{% \longleftarrow}}>C@>{}>{}>1,\end{CD}$$ where $i(a)=a\otimes 1\in B,a\in A,$ and ${{\mathfrak{s}}_{j}}$ a cross-section which can be fixed without reference to the cocycle $c$. Then set $$u(x,g,h)=u_{c}(x,g,h)={\alpha}_{x}^{-1}(c(x,g,h))\in A,\quad x,g,h\in G,$$ and view $u(\cdot,g,h)$ as an element of $B={\eusm U}(\ell^{\infty}(G){\overline{\otimes}}{\eusm C}))={\text{\rm Map}}(G% ,A)$. The cocycle identity gives that $c={\partial}_{{\alpha}}^{G}u\in{\text{\rm B}}_{{\alpha}}^{2}(G,C)$. Since $j(A)=\{1\}$ in $C$, $\mu=\mu_{c}=j_{*}(u)$ is in ${\text{\rm Z}}_{{\alpha}}^{2}(G,C)$. Let $M$ be the subgroup of $C$ generated by the saturation $\{{\alpha}_{g}(\mu(h,k)):g,h,k\in G\}$ of the range of $\mu$, so that $\mu$ belongs to ${\text{\rm Z}}_{\alpha}^{2}(G,M)$. Now consider the twisted semi-direct product: $$H=H(c)=M\rtimes_{{\alpha},\mu}G$$ and obtain an exact sequence: $$\begin{CD}1@>{}>{}>M@>{}>{}>H@>{{\pi\!_{\scriptscriptstyle G}}}>{}>G@>{}>{}>1.% \end{CD}$$ Set $E=j{{}^{-1}}(M)$ to obtain a crossed extension $E\in{\text{\rm Xext}}_{{\alpha}}(H,M,A)$. 
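For orientation, we record the multiplication rule of the twisted semi-direct product $H=H(c)=M\rtimes_{{\alpha},\mu}G$ in the convention matching the cross-section ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)=(1,g)$ used below (a standard convention, stated here only as a reminder and assuming $\mu$ normalized so that $\mu(1,g)=\mu(g,1)=1$): $$(m,g)(n,h)=(m\,{\alpha}_{g}(n)\,\mu(g,h),\ gh),\qquad m,n\in M,\ g,h\in G.$$ Associativity of this product is exactly the 2-cocycle identity ${\alpha}_{g}(\mu(h,k))\mu(g,hk)=\mu(g,h)\mu(gh,k)$, and $(1,g)(1,h)=(\mu(g,h),gh)$ recovers the relation between $\mu$ and ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ displayed next.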
With ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ the cross-section given by $${{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)=(1,g)\in H,\quad g\in G,$$ we obtain $$\mu(g,h)={{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(h){{\mathfrak{s}}\!_{\scriptscriptstyle H}}(gh){{}^{-1}},\quad g,h\in G.$$ Thus $\mu\in{\text{\rm Z}}_{{\alpha}}^{2}(G,M)$. Now observe that $$f(g,h)={{\mathfrak{s}}_{j}}(\mu(g,h)){{}^{-1}}u(g,h)\in i(A)$$ and that $$({\partial}_{G}f)(g,h,k)={\partial}_{G}({{\mathfrak{s}}_{j}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}\mu)(g,h,k){{}^{-1}}c(g,h,k),\quad g,h,k\in G.$$ Thus we conclude that ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}(\chi_{E})=[{\partial}_{G}({{\mathfrak{s}}_{j}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}\mu)]=[c]\in{\text{\rm H}}_{\alpha}^{3}(G,A)$. $\heartsuit$ Remark 2.15. The last lemma shows that if $G$ is a countable discrete amenable group, then so is $H$ because $M$ is abelian and countable, and the quotient $G=H/M$ is amenable. Another important fact is that the groups $H$ and $M$ depend heavily on the choice of the cocycle $c$. Two cohomologous cocycles $c,c^{\prime}\in{\text{\rm Z}}_{\alpha}^{3}(G,A)$ need not produce isomorphic $H(c)$ and $H(c^{\prime})$. In fact, even the subgroups $M(c)$ and $M(c^{\prime})$ may fail to be isomorphic. We will address this inconvenience later. If we use the entire $C$ in place of $M$, then the resulting groups are isomorphic in a natural way. But in this way, we would lose the countability of $H$. Definition 2.16. We call the group $H(c)$ the resolution group of the cocycle $c\in{{\text{\rm Z}}_{\alpha}^{3}}(G,A)$ and the characteristic cocycle $({\lambda}_{c},\mu_{c})\in{\text{\rm Z}}_{\alpha}(H,M,A)$ a resolution of the cocycle $c$. We also call the map ${\pi\!_{\scriptscriptstyle G}}:H(c)\mapsto G$ the resolution map and the pair $\{H(c),{\pi\!_{\scriptscriptstyle G}}\}$ a resolution system. Corollary 2.17 Let $\{{\eusm C},{\mathbb{R}},{\theta}\}$ be an ergodic flow and $G$ a countable discrete group acting on the flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ via ${\alpha}$. Let $N$ be a normal subgroup of $G$ such that $N\subset{\text{\rm Ker}}({\alpha})$. Then with $Q=G/N$ the quotient group of $G$ by $N$ and ${\mathfrak{s}}\!:Q\mapsto G$ a cross-section of the quotient map $\pi\!:G\mapsto Q$, for any pair $$([c],\nu)\in{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}(Q\times{\mathbb{R}},{\eusm U}({\eusm C}))*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},{\eusm U}({\eusm C})))$$ there exist a countable discrete group $H$, a surjective homomorphism ${\pi\!_{\scriptscriptstyle G}}\!:H\mapsto G$ and $\chi\in{\Lambda}_{\pi_{G}^{*}({\alpha})}(H\times{\mathbb{R}},L,M,A)$ such that $$([c],\nu)={\delta}(\chi)$$ where $L=\pi_{G}^{-1}(N)$, $M={\text{\rm Ker}}({\pi\!_{\scriptscriptstyle G}})$ and ${\delta}$ is the modified HJR-map in Lemma 2.11 associated with the exact sequence: $$\begin{CD}1@>{}>{}>M@>{}>{}>L@>{{\pi\!_{\scriptscriptstyle G}}}>{}>G@>{\pi}>{\underset{\mathfrak{s}}\to{\longleftarrow}}>Q@>{}>{}>1.\end{CD}$$ Moreover, the kernel $M={\text{\rm Ker}}({\pi\!_{\scriptscriptstyle G}})$ is chosen to be abelian. Hence if $G$ is amenable in addition, then $H$ is amenable.
Demonstration Proof Let ${\partial}$ be the map in Lemma 2.11: $${\partial}:{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}(Q\times{\mathbb{R}},{% \eusm U}({\eusm C}))*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{% \theta}^{1}}({\mathbb{R}},{\eusm U}({\eusm C})))\mapsto{{\text{\rm H}}^{3}}(G,% {\mathbb{T}}).$$ Set $[c_{G}]={\partial}([c],\nu)\in{{\text{\rm H}}^{3}}(G,{\mathbb{T}})$ and choose a cocycle $c_{G}\in{{\text{\rm Z}}^{3}}(G,{\mathbb{T}})$ which represents the cohomology class $[c_{G}]$. Let $H=H(c_{G})$ be the resolution group of $c_{G}$ in Lemma 2.14, i.e., the group $H$ is equipped with a surjective homomorphism ${\pi\!_{\scriptscriptstyle G}}\!:H\mapsto G$ such that $\pi_{G}^{*}([c_{G}])=1$ in ${{\text{\rm H}}^{3}}(H,{\mathbb{T}})$. Thus with $L=\pi_{G}^{-1}(N)\triangleleft H$ and $M={\text{\rm Ker}}({\pi\!_{\scriptscriptstyle G}})\triangleleft H$, we have an exact sequence: $$\begin{CD}1@>{}>{}>M@>{}>{}>L@>{{\pi\!_{\scriptscriptstyle G}}}>{}>G@>{\pi}>{% \underset{\mathfrak{s}}\to{\longleftarrow}}>Q@>{}>{}>1,\end{CD}$$ with specified cross-section ${\mathfrak{s}}$ of $\pi\!:G\mapsto Q$. This generates the associated modified HJR-exact sequence of (2.13) in Theorem 2.7. As $${\text{\rm Inf}}([c],\nu)=\pi_{G}^{*}({\partial}([c],\nu)=\pi_{G}^{*}([c_{G}])% =1,$$ there exists $\chi\in{\Lambda}_{\alpha}({\widetilde{Q}},L,M,A)$ such that $([c],\nu)={\delta}(\chi)$ where ${\widetilde{Q}}=Q\times{\mathbb{R}}$ and $A={\eusm U}({\eusm C})$ of course. If $G$ is amenable, then the group $H$ in Lemma 2.14 must be amenable as the subgroup $M$ of $H$ in Lemma 2.14 is abelian and the quotient group $H/M=G$ is amenable. $\heartsuit$ Change on the Cross-Section $\boldsymbol{{\mathfrak{s}}\!:Q\mapsto G}$: As mentioned repeatedly, the group ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}(Q\times G,A)*_{\mathfrak{s}}{% \text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$ depends heavily on the cross-section ${\mathfrak{s}}\!:Q\mapsto G$. So we are going to examine what change occurs if we change the cross-section from ${\mathfrak{s}}\!:Q\mapsto G$ to another ${\mathfrak{s}}^{\prime}\!:Q\mapsto G$. The change does not alter the groups ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ not ${\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$, but the fiber product ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}(Q\times G,A)*_{\mathfrak{s}}{% \text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$ changes to ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}(Q\times G,A)*_{\mathfrak{s}^{% \prime}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$. Proposition 2.18 In the setting as above, if ${\mathfrak{s}^{\prime}}\!:Q\mapsto G$ is another cross-section of the homomorphism $\pi\!:G\mapsto Q=G/N$, then there is a natural isomorphism $${\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}\!:\ {{\text{\rm H}}_{{\alpha% },{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A)\mapsto{{\text{\rm H% }}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{\mathfrak{s}^{\prime}}{% \text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}),$$ 2.282.282.28 where ${\widetilde{Q}}=Q\times{\mathbb{R}}$ as before. 
Furthermore, if $\mathfrak{s}{{}^{\prime\prime}}\!:Q\mapsto G$ is the third cross-section of $\pi$, then the ismomorphisms satisfy the following chain rule: $${\sigma}_{\mathfrak{s}{{}^{\prime\prime}},{\mathfrak{s}}}={\sigma}_{\mathfrak{% s}{{}^{\prime\prime}},{\mathfrak{s}^{\prime}}}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}$$ 2.292.292.29 Demonstration Proof The new cross-section ${\mathfrak{s}^{\prime}}\!:Q\mapsto G$ generates a new $N$-valued 2-cocycle: $${{\mathfrak{n}}_{N}^{\prime}}(p,q)={\mathfrak{s}^{\prime}}(p){\mathfrak{s}^{% \prime}}(q){\mathfrak{s}^{\prime}}(pq)^{\prime},\quad p,q\in Q.$$ Set $${{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(p)={\mathfrak{s}^{% \prime}}(p){\mathfrak{s}}(p){{}^{-1}}\in N,\quad p\in Q.$$ Then the 2-cocycle ${{\mathfrak{n}}_{N}^{\prime}}$ is written in terms of ${{\mathfrak{n}}_{N}}$ and ${{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}$ as follows: $$\left.\begin{aligned} \displaystyle{{\mathfrak{n}}_{N}^{\prime}}(p,q)&% \displaystyle={{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(p){% \mathfrak{s}}(p){{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q){% \mathfrak{s}}(q)\{{{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(pq% ){{\mathfrak{s}}_{\pi}}(pq)\}{{}^{-1}}\\ &\displaystyle={{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(p){% \mathfrak{s}}(p){{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q){% \mathfrak{s}}(p){{}^{-1}}{{\mathfrak{n}}_{N}}(p,q){{\text{\rm n}}_{{\mathfrak{% s}^{\prime}},{\mathfrak{s}}}}(pq){{}^{-1}}\end{aligned}\right\}\quad p,q\in Q.$$ Hence for each $\nu\in{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$ we have $$\nu({{\mathfrak{n}}_{N}^{\prime}}(p,q))=\nu({{\text{\rm n}}_{{\mathfrak{s}^{% \prime}},{\mathfrak{s}}}}(p)){\alpha}_{p}(\nu({{\text{\rm n}}_{{\mathfrak{s}^{% \prime}},{\mathfrak{s}}}}(q)))\nu({{\mathfrak{n}}_{N}}(p,q))\nu({{\text{\rm n}% }_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(pq)){{}^{-1}}.$$ For each $([c],\nu)\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G% \times{\mathbb{R}},N,A)$, we set $$\displaystyle d_{c^{\prime}}(s;q,r)$$ $$\displaystyle=d_{c}(s;q,r){\zeta}_{\nu}(s;{{\text{\rm n}}_{{\mathfrak{s}^{% \prime}},{\mathfrak{s}}}}(q)){\alpha}_{q}({\zeta}_{\nu}(s;{{\text{\rm n}}_{{% \mathfrak{s}^{\prime}},{\mathfrak{s}}}}(r)){\zeta}_{\nu}(s;{{\text{\rm n}}_{{% \mathfrak{s}^{\prime}},{\mathfrak{s}}}}(qr))^{*};$$ $$\displaystyle\hskip 36.135ptc_{Q}^{\prime}(p,q,r)=c_{Q}(p,q,r),\quad s\in{% \mathbb{R}},p,q,r\in Q,$$ where ${\zeta}_{\nu}(s;n)={{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(n)_{s% },n\in N,s\in{\mathbb{R}}$. As ${{\partial}_{Q}}d_{c^{\prime}}={{\partial}_{Q}}d_{c}$, the new pair $(d_{c^{\prime}},c_{Q}^{\prime})$ gives a standard 3-cocycle $c^{\prime}\in{{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ which is not congruent to $c=(d_{c},c_{Q})\in{{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}}% ,A)$ modulo ${{\text{\rm B}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ in general although they are congruent modulo ${{\text{\rm B}}_{\alpha}^{3}}({\widetilde{Q}},A)$. 
Now we define the map ${{\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}$ in the following way: $$\displaystyle{{\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}([c],\nu)=([c^% {\prime}],\nu),\quad([c],\nu)\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{% \text{\rm out}}}(G\times{\mathbb{R}},N,A).$$ Then as $$[d_{c^{\prime}}(\ \cdot,q,r)]=\nu({{\mathfrak{n}}_{N}^{\prime}}(q,r))\in{{% \text{\rm H}}_{\theta}^{1}},\quad q,r\in Q,$$ the pair $([c^{\prime}],\nu)$ belongs to ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{{\mathfrak{s% }^{\prime}}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$. To check the multiplicativity of ${{\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}$, for each pair $h,k\in{{\text{\rm H}}_{\theta}^{1}}$ we choose $a(h,k)\in A$ such that $${{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(h){{\mathfrak{s}}_{\!% \scriptscriptstyle{\text{\rm Z}}}}(k)={{\partial}_{\theta}}(a(h,k)){{\mathfrak% {s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(hk).$$ Then for each pair $([c],\nu),([\bar{c}],\bar{\nu})\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{% \text{\rm out}}}(G\times{\mathbb{R}},N,A)$, we have $$\displaystyle d_{c^{\prime}}(\ \cdot\ ;q,r)$$ $$\displaystyle=d_{c}(\ \cdot\ ;q,r){{\mathfrak{s}}_{\!\scriptscriptstyle{\text{% \rm Z}}}}(\nu({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q))){% \alpha}_{q}({{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{% \rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(r))))$$ $$\displaystyle\hskip 36.135pt\times{{\mathfrak{s}}_{\!\scriptscriptstyle{\text{% \rm Z}}}}(\nu({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(qr))){% {}^{-1}};$$ $$\displaystyle d_{\bar{c}^{\prime}}(\ \cdot\ ;q,r)$$ $$\displaystyle=d_{\bar{c}}(\ \cdot\ ;q,r){{\mathfrak{s}}_{\!\scriptscriptstyle{% \text{\rm Z}}}}(\bar{\nu}({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{% s}}}}(q))){\alpha}_{q}((\bar{{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}% }}}(\nu({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(r))))$$ $$\displaystyle\hskip 36.135pt\times{{\mathfrak{s}}_{\!\scriptscriptstyle{\text{% \rm Z}}}}((\bar{\nu}({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}% (qr))){{}^{-1}};$$ $$\displaystyle(d_{c^{\prime}}$$ $$\displaystyle d_{\bar{c}^{\prime}})(\ \cdot\ ;q,r)=(d_{c}d_{\bar{c}})(\ \cdot% \ ;q,r){{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{\rm n% }}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q)))$$ $$\displaystyle\hskip 108.405pt\times{\alpha}_{q}({{\mathfrak{s}}_{\!% \scriptscriptstyle{\text{\rm Z}}}}(\nu({{\text{\rm n}}_{{\mathfrak{s}^{\prime}% },{\mathfrak{s}}}}(r)))){{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(% \nu({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(qr))){{}^{-1}}$$ $$\displaystyle\hskip 36.135pt\times{{\mathfrak{s}}_{\!\scriptscriptstyle{\text{% \rm Z}}}}(\bar{\nu}({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(% q))){\alpha}_{q}({{\mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\bar{% \nu}({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(r)))){{% \mathfrak{s}}_{\!\scriptscriptstyle{\text{\rm Z}}}}(\bar{\nu}({{\text{\rm n}}_% {{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(qr))){{}^{-1}};$$ $$\displaystyle=(d_{c}d_{\bar{c}})(\ \cdot\ ;q,r){{\mathfrak{s}}_{\!% \scriptscriptstyle{\text{\rm Z}}}}(\nu\bar{\nu}({{\text{\rm n}}_{{\mathfrak{s}% ^{\prime}},{\mathfrak{s}}}}(q))){\alpha}_{q}({{\mathfrak{s}}_{\!% \scriptscriptstyle{\text{\rm Z}}}}(\nu\bar{\nu}({{\text{\rm n}}_{{\mathfrak{s}% 
^{\prime}},{\mathfrak{s}}}}(r)))){{\mathfrak{s}}_{\!\scriptscriptstyle{\text{% \rm Z}}}}(\nu\bar{\nu}({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}% }}(qr))){{}^{-1}}$$ $$\displaystyle\hskip 36.135pt\times{{\partial}_{\theta}}(a(\nu({{\text{\rm n}}_% {{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q)),\bar{\nu}({{\text{\rm n}}_{{% \mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q)))){{\partial}_{\theta}}({\alpha}_{q% }(a(\nu({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(r)),\bar{\nu% }({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(r))))$$ $$\displaystyle\hskip 36.135pt\times{{\partial}_{\theta}}(a(\nu({{\text{\rm n}}_% {{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(qr),\bar{\nu}({{\text{\rm n}}_{{% \mathfrak{s}^{\prime}},{\mathfrak{s}}}}(qr)))){{}^{-1}}$$ $$\displaystyle=d_{(c\bar{c})^{\prime}}(\ \cdot\ ;q,r)({{\partial}_{\theta}}{% \partial}_{Q}b))(q,r),$$ where $b\in{\text{\rm C}}_{\alpha}^{1}(Q,A)$ is given by $$b(q)=a(\nu({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q)),\bar{% \nu}({{\text{\rm n}}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}(q)))\in A.$$ Also we have $$(c\bar{c})_{Q}^{\prime}=(c\bar{c})_{Q}=c_{Q}\bar{c}_{Q}({\partial}_{Q}{% \partial}_{Q}b)=c_{Q}^{\prime}\bar{c}_{Q}^{\prime}.$$ Therefore, we get $[c^{\prime}][\bar{c}^{\prime}]=[(c\bar{c})^{\prime}]$ in ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)$ by Lemma 2.5, i.e., ${{\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}$ is multiplicative. The chain rule follows from the definition of ${{\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}$. We leave it to the reader. The chain rule also gives that the map ${{\sigma}_{{\mathfrak{s}^{\prime}},{\mathfrak{s}}}}$ is an isomorphism. $\heartsuit$ The chain rule (2.29) allows us to define the cohomology group independent of the cross-section in the following way: first let $S$ be the set of all cross sections ${\mathfrak{s}}\!:Q\mapsto G$ of the homomorphism $\pi$ and set: $$\displaystyle{{\text{\rm H}}_{\alpha}^{\text{\rm out}}}(G,N,A)$$ $$\displaystyle=\Big{\{}\{([c],\nu)_{\mathfrak{s}}\!:{\mathfrak{s}}\in S\}\in% \prod_{{\mathfrak{s}}\in S}{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{% \rm out}}}(G\times{\mathbb{R}},N,A):$$ 2.302.302.30 $$\displaystyle\hskip 36.135pt([c],\nu)_{\mathfrak{s}^{\prime}}={{\sigma}_{{% \mathfrak{s}^{\prime}},{\mathfrak{s}}}}(([c],\nu)_{\mathfrak{s}}),\quad{% \mathfrak{s}^{\prime}},{\mathfrak{s}}\in S\Big{\}}.$$ Definition 2.19. The group ${\text{\rm H}}_{{\alpha}}^{\text{\rm out}}(G,N,A)$ will be called the modular obstruction group. Each pair $(c,{\zeta})\in{{\text{\rm Z}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)% \times{\text{\rm Map}}(N,{{\text{\rm Z}}_{\theta}^{1}})$ which gives rise to an element $([c],[{\zeta}])\in{{\text{\rm H}}_{\alpha}^{\text{\rm out}}}(G,N,A)$ will be called a modular obstruction cocycle. §3. Outer Actions of a Discrete Group on a Factor Let ${\eusm M}$ be a separable factor. 
Associated with ${\eusm M}$ is the characteristic square: $$\begin{CD}111\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\mathbb{T}}@>{}>{}>A@>{{{\partial}_{\theta}}}>{}>{{\text{\rm B}}_{% \theta}^{1}}({\mathbb{R}},A)@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>{\eusm U}({\eusm M})@>{}>{}>{\widetilde{{\eusm U}}}({\eusm M})@>{{{% \partial}_{\theta}}}>{}>{{\text{\rm Z}}_{\theta}^{1}}({\mathbb{R}},A)@>{}>{}>1% \\ @V{{\text{\rm Ad}}}V{}V@V{{\widetilde{\text{\rm Ad}}}}V{}V@V{}V{}V\\ 1@>{}>{}>{\text{\rm Int}}({\eusm M})@>{}>{}>{{\text{\rm Cnt}}_{\text{\rm r}}}(% {\eusm M})@>{{{\dot{\partial}}_{\theta}}}>{}>{{\text{\rm H}}_{\theta}^{1}}({% \mathbb{R}},A)@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 111\\ \end{CD}$$ 3.13.13.1 where $A={\eusm U}({\eusm C})$ is the unitary group of the flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ of weights on ${\eusm M}$, which is ${\text{\rm Aut}}({\eusm M})\times{\mathbb{R}}$-equivariant. Applying the previous section to the groups $$\displaystyle H={\text{\rm Aut}}({\eusm M}),M={\text{\rm Int}}({\eusm M}),G={% \text{\rm Out}}({\eusm M}),N={{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$$ $$\displaystyle Q={{\text{\rm Out}}_{\tau,{\theta}}}({\widetilde{{\eusm M}}})={% \text{\rm Out}}({\eusm M})/{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)={% \text{\rm Aut}}({\eusm M})/{{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}),$$ we obtain the intrinsic invariant and the intrinsic modular obstruction: $$\displaystyle\Theta({\eusm M})\in{\Lambda}_{{\text{\rm mod}}\times{\theta}}({% \text{\rm Aut}}({\eusm M})\times{\mathbb{R}},{{\text{\rm Cnt}}_{\text{\rm r}}}% ({\eusm M}),A);$$ $$\displaystyle{{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M})\in{\text{\rm H}}_{{% \text{\rm mod}}\times{\theta},{\text{\rm s}}}^{\text{\rm out}}({\text{\rm Out}% }({\eusm M}),{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A),A).$$ Choosing a cross-section: $g\in{\text{\rm Out}}({\eusm M})\mapsto{\alpha}_{g}\in{\text{\rm Aut}}({\eusm M})$, we obtain an outer action of ${\text{\rm Out}}({\eusm M})$ on ${\eusm M}$, i.e., $$\displaystyle{\alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \alpha}_{h}$$ $$\displaystyle\equiv{\alpha}_{gh}\quad{\text{\rm mod}}\ {\text{\rm Int}}({\eusm M% }),g,h\in{\text{\rm Out}}({\eusm M});$$ $$\displaystyle{\alpha}_{{\text{\rm id}}}$$ $$\displaystyle={\text{\rm id}};\quad{\alpha}_{g}\not\in{\text{\rm Int}}({\eusm M% })\quad\text{if}\ g\neq{\text{\rm id}}.$$ Choosing $\{u(g,h)\in{\eusm U}({\eusm M}):g,h\in{\text{\rm Out}}({\eusm M})\}$ so that $${\alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{h}={% \text{\rm Ad}}(u(g,h)){\alpha}_{gh},\quad g,h\in{\text{\rm Out}}({\eusm M}),$$ we obtain a 3-cocycle $c\in{{\text{\rm Z}}^{3}}({\text{\rm Out}}({\eusm M}),{\mathbb{T}})$: $$c(g,h,k)={\alpha}_{g}(u(h,k))u(g,hk)\{u(g,h)u(gh,k)\}^{*},\quad g,h,k\in{\text% {\rm Out}}({\eusm M}).$$ Its cohomology class $[c]\in{{\text{\rm H}}^{3}}({\text{\rm Out}}({\eusm M}),{\mathbb{T}})$ does not depend on the choice of the cross-section ${\alpha}:g\in{\text{\rm Out}}({\eusm M})\mapsto{\alpha}_{g}\in{\text{\rm Aut}}% ({\eusm M})$ nor on the choice of $\{u(g,h)\}$. The intrinsic obstruction ${\text{\rm Ob}}({\eusm M})=[c]$ of ${\eusm M}$ is, by definition, the cohomology class $[c]\in{{\text{\rm H}}^{3}}({\text{\rm Out}}({\eusm M}),{\mathbb{T}})$. 
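As a quick check of the independence claim above (a routine verification, recorded here for the reader's convenience): if $\{u^{\prime}(g,h)\}$ is another admissible choice, then $u^{\prime}(g,h)=\lambda(g,h)u(g,h)$ for some $\lambda(g,h)\in{\mathbb{T}}$, because ${\text{\rm Ad}}(u^{\prime}(g,h))={\text{\rm Ad}}(u(g,h))$ and ${\eusm M}$ is a factor; since automorphisms fix scalars, the associated 3-cocycle changes only by a coboundary, $$c^{\prime}(g,h,k)=\lambda(h,k)\lambda(g,hk)\{\lambda(g,h)\lambda(gh,k)\}^{*}\,c(g,h,k)=({\partial}\lambda)(g,h,k)\,c(g,h,k),$$ so $[c^{\prime}]=[c]$ in ${{\text{\rm H}}^{3}}({\text{\rm Out}}({\eusm M}),{\mathbb{T}})$. Independence of the cross-section is checked in the same spirit, by absorbing the unitaries implementing the change into the $u(g,h)$.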
Proposition 3.1 The intrinsic obstruction ${\text{\rm Ob}}({\eusm M})$ of the factor ${\eusm M}$ is the image ${\partial}({{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M}))$ of the intrinsic modular obstruction ${{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M})$ of ${\eusm M}$ under the map $${\partial}:{\text{\rm H}}_{{\text{\rm mod}}\times{\theta},{\text{\rm s}}}^{\text{\rm out}}({\text{\rm Out}}({\eusm M}),{{\text{\rm H}}_{\theta}^{1}},A)\mapsto{{\text{\rm H}}^{3}}({\text{\rm Out}}({\eusm M}),{\mathbb{T}})$$ given by Lemma 2.11. Demonstration Proof In the notation of the last section, we take ${\text{\rm Aut}}({\eusm M})$ for $H$, ${\text{\rm Out}}({\eusm M})$ for $G$, ${{\text{\rm Out}}_{\tau,{\theta}}}({\widetilde{{\eusm M}}})$ for $Q$, ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$ for $L$, ${\text{\rm Int}}({\eusm M})$ for $M$ and ${{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)$ for $N$. Then with $$\tilde{\chi}=\Theta({\eusm M})\in{\Lambda}_{{\text{\rm mod}}\times{\theta}}({\text{\rm Aut}}({\eusm M})\times{\mathbb{R}},{{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}),A)$$ the characteristic square (3.1) gives that $M=K(\tilde{\chi})$. The characteristic invariant $\chi\in{\Lambda}({\text{\rm Aut}}({\eusm M}),{\text{\rm Int}}({\eusm M}),{\mathbb{T}})$ associated with the ${\text{\rm Aut}}({\eusm M})$-equivariant exact sequence: $$\begin{CD}1@>{}>{}>{\mathbb{T}}@>{}>{}>{\eusm U}({\eusm M})@>{}>{}>{\text{\rm Int}}({\eusm M})@>{}>{}>1\end{CD}$$ is precisely the pull-back $\chi=i_{{{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}),{\text{\rm Int}}({\eusm M})}^{*}(\tilde{\chi})$ of Lemma 2.11. Then the obstruction ${\text{\rm Ob}}({\eusm M})={{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}(\chi)$ is ${\partial}({{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M}))={\partial}({\delta}(\tilde{\chi}))$ by Lemma 2.11. $\heartsuit$ Therefore, the modular obstruction ${{\text{\rm Ob}}_{\text{\rm m}}}({\eusm M})$ contains the information carried by the obstruction ${\text{\rm Ob}}({\eusm M})$. Let $G$ be a countable group. Fix a free outer action ${\alpha}$ of $G$ on ${\eusm M}$. If $\dot{\alpha}_{g}$ is the class of ${\alpha}_{g}$ in ${\text{\rm Out}}({\eusm M})$, then the map $\dot{\alpha}\!:g\in G\mapsto\dot{\alpha}_{g}\in{\text{\rm Out}}({\eusm M})$ is an injective homomorphism. With $N=\dot{\alpha}{{}^{-1}}({{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A))\triangleleft G$, the quotient map $\pi\!:g\in G\mapsto\pi(g)=gN\in Q=G/N$ and a cross-section ${\mathfrak{s}}\!:Q\mapsto G$ of $\pi$, we get the modular obstruction $${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A).$$ Two outer actions ${\alpha}$ and ${\beta}$ of $G$ on the same factor ${\eusm M}$ are, by definition, outer conjugate if there exists an automorphism ${\sigma}\in{\text{\rm Aut}}({\eusm M})$ such that $$\dot{\beta}_{g}=\dot{\sigma}\dot{\alpha}_{g}\dot{\sigma}{{}^{-1}},\quad g\in G,$$ where $\dot{\sigma}\in{\text{\rm Out}}({\eusm M})$ is the class of ${\sigma}$ in ${\text{\rm Out}}({\eusm M})$, i.e., $\dot{\sigma}={\sigma}{\text{\rm Int}}({\eusm M})\in{\text{\rm Out}}({\eusm M})$. Theorem 3.2 Let $G$ be a countable discrete group and ${\eusm M}$ a separable infinite factor with flow of weights $\{{\eusm C},{\mathbb{R}},{\theta}\}$. Suppose that ${\alpha}\!:g\in G\mapsto{\alpha}_{g}\in{\text{\rm Aut}}({\eusm M})$ is a free outer action of $G$ on ${\eusm M}$.
Set $N=N({\alpha})={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))$, $Q=G/N$ and fix a cross-section ${\mathfrak{s}}\!:Q\mapsto G$ of the quotient map $\pi\!:G\mapsto Q$. i) The modular obstruction: $${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})\in{\text{\rm H}}_{{\text{\rm mod}}(% {\alpha})\times{\theta}\!,\ {\text{\rm s}}}^{3}(Q\times{\mathbb{R}},{\eusm U}(% {\eusm C}))*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}% }({\mathbb{R}},{\eusm U}({\eusm C})))$$ is an invariant for the outer conjugacy class of ${\alpha}$. ii) If ${\eusm M}$ is an approximately finite dimensional factor and $G$ is amenable, then the triplet $(N({\alpha}),{\text{\rm mod}}({\alpha}),{{\text{\rm Ob}}_{\text{\rm m}}}({% \alpha}))$ is a complete invariant of the outer conjugacy class of ${\alpha}$ in the sense that if ${\beta}:G\mapsto{\text{\rm Aut}}({\eusm M})$ is another outer action of $G$ on ${\eusm M}$ such that $N({\alpha})=N({\beta})$, and there exists an automorphism ${\sigma}\in{\text{\rm Aut}}_{\theta}({\eusm C})$ such that $${\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm mod}}({% \alpha}_{g}){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}}% ={\text{\rm mod}}({\beta}_{g}),\quad g\in G;\quad{\sigma}_{*}({{\text{\rm Ob}}% _{\text{\rm m}}}({\alpha}))={{\text{\rm Ob}}_{\text{\rm m}}}({\beta}),$$ then the automorphism ${\sigma}$ of ${\eusm C}$ can be extended to an automorphism denoted by ${\sigma}$ again to the non-commutative flow of weights $\{{\widetilde{{\eusm M}}},{\mathbb{R}},{\theta},\tau\}$ such that $${\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{g}{\lower-1% .29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}}\equiv{\beta}_{g}% \quad{\text{\rm mod}}\ {\text{\rm Int}}({\eusm M}),\quad g\in G.$$ Demonstration Proof We continue to denote the unitary group ${\eusm U}({\eusm C})$ by $A$ for short. Let $[c^{\alpha}]\in{{\text{\rm H}}^{3}}(G,{\mathbb{T}})$ be the obstruction ${\text{\rm Ob}}({\alpha})$ and $c^{\alpha}\in{{\text{\rm Z}}^{3}}(G,{\mathbb{T}})$ represent $[c]$ which is obtained by fixing a family $\{u(g,h)\in{\eusm U}({\eusm M})\!:\ g,h\in G\}$ such that $${\alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{h}={% \text{\rm Ad}}(u(g,h)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha% }_{gh},\quad g,h\in G,$$ and by setting $$c^{\alpha}(g,h,k)={\alpha}_{g}(u(h,k))u(g,hk)\{u(g,h)u(gh,k)\}^{*}\in{\mathbb{% T}},\quad g,h,k\in G.$$ Let ${\pi\!_{\scriptscriptstyle G}}\!:H=H(c^{\alpha})\mapsto G$ be the resolution group of the cocycle $c^{\alpha}\in{{\text{\rm Z}}^{3}}(G,{\mathbb{T}})$ and the resolution map, i.e., $\pi_{G}^{*}(c^{\alpha})\in{{\text{\rm B}}^{3}}(H,{\mathbb{T}})$. Choose $b:h\in H\mapsto b(h)\in{\mathbb{T}}$ such that $$c^{\alpha}({\pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{\scriptscriptstyle G}}(h)% ,{\pi\!_{\scriptscriptstyle G}}(k))=b(h,k)b(g,hk)\{b(g,h)b(gh,k)\}^{*},\quad g% ,h,k\in H.$$ Setting $$\bar{u}_{H}(g,h)=b(g,h)^{*}u({\pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{% \scriptscriptstyle G}}(h)),\quad g,h\in H,$$ we obtain $${\alpha}_{{\pi\!_{\scriptscriptstyle G}}(g)}(\bar{u}_{H}(h,k))\bar{u}_{H}(g,hk% )\{\bar{u}_{H}(g,h)\bar{u}_{H}(gh,k)\}^{*}=1.$$ Hence $\{{\alpha}_{\pi\!_{\scriptscriptstyle G}},\bar{u}_{H}\}$ is a cocycle twisted action of $H$. 
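For completeness, the displayed identity can be verified in one line: since $b$ takes values in ${\mathbb{T}}$, the $b$-factors pull out of the product, and the defining property of $b$ gives $${\alpha}_{{\pi\!_{\scriptscriptstyle G}}(g)}(\bar{u}_{H}(h,k))\bar{u}_{H}(g,hk)\{\bar{u}_{H}(g,h)\bar{u}_{H}(gh,k)\}^{*}=\big\{b(h,k)b(g,hk)\{b(g,h)b(gh,k)\}^{*}\big\}^{*}\,c^{\alpha}({\pi\!_{\scriptscriptstyle G}}(g),{\pi\!_{\scriptscriptstyle G}}(h),{\pi\!_{\scriptscriptstyle G}}(k))=1.$$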
Then by [ST1: Theorem 4.13, page 156], there exits a family $\{v_{H}(g)\in{\eusm U}({\eusm M}):\ g\in H\}$ such that $$\bar{u}_{H}(g,h)={\alpha}_{{\pi\!_{\scriptscriptstyle G}}(g)}(v_{H}(h)^{*})v_{% H}(g)^{*}v_{H}(gh),\quad g,h\in H,$$ so that the map $$g\in H\mapsto{\beta}_{g}={\text{\rm Ad}}(v_{H}(g)){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{{\pi\!_{\scriptscriptstyle G}}(g)}\in{% \text{\rm Aut}}({\eusm M})$$ is an action of $H$ on ${\eusm M}$ as seen below: $$\displaystyle{\beta}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \beta}_{h}$$ $$\displaystyle={\text{\rm Ad}}(v_{H}(g)){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{{\pi\!_{\scriptscriptstyle G}}(g)}{\lower% -1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad}}(v_{H}(h)){\lower-1.2% 9pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{{\pi\!_{\scriptscriptstyle G}}% (h)}$$ $$\displaystyle={\text{\rm Ad}}(v_{H}(g){\alpha}_{{\pi\!_{\scriptscriptstyle G}}% (g)}(v_{H}(h))){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{{\pi% \!_{\scriptscriptstyle G}}(g)}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}% }{\alpha}_{{\pi\!_{\scriptscriptstyle G}}(h)}$$ $$\displaystyle={\text{\rm Ad}}(v_{H}(g){\alpha}_{{\pi\!_{\scriptscriptstyle G}}% (g)}(v_{H}(h))){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad}% }(\bar{u}_{H}(g,h)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{% {\pi\!_{\scriptscriptstyle G}}(gh)}$$ $$\displaystyle={\text{\rm Ad}}(v_{H}(gh)){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{{\pi\!_{\scriptscriptstyle G}}(gh)}={% \beta}_{gh},\quad g,h\in H.$$ Therefore, the outer action ${\alpha}_{\pi\!_{\scriptscriptstyle G}}$ is perturbed to an action ${\beta}$ by inner automorphisms. With ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ a cross-section of ${\pi\!_{\scriptscriptstyle G}}$, the map $\dot{\beta}\!:\ g\in G\mapsto{\beta}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}% }(g)}\in{\text{\rm Aut}}({\eusm M})$ is an outer action of $G$ on ${\eusm M}$ which is an inner perturbation of the original outer action ${\alpha}$ because $$\displaystyle\dot{\beta}_{g}$$ $$\displaystyle={\beta}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)}={\text{% \rm Ad}}(v_{H}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)){\lower-1.29pt% \hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{{\pi\!_{\scriptscriptstyle G}}({{% \mathfrak{s}}\!_{\scriptscriptstyle H}}(g))}$$ $$\displaystyle={\text{\rm Ad}}(v_{H}({{\mathfrak{s}}\!_{\scriptscriptstyle H}}(% g)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{g},\quad g\in G.$$ Hence we may and do replace the outer action ${\alpha}$ by $\dot{\beta}$. Then the outer action ${\alpha}$ is given by an action ${\beta}$ of $H$ in the following way: $${\alpha}_{g}={\beta}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)},\qquad g% \in G.$$ The action ${\beta}$ of $H$ gives rise to the characteristic invariant $\chi({\beta})\in{\Lambda}(H,M,{\mathbb{T}})$ with $M={\text{\rm Ker}}({\pi\!_{\scriptscriptstyle G}})={\alpha}{{}^{-1}}({\text{% \rm Int}}({\eusm M}))$, so that the obstraction ${\text{\rm Ob}}({\alpha})$ becomes the image ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}(\chi({\beta}))$ of $\chi({\beta})$ under the HJR-map ${{\delta}_{\scriptscriptstyle{\text{\rm HJR}}}}$. i) We only need to prove that the modular obstruction is unchanged under the perturbation by inner automorphisms. 
Choose $\{w(p,q):p,q\in Q\}\i{\widetilde{{\eusm U}}}({\eusm M})$ so that $${\alpha}_{p}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{q}={% \widetilde{\text{\rm Ad}}}(w(p,q)){\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\alpha}_{pq},\quad p,q\in Q,$$ where ${\alpha}_{p}$ means ${\alpha}_{{\mathfrak{s}}(p)}$ for short. We write ${\alpha}_{\tilde{p}}\in{\text{\rm Aut}}({\widetilde{{\eusm M}}})$ for ${\alpha}_{p}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\theta}_{s},{% \tilde{p}}=(p,s)\in{\widetilde{Q}}=Q\times{\mathbb{R}}$. Then for each triple ${\tilde{p}}=(p,s),{\tilde{q}}=(q,t),{\tilde{r}}=(r,u)\in{\widetilde{Q}}$, the cocycle $c=c^{\alpha}$ representing ${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})$ is given by: $$\displaystyle c^{\alpha}({\tilde{p}},{\tilde{q}},{\tilde{r}})$$ $$\displaystyle={\alpha}_{{\tilde{p}}}(w(q,r))w(p,qr)\{w(p,q)w(pq,r)\}^{*}$$ $$\displaystyle={\alpha}_{p}({\theta}_{s}(w(q,r))w(q,r)^{*}){\alpha}_{p}(w(q,r))% w(p,qr)\{w(p,q)w(pq,r)\}^{*}$$ $$\displaystyle={\alpha}_{p}(d(s;q,r))c_{Q}(p,q,r),$$ where $$\displaystyle d(s;q,r)={\theta}_{s}(w(q,r))w(q,r)^{*};$$ $$\displaystyle c_{Q}(p,q,r)$$ $$\displaystyle={\alpha}_{p}(w(q,r))w(p,qr)\{w(p,q)w(pq,r)\}^{*}.$$ The $G$-equivariant homomorphism $\nu:N\mapsto{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)$ is given by $\nu_{\alpha}(m)=\dot{{\partial}_{\theta}}({\alpha}_{m})\in{{\text{\rm H}}_{% \theta}^{1}}({\mathbb{R}},A),m\in N.$ Let $\{v(g):g\in G\}\i{\eusm U}({\eusm M})$ and set $${\beta}_{g}={\text{\rm Ad}}(v(g)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ% $}}}{\alpha}_{g},\quad g\in G.$$ Then we have, with ${\beta}_{p}={\beta}_{{\mathfrak{s}}(p)},p\in Q,$ $$\displaystyle{\beta}_{p}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \beta}_{q}$$ $$\displaystyle={\text{\rm Ad}}(v({\mathfrak{s}}(p))){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{p}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\text{\rm Ad}}(v({\mathfrak{s}}(q))){\lower-1.29pt% \hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{q}$$ $$\displaystyle={\text{\rm Ad}}(v({\mathfrak{s}}(p)){\alpha}_{p}(v({\mathfrak{s}% }(q)))){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{p}{\lower-1.% 29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{q}$$ $$\displaystyle={\widetilde{\text{\rm Ad}}}(v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(q)))w(p,q)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \alpha}_{pq}$$ $$\displaystyle={\widetilde{\text{\rm Ad}}}(v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(q)))w(p,q)v({\mathfrak{s}}(pq))^{*}){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\text{\rm Ad}}(v({\mathfrak{s}}(pq))){\lower-1.29% pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{pq}$$ $$\displaystyle={\text{\rm Ad}}(v({\mathfrak{s}}(p)){\alpha}_{p}(v({\mathfrak{s}% }(q)))w(p,q)v({\mathfrak{s}}(pq))^{*}){\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\beta}_{pq},\quad p,q\in Q.$$ Therefore, we get $$\displaystyle c^{\beta}({\tilde{p}}$$ $$\displaystyle,{\tilde{q}},{\tilde{r}})={\beta}_{\tilde{p}}\Big{(}v({\mathfrak{% s}}(q)){\alpha}_{q}(v({\mathfrak{s}}(r)))w(q,r)v({\mathfrak{s}}(qr))^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(qr))w(p,qr)v({\mathfrak{s}}(pqr))^{*}$$ $$\displaystyle\hskip 36.135pt\times\Big{(}v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(q))w(p,q)v({\mathfrak{s}}(pq))^{*}$$ $$\displaystyle\hskip 36.135pt\times v({\mathfrak{s}}(pq)){\alpha}_{pq}(v({% \mathfrak{s}}(r))w(pq,r)v({\mathfrak{s}}(pqr))^{*}\Big{)}^{*}$$ 
$$\displaystyle={\beta}_{p}\Big{\{}{\theta}_{s}\Big{(}v({\mathfrak{s}}(q)){% \alpha}_{q}(v({\mathfrak{s}}(r)))w(q,r)v({\mathfrak{s}}(qr))^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times\Big{(}v({\mathfrak{s}}(q)){\alpha}_{q}(v({% \mathfrak{s}}(r)))w(q,r)v({\mathfrak{s}}(qr))^{*}\Big{)}^{*}\Big{\}}$$ $$\displaystyle\hskip 36.135pt\times{\beta}_{p}\Big{(}v({\mathfrak{s}}(q)){% \alpha}_{q}(v({\mathfrak{s}}(r)))w(q,r)v({\mathfrak{s}}(qr))^{*}\Big{)}$$ $$\displaystyle\hskip 36.135pt\times v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(qr))w(p,qr)v({\mathfrak{s}}(pqr))^{*}$$ $$\displaystyle\hskip 36.135pt\times\Big{(}v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(q))w(p,q)v({\mathfrak{s}}(pq))^{*}$$ $$\displaystyle\hskip 36.135pt\times v({\mathfrak{s}}(pq)){\alpha}_{pq}(v({% \mathfrak{s}}(r))w(pq,r)v({\mathfrak{s}}(pqr))^{*}\Big{)}^{*}$$ $$\displaystyle=v({\mathfrak{s}}(p)){\alpha}_{p}\Big{(}{\theta}_{s}(w(q,r))w(q,r% )^{*}\Big{)}v({\mathfrak{s}}(p))^{*}$$ $$\displaystyle\hskip 36.135pt\times v({\mathfrak{s}}(p)){\alpha}_{p}\Big{(}v({% \mathfrak{s}}(q)){\alpha}_{q}(v({\mathfrak{s}}(r)))w(q,r)v({\mathfrak{s}}(qr))% ^{*}\Big{)}v({\mathfrak{s}}(p))^{*}$$ $$\displaystyle\hskip 36.135pt\times v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(qr))w(p,qr)v({\mathfrak{s}}(pqr))^{*}$$ $$\displaystyle\hskip 36.135pt\times\Big{(}v({\mathfrak{s}}(p)){\alpha}_{p}(v({% \mathfrak{s}}(q))w(p,q)v({\mathfrak{s}}(pq))^{*}$$ $$\displaystyle\hskip 36.135pt\times v({\mathfrak{s}}(pq)){\alpha}_{pq}(v({% \mathfrak{s}}(r))w(pq,r)v({\mathfrak{s}}(pqr))^{*}\Big{)}^{*}$$ $$\displaystyle={\alpha}_{p}\Big{(}{\theta}_{s}(w(q,r))w(q,r)^{*}\Big{)}{\alpha}% _{p}\Big{(}{\alpha}_{q}(v({\mathfrak{s}}(r)))w(q,r)\Big{)}$$ $$\displaystyle\hskip 36.135pt\times w(p,qr)\Big{(}w(p,q){\alpha}_{pq}(v({% \mathfrak{s}}(r))w(pq,r)\Big{)}^{*}$$ $$\displaystyle={\alpha}_{p}\Big{(}{\theta}_{s}(w(q,r))w(q,r)^{*}\Big{)}w(p,q){% \alpha}_{pq}(v({\mathfrak{s}}(r)))w(p,q)^{*}$$ $$\displaystyle\hskip 36.135pt\times{\alpha}_{p}(w(q,r))w(p,qr)\Big{(}w(p,q){% \alpha}_{pq}(v({\mathfrak{s}}(r))w(pq,r)\Big{)}^{*}$$ $$\displaystyle={\alpha}_{p}\Big{(}{\theta}_{s}(w(q,r))w(q,r)^{*}\Big{)}{\alpha}% _{p}(w(q,r))w(p,qr)\Big{(}w(p,q)w(pq,r)\Big{)}^{*}$$ $$\displaystyle=c^{\alpha}({\tilde{p}},{\tilde{q}},{\tilde{r}}).$$ Therefore, the inner perturbation ${\beta}$ of the outer action ${\alpha}$ of $G$ does not change the modular obstruction cocycle $c^{\alpha}$, i.e., $c^{\alpha}=c^{\beta}$ as seen above. Hence ${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})={{\text{\rm Ob}}_{\text{\rm m}}}({% \beta})$. This proves the assertion (i). ii) Assume that ${\eusm M}$ is an approximately finite dimensional factor with non-commutative flow $\{{\widetilde{{\eusm M}}},{\mathbb{R}},{\theta},\tau\}$ of weights and the flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ of weights on ${\eusm M}$, and suppose that $G$ is a countable discrete amenable group. Let $\dot{\alpha}$ and $\dot{\beta}$ be outer actions of $G$ on ${\eusm M}$ such that We want to conclude from this data that the outer actions $\dot{\alpha}$ and $\dot{\beta}$ of $G$ are outer conjugate. 
The assumption (c) implies that $${\text{\rm Ob}}(\dot{\alpha})={\partial}(({{\text{\rm Ob}}_{\text{\rm m}}}(\dot{\alpha}),\nu_{\dot{\alpha}}))={\partial}(({{\text{\rm Ob}}_{\text{\rm m}}}(\dot{\beta}),\nu_{\dot{\beta}}))={\text{\rm Ob}}(\dot{\beta})\in{{\text{\rm H}}^{3}}(G,{\mathbb{T}}).$$ Therefore, we may and do choose the same obstruction cocycle $c=c^{\dot{\alpha}}=c^{\dot{\beta}}$, which allows us to pick the common resolution system ${\pi\!_{\scriptscriptstyle G}}:H=H(c)\mapsto G$ and actions ${\alpha}$ and ${\beta}$ of $H$ on ${\eusm M}$ which give $\dot{\alpha}$ and $\dot{\beta}$ respectively: $$\dot{\alpha}_{g}={\alpha}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)},\quad\dot{\beta}_{g}={\beta}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)},\quad g\in G.$$ First, the resolution group $H$ is amenable by Lemma 2.14 and the actions ${\alpha}$ and ${\beta}$ of $H$ give rise to the following invariants: $$\displaystyle L=\pi_{G}^{-1}(N)={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))={\beta}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})),$$ $$\displaystyle{\chi_{\text{\rm m}}}({\alpha}),\ {\chi_{\text{\rm m}}}({\beta})\in{\Lambda}_{{\text{\rm mod}}({\alpha})\times{\theta}}(H\times{\mathbb{R}},L,A).$$ Since $M={\text{\rm Ker}}({\pi\!_{\scriptscriptstyle G}})={\alpha}{{}^{-1}}({\text{\rm Int}}({\eusm M}))={\beta}{{}^{-1}}({\text{\rm Int}}({\eusm M}))$, we have $$M=K({\chi_{\text{\rm m}}}({\alpha}))=K({\chi_{\text{\rm m}}}({\beta})).$$ Therefore, the modular characteristic invariants ${\chi_{\text{\rm m}}}({\alpha})$ and ${\chi_{\text{\rm m}}}({\beta})$ are both members of ${\Lambda}_{{\alpha}\times{\theta}}({\widetilde{H}},L,M,A)$ with ${\widetilde{H}}=H\times{\mathbb{R}}$, where we are now going to use ${\alpha}$ for ${\text{\rm mod}}({\alpha})={\text{\rm mod}}({\beta})$. The resolution system $\{H,{\pi\!_{\scriptscriptstyle G}}\}$ generates the following modified HJR-exact sequence: $$\begin{CD}\cdots @>{}>{}>{{\text{\rm H}}^{2}}(H,{\mathbb{T}})@>{{\text{\rm Res}}}>{}>{\Lambda}_{{\alpha}\times{\theta}}({\widetilde{H}},L,M,A)\\ @>{{\delta}}>{}>{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)@>{{\partial}}>{}>{{\text{\rm H}}^{3}}(H,{\mathbb{T}}),\end{CD}$$ such that $${\delta}({\chi_{\text{\rm m}}}({\alpha}))={{\text{\rm Ob}}_{\text{\rm m}}}(\dot{\alpha})={{\text{\rm Ob}}_{\text{\rm m}}}(\dot{\beta})={\delta}({\chi_{\text{\rm m}}}({\beta})).$$ With this, our assertion (ii) follows immediately from the next theorem. $\heartsuit$ Theorem 3.3 Let ${\alpha}$ and ${\beta}$ be two actions of a countable discrete group $H$ on an infinite factor ${\eusm M}$ with $L={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))={\beta}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))$ and $M={\alpha}{{}^{-1}}({\text{\rm Int}}({\eusm M}))={\beta}{{}^{-1}}({\text{\rm Int}}({\eusm M}))$. Let $G=H/M$ and ${\pi\!_{\scriptscriptstyle G}}\!:H\mapsto G$ be the quotient map. Suppose that ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}\!:G\mapsto H$ is a cross-section and set $$\dot{\alpha}_{g}={\alpha}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)},\quad\dot{\beta}_{g}={\beta}_{{{\mathfrak{s}}\!_{\scriptscriptstyle H}}(g)},\quad g\in G,$$ to obtain outer actions $\dot{\alpha}$ and $\dot{\beta}$ of $G$ on ${\eusm M}$. i) The two outer actions ${\dot{\alpha}}$ and ${\dot{\beta}}$ of $G$ are outer conjugate if and only if the two original actions ${\alpha}$ and ${\beta}$ of $H$ are outer conjugate.
ii) If the two actions ${\alpha}$ and ${\beta}$ of $H$ on ${\eusm M}$ are outer conjugate, then there exists an automorphism ${\sigma}\in{\text{\rm Aut}}_{\theta}({\eusm C})$ such that iii) If ${\eusm M}$ is an approximately finite dimensional infinite factor and $H$ is amenable in addition, then the existence of an automorphism ${\sigma}\in{\text{\rm Aut}}_{\theta}({\eusm C})$ such that: is sufficient for ${\alpha}$ and ${\beta}$ to be outer conjugate. Demonstration Proof i) It is obvious that the outer conjugacy of the outer actions ${\dot{\alpha}}$ and ${\dot{\beta}}$ of $G$ follows from that of the original actions ${\alpha}$ and ${\beta}$ of $H$. So suppose that the outer actions ${\dot{\alpha}}$ and ${\dot{\beta}}$ of $G$ are outer conjugate, which means the existence of an automorphism ${\sigma}\in{\text{\rm Aut}}({\eusm M})$ and a family $\{u(g):g\in G\}\subset{\eusm U}({\eusm M})$ of unitaries such that $${\text{\rm Ad}}(u(g)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\dot{% \alpha}}_{g}={\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\dot{% \beta}}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}},% \quad g\in G.$$ Writing each $h\in H$ in the form: $$h={{\text{\rm m}}_{M}}(h){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({\pi\!_{% \scriptscriptstyle G}}(h)),\quad h\in H,\ {{\text{\rm m}}_{M}}(h)\in M,$$ we have $$\begin{aligned} &\displaystyle{\alpha}_{h}={\alpha}_{{{\text{\rm m}}_{M}}(h)}{% \lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\dot{\alpha}}_{{\pi\!_{% \scriptscriptstyle G}}(h)};\\ &\displaystyle{\beta}_{h}={\beta}_{{{\text{\rm m}}_{M}}(h)}{\lower-1.29pt\hbox% {{$\scriptscriptstyle\circ$}}}{\dot{\beta}}_{{\pi\!_{\scriptscriptstyle G}}(h)% },\end{aligned}\quad h\in H.$$ As ${\alpha}_{m}$ and ${\beta}_{m}$ are inner for each $m\in M$, they are in the following form: $$\displaystyle{\alpha}_{m}={\text{\rm Ad}}(v(m)),\quad{\beta}_{m}={\text{\rm Ad% }}(w(m)),\quad m\in M,$$ for some $v(m),w(m)\in{\eusm U}({\eusm M})$. 
Therefore, we have, for each $h\in H$, $$\displaystyle{\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\beta}_% {h}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}}$$ $$\displaystyle={\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\beta}% _{{{\text{\rm m}}_{M}}(h){{\mathfrak{s}}\!_{\scriptscriptstyle H}}({\pi\!_{% \scriptscriptstyle G}}(h))}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \sigma}{{}^{-1}}={\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \beta}_{{{\text{\rm m}}_{M}}(h)}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$% }}}{\dot{\beta}}_{{\pi\!_{\scriptscriptstyle G}}(h)}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}}$$ $$\displaystyle={\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{% \rm Ad}}(w({{\text{\rm m}}_{M}}(h))){\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\sigma}{{}^{-1}}{\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\dot{\beta}}_{{\pi\!_{\scriptscriptstyle G}}(h)}{\lower-1.29pt\hbox{% {$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}}$$ $$\displaystyle={\text{\rm Ad}}({\sigma}(w({{\text{\rm m}}_{M}}(h)))){\lower-1.2% 9pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad}}(u({\pi\!_{% \scriptscriptstyle G}}(h)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \dot{\alpha}}_{{\pi\!_{\scriptscriptstyle G}}(h)}$$ $$\displaystyle={\text{\rm Ad}}({\sigma}(w({{\text{\rm m}}_{M}}(h)))){\lower-1.2% 9pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad}}(u({\pi\!_{% \scriptscriptstyle G}}(h)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \alpha}_{{{\text{\rm m}}_{M}}(h)}^{-1}{\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\alpha}_{h}$$ $$\displaystyle={\text{\rm Ad}}({\sigma}(w({{\text{\rm m}}_{M}}(h)))){\lower-1.2% 9pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad}}(u({\pi\!_{% \scriptscriptstyle G}}(h)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \text{\rm Ad}}(v({{\text{\rm m}}_{M}}(h))^{*}){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{h}$$ $$\displaystyle={\text{\rm Ad}}(u(h)){\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\alpha}_{h},$$ where $u(h)={\sigma}(w({{\text{\rm m}}_{M}}(h)))u({\pi\!_{\scriptscriptstyle G}}(h))v% ({{\text{\rm m}}_{M}}(h))^{*}$. Hence the actions ${\alpha}$ and ${\beta}$ of $H$ are outer conjugate. ii) Assume that the two actions ${\alpha}$ and ${\beta}$ of $H$ on ${\eusm M}$ are outer conjugate. Then there exist ${\sigma}\in{\text{\rm Aut}}({\eusm M})$ and a family $\{u(h):h\in H\}\subset{\eusm U}({\eusm M})$ such that $u(1)=1$ and $${\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\beta}_{h}{\lower-1.% 29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}}={\text{\rm Ad}}(u(h))% {\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{h},\quad h\in H.$$ Since ${\text{\rm Int}}({\eusm M})$ acts on the flow of weights trivially, we have $${\text{\rm mod}}({\sigma}){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \text{\rm mod}}({\beta}_{h}){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \text{\rm mod}}({\sigma}){{}^{-1}}={\text{\rm mod}}({\alpha}_{h}),\quad h\in H,$$ and conclude that ${\text{\rm mod}}({\sigma})$ conjugates ${\text{\rm mod}}({\alpha})$ and ${\text{\rm mod}}({\beta})$, i.e., the assertion (a). 
Replacing ${\beta}_{g},g\in H,$ by ${\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\beta}_{g}{\lower-1.% 29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}},g\in H,$ we may and do assume from now on for short that ${\text{\rm mod}}({\alpha})={\text{\rm mod}}({\beta})$ and $${\beta}_{g}={\text{\rm Ad}}(u(h)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ% $}}}{\alpha}_{h},\quad h\in H.$$ As ${\alpha}$ and ${\beta}$ are both actions, we have $$\displaystyle{\text{\rm Ad}}(u(gh))$$ $$\displaystyle{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{gh}={% \beta}_{gh}={\beta}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\beta% }_{h}={\text{\rm Ad}}(u(g)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \alpha}_{g}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad}}(u(% h)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{h}$$ $$\displaystyle={\text{\rm Ad}}(u(g){\alpha}_{g}(u(h))){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{g}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{h}$$ $$\displaystyle={\text{\rm Ad}}(u(g){\alpha}_{g}(u(h))){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{gh},\quad g,h\in H.$$ Thus we get $$\mu(g,h)=u(g){\alpha}_{g}(u(h))u(gh)^{*}\in{\mathbb{T}},\quad g,h\in H,$$ and $\mu\in{{\text{\rm Z}}^{2}}(H,{\mathbb{T}})$. Each ${\alpha}_{m},m\in L,$ falls in ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$, so that it is of the form: $${\alpha}_{m}={\widetilde{\text{\rm Ad}}}(v(m)),\quad v(m)\in{\widetilde{{\eusm U% }}}({\eusm M}).$$ As ${\beta}_{m}={\text{\rm Ad}}(u(m)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ% $}}}{\alpha}_{m}$, we may choose $w(m)=u(m)v(m),m\in L$. The unitary families $\{v(m):m\in L\}$ and $\{w(m):m\in L\}$ generate the corresponding modular characteristic cocycles: $$\displaystyle{\alpha}_{\tilde{g}}(v(g{{}^{-1}}mg))$$ $$\displaystyle={\lambda}_{\alpha}(m;g,s)v(m);\quad{\beta}_{\tilde{g}}(w(g{{}^{-% 1}}mg))$$ $$\displaystyle={\lambda}_{\beta}(m;g,s)w(m);$$ $$\displaystyle v(m)v(n)$$ $$\displaystyle=\mu_{\alpha}(m,n)v(mn);\qquad\quad w(m)w(n)$$ $$\displaystyle=\mu_{\beta}(m,n)w(mn),$$ with $\tilde{g}=(g,s)\in{\widetilde{H}}$ and $m,n\in L$. 
Now we take a closer look at $({\lambda}_{\beta},\mu_{\beta})\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,A)$: $$\displaystyle\mu_{\beta}$$ $$\displaystyle(m,n)=w(m)w(n)w(mn)^{*},\quad m,n\in L,$$ $$\displaystyle=(u(m)v(m))(u(n)v(n))(u(mn)v(mn))^{*}$$ $$\displaystyle=u(m){\alpha}_{m}(u(n))v(m)v(n)v(mn)^{*}u(mn)^{*}$$ $$\displaystyle=u(m){\alpha}_{m}(u(n))\mu_{\alpha}(m,n)u(mn)^{*}$$ $$\displaystyle=\mu_{\alpha}(m,n)u(m){\alpha}_{m}(u(n))u(mn)^{*}$$ $$\displaystyle=\mu_{\alpha}(m,n)\mu(m,n),$$ and for $(m,\tilde{g})=(m,g,s)\in L\times{\widetilde{H}}$ $$\displaystyle{\lambda}_{\beta}$$ $$\displaystyle(m;\tilde{g})u(m)v(m)={\lambda}_{\beta}(m;\tilde{g})w(m)$$ $$\displaystyle={\beta}_{\tilde{g}}(w(g{{}^{-1}}mg))$$ $$\displaystyle={\text{\rm Ad}}(u(g)){\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\alpha}_{\tilde{g}}\Big{(}u(g{{}^{-1}}mg)v(g{{}^{-1}}mg)\Big{)}$$ $$\displaystyle=u(g){\alpha}_{g}(u(g{{}^{-1}}mg)){\lambda}_{\alpha}(m;\tilde{g})% v(m)u(g)^{*}$$ $$\displaystyle=\mu(g,g{{}^{-1}}mg)u(mg){\lambda}_{\alpha}(m;\tilde{g})v(m)u(g)^% {*}$$ $$\displaystyle={\lambda}_{\alpha}(m;\tilde{g})\mu(g,g{{}^{-1}}mg)\mu(m,g)^{*}u(% m){\alpha}_{m}(u(g))v(m)u(g)^{*}$$ $$\displaystyle={\lambda}_{\alpha}(m;\tilde{g})\mu(g,g{{}^{-1}}mg)\mu(m,g)^{*}u(% m)v(m),$$ Therefore the characteristic cocycles $({\lambda}_{\beta},\mu_{\beta})\in{\text{\rm Z}}_{{\alpha}\times{\theta}}({% \widetilde{H}},L,A)$ is of the form: $$\displaystyle\mu_{\beta}(m,n)$$ $$\displaystyle=\mu(m,n)\mu_{\alpha}(m,n),\quad m,n\in L;$$ $$\displaystyle{\lambda}_{\beta}(m;\tilde{g})$$ $$\displaystyle=\mu(g,g{{}^{-1}}mg)\overline{\mu(m,g)}{\lambda}_{\alpha}(m;% \tilde{g}),\quad\tilde{g}=(g,s)\in{\widetilde{H}}.$$ Thus we conclude that ${\chi_{\text{\rm m}}}(\chi({\beta}))={\text{\rm Res}}([\mu]){\chi_{\text{\rm m% }}}(\chi({\alpha}))$ in ${\Lambda}_{{\alpha}\times{\theta}}({\widetilde{H}},L,M,A)$. In virtue of Theorem 2.7, this is equivalent to the fact that $${{\text{\rm Ob}}_{\text{\rm m}}}(\dot{\alpha})={{\text{\rm Ob}}_{\text{\rm m}}% }(\dot{\beta})\in{{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},% A)*_{{\mathfrak{s}}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}}).$$ iii) Suppose that ${\eusm M}$ is an infinite AFD factor and $H$ is amenable. The automorphism ${\sigma}\in{\text{\rm Aut}}_{\theta}({\eusm C})$ can be extended to an automorphism in ${{\text{\rm Aut}}_{\tau,{\theta}}}({\widetilde{{\eusm M}}})$ by [ST3] which will be denoted again by ${\sigma}$. Replacing $\{{\beta}_{g}:g\in H\}$ by $\{{\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\beta}_{g}{\lower-% 1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma}{{}^{-1}}:g\in H\}$, we may and do assume that ${\text{\rm mod}}({\alpha})={\text{\rm mod}}({\beta})$ and ${\delta}({\chi_{\text{\rm m}}}({\alpha}))={\delta}({\chi_{\text{\rm m}}}({% \beta}))$ in the invariant group ${{\text{\rm H}}_{{\alpha},\text{\rm s}}^{3}}({\widetilde{Q}},A)*_{{\mathfrak{s% }}}{\text{\rm Hom}}_{G}(N,{{\text{\rm H}}_{\theta}^{1}})$. The modified HJR-exact sequence of Theorem 2.7 yields the existence of a cohomology class $[\mu]\in{{\text{\rm H}}^{2}}(H,{\mathbb{T}})$ such that $${\chi_{\text{\rm m}}}({\beta})={\text{\rm Res}}([\mu]){\chi_{\text{\rm m}}}({% \alpha})\in{\Lambda}_{{\alpha}\times{\theta}}({\widetilde{H}},L,M,A).$$ A cocycle perturbation of ${\alpha}$, denoted by ${\alpha}$ again, leaves a subfactor ${\eusm B}$ of type I${}_{\infty}$ pointwise invariant. 
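A standard way to produce the projective unitary representation invoked in the next step is, for instance, the left regular $\mu$-representation, recorded here for the reader's convenience: on $\ell^{2}(H)$ set $$\big(\lambda_{\mu}(g)\xi\big)(k)=\mu(g,g{{}^{-1}}k)\,\xi(g{{}^{-1}}k),\qquad\xi\in\ell^{2}(H),\ g,k\in H;$$ the 2-cocycle identity for $\mu$ gives $\lambda_{\mu}(g)\lambda_{\mu}(h)=\mu(g,h)\lambda_{\mu}(gh)$. When $H$ is infinite, the algebra of all bounded operators on $\ell^{2}(H)$ is a factor of type I${}_{\infty}$, so $\lambda_{\mu}$ can be transported to the subfactor ${\eusm B}$; for finite $H$ one first tensors with a type I${}_{\infty}$ factor.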
Let $u:g\in H\mapsto u(g)\in{\eusm U}({\eusm B})$ be a projective unitary representation of $H$ in ${\eusm B}$ with the multiplier $\mu\in{{\text{\rm Z}}^{2}}(H,{\mathbb{T}})$ representing $[\mu]$ such that $$u(g)u(h)=\mu(g,h)u(gh),\quad g,h\in H.$$ Set ${}_{u}{\alpha}_{g}={\text{\rm Ad}}(u(g)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{g},g\in H$. Then it is a straightforward calculation to show that ${\chi_{\text{\rm m}}}({}_{u}{\alpha})={\text{\rm Res}}([\mu]){\chi_{\text{\rm m}}}({\alpha})$. Therefore, the characteristic invariant ${\chi_{\text{\rm m}}}({}_{u}{\alpha})$ is precisely the invariant ${\chi_{\text{\rm m}}}({\beta})$ of ${\beta}$. Hence the cocycle conjugacy classification theorem, [KtST1], guarantees the cocycle conjugacy of ${}_{u}{\alpha}$ and ${\beta}$. Therefore, the original actions ${\alpha}$ and ${\beta}$ are outer conjugate. $\heartsuit$ §4. Model Construction As laid down in [KtST1], the construction of a model from a set of invariants is an integral part of the classification theory. It is particularly important here because the invariants associated with outer actions do not form a standard Borel space. For example, the classification functor cannot be Borel in the case of type III${}_{0}$. So we have to begin with a desingularization of the space of invariants. We fix an ergodic flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ to begin with. An action ${\alpha}$ of a group $G$ on the flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ means a homomorphism $g\in G\mapsto{\alpha}_{g}\in{\text{\rm Aut}}_{\theta}({\eusm C})$, where ${\text{\rm Aut}}_{\theta}({\eusm C})=\{{\sigma}\in{\text{\rm Aut}}({\eusm C}):{\sigma}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\theta}_{s}={\theta}_{s}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\sigma},s\in{\mathbb{R}}\}$. As before, we denote the unitary group ${\eusm U}({\eusm C})$ of ${\eusm C}$ by $A$ for short. The first cohomology group ${{\text{\rm H}}_{\theta}^{1}}={{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A)$ cannot be a standard Borel space if the flow ${\theta}$ is properly ergodic. So we have to consider the first cocycle group ${{\text{\rm Z}}_{\theta}^{1}}={{\text{\rm Z}}_{\theta}^{1}}({\mathbb{R}},A)$ instead, together with the coboundary subgroup ${{\text{\rm B}}_{\theta}^{1}}={{\text{\rm B}}_{\theta}^{1}}({\mathbb{R}},A)$. Next we fix a countable discrete amenable group $G$ and an exact sequence: $$\begin{CD}1@>{}>{}>N@>{}>{}>G@>{\pi}>{\underset{\mathfrak{s}}\to{\longleftarrow}}>Q@>{}>{}>1\end{CD}$$ together with a cross-section ${\mathfrak{s}}$, which will be fixed throughout as in the previous section, and therefore the $N$-valued cocycle ${{\mathfrak{n}}_{N}}$: $${{\mathfrak{n}}_{N}}(p,q)={\mathfrak{s}}(p){\mathfrak{s}}(q){\mathfrak{s}}(pq){{}^{-1}},\quad p,q\in Q,$$ is also fixed. Let ${\text{\rm Hom}}_{\mathbb{R}}(Q,{\text{\rm Aut}}({\eusm C}))$ be the set of all homomorphisms ${\alpha}\!:p\in Q\mapsto{\alpha}_{p}\in{\text{\rm Aut}}_{\theta}({\eusm C})$ from $Q$ into the group of all automorphisms of ${\eusm C}$ commuting with the flow ${\theta}$. It is easily seen to be a Polish space. Each ${\alpha}\in{{\text{\rm Hom}}_{\mathbb{R}}}(Q,{\text{\rm Aut}}({\eusm C}))$ can be identified with an action of $G$ whose kernel contains $N$. So we freely view ${\alpha}$ as an action of $G$ on ${\eusm C}$ whenever necessary. We also use the notations ${\widetilde{Q}}=Q\times{\mathbb{R}}$ and ${\widetilde{G}}=G\times{\mathbb{R}}$ freely. 
We fix the action ${\alpha}$ of $Q$ and consequently of $G$ on the flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ throughout this section and the joint action ${\alpha}\times{\theta}$ will be denoted by the single character ${\alpha}$ for short. With these data, we have the group of modular obstructions: ${{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}% },N,A)$ with $A={\eusm U}({\eusm C})$ which will be fixed throughout this section. The group ${{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}% },N,A)$ is not a standard Borel group in general, in particular it will never be standard except trivial cases if the flow ${\theta}$ is properly ergodic. Also there is no way to construct a model directly from an element $([c],\nu)\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G% \times{\mathbb{R}},N,A)$ either. We must desingularize the group of invariants first. To this end, we first consider the group ${{\text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,N,A)$ of modular obstruction cocycles $(c,{\zeta})$. However, being an obstruction cocycle, $(c,{\zeta})$ does not allow us to construct an outer action of $G$ directly. We recall Corollary 2.17 to find a resolution system: $$\begin{CD}1@>{}>{}>M@>{}>{}>H@>{{\pi\!_{\scriptscriptstyle G}}}>{}>G@>{}>{}>1% \end{CD}$$ with $H$ a countable discrete group such that We also recall that in this resolution proceedure we need several extra data. For example the map ${\partial}:{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times% {\mathbb{R}},N,A)\mapsto{{\text{\rm Z}}^{3}}(G,{\mathbb{T}})$ requires a choice of $(a,f)\in{{\text{\rm C}}_{\alpha}^{2}}(G,A)\times{{\text{\rm Z}}^{2}}(Q,A)$ so that $c_{G}=\pi^{*}(c_{Q}){{\partial}_{G}}(\pi^{*}(f)a)^{*}\in{{\text{\rm Z}}^{3}}(G% ,{\mathbb{T}})$. But in any case we do have a resolution system $\{H,{\pi\!_{\scriptscriptstyle G}},L,M\}$ of $([c],\nu)$. So instead of going through all steps of desingularizations starting from the cocycle $(c,{\zeta})\in{{\text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,N,A)$, we move directly to $\{H,{\pi\!_{\scriptscriptstyle G}},L,M\}$ and call $({\lambda},\mu)\in{\Lambda}_{\alpha}({\widetilde{H}},L,M,A)$ a resolution of the modular obstruction $([c],\nu)\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G% \times{\mathbb{R}},N,A)$ if $${\delta}([{\lambda},\mu])=([c],\nu).$$ If we begin with $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)$, it is easy to see the corresponding obtruction cocycle $(c,{\zeta})\in{{\text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,N,A)$: We will write $(c,{\zeta})={\partial}_{\dot{\mathfrak{s}}}({\lambda},\mu)$. Let ${\text{\rm Rsn}}(H,{\pi\!_{\scriptscriptstyle G}},([c],\nu))$ be the set of all $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)$ such that ${\delta}([{\lambda},\mu])=([c],\nu)$. On the space ${{\text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,\allowmathbreak N% ,A)$ of modular obstruction cocycles, the group ${{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ acts in the following way: $$(c,{\zeta})\mapsto(({{\partial}_{{\widetilde{Q}}}}b)c,{\zeta}),b\in{{\text{\rm C% }}_{\alpha}^{2}}(Q,A),$$ which does not change the cohomology class of $(c,{\zeta})$. 
Also the group $${{\text{\rm Z}}^{2}}(H,{\mathbb{T}})\times{\text{\rm C}}^{1}(N,A)$$ acts on ${\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)$ without changing the cohomology class of ${\delta}_{\mathfrak{s}}({\lambda},\mu)$: $$({\lambda},\mu)\mapsto(({\partial}_{1}a){\lambda}_{\xi}{\lambda},({\partial}_{% 2}a)\xi_{L}\mu),\quad(\xi,a)\in{{\text{\rm Z}}^{2}}(H,{\mathbb{T}})\times{% \text{\rm C}}^{1}(N,A),$$ where $({\lambda}_{\xi},\xi_{L})$ is the characteristic cocycle given by (2.10): $$\displaystyle{\lambda}_{\xi}(m;g,s)$$ $$\displaystyle=\xi(g,g{{}^{-1}}mg)\overline{\xi(m,g)},\quad m\in L,(g,s)\in{% \widetilde{H}};$$ $$\displaystyle\xi_{L}$$ $$\displaystyle=\text{\rm the restriction of }\xi\ \text{to }L\times L.$$ Now as soon as we have a characteristic cocycle $({\lambda},\mu)$, we have a covariant cocycle $\{{\eusm M},H,{\alpha}^{{\lambda},\mu}\}$ equipped with a map $u:m\in L\mapsto u(m)\in{\widetilde{{\eusm U}}}({\eusm M})$ such that $$\displaystyle u(m)u(n)$$ $$\displaystyle=\mu(m,n)u(mn),\quad m,n\in L;$$ $$\displaystyle{\alpha}_{m}^{{\lambda},\mu}$$ $$\displaystyle={\widetilde{\text{\rm Ad}}}(u(m)),;$$ $$\displaystyle{\alpha}_{g}^{{\lambda},\mu}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\theta}_{s}(u(g{{}^{-1}}mg))$$ $$\displaystyle={\lambda}(m;g,s)u(m),\quad(g,s)\in{\widetilde{H}};$$ which therefore gives: $${\dot{\alpha}}_{g}^{{\lambda},\mu}={\alpha}_{{{\mathfrak{s}}\!_{% \scriptscriptstyle H}}(g)}^{{\lambda},\mu},\quad g\in G,$$ whose modular obstruction cocycle is precisely ${\delta}_{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({\lambda},\mu)$. The action of $a\in{\text{\rm C}}^{1}(N,A)$ on $({\lambda},\mu)$ does not change the action ${\alpha}^{{\lambda},\mu}$ itself, but on the unitary family $\{u(m):m\in L\}$ which is perturbed to $\{(au)(m)=a(m)u(m):m\in L\}$. So this does not cause any interesting change. The perturbation by $\xi\in{{\text{\rm Z}}^{2}}(H,{\mathbb{T}})$ gives somewhat non trivial change on ${\alpha}^{{\lambda},\mu}$. Namely, what we need is to consider the left regular $\xi$-projective representation, say $v^{\xi}\!:g\in H\mapsto v^{\xi}(g)\in{\eusm U}(\ell^{2}(H))$, so that $$v^{\xi}(g)v^{\xi}(h)=\xi(g,h)v^{\xi}(gh),\quad g,h\in H.$$ Now the new action $g\in H\mapsto{\alpha}_{g}^{{\lambda},\mu}\otimes{\text{\rm Ad}}(v^{\xi}(g))\in% {\text{\rm Aut}}({\eusm M}{\overline{\otimes}}{\eusm L}(\ell^{2}(H))$ has the modular characteristic cocycle $({\lambda}_{\xi}{\lambda},\xi_{L}\mu)$, which is of course does not change the outer conjugacy class of the outer action ${\dot{\alpha}}^{{\lambda},\mu}$ of $G$. The change caused by the action of $b\in{{\text{\rm C}}_{\alpha}^{2}}(Q,A)$ is again absorbed by changing the unitary family $\{u({{\mathfrak{n}}_{L}}(p,q)):p,q\in Q\}$ to $\{b(p,q)u({{\mathfrak{n}}_{L}}(p,q)):p,q\in Q\}$, which does not change the outer action ${\dot{\alpha}}^{{\lambda},\mu}$ itself. Therefore the scheme of model constructions looks like: $$\begin{CD}({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,A)@>{}% >{}>{\delta}_{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({\lambda},\mu)\in{{% \text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,N,A)\\ @V{}V{}V@A{}A{}A\\ {\alpha}^{{\lambda},\mu}\in{\text{\rm Act}}(H,{\eusm M})@>{}>{}>{\dot{\alpha}}% ^{{\lambda},\mu}={\alpha}_{{\mathfrak{s}}\!_{\scriptscriptstyle H}}\in{\text{% \rm Oct}}(G,{\eusm M})\end{CD}$$ where ${\text{\rm Act}}(G,{\eusm M})$ and ${\text{\rm Oct}}(G,{\eusm M})$ are respectively the spaces of actions and outer actions of $G$ on ${\eusm M}$. 
Summarizing the discussion, we get the following: Theorem 4.1 Let $G$ be a countable discrete amenable group and $N$ a normal subgroup. Let $\{{\eusm C},{\mathbb{R}},{\theta}\}$ be an ergodic flow and ${\alpha}$ an action of $G$ on the flow $\{{\eusm C},{\mathbb{R}},{\theta}\}$ with ${\text{\rm Ker}}({\alpha})\supset N$, i.e., ${\alpha}$ is a homomorphism of $G$ into the group ${\text{\rm Aut}}_{\theta}({\eusm C})$ of automorphisms commuting with the flow ${\theta}$ with ${\alpha}_{m}={\text{\rm id}},m\in N.$ Let $A$ denote the unitary group ${\eusm U}({\eusm C})$. For every modular obstruction cocycle $(c,{\zeta})\in{{\text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,N,A)$, there exists an amenable resolution system $\{H,L,M,{\pi\!_{\scriptscriptstyle G}},{\lambda},\mu\}$ with $({\lambda},\mu)\in{\text{\rm Z}}_{\alpha}({\widetilde{H}},L,M,\allowmathbreak A)$ and a cross-section ${{\mathfrak{s}}\!_{\scriptscriptstyle H}}\!:G\mapsto H$ of the map ${\pi\!_{\scriptscriptstyle G}}$ such that $${\delta}_{{\mathfrak{s}}\!_{\scriptscriptstyle H}}({\lambda},\mu)\equiv(c,{\zeta})\quad{\text{\rm mod}}\ {{\text{\rm B}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,N,A).$$ Consequently, the action ${\alpha}^{{\lambda},\mu}$ associated with the characteristic cocycle $({\lambda},\mu)$ gives an outer action ${\dot{\alpha}}^{{\lambda},\mu}={\alpha}_{{\mathfrak{s}}\!_{\scriptscriptstyle H}}$ of $G$ on the approximately finite dimensional factor ${\eusm M}$ with flow of weights $\{{\eusm C},{\mathbb{R}},{\theta}\}$ such that $${{\text{\rm Ob}}_{\text{\rm m}}}({\dot{\alpha}}^{{\lambda},\mu})=([c],[{\zeta}])\in{{\text{\rm H}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G\times{\mathbb{R}},N,A).$$ The homomorphism $\nu=[{\zeta}]\in{\text{\rm Hom}}(N,{{\text{\rm H}}_{\theta}^{1}})$ is injective if and only if ${\dot{\alpha}}$ is free. §5. Non-Triviality of the Exact Sequence: $\boldsymbol{\begin{CD}1@>{}>{}>{{\text{\rm H}}_{\theta}^{1}}@>{}>{}>{\text{\rm Out}}({\eusm M})@>{}>{}>{{\text{\rm Out}}_{\tau,{\theta}}}({\widetilde{{\eusm M}}})@>{}>{}>1\end{CD}}$ Theorem 5.1 Let ${\alpha}$ be an outer action of a countable discrete group $G$ on a separable factor ${\eusm M}$ with $N={\alpha}{{}^{-1}}({{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M}))$ and with modular obstruction $${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})=([c],\nu)\in{{\text{\rm H}}_{\alpha}^{\text{\rm out}}}(G,N,{\eusm U}({\eusm C})).$$ Let $Q$ be the quotient group $Q=G/N$ and ${\mathfrak{s}}$ be a cross-section of the quotient map $\pi\!:G\mapsto Q$. 
Then the map ${\alpha}_{\mathfrak{s}}\!:p\in Q\mapsto{\alpha}_{{\mathfrak{s}}(p)}\in{\text{% \rm Aut}}({\eusm M})$ can be perturbed by ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$ to an action of $Q$ if and only if the modular obstruction $${{\text{\rm Ob}}_{\text{\rm m}}}({\alpha})=([c],\nu)\in{{\text{\rm H}}_{{% \alpha},{\mathfrak{s}}}^{\text{\rm out}}}(G,N,A)={\text{\rm H}}_{{\alpha},{% \text{\rm s}}}^{3}({\widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}_{G}(N,{{% \text{\rm H}}_{\theta}^{1}})$$ has trivial $$[c\cdot{\alpha}_{p}\left(\partial_{Q}(b)\right)]=[c({\tilde{p}},{\tilde{q}},{% \tilde{r}}){\alpha}_{p}(\partial_{Q}(b)(s;q,r))]=1$$ for some $b(\cdot,\ q)\in{\text{\rm Z}}_{\theta}^{1}(\mathbb{R},A)$, which implies $\nu\cup{{\mathfrak{n}}_{N}}\in{{\text{\rm B}}_{\alpha}^{2}}(Q,{{\text{\rm H}}_% {\theta}^{1}}).$ Demonstration Proof Suppose $[c\cdot{\alpha}_{p}\left(\partial_{Q}(b)\right)]=1$ for some $b(\cdot,q)\in{\text{\rm Z}}_{\theta}^{1}(\mathbb{R},A).$ Choose $\{u(p,q)\in{\widetilde{{\eusm U}}}({\eusm M})\!:\ p,q\in Q\}$ so that $${\alpha}_{\tilde{p}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_% {\tilde{q}}={\text{\rm Ad}}(u(p,q)){\lower-1.29pt\hbox{{$\scriptscriptstyle% \circ$}}}{\alpha}_{{\tilde{p}}{\tilde{q}}},\quad{\tilde{p}},{\tilde{q}}\in{% \widetilde{Q}}.$$ The associated modular obstruction cocycle $(c,\nu)\in{{\text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}({% \widetilde{G}},N,A)$ is given by: $$\displaystyle c({\tilde{p}},{\tilde{q}},{\tilde{r}})$$ $$\displaystyle={\alpha}_{\tilde{p}}(u(q,r))u(p,qr)\{u(p,q)u(pq,r)\}^{*};$$ $$\displaystyle\nu(m)$$ $$\displaystyle=[{\alpha}_{m}]\in{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},A),% \quad m\in N.$$ The triviality of $[c\cdot{\alpha}_{p}\left(\partial_{Q}(b)\right)]$ means the existence of $f\in{{\text{\rm C}}^{2}}(Q,A)$ such that $c\cdot{\alpha}_{p}\left(\partial_{Q}(b)\right)={\partial}_{\widetilde{Q}}f$. Setting $$v(p,q)=f(p,q)^{*}w(p){\alpha}_{p}(w(q))u(p,q)w(pq)^{*},$$ where $w(p)\in{\widetilde{{\eusm M}}}$ with $\ w(p)^{*}{\theta}_{t}(w(p))=b(t,p),$ we get $${\text{\rm Ad}}w(p){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{% \tilde{p}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad}}w(q)% {\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{\tilde{q}}={\text{% \rm Ad}}(v(p,q)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\text{\rm Ad% }}w(pq){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{{\tilde{p}}{% \tilde{q}}}\quad\text{and}\quad{\partial}_{\widetilde{Q}}v=1.$$ Since $v(q,r)^{*}{\theta}_{t}(v(q,r))=1$ for $t\in\mathbb{R}$, the unitaries $v(q,r)$ are elements of ${\eusm M}.$ Setting $${}_{w}{\alpha}_{p}={\widetilde{\text{\rm Ad}}}w(p){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{p},$$ obtain a cocycle crossed action $\{{}_{w}{\alpha},v\}$ of $Q$ on ${\eusm M}$. 
As the fixed point algebra ${\eusm M}^{{}_{w}{\alpha}}$ can be assumed to be properly infinite without loss of generality, we can find a family $\{a(p)\in{\eusm U}({\eusm M}):p\in Q\}$ such that $$\displaystyle 1$$ $$\displaystyle=a(p){\alpha}_{p}(a(q))v(p,q)a(pq)^{*}$$ $$\displaystyle=f(p,q)^{*}a(p){}_{w}{\alpha}_{p}(a(q))w(p){\alpha}_{p}(w(q))u(p,% q)w(pq)^{*}a(pq)^{*}$$ $$\displaystyle=f(p,q)^{*}a(p)w(p){\alpha}_{p}(a(q)w(q))u(p,q)\left(a(pq)w(pq)% \right)^{*};$$ $$\displaystyle f(p,q)$$ $$\displaystyle=a(p)w(p){\alpha}_{p}(a(q)w(q))u(p,q)\left(a(pq)w(pq)\right)^{*}.$$ Hence ${\beta}={}_{a\cdot w}{\alpha}:{\tilde{p}}\in{\widetilde{Q}}\mapsto{}_{a\cdot w% }{\alpha}_{\tilde{p}}={\text{\rm Ad}}(a(p)w(p)){\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{\tilde{p}}\in{\text{\rm Aut}}({\widetilde% {{\eusm M}}})$ is an action of ${\widetilde{Q}}$ on ${\widetilde{{\eusm M}}}$. The restriction of ${\beta}$ to ${\eusm M}$ is precisely a ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$-perturbation of the original action ${\alpha}$ since ${\widetilde{\text{\rm Ad}}}(a(p)w(p))\in{{\text{\rm Cnt}}_{\text{\rm r}}}({% \eusm M})$. Now we have $$\displaystyle d_{c}(s,q,r)$$ $$\displaystyle={\theta}_{s}(u(q,r))u(q,r)^{*}$$ $$\displaystyle={\theta}_{s}\Big{(}f(q,r){\alpha}_{q}((a(r)w(r))^{*})(a(q)w(q))^% {*}a(pq)w(qr)\Big{)}$$ $$\displaystyle\hskip 14.454pt\times\Big{(}f(q,r){\alpha}_{q}((a(r)w(r))^{*})(a(% q)w(q))^{*}a(pq)w(qr)\Big{)}^{*}$$ $$\displaystyle=({{\partial}_{\theta}}f(q,r))_{s}{\alpha}_{q}(b(s,r)^{*})b(s,q)^% {*}b(s,pq)$$ where ${\theta}_{s}(a(p)w(p))(a(p)w(p))^{*}={\theta}_{s}(w(p))(w(p))^{*}=b(s,p)\in A.$ Hence we have $$\displaystyle\nu({{\mathfrak{n}}_{N}}(q,r))=[d_{c}(\cdot,q,r)]$$ $$\displaystyle=[{\partial}_{\widetilde{Q}}(b(\cdot,\cdot)^{*})(q,r)]\quad\text{% in}\ {{\text{\rm H}}_{\theta}^{1}}.$$ Thus we conclude that $\nu\cup{{\mathfrak{n}}_{N}}\in{{\text{\rm B}}_{\alpha}^{2}}({\widetilde{Q}},{{% \text{\rm H}}_{\theta}^{1}})$. Conversely, suppose that ${\alpha}_{\mathfrak{s}}$ is perturbed to an action of $Q$ by ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$. 
Choose $\{w(p)\in{\widetilde{{\eusm U}}}({\eusm M}):p\in Q\}$ so that $${\widetilde{\text{\rm Ad}}}(w(p)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{p}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\widetilde{\text{\rm Ad}}}(w(q)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{q}={\widetilde{\text{\rm Ad}}}(w(pq)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{pq},\quad p,q\in Q.$$ Let $\{u(p,q)\in{\widetilde{{\eusm U}}}({\eusm M}):p,q\in Q\}$ be a family such that $${\alpha}_{p}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{q}={\text{\rm Ad}}(u(p,q)){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{pq}.$$ Then we have $$f(p,q)=w(p){\alpha}_{p}(w(q))u(p,q)w(pq)^{*}\in A,$$ and $$({\partial}_{\widetilde{Q}}f)({\tilde{p}},{\tilde{q}},{\tilde{r}})={\alpha}_{\tilde{p}}\Big{(}w(q){\alpha}_{q}(w(r))u(q,r)w(qr)^{*}\Big{)}\times\Big{(}w(p){\alpha}_{p}(w(qr))u(p,qr)w(pqr)^{*}\Big{)}\times\Big{\{}\Big{(}w(p){\alpha}_{p}(w(q))u(p,q)w(pq)^{*}\Big{)}\times\Big{(}w(pq){\alpha}_{pq}(w(r))u(pq,r)w(pqr)^{*}\Big{)}\Big{\}}^{*}=c({\tilde{p}},{\tilde{q}},{\tilde{r}}){\alpha}_{p}\left(b(s,q){\alpha}_{q}(b(s,r))b(s,qr)^{*}\right),$$ where $b(s,p)=w(p)^{*}{\theta}_{s}(w(p))\in{\text{\rm Z}}^{1}_{\theta}(\mathbb{R},A).$ Thus we conclude $$[c({\tilde{p}},{\tilde{q}},{\tilde{r}}){\alpha}_{p}(\partial_{Q}(b)(s;q,r))]=1.$$ $\heartsuit$ This characterization has an immediate consequence: Theorem 5.2 If ${\eusm M}$ is an approximately finite dimensional factor of type III with flow of weights $\{{\eusm C},{\mathbb{R}},{\theta}\}$, then the exact sequence: $$\begin{CD}1@>{}>{}>{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},{\eusm U}({\eusm C}))@>{}>{}>{\text{\rm Out}}({\eusm M})@>{}>{}>{{\text{\rm Out}}_{\tau,{\theta}}}({\widetilde{{\eusm M}}})@>{}>{}>1\end{CD}$$ does not split. Demonstration Proof Let $G$ be the discrete Heisenberg group: $$G=\left\{\begin{pmatrix}1&a&c\\ 0&1&b\\ 0&0&1\end{pmatrix}:a,b,c\in{\mathbb{Z}}\right\}$$ and $$N=\left\{\begin{pmatrix}1&0&c\\ 0&1&0\\ 0&0&1\end{pmatrix}:c\in{\mathbb{Z}}\right\}$$ be the center of $G$ as in Example 7.1 of [KtST1]. We write an element of $G$ as $(a,b,c)$, $a,b,c\in{\mathbb{Z}}$, with the multiplication rule: $$(a,b,c)(a^{\prime},b^{\prime},c^{\prime})=(a+a^{\prime},b+b^{\prime},c+c^{\prime}+ab^{\prime}).$$ We then form the quotient group $Q=G/N$ and obtain an exact sequence: $$\begin{CD}1@>{}>{}>N@>{}>{}>G@>{{\pi_{Q}}}>{}>Q@>{}>{}>0.\end{CD}$$ The quotient group $Q$ is isomorphic to ${\mathbb{Z}}^{2}$. Define a cross-section ${\mathfrak{s}}$ of ${\pi_{Q}}$ in the following way: $${\mathfrak{s}}(a,b)=\begin{pmatrix}1&a&0\\ 0&1&b\\ 0&0&1\end{pmatrix},\quad(a,b)\in{\mathbb{Z}}^{2}=Q,$$ and compute $${{\mathfrak{n}}_{N}}(a,b;a^{\prime},b^{\prime})=ab^{\prime}\in{\mathbb{Z}},\qquad(5.1)$$ where $N$ is identified with ${\mathbb{Z}}$. Choose $({\lambda},\mu)\in{\text{\rm Z}}(G,N,{\mathbb{T}})$ to be trivial, i.e., $\mu(m,n)=1,m,n\in N$ and ${\lambda}(m,g)=1,m\in N,g\in G$. 
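As a quick check (not part of the original argument), the value in (5.1) follows directly from the chosen cross-section and the multiplication rule: $${\mathfrak{s}}(a,b){\mathfrak{s}}(a^{\prime},b^{\prime})=(a,b,0)(a^{\prime},b^{\prime},0)=(a+a^{\prime},b+b^{\prime},ab^{\prime}),\qquad{\mathfrak{s}}(a+a^{\prime},b+b^{\prime}){{}^{-1}}=(-(a+a^{\prime}),-(b+b^{\prime}),(a+a^{\prime})(b+b^{\prime})),$$ so that $${{\mathfrak{n}}_{N}}(a,b;a^{\prime},b^{\prime})={\mathfrak{s}}(a,b){\mathfrak{s}}(a^{\prime},b^{\prime}){\mathfrak{s}}(a+a^{\prime},b+b^{\prime}){{}^{-1}}=(0,0,ab^{\prime})\equiv ab^{\prime}\in{\mathbb{Z}}=N.$$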
But choose $\nu=e^{iT}\in{\text{\rm Hom}}(N,{\mathbb{T}})=\widehat{N}={\mathbb{T}}$ with $T>0$ to be determined, so that the characteristic cocycle $({\lambda},\mu)\in{\text{\rm Z}}(G\times{\mathbb{R}},N,{\mathbb{T}})$ is given by: $$\displaystyle\mu(m,n)=1,\quad m,n\in N;$$ $$\displaystyle{\lambda}(m,(g,s))=\exp({{\text{\rm i}}T^{\prime}ms}),\quad s\in{% \mathbb{R}},g\in G,$$ where $T^{\prime}={2\pi}/T.$ Let ${\eusm M}$ be an AFD factor of type III with flow of weights $\{{\eusm C},{\mathbb{R}},{\theta}\}$. Viewing the torus ${\mathbb{T}}$ as the subgroup ${\eusm U}({\eusm C}^{\theta})$ of the unitary group $A={\eusm U}({\eusm C})$, we view the cocycle $({\lambda},\mu)$ as an element of ${\text{\rm Z}}_{\alpha}({\widetilde{G}},N,A)$. Now choose $T>0$ such that ${\sigma}_{nT}^{\varphi}\not\in{\text{\rm Int}}({\eusm M})$ for every $n\in{\mathbb{Z}},n\neq 0,$ with ${\varphi}$ a preassigned faithful semi-finite normal weight on ${\eusm M}$. Such a $T\in{\mathbb{R}}$ exists because $\{t\in{\mathbb{R}},{\sigma_{t}^{{\varphi}}}\in{\text{\rm Int}}({\eusm M})\}$ must be a meager subgroup of ${\mathbb{R}}$. Let ${\alpha}={\alpha}^{{\lambda},\mu}$ be the action of $G$ on ${\eusm M}$ associated with the cocycle $({\lambda},\mu)$ and ${\text{\rm mod}}({\alpha}_{g})={\text{\rm id}}$. The construction yields that the action ${\alpha}$ is free and it enjoys the following property: $${\alpha}_{m}={\sigma}_{mT}^{\varphi},\quad m\in N,$$ with ${\varphi}$ a dominant weight on ${\eusm M}$. We can assume the invariance ${\varphi}={\varphi}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{% g},g\in G$ for ${\alpha}$. The freeness of ${\alpha}$ shows that the map: ${\dot{\alpha}}:g\in G\mapsto{\dot{\alpha}}_{g}=[{\alpha}_{g}]\in{\text{\rm Out% }}({\eusm M})$ is an injective homomorphism such that ${\dot{\alpha}}_{m}={\sigma}_{mT}\in{{\text{\rm H}}_{\theta}^{1}}({\mathbb{R}},% A)\subset{\text{\rm Out}}({\eusm M})$. We are now going to compute the modular obstruction cocycle $(c,\nu)\in{{\text{\rm Z}}_{{\alpha},{\mathfrak{s}}}^{\text{\rm out}}}({% \widetilde{Q}},A)*_{\mathfrak{s}}{\text{\rm Hom}}(N,{{\text{\rm H}}_{\theta}^{% 1}}).$ Since $$\displaystyle{\alpha}_{\tilde{p}}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ% $}}}{\alpha}_{\tilde{q}}$$ $$\displaystyle={\alpha}_{{\mathfrak{s}}(p),s}{\lower-1.29pt\hbox{{$% \scriptscriptstyle\circ$}}}{\alpha}_{{\mathfrak{s}}(q),t}={\alpha}_{{\mathfrak% {s}}(p){\mathfrak{s}}(q)}{\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{% \theta}_{s+t}$$ $$\displaystyle={\alpha}_{{{\mathfrak{n}}_{N}}(p,q){\mathfrak{s}}(pq)}{\lower-1.% 29pt\hbox{{$\scriptscriptstyle\circ$}}}{\theta}_{s+t}$$ $$\displaystyle={\text{\rm Ad}}({\varphi}^{{\text{\rm i}}T^{\prime}{{\mathfrak{n% }}_{N}}(p,q)}){\lower-1.29pt\hbox{{$\scriptscriptstyle\circ$}}}{\alpha}_{{% \tilde{p}}{\tilde{q}}},$$ with $u(p,q)={\varphi}^{{\text{\rm i}}T^{\prime}{{\mathfrak{n}}_{N}}(p,q)}$ we get $$\displaystyle c({\tilde{p}},{\tilde{q}},{\tilde{r}})$$ $$\displaystyle={\alpha}_{\tilde{p}}(u(q,r))u(p,qr)\{u(p,q)u(pq,r)\}^{*}$$ $$\displaystyle={\theta}_{s}(u(q,r))u(q,r)^{*}{\alpha}_{p}(u(q,r))u(p,qr)\{u(p,q% )u(pq,r)\}^{*}$$ $$\displaystyle=\exp(-{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(q,r)){% \varphi}^{{\text{\rm i}}T^{\prime}({{\mathfrak{n}}_{N}}(q,r)+{{\mathfrak{n}}_{% N}}(p,qr)-{{\mathfrak{n}}_{N}}(p,q)-{{\mathfrak{n}}_{N}}(pq,r))}$$ $$\displaystyle=\exp(-{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(q,r))$$ and $c_{Q}=1$. 
In order for $c\cdot{\alpha}_{p}(\partial_{Q}(b))$ with some $b(\cdot,q)\in{\text{\rm Z}}_{\theta}^{1}(\mathbb{R},A)$ to be trivial, it is necessary and sufficient that there exists $f\in{{\text{\rm C}}^{2}}(Q,A)$ such that ${\partial}_{\widetilde{Q}}f=c\cdot{\alpha}_{p}(\partial_{Q}(b))$. The function $f$ satisfies the equation: $$\exp(-{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(q,r))={\alpha}_{q}(b(s,r)^{*})b(s,q)^{*}b(s,qr)f(q,r)^{*}{\theta}_{s}(f(q,r)),$$ which means that $[\exp(-{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(q,r))]\in{\text{\rm B}}^{2}(Q,{\text{\rm H}}^{1}_{\theta}).$ As ${\text{\rm mod}}({\alpha}_{p})={\text{\rm id}},p\in Q,$ and $Q$ is a free abelian group, the second cohomology group ${{\text{\rm H}}^{2}}(Q,{\text{\rm H}}^{1}_{\theta})$ is isomorphic to the group $X(Q^{2},{\text{\rm H}}^{1}_{\theta})$ of all ${\text{\rm H}}^{1}_{\theta}$-valued skew-symmetric bihomomorphisms. We have $$\exp\Big{(}-{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(q,r)\Big{)}\exp\Big{(}{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(r,q)\Big{)}=\Big{(}{\alpha}_{q}(b(s,r)^{*})b(s,q)^{*}b(s,qr)f(q,r)^{*}{\theta}_{s}(f(q,r))\Big{)}\times\Big{(}{\alpha}_{r}(b(s,q)^{*})b(s,r)^{*}b(s,rq)f(r,q)^{*}{\theta}_{s}(f(r,q))\Big{)}^{*}=f(r,q)f(q,r)^{*}{\theta}_{s}(f(r,q)^{*}f(q,r)).\qquad(5.2)$$ By (5.1), we have $$\exp(-{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(q,r)+{\text{\rm i}}T^{\prime}s{{\mathfrak{n}}_{N}}(r,q))=\exp(-{\text{\rm i}}T^{\prime}s(ab^{\prime}-a^{\prime}b)),$$ where $q=(a,b)$ and $r=(a^{\prime},b^{\prime})$. Thus it follows from (5.2) that the modular automorphisms ${\sigma}_{T(ab^{\prime}-a^{\prime}b)}^{\varphi}$ are inner, which contradicts the choice of $T$. Therefore $[c({\tilde{p}},{\tilde{q}},{\tilde{r}}){\alpha}_{p}(\partial_{Q}(b)(s;q,r))]\neq 1$ in ${\text{\rm H}}_{{\alpha},{\text{\rm s}}}^{3}({\widetilde{Q}},A)$. Theorem 5.1 says that ${\alpha}_{\mathfrak{s}}$ cannot be perturbed into an action of $Q$ by ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$. Since we have a commutative diagram of exact sequences: $$\begin{CD}1@>{}>{}>N@>{}>{}>G@>{{\pi_{Q}}}>{\underset{\mathfrak{s}}\to{\longleftarrow}}>Q@>{}>{}>1\\ @V{\nu}V{}V@V{{\dot{\alpha}}}V{}V@V{{\widetilde{{\alpha}}}}V{}V\\ 1@>{}>{}>{{\text{\rm H}}_{\theta}^{1}}@>{}>{}>{\text{\rm Out}}({\eusm M})@>{\pi}>{\underset{{\mathfrak{s}}_{\pi}}\to{\longleftarrow}}>{{\text{\rm Out}}_{\tau,{\theta}}}({\widetilde{{\eusm M}}})@>{}>{}>1,\end{CD}$$ if the second sequence splits via a cross-section ${{\mathfrak{s}}_{\pi}}$, then composing the associated injection ${\widetilde{{\alpha}}}$ of $Q$ into ${{\text{\rm Out}}_{\tau,{\theta}}}({\widetilde{{\eusm M}}})$ with the cross-section ${{\mathfrak{s}}_{\pi}}$ gives an outer action of $Q$, say ${\beta}$. But ${{\text{\rm H}}^{3}}(Q,{\mathbb{T}})=1$, so that ${\beta}$ can be perturbed into an action of $Q$, denoted by ${\beta}$ again. Then we have ${\beta}_{p}\equiv{\alpha}_{{\mathfrak{s}}(p)}\ {\text{\rm mod}}\ {{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$. Therefore ${\alpha}_{\mathfrak{s}}$ can be perturbed to an action by ${{\text{\rm Cnt}}_{\text{\rm r}}}({\eusm M})$, which contradicts the fact that $[c({\tilde{p}},{\tilde{q}},{\tilde{r}}){\alpha}_{p}(\partial_{Q}(b)(s;q,r))]\neq 1$ for any $b(\cdot,p)\in{\text{\rm Z}}^{1}(\mathbb{R},{\text{\rm H}}^{1}_{\theta})$ as seen above. $\heartsuit$ References 
Cnn1 A. Connes, Une classification des facteurs de type III, Ann. Sci. École Norm. Sup., 4ème Série, 6 (1973), 133-252. Cnn2 —, Almost periodic states and factors of type III${}_{1}$, J. Funct. Anal., 16 (1974), 415-445. Cnn3 —, Outer conjugacy classes of automorphisms of factors, Ann. Sci. École Norm. Sup., 4ème Série, 8 (1975), 383-419. Cnn4 —, Outer conjugacy of automorphisms of factors, Symposia Mathematica, 20 (1976), 149-159. Cnn5 —, Classification of injective factors, Ann. of Math., 104 (1976), 73-115. Cnn6 —, Periodic automorphisms of the hyperfinite factor of type II${}_{1}$, Acta Sci. Math. (Szeged), 39 (1977), 39-66. CT A. Connes and M. Takesaki, The flow of weights on factors of type III, Tôhoku Math. J., 29 (1977), 473-575. FT1 A.J. Falcone and M. Takesaki, Operator valued weights without structure theory, Trans. Amer. Math. Soc., 351 (1999), 323-341. FT2 —, Non-commutative flow of weights on a von Neumann algebra, J. Funct. Anal., 182 (2001), 170-206. Hb J. Huebschmann, Group extensions, crossed pairs and an eight term exact sequence, J. Reine Angew. Math., 321 (1981), 150-172. J1 V.F.R. Jones, Actions of finite groups on the hyperfinite type II${}_{1}$ factor, Mem. Amer. Math. Soc., 237 (1980). JT V.F.R. Jones and M. Takesaki, Actions of compact abelian groups on semifinite injective factors, Acta Math., 153 (1984), 213-258. KtST1 Y. Katayama, C.E. Sutherland and M. Takesaki, The characteristic square of a factor and the cocycle conjugacy of discrete amenable group actions on factors, Invent. Math., 132 (1998), 331-380. KtST2 —, The structure of the automorphism group of a factor and cocycle conjugacy of discrete group actions, Proceedings of the Conference on Operator Algebras and Quantum Field Theory, International Press, (1997), 166-198. KtT1 Y. Katayama and M. Takesaki, Outer actions of a countable discrete amenable group on an AFD factor, to appear. KtT2 —, Outer actions of a countable discrete amenable group on approximately finite dimensional factors II, Special cases, in preparation. KwST Y. Kawahigashi, C.E. Sutherland and M. Takesaki, The structure of the automorphism group of an injective factor and the cocycle conjugacy of discrete abelian group actions, Acta Math., 169 (1992), 105-130. McWh S. Mac Lane and J.H. Whitehead, On the 3-type of a complex, Proc. Nat. Acad. Sci. U.S.A., 36 (1950), 41-48. NkTk Y. Nakagami and M. Takesaki, Duality for crossed products of von Neumann algebras, Lecture Notes in Math., vol. 731, Springer-Verlag, 1979. Ocn A. Ocneanu, Actions of discrete amenable groups on factors, Lecture Notes in Math., vol. 1138, Springer, Berlin, 1985. Rc J.G. Ratcliffe, Crossed extensions, Trans. Amer. Math. Soc., 237 (1980), 73-89. St1 C.E. Sutherland, Cohomology and extensions of von Neumann algebras, I and II, Publ. RIMS, Kyoto Univ., 16, 105-133; 135-174. St2 —, A Borel parametrization of Polish groups, Publ. RIMS, Kyoto Univ., 21 (1985), 1067-1086. ST1 C.E. Sutherland and M. Takesaki, Actions of discrete amenable groups and groupoids on von Neumann algebras, Publ. Res. Inst. Math. Sci., 21 (1985), 1087-1120. ST2 —, Actions of discrete amenable groups on injective factors of type III${}_{\lambda}$, ${\lambda}\neq 1$, Pacific J. Math., 137 (1989), 405-444. ST3 —, Right inverse of the module of approximately finite dimensional factors of type III and approximately finite ergodic principal measured groupoids, Operator algebras and their applications, II, Fields Institute Commun., 20 (1998), 149-159. Tk1 M. Takesaki, Theory of Operator Algebras I, Springer-Verlag, 1979. Tk2 —, Theory of Operator Algebras II, Springer-Verlag, 2002. Tk3 —, Theory of Operator Algebras III, Springer-Verlag, 2002.
Lyman alpha and Lyman continuum emission of Mg ii-selected star-forming galaxies Y. I. Izotov${}^{1}$, J. Chisholm${}^{2}$, G. Worseck${}^{3}$, N. G. Guseva${}^{1}$, D. Schaerer${}^{4,5}$, J. X. Prochaska${}^{6}$ ${}^{1}$Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, 14-b Metrolohichna str., Kyiv, 03143, Ukraine, ${}^{2}$Astronomy Department, University of Texas at Austin, 2515 Speedway, Stop C1400 Austin, TX 78712-1205, USA, ${}^{3}$ Institut für Physik und Astronomie, Universität Potsdam, Karl-Liebknecht-Str. 24/25, D-14476 Potsdam, Germany, ${}^{4}$Observatoire de Genève, Université de Genève, 51 Ch. des Maillettes, 1290, Versoix, Switzerland, ${}^{5}$IRAP/CNRS, 14, Av. E. Belin, 31400 Toulouse, France, ${}^{6}$University of California Observatories-Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064, USA E-mail: yizotov@bitp.kiev.ua (Accepted XXX. Received YYY; in original form ZZZ) Abstract We present observations with the Cosmic Origins Spectrograph onboard the Hubble Space Telescope of seven compact low-mass star-forming galaxies at redshifts, $z$, in the range 0.3161 – 0.4276, with various O${}_{3}$Mg${}_{2}$ = [O iii] $\lambda$5007/Mg ii $\lambda$2796+2803 and Mg${}_{2}$ = Mg ii $\lambda$2796/Mg ii $\lambda$2803 emission-line ratios. We aim to study the dependence of leaking Lyman continuum (LyC) emission on the characteristics of Mg ii emission together with the dependencies on other indirect indicators of escaping ionizing radiation. LyC emission with escape fractions $f_{\rm esc}$(LyC) = 3.1 – 4.6 per cent is detected in four galaxies, whereas only 1$\sigma$ upper limits of $f_{\rm esc}$(LyC) in the remaining three galaxies were derived. A strong narrow Ly$\alpha$ emission line with two peaks separated by $V_{\rm sep}$ $\sim$ 298 – 592 km s${}^{-1}$ was observed in four galaxies with detected LyC emission and very weak Ly$\alpha$ emission is observed in galaxies with LyC non-detections. Our new data confirm the tight anti-correlation between $f_{\rm esc}$(LyC) and $V_{\rm sep}$ found for previous low-redshift galaxy samples. $V_{\rm sep}$ remains the best indirect indicator of LyC leakage among all considered indicators. It is found that escaping LyC emission is detected predominantly in galaxies with Mg${}_{2}$ $\ga$ 1.3. A tendency of an increase of $f_{\rm esc}$(LyC) with increasing of both the O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$ is possibly present. However, there is substantial scatter in these relations not allowing their use for reliable prediction of $f_{\rm esc}$(LyC). 
keywords: (cosmology:) dark ages, reionization, first stars — galaxies: abundances — galaxies: dwarf — galaxies: fundamental parameters — galaxies: ISM — galaxies: starburst ††pubyear: 2022††pagerange: Lyman alpha and Lyman continuum emission of Mg ii-selected star-forming galaxies–B 1 Introduction It was established during last decade that Lyman continuum (LyC) emission, which is produced in copious amount in both the high redshift star-forming galaxies (SFGs) at $z$ $\sim$ 2 - 4 (Vanzella et al., 2015; de Barros et al., 2016; Shapley et al., 2016; Bian et al., 2017; Vanzella et al., 2018; Rivera-Thorsen et al., 2019; Saha et al., 2020; Meštric et al., 2020; Vielfaure et al., 2020; Fletcher et al., 2019; Marchi et al., 2017, 2018; Steidel et al., 2018) and the low-redshift SFGs at $z$ $\la$ 0.4 (Leitet et al., 2013; Borthakur et al., 2014; Leitherer et al., 2016; Chisholm et al., 2017; Izotov et al., 2016a, b, 2018a, 2018b, 2021a; Flury et al., 2022a, b; Xu et al., 2022), can escape from the galaxies resulting in ionization of the intergalactic medium (IGM). These galaxies are considered as analogues of the galaxies at redshifts 6 - 8, which are presumably the main sources of the reionization of the Universe (Ouchi et al., 2009; Wise & Chen, 2009; Yajima, Choi & Nagamine, 2011; Mitra, Ferrara & Choudhury, 2013; Bouwens et al., 2015; Finkelstein et al., 2019; Lewis et al., 2020; Naidu et al., 2020; Meyer et al., 2020). It was also found that $f_{\rm esc}$(LyC) in many discovered galaxies is of the order of 10 - 20 per cent or higher. This could be sufficient for efficient reionization of the IGM at $z$ $\ga$ 6 (e.g. Ouchi et al., 2009; Robertson et al., 2013, 2015; Dressler et al., 2015; Khaire et al., 2016). Direct LyC observations of high-redshift galaxies are difficult because of their faintness, the increasing of IGM opacity, and contamination by lower-redshift interlopers (e.g. Vanzella et al., 2010, 2012; Inoue et al., 2014; Grazian et al., 2016). Furthermore, the knowledge of the galaxy H$\beta$ or H$\alpha$ luminosity is needed to derive the production rate of ionizing photons and thus the $f_{\rm esc}$(LyC). This is not possible yet for most of high-$z$ LyC emitters. Low-redshift galaxies are brighter, but observations from space, with the aid of Hubble Space Telescope (HST), are needed for the detection of LyC emission in $z$ $\ga$ 0.3 galaxies. This can be done only for limited samples of low-$z$ galaxies. On the other hand, the H$\beta$ and H$\alpha$ emission lines can easily be observed in low-$z$ galaxies from the ground. In fact, many such galaxies were observed in the course of the Sloan Digital Sky Survey (SDSS). This survey was succesfully used to select promising LyC leaking candidates and their subsequent observations with the HST (this paper, Izotov et al., 2016a, b, 2018a, 2018b, 2021a; Wang et al., 2021; Flury et al., 2022a; Xu et al., 2022). Due to difficulties of direct detection of LyC emission in both the high- and low-redshift SFGs indirect indicators for the determination of the $f_{\rm esc}$(LyC) can be used. However, at present, it cannot be very reliably determined from most indicators due to the large scatter in their correlations with $f_{\rm esc}$(LyC). The shape of the Ly$\alpha$ line can be considered as the prime indicator of the $f_{\rm esc}$(LyC) value, since it depends on the distribution of the neutral hydrogen around the galaxy, which also determines the escape of ionizing radiation (e.g. Verhamme et al., 2015). 
In most galaxies with the Ly$\alpha$ emission line it has a two-peak shape due to scattering in the neutral gas with a relatively high column density of H i, with a weaker blue peak and a stronger red peak. The offset of the peaks from the line centre serves as a measure of the neutral hydrogen optical depth along the line of sight (e.g. Verhamme et al., 2015). In particular, a tight correlation between the Ly$\alpha$ blue and red peak separation and the escape fraction of ionizing radiation was found (Izotov et al., 2018b). More complex Ly$\alpha$ profiles with three or more peaks are rarely observed (Vanzella et al., 2018; Izotov et al., 2018b; Rivera-Thorsen et al., 2017, 2019). They show significant central line emission, an indication of direct escape through porous channels in addition to escape via scattering. In these cases the separation of the Ly$\alpha$ emisson peaks is a poor tracer of $f_{\rm esc}$(LyC) because of a combination of two distinct modes of Ly$\alpha$ escape (Naidu et al., 2022). We also note that at redshifts $z$ $\ga$ 6 the detection of Ly$\alpha$ is difficult because of declining Ly$\alpha$ transmission with redshift (Gronke et al., 2021). This decline with redshift is sharper on the blue side of Ly$\alpha$ making it more difficult to detect the blue peak. Therefore, other indirect indicators are needed, for example, those, which use strong emission lines in the rest-frame optical and UV ranges, or UV absorption lines, including hydrogen lines of the Lyman series and heavy element lines, such as Si ii $\lambda$1260 that can measure the Lyman continuum escape fraction (e.g. Gazagnes et al., 2018, 2020; Chisholm et al., 2018; Flury et al., 2022a, b; Saldana-Lopez et al., 2022). Jaskot & Oey (2013) and Nakajima & Ouchi (2014) proposed to use the O${}_{32}$ = [O iii]$\lambda$5007/[O ii]$\lambda$3727 flux ratio arguing that its high values of up to $\sim$ 60 in some low-$z$ galaxies (Stasińska et al., 2015; Izotov, Thuan & Guseva, 2021b) may indicate that the ISM is predominantly ionized, allowing the escape of Lyman continuum photons. Indeed, Izotov et al. (2016a, b, 2018a, 2018b, 2021a) obtained HST/COS observations of compact SFGs at redshifts $z$ $\sim$ 0.3 - 0.4 with O${}_{32}$ = 5 - 28 and an escape fraction in the range of 2 - 72 per cent. Although they did find some trend of increasing $f_{\rm esc}$(LyC) with increasing O${}_{32}$, the dependence is weak, with a large scatter. It has also been suggested that $f_{\rm esc}$(LyC) tends to be higher in low-mass galaxies (Wise et al., 2014; Trebitsch et al., 2017). However, Izotov et al. (2018b, 2021a) added low-mass LyC leakers and found rather a relatively weak anti-correlation between $f_{\rm esc}$(LyC) and stellar mass $M_{\star}$ in a wide range between 10${}^{7}$ - 10${}^{10}$ M${}_{\odot}$. A similar correlation is also found in the Low-$z$ Lyman Continuum Survey (LzLCS) in Flury et al. (2022b). Mg ii $\lambda$2796, 2803 emission may also provide a constraint of the LyC escape and its doublet ratio can be used to infer the neutral gas column density (Henry et al., 2018; Chisholm et al., 2020; Xu et al., 2022; Naidu et al., 2022; Katz et al., 2022). These two lines in emission are commonly seen in the spectra of local compact star-forming galaxies (Guseva et al., 2013, 2019) including LyC leaking galaxies (Chisholm et al., 2020; Guseva et al., 2020) and might be more likely to leak LyC than similar galaxies without strong Mg ii (Xu et al., 2022). 
They are also detected in $z$ $\sim$ 1 - 2 galaxies (Weiner et al., 2009; Erb et al., 2012; Finley et al., 2017; Naidu et al., 2022) and in a $z$ $\sim$ 5 star-forming galaxy (Witstok et al., 2021). Henry et al. (2018) found that the Mg ii escape fraction correlates with the Ly$\alpha$ escape fraction, and that the Mg ii emission line profiles are broader and more redshifted in galaxies with low escape fractions. They and Chisholm et al. (2020) pointed out that the link between Ly$\alpha$ and Mg ii can be used for a LyC diagnostic at high redshifts, where Ly$\alpha$ and LyC are difficult to observe. However, Katz et al. (2022) pointed out from the numerical simulations that Mg ii is a useful diagnostic of escaping ionizing radiation only in the optically thin regime. The goal of this paper is to determine $f_{\rm esc}$(LyC) for seven low-mass galaxies with various Mg${}_{2}$ = Mg ii $\lambda$2796/Mg ii $\lambda$2803 flux ratios and various O${}_{3}$Mg${}_{2}$ = [O iii]$\lambda$5007/Mg ii $\lambda$2796+2803 flux ratios. The O${}_{3}$Mg${}_{2}$ flux ratios range from 10 to 35 in six galaxies and $\ga$ 100 in one galaxy, where Mg ii emission is almost undetected. We aim to study the dependence of leaking LyC emission on the characteristics of Mg ii emission. We also wish to enlarge the known sample of low-redshift LyC leakers, to search for and to improve reliable diagnostics for the indirect estimation of $f_{\rm esc}$(LyC). The properties of the selected SFGs derived from observations in the optical range are presented in Section 2. The HST observations and data reduction are described in Section 3. The surface brightness profiles in the UV range are discussed in Section 4. In Section 5, we compare the HST/COS spectra with the extrapolation of the SEDs modelled with the SDSS spectra to the UV range. Ly$\alpha$ emission and escaping Lyman continuum emission are discussed in Section 6 together with the corresponding escape fractions. The indirect indicators of escaping LyC emission are considered in Section 7. Mg ii diagnostics are discussed in Section 8. We summarize our findings in Section 9. 2 Integrated properties of selected galaxies We selected a sample of local compact low-mass SFGs from the SDSS in the redshift range $z$ = 0.32 - 0.43 with O${}_{3}$Mg${}_{2}$ in a wide range to observe their Ly$\alpha$ and LyC emission with HST/COS. These galaxies are chosen to be sufficiently bright, to have high O${}_{32}$ ratios and high equivalent widths EW(H$\beta$) of the H$\beta$ emission line. This ensures that a galaxy can be acquired and observed with low- and medium-resolution gratings in one visit, consisting of 4 orbits. Finally we selected a total sample of 7 galaxies with EW(H$\beta$) $>$ 170 Å and O${}_{32}$ $\ga$ 4. They are listed in Table 1. All galaxies are nearly unresolved by the SDSS 5-band images and have FWHMs of $\sim$ 1.0 arcsec, so that all the galaxy’s light falls within the 2.5 arcsec diameter COS aperture and within the 2 arcsec diameter SDSS aperture. This ensures that global quantities can be derived from both the UV and optical spectra. We note, however, that Mg ii lines are located in the noisy parts of SDSS spectra and detected with a low signal-to-noise ratio, at least in some galaxies. As such, their fluxes, and especially the Mg ii flux ratio Mg${}_{2}$ = Mg ii $\lambda$2796/Mg ii $\lambda$2803 should only be considered tentatively. 
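Since the Mg ii doublet falls in noisy parts of the SDSS spectra, the impact of the low signal-to-noise ratio on Mg${}_{2}$ can be made concrete with simple first-order error propagation; the following is a minimal sketch in Python, with placeholder fluxes rather than measured values from our sample:

```python
import numpy as np

def line_ratio(f_num, err_num, f_den, err_den):
    """Ratio of two emission-line fluxes with first-order error propagation."""
    ratio = f_num / f_den
    err = ratio * np.sqrt((err_num / f_num) ** 2 + (err_den / f_den) ** 2)
    return ratio, err

# Placeholder Mg ii 2796, 2803 fluxes (arbitrary units) measured at low S/N
f2796, e2796 = 12.0, 3.0
f2803, e2803 = 8.0, 2.5

mg2, mg2_err = line_ratio(f2796, e2796, f2803, e2803)
print(f"Mg2 = {mg2:.2f} +/- {mg2_err:.2f}")   # 1.50 +/- 0.60 for these placeholders
```

A ratio uncertain at the ~40 per cent level, as in this example, is why the Mg${}_{2}$ values from the SDSS spectra are treated as tentative.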
We note that follow up spectroscopy of these galaxies with high signal-to-noise ratio covering the wavelength range with Mg ii emission will be presented in King et al., in preparation. The SDSS, GALEX and WISE apparent magnitudes of the selected galaxies are shown in Table 6, indicating that these SFGs are among the faintest low-redshift LyC leaker candidates selected so far for HST observations. To derive absolute magnitudes and other integrated parameters we adopted luminosity and angular size distances (NASA Extragalactic Database (NED), Wright, 2006) with the cosmological parameters $H_{0}$ = 67.1 km s${}^{-1}$ Mpc${}^{-1}$, $\Omega_{\Lambda}$ = 0.682, $\Omega_{m}$ = 0.318 (Ade et al., 2014). These distances are presented in Table 1. Internal interstellar extinction $A$($V$)${}_{\rm int}$ has been derived from the observed decrement of hydrogen emission lines in the SDSS spectra after correction for the Milky Way extinction with $A$($V$)${}_{\rm MW}$ from the NED, adopting the Cardelli, Clayton & Mathis (1989) reddening law and $R$($V$)${}_{\rm int}$ = 2.7 and $R$($V$)${}_{\rm MW}$ = 3.1. The motivation of the adopted $R$($V$)${}_{\rm int}$ value is following. Izotov et al. (2017) modelled UV FUV and NUV magnitudes of the large sample of SDSS galaxies and found that the FUV magnitudes of galaxies better match the observed magnitudes with $R$($V$)${}_{\rm int}$ = 2.7 if EW(H$\beta$) $>$ 150Å, which is the case for our galaxies, whereas $R$($V$)${}_{\rm int}$ = 3.1 is more appropriate for galaxies with lower EW(H$\beta$)s. However, we note that in the optical range, which is used for SED fitting, the determination of intrinsic fluxes of the Lyman continuum and of the elemental abundances, extinction does only slightly depend on $R$($V$)${}_{\rm int}$. The extinction-corrected emission lines are used to derive ionic and total element abundances following the methods described in Izotov et al. (2006) and Guseva et al. (2013). The emission-line fluxes $I$($\lambda$) relative to the H$\beta$ flux corrected for both the Milky Way and internal extinctions, the restframe equivalent widths, the Milky Way ($C$(H$\beta$)${}_{\rm MW}$) and internal ($C$(H$\beta$)${}_{\rm int}$) extinction coefficients, and extinction-corrected H$\beta$ fluxes are shown in Table 7. It is seen in the Table that the extinction-corrected fluxes of the H$\delta$, H$\gamma$ and H$\alpha$ emission lines in all galaxies are consistent within the errors with theoretical recombination values indicating that $C$(H$\beta$)${}_{\rm int}$ is derived correctly. The fluxes and the direct $T_{\rm e}$ method are used to derive the physical conditions (electron temperature and electron number density) and the element abundances in the H ii regions. These quantities are shown in Table 8. The derived oxygen abundances are comparable to those in known low-redshift LyC leakers by Izotov et al. (2016a, b, 2018a, 2018b, 2021a). The ratios of the $\alpha$-element (neon and magnesium) abundances to oxygen abundance are similar to those in dwarf emission-line galaxies (e.g. Izotov et al., 2006; Guseva et al., 2013). On the other hand, the nitrogen-to-oxygen abundance ratios in some galaxies are somewhat elevated, similar to those in other LyC leakers at $z$ $\ga$ 0.3. We determine absolute FUV magnitudes from the fluxes of the intrinsic (i.e. extinction-corrected) SEDs at the rest-frame wavelength $\lambda$ = 1500 Å, which are reddened adopting extinction derived from the observed decrement of hydrogen Balmer lines. 
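For reference, the adopted distances and the conversion of an extinction-corrected line flux into a luminosity can be reproduced with standard tools; a minimal sketch, assuming astropy is available and using placeholder values for the redshift and flux:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Flat LCDM with the parameters quoted above (Omega_m + Omega_Lambda = 1)
cosmo = FlatLambdaCDM(H0=67.1, Om0=0.318)

z = 0.37                                   # placeholder redshift within the sample range
d_L = cosmo.luminosity_distance(z).to(u.cm)
d_A = cosmo.angular_diameter_distance(z)
print(d_L, d_A)

# Extinction-corrected H-beta flux (placeholder value) -> luminosity
f_hbeta = 5.0e-16 * u.erg / u.s / u.cm**2
L_hbeta = 4.0 * np.pi * d_L**2 * f_hbeta
print(L_hbeta.to(u.erg / u.s))
```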
The attenuations are, on average, similar to the ones for other $z$ $\sim$ 0.3 - 0.4 LyC leakers and the $M_{\rm FUV}$ are similar as observed at high-redshift. The H$\beta$ luminosities $L$(H$\beta$) and the corresponding star-formation rates, SFR, were obtained from the extinction-corrected H$\beta$ fluxes, using the relation from Kennicutt (1998) for the SFR and adopting $I$(H$\alpha$)/$I$(H$\beta$) from Table 7. SFRs are increased by a factor 1/[1 $-$ $f_{\rm esc}$(LyC)] to take into account the escaping ionizing radiation which is discussed later. The SFRs corrected for escaping LyC radiation are shown in Table 2. They are somewhat below the range of 14 - 80 M${}_{\odot}$ yr${}^{-1}$ for the other LyC leakers studied by Izotov et al. (2016a, b, 2018a, 2018b, 2021a). We use the SDSS spectra of our LyC leakers to fit the SED in the optical range and derive their stellar masses. The fitting method, using a two-component model with a young burst and older continuosly formed stellar population, is described for example in Izotov et al. (2018a, b). Spectral energy distributions of instantaneous bursts in the range between 0 and 10 Gyr with evolutionary tracks of non-rotating stars by Girardi et al. (2000) and a combination of stellar atmosphere models (Lejeune, Buser & Cuisiner, 1997; Schmutz, Leitherer & Gruenwald, 1992) are used to produce the integrated SED for each galaxy. The star formation history is approximated by a young burst with a randomly varying age $t_{b}$ in the range $<$ 10 Myr, and a continuous star formation for older ages between times $t_{1}$ and $t_{2}$, randomly varying in the range 10 Myr - 10 Gyr, and adopting a constant SFR. The contribution of the two components is determined by randomly varying the ratio of their stellar masses, $b$ = $M_{\rm o}$/$M_{\rm y}$, in the range 0.1 - 1000, where $M_{\rm o}$ and $M_{\rm y}$ are the masses of the old and young stellar populations. The nebular continuum emission, including free-free and free-bound hydrogen and helium emission, and two-photon emission, is also taken into account using the observed H$\beta$ flux (i.e. not corrected for escaping LyC emission), the ISM temperature, and density. The fraction of nebular continuum emission in the observed spectrum near H$\beta$ is determined by the ratio of the observed H$\beta$ equivalent width EW(H$\beta$)${}_{\rm obs}$, shifted to the rest frame, to the equivalent width EW(H$\beta$)${}_{\rm rec}$ for pure nebular emission. EW(H$\beta$)${}_{\rm rec}$ varies from $\sim$ 900 Å to $\sim$ 1100 Å, for electron temperatures in the range $T_{\rm e}$ = 10000 - 20000 K. We note that non-negligible nebular emission in the continuum is produced only by the young burst with ages of a few Myr. The Salpeter (1955) initial mass function (IMF) is adopted, with a slope of $-$2.35, upper and lower mass limits $M_{\rm up}$ and $M_{\rm low}$ of 100 M${}_{\odot}$ and 0.1 M${}_{\odot}$, respectively. Izotov et al. (2016a) compared differences in SEDs obtained with two different IMFs, by Salpeter (1955) and Kroupa (2001). They concluded that the effect is minor. A $\chi$${}^{2}$ minimization technique was used 1) to fit the continuum in such parts of the restframe wavelength range 3600 - 6500 Å, where the SDSS spectrum is least noisy and free of nebular emission lines, and 2) to reproduce the observed H$\beta$ and H$\alpha$ equivalent widths. The total stellar masses ($M_{\star}$ = $M_{\rm y}$ + $M_{\rm o}$) of our LyC leakers derived from SED fitting are presented in Table 2. 
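A minimal sketch of the SFR step described above is given below; the Kennicutt (1998) H$\alpha$ calibration is standard, while the input luminosity, H$\alpha$/H$\beta$ ratio and escape fraction are placeholders rather than values from Tables 2 and 7:

```python
def sfr_from_hbeta(L_hbeta, ha_hb_ratio, f_esc_lyc=0.0):
    """SFR (Msun/yr) from the extinction-corrected H-beta luminosity (erg/s).

    Uses the Kennicutt (1998) calibration SFR = 7.9e-42 * L(H-alpha),
    with L(H-alpha) = L(H-beta) * I(Ha)/I(Hb), and scales the result by
    1 / (1 - f_esc(LyC)) to account for escaping ionizing photons.
    """
    L_halpha = L_hbeta * ha_hb_ratio
    return 7.9e-42 * L_halpha / (1.0 - f_esc_lyc)

# Placeholder inputs
print(sfr_from_hbeta(L_hbeta=2.0e41, ha_hb_ratio=2.8, f_esc_lyc=0.04))
# ~4.6 Msun/yr for these placeholder values
```

The total stellar masses $M_{\star}$, by contrast, come from the two-component SED fits described above.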
They are derived in exactly the same way as the stellar masses of the other LyC leakers studied by Izotov et al. (2016a, b, 2018a, 2018b, 2021a), permitting a direct comparison. 3 HST/COS observations and data reduction HST/COS spectroscopy of the seven selected galaxies was obtained in program GO 15845 (PI: Y. I. Izotov) during the period October 2020 – May 2021. The observational details are presented in Table 3. As in our previous programs (Izotov et al., 2016a, b, 2018a, 2018b, 2021a), the galaxies were directly acquired by COS near ultraviolet (NUV) imaging. All these galaxies are compact (as compact as all the other targets from our previous programs) and they have accurate SDSS astrometry for direct imaging acquisition. The NUV-brightest region of each target was centered in the 2.5 arcsec diameter spectroscopic aperture (Fig. 1). We note, however, that the acquisition exposure failed for J1014$+$5501 and J1352$+$5617 due to guide star acquisition failure in both cases because the acquisition of the guide stars was delayed. This is a frequent HST gyro issue. For safety reasons, the shutter remained closed and no acquisition image was obtained. Therefore, both galaxies were blindly acquired. The blind acquisition accuracy is $\sim$ 0.3 arcsec, which will result in very modest vignetting for a compact galaxy, possibly introducing uncertainties in the wavelength and flux calibration in the partially vignetted COS aperture. For J1352$+$5617 the vignetting is negligible, because the COS spectrophotometric magnitude (FUV$=21.90$ mag) agrees well with the GALEX FUV$=21.83\pm 0.17$ mag. For J1014$+$5501 the spectrophotometry (FUV$=22.69$ mag) is still consistent with the GALEX magnitude (FUV$=21.88\pm 0.59$ mag), considering the significant Eddington bias for the latter. The wavelength calibration was confirmed with Lyman series absorption lines of the galaxies. The spectra were obtained with the low-resolution grating G140L and medium-resolution grating G160M, applying all four focal-plane offset positions. The 800 Å setup was used for the G140L grating (sensitive wavelength range 1100–1950 Å, resolving power $R\simeq 1050$ at 1150 Å) to include the redshifted LyC emission for all targets. We obtained resolved spectra of the galaxies’ Ly$\alpha$ emission lines with the G160M grating ($R\sim 16000$ at 1600 Å), varying the G160M central wavelength with galaxy redshift to cover the emission line and the nearby continuum on a single detector segment. The individual exposures were reduced with the calcos pipeline v3.3.10, followed by accurate background subtraction and co-addition as required for our Poisson-limited data with FaintCOS v1.09 (Makan et al., 2021). We used the same methods and extraction aperture sizes as in Izotov et al. (2018a, b, 2021a) to achieve a homogeneous reduction of the galaxy sample observed in multiple programmes.We corrected for scattered geocoronal Ly$\alpha$ according to Worseck et al. (2016). The accuracy of our custom correction for scattered light in COS G140L data was checked by comparing the LyC fluxes obtained in the total exposure and in orbital night, respectively. We find that the differences in LyC fluxes for five galaxies are less or similar to the 1$\sigma$ errors. Due to insufficient time spent in orbital night, this check was not possible for J1157$+$5801 and J1352$+$5617. However, we verified that the detected LyC flux of J1352$+$5617 (Section 6) is insignificantly affected by residual uncertainties in the G140L scattered light model. 
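The vignetting check above compares a magnitude synthesized from the COS spectrum with the GALEX photometry. A minimal sketch of such a synthetic AB magnitude is given below, assuming arrays wave (in Å) and flam (in erg s${}^{-1}$ cm${}^{-2}$ Å${}^{-1}$) and a simple top-hat approximation to the GALEX FUV band; the actual GALEX response curve should be used for precise work:

```python
import numpy as np

def fuv_ab_mag(wave, flam, band=(1350.0, 1780.0)):
    """Approximate AB magnitude over a top-hat FUV band.

    wave in Angstrom, flam in erg s^-1 cm^-2 A^-1; converts f_lambda to f_nu
    and applies m_AB = -2.5 log10(<f_nu>) - 48.6.
    """
    c_aa = 2.998e18                               # speed of light in Angstrom/s
    sel = (wave > band[0]) & (wave < band[1])
    fnu = flam[sel] * wave[sel] ** 2 / c_aa
    return -2.5 * np.log10(np.mean(fnu)) - 48.6   # simple average; fine for a sketch

# Example: flat f_lambda spectrum at 1e-16 erg s^-1 cm^-2 A^-1
wave = np.linspace(1100.0, 1950.0, 1000)
flam = np.full_like(wave, 1.0e-16)
print(fuv_ab_mag(wave, flam))                     # ~21.6 mag for this placeholder level
```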
4 Acquisition images and surface brightness profiles in the NUV range The acquisition images of five galaxies in the NUV range are shown in Fig. 1. All galaxies are very compact with angular diameters considerably smaller than the COS spectroscopic aperture (the circles in Fig. 1) and linear diameters of $\sim$ 1 – 4 kpc. However, two of the most compact galaxies, J0130$-$0014 and J1157$+$5801, appear to be non-leaking LyC galaxies, whereas LyC emission is detected in the remaining three galaxies with more extended envelopes (see Section 6). We use these images to derive the surface brightess (SB) profiles of our galaxies, in accordance with previous studies by Izotov et al. (2016b, 2018a, 2018b, 2021a). No SB profiles have been derived for galaxies J1014$+$5501 and J1352$+$5617 because their acquisition exposures failed, as noted before. In accordance with Izotov et al. (2016b, 2018a, 2018b, 2021a) we have found that the outer parts of our galaxies are characterised by a linear decrease in SB (in mag per square arcsec scale), characteristic of a disc structure, and by a sharp increase in the central part due to the bright star-forming region (Fig. 2). The scale lengths $\alpha$ of our galaxies, defined in Eq. 1 of Izotov et al. (2016b), are in the range $\sim$ 0.2 – 0.6 kpc (Fig. 2), lower than $\alpha$ = 0.6 – 1.8 kpc in other LyC leakers (Izotov et al., 2016b, 2018a, 2018b), but similar to scale lengths of low-mass LyC leakers with masses $<$ 10${}^{8}$ M${}_{\odot}$ (Izotov et al., 2021a). The corresponding surface densities of star-formation rate in the studied galaxies, $\Sigma$ = SFR/($\pi\alpha^{2}$), are similar to those of other LyC leakers. The half-light radii $r_{50}$ of our galaxies in the NUV are considerably smaller than $\alpha$ because of the bright compact star-forming regions in the galaxy centres (see Table 2). 5 Modelled spectral energy distributions in the UV range To derive the fraction of the escaping ionizing radiation we use the two methods (e.g. Izotov et al., 2018a) based on the comparison between the observed flux in the Lyman continuum and its intrinsic flux in the same wavelength range. The intrinsic LyC flux is obtained 1) from SED fitting of the SDSS spectra simultaneously with reproducing the observed H$\beta$ and H$\alpha$ equivalent widths (and thus corresponding observed H$\beta$ and H$\alpha$ fluxes) or 2) from the flux of the H$\beta$ emission line. The attenuated extrapolations of SEDs to the UV range along with the observed COS spectra are shown in Fig. 3. For comparison, we also show the GALEX FUV and NUV fluxes with magenta filled squares and the fluxes in the SDSS $u,g,r,i,z$ filters with blue filled circles. We find that the spectroscopic and photometric data in the optical range are consistent, indicating that almost all the emission of our galaxies is inside the SDSS spectroscopic aperture. Therefore, aperture corrections are not needed. The attenuated modelled intrinsic SEDs in the optical range and their extrapolations to the UV range (Fig. 3) are obtained by assuming that extinctions for stellar and nebular emission are equal and adopting the extinction coefficients $C$(H$\beta$)${}_{\rm MW}$ from the NED and $C$(H$\beta$)${}_{\rm int}$ derived from the hydrogen Balmer decrement (Table 7), and the reddening law by Cardelli et al. 
(1989) at $\lambda$ $\geq$ 1250Å and its extension to shorter wavelengths by Mathis (1990) with $R(V)_{\rm int}$ = 3.1 (red solid lines), $R(V)_{\rm int}$ = 2.7 (black solid lines) and $R(V)_{\rm int}$ = 2.4 (cyan solid lines). Mathis (1990) presents the data only for $R(V)$ = 3.1. For practical use, we fitted them with polynomials and rescaled them so that, for each adopted $R(V)$, they match the values of the Cardelli et al. (1989) reddening law at $\lambda$=1250Å for the same $R(V)$. The dotted lines indicate the range of attenuated SEDs obtained by adopting $R(V)_{\rm int}$ = 2.7 and varying $C$(H$\beta$) within the 1$\sigma$ errors of its nominal value. It is seen in Fig. 3 that the SDSS spectra are reproduced by the models quite well. On average, extrapolations of the attenuated SEDs to the UV range with $R(V)_{\rm int}$ = 2.7 reproduce the observed COS spectra somewhat better, with flux deviations not exceeding $\sim$ 10 per cent for most galaxies. An exception is J1014$+$5501, for which the difference in fluxes is as high as $\sim$ 50 per cent. This difference can possibly be caused in part by the uncertain location of the galaxy within the COS spectroscopic aperture, as its acquisition exposure failed. It could also be caused by an underestimate of the interstellar extinction, which is derived from the hydrogen Balmer decrement in the SDSS spectrum. The observed FUV shape could be fitted by increasing $C$(H$\beta$) by 0.065 relative to the value in Table 7. This would increase the H$\beta$ fluxes by $\sim 15$% and decrease the Ly$\alpha$ and LyC escape fractions by a similar amount. However, in this case the extinction-corrected fluxes of the H$\delta$, H$\gamma$ and H$\alpha$ emission lines deviate considerably from their theoretical recombination values. Furthermore, the difference between the models and observations can be caused by the imperfect absolute flux calibration of the SDSS spectrum. We note, however, that Fig. 3 is used only for illustration, to check whether the extrapolation of the optical-range SED reproduces the observed COS spectrum; it is not used for the determination of the escaping LyC fraction. Instead, the observed LyC flux is measured in the COS spectra and the intrinsic LyC flux is determined by the two methods mentioned above: from the extinction-corrected flux of the H$\beta$ emission line $I$(H$\beta$) and from simultaneous fitting of the SED in the optical range and of the observed equivalent widths of the H$\beta$ and H$\alpha$ emission lines. The fluxes of the latter lines are also iteratively corrected for the escaping ionizing radiation (e.g. Izotov et al., 2018b) and they determine the level of the intrinsic LyC emission. It is seen in Fig. 3 that the SED in the optical range is almost independent of $R(V)_{\rm int}$. Consequently, the LyC escape fraction $f_{\rm esc}$(LyC) is also almost independent of $R(V)_{\rm int}$. This is because $f_{\rm esc}$(LyC) is derived from the ratio of the observed to modelled intrinsic LyC fluxes, with the latter fluxes being derived from data in the optical range. The relation between $I$(H$\beta$) and the intrinsic LyC flux at 900 Å, $I$(900 Å), assuming the instantaneous burst model, takes the form (Izotov et al., 2016b) $$\frac{I({\rm H}\beta)}{I(900~{}\mbox{\AA})}=2.99\times{\rm EW}({\rm H}\beta)^{0.228}~{}\mbox{\AA},$$ (1) where EW(H$\beta$) is in Å, and $I$(H$\beta$) and $I$(900 Å) are in erg s${}^{-1}$ and erg s${}^{-1}$ Å${}^{-1}$, respectively. The term with EW(H$\beta$) in Eq. 1 takes into account the weak dependence on the starburst age. 
According to this equation, the uncertainties of $I$(900 Å), which stem from the small uncertainties of $C$(H$\beta$) (Table 7) and thus of $I$(H$\beta$), are unlikely to exceed $\sim$ 15 – 20 per cent. 6 Ly$\alpha$ and LyC emission A resolved Ly$\alpha$ $\lambda$1216 Å emission line is detected in the G160M medium-resolution spectra of five out of the seven galaxies (Fig. 4). Its shape is similar to that observed in most known LyC leakers (Izotov et al., 2016a, b, 2018a, 2018b, 2021a) and in some other galaxies at lower redshift (Jaskot & Oey, 2014; Henry et al., 2015; Yang et al., 2017a; Izotov et al., 2020). Profiles with two peaks are detected in the spectra of the four galaxies from the present sample with detected LyC emission, J0141$-$0304, J0844$+$5312, J1137$+$3605 and J1352$+$5617, and in one galaxy with non-detected LyC emission, J1014$+$5501. The blue Ly$\alpha$ component in the latter galaxy is $\sim$ 2.5 times brighter than the red component (Fig. 4d). This is at variance with the other galaxies, where the blue component is considerably weaker than the red component, and may be indicative of a gas inflow. The Ly$\alpha$ emission line is very weak in the spectra of two galaxies, J0130$-$0014 and J1157$+$5801. The parameters of the Ly$\alpha$ emission are shown in Table 4. The observed G140L total-exposure spectra including the LyC spectral region (grey lines) and the extrapolations to the UV range of the predicted intrinsic SEDs in the optical range (blue dash-dotted lines) are shown in Fig. 5. Additionally, we include the attenuated extrapolations of the intrinsic SEDs (black solid lines), the same as those with $R(V)$ = 2.7 shown in Fig. 3 but with different flux and wavelength scales. The level of the observed LyC continuum is indicated by horizontal red lines. The vertical dotted lines show the Lyman limit. Lyman continuum emission is detected in the spectra of four galaxies, J0141$-$0304, J0844$+$5312, J1137$+$3605 and J1352$+$5617 (solid red lines), and only 1$\sigma$ upper limits are derived in the spectra of the remaining three galaxies (dotted red lines). The measurements are summarised in Table 5. Izotov et al. (2016a, b, 2018a, 2018b) used the ratio of the escaping fluxes $I_{\rm esc}$ to the intrinsic fluxes $I_{\rm mod}$ of the Lyman continuum to derive $f_{\rm esc}$(LyC): $$f_{\rm esc}({\rm LyC})=\frac{I_{\rm esc}(\lambda)}{I_{\rm mod}(\lambda)},$$ (2) where $\lambda$ is the mean wavelength of the range near 900 Å used for averaging the LyC flux density (see Table 5). Izotov et al. (2016b) proposed two methods to iteratively derive the intrinsic fluxes $I_{\rm mod}$ and, correspondingly, the LyC escape fractions $f_{\rm esc}$(LyC): 1) from simultaneous fitting of the SED in the optical range together with the observed equivalent widths of the H$\beta$ and H$\alpha$ emission lines and 2) from the equivalent width of the H$\beta$ emission line and its extinction-corrected flux, adopting the relation between $I$(H$\beta$)/$I_{\rm mod}$ and EW(H$\beta$) from models of photoionized H ii regions (Eq. 1, Izotov et al., 2016b). In both methods the extinction-corrected flux of the H$\beta$ emission line determines the intrinsic LyC flux at 900 Å, taking into account the starburst age, which is constrained mainly by the H$\beta$ and/or H$\alpha$ equivalent widths. We use both methods in this paper. 
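To make the chain from the H$\beta$ measurements to $f_{\rm esc}$(LyC) explicit, the short Python sketch below simply evaluates Eq. (1) and Eq. (2). The input numbers (H$\beta$ flux, EW(H$\beta$) and observed 900 Å flux density) are hypothetical illustrative values, not measurements from Tables 5 or 7, and the sketch ignores the iterative correction of the H$\beta$ flux for escaping ionizing radiation described above.

def intrinsic_lyc_flux(I_Hbeta, EW_Hbeta):
    """Intrinsic LyC flux density at 900 A from Eq. (1):
    I(Hbeta)/I(900 A) = 2.99 * EW(Hbeta)**0.228  [A].
    Because Eq. (1) is a ratio, extinction-corrected fluxes can be used
    in place of luminosities (the distance factor cancels)."""
    return I_Hbeta / (2.99 * EW_Hbeta**0.228)

def f_esc_lyc(I_esc_900, I_mod_900):
    """LyC escape fraction, Eq. (2)."""
    return I_esc_900 / I_mod_900

# Hypothetical illustrative inputs (not values from this paper's tables):
I_Hbeta  = 3.0e-15    # extinction-corrected H-beta flux, erg s^-1 cm^-2
EW_Hbeta = 200.0      # H-beta equivalent width, Angstrom
I_esc_900 = 1.1e-17   # observed LyC flux density near 900 A, erg s^-1 cm^-2 A^-1

I_mod_900 = intrinsic_lyc_flux(I_Hbeta, EW_Hbeta)   # ~3.0e-16 erg s^-1 cm^-2 A^-1
print("f_esc(LyC) =", round(f_esc_lyc(I_esc_900, I_mod_900), 3))   # ~0.037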
The escape fraction $f_{\rm esc}$(LyC) ranges between 3.1 and 4.6 per cent in four out of the seven galaxies; the 1$\sigma$ upper limits of $f_{\rm esc}$(LyC) for the remaining galaxies are shown in Table 5. We find that the values of $f_{\rm esc}$(LyC) obtained by the two methods are similar. 7 Indirect determination of the LyC escape fraction The direct measurement of LyC emission is the best way to derive the LyC escape fraction. However, LyC emission is in most cases weak and difficult to detect in both high-$z$ and low-$z$ galaxies. Furthermore, only HST can be used to observe the LyC wavelength range in galaxies with $z$ $\sim$ 0.3 – 1.0. Therefore, reliable indirect indicators of LyC leakage, which can more easily be derived from observations at low and high redshift, are needed to build larger samples for statistical studies. Several possible indicators have been proposed, based mainly on observations of strong emission lines in the UV and optical ranges. For the analysis of possible indirect indicators we use a sample of $\sim$ 30 – 50 galaxies with Mg ii emission in their SDSS spectra from Izotov et al. (2016a, b, 2018a, 2018b, 2021a), Borthakur et al. (2014), Chisholm et al. (2017), Flury et al. (2022a), Xu et al. (2022) and this paper. The number of galaxies varies for different indicators because not all indicators are determined for all galaxies in the sample. The Ly$\alpha$ escape fraction $f_{\rm esc}$(Ly$\alpha$), which is derived from the observed Ly$\alpha$/H$\beta$ emission line ratio, can potentially be linked to the LyC escape fraction. However, there are differences between the mechanisms controlling the escape of LyC and Ly$\alpha$. LyC photons can efficiently be absorbed by neutral hydrogen and/or dust. On the other hand, Ly$\alpha$ photons can be destroyed only via absorption by dust and via inefficient two-photon transitions. Thus, the fraction of escaping Ly$\alpha$ photons is expected to be higher than that of escaping LyC photons, in agreement with theoretical predictions (Dressler et al., 2015; Jaskot & Oey, 2013; Nakajima & Ouchi, 2014). This is seen in Fig. 6a, where almost all LyC leaking galaxies are located below the line of equal escape fractions (black solid line). There is a tendency for $f_{\rm esc}$(LyC) to increase with increasing $f_{\rm esc}$(Ly$\alpha$), but with a large spread (see also e.g. Izotov et al., 2018b, 2021a; Flury et al., 2022b). The new data do not contradict this conclusion. The shape of the Ly$\alpha$ profile provides the best indirect indicator of LyC leakage because it depends on the column density of the neutral hydrogen along the line of sight, which determines the optical depth in both the Ly$\alpha$ emission line and the LyC continuum. In particular, a non-zero intensity at the center of Ly$\alpha$ or a small offset of its brighter red component from the center of the line indicates low optical depth in the H i cloud. However, these indicators may be affected by insufficient spectral resolution and uncertainties in the wavelength calibration. On the other hand, the separation between the blue and red components in medium-resolution COS spectra is less subject to these limitations. Previously, Verhamme et al. (2017) and Izotov et al. (2018b) found a tight dependence of $f_{\rm esc}$(LyC) on the separation $V_{\rm sep}$ between the peaks of the Ly$\alpha$ emission line in LyC leakers. This dependence has been updated in the later paper by Izotov et al. 
(2021a) and in this paper. The new data also follow the relation discussed by Izotov et al. (2018b) (see the solid line in Fig. 6b). No new galaxy in our present sample has a peak separation below $\sim$300 km s${}^{-1}$, considerably higher than the lowest peak separation of $\sim$ 150 km s${}^{-1}$ in the sample of low-$z$ leakers shown in Fig. 6b. The relation by Izotov et al. (2018b) is likely not applicable to complex Ly$\alpha$ profiles with three or more peaks, which indicate considerable direct Ly$\alpha$ escape in addition to escape through scattering in the neutral gas (Naidu et al., 2022). The Ly$\alpha$ profile of only one galaxy, J1243$+$4646, from the Izotov et al. (2018b) sample consists of three peaks, with peak separations of 143 and 164 km s${}^{-1}$. This galaxy does not significantly change the shape of the relation shown in Fig. 6b by the solid line, because most of the galaxies in the sample have two Ly$\alpha$ peaks. The new observations of Mg ii-selected galaxies (red symbols) support previous findings on the existence of a tight relation between $f_{\rm esc}$(LyC) and $V_{\rm sep}$. However, the application of this relation to galaxies observed during the epoch of reionization is limited because of the incomplete ionization of the intergalactic medium and thus the high optical depth for Ly$\alpha$ emission. Low galaxy stellar masses are also considered as a possible indicator of high $f_{\rm esc}$(LyC) (Wise et al., 2014; Trebitsch et al., 2017). Indeed, there is a trend of decreasing $f_{\rm esc}$(LyC) with increasing stellar mass in galaxies with detected LyC continuum (filled circles in Fig. 6c). However, Izotov et al. (2021a) found several strongly star-forming galaxies with $M_{\star}$ $<$ 10${}^{8}$M${}_{\odot}$ and non-detected LyC (blue open circles in Fig. 6c), considerably weakening the anti-correlation between $f_{\rm esc}$(LyC) and $M_{\star}$. The new data in the present paper are in agreement with the conclusion of no or only a weak correlation between $f_{\rm esc}$(LyC) and $M_{\star}$. Jaskot & Oey (2013) and Nakajima & Ouchi (2014) proposed a high O${}_{32}$ ratio as an indication of escaping ionizing radiation. However, an increase of this ratio is caused not only by a decreasing optical depth of the neutral hydrogen around the H ii region, but also by an increasing ionization parameter and/or decreasing metallicity. These effects are difficult to separate. O${}_{32}$ in low-redshift galaxies can easily be derived from their spectra in the optical range, and this quantity is known for all low-$z$ LyC leakers. The relation between $f_{\rm esc}$(LyC) and O${}_{32}$ has been discussed by Faisst (2016), Izotov et al. (2018b, 2021a) and Flury et al. (2022b). Its updated version from Izotov et al. (2021a) is presented in Fig. 6d, which shows a trend of increasing $f_{\rm esc}$(LyC) with increasing O${}_{32}$, but with a substantial scatter. This scatter can in part be caused by a variety of leakage scenarios, with escape through channels of low optical depth and their different orientations relative to the observer. A similar conclusion can be drawn from the Flury et al. (2022b) data. Therefore, a high O${}_{32}$ can be used to select LyC leaking candidates, but it is not a reliable indicator of high $f_{\rm esc}$(LyC) (Izotov et al., 2018b; Nakajima et al., 2020). 8 Mg ii diagnostics Henry et al. (2018) and Chisholm et al. 
(2020) have proposed to use the Mg ii $\lambda$2796, 2803 resonance doublet in emission as an indicator of escaping LyC emission, based on the fact that its escape fraction correlates with the Ly$\alpha$ escape fraction. Later, Xu et al. (2022) also proposed Mg ii as a low-$z$ tracer of Ly$\alpha$ and LyC, and Naidu et al. (2022) pointed out that the Mg ii $\lambda$2796/$\lambda$2803 line ratio is higher in $z$ $\sim$ 2 galaxies with higher $f_{\rm esc}$(LyC). Following these papers, we consider the properties of Mg ii emission and their relations with the Ly$\alpha$ and LyC escape fractions. For many low-redshift LyC leaking galaxies (Izotov et al., 2016a, b, 2018a, 2018b, 2021a; Flury et al., 2022a; Xu et al., 2022, this paper) the wavelength range with the redshifted Mg ii $\lambda$2796, 2803 emission lines is covered by the SDSS spectra (Fig. 9 – 9). However, for some LyC leakers with the lowest redshifts of $z$ $\approx$ 0.3 (for example, J0925$+$1403, J1011$+$1947, J1442$-$0209), these redshifted lines are outside the wavelength range of the SDSS spectra from releases earlier than DR10. XShooter spectra covering the Mg ii emission (Fig. 9) are also available for some LyC galaxies (Guseva et al., 2020), including those with $z$ $\approx$ 0.3. We note that the Mg ii emission is located in the noisy parts of the SDSS spectra. Because of the weakness of these lines, they cannot be measured with high accuracy. The spectral resolution of the SDSS spectra is also insufficient to determine the Mg ii emission line profiles. On the other hand, both the accuracy of the measurements and the spectral resolution are better for the XShooter spectra. Because of these limitations of the SDSS sample, we consider only two characteristics for the entire SDSS+XShooter sample, the extinction-corrected O${}_{3}$Mg${}_{2}$ = [O iii]$\lambda$5007/Mg ii $\lambda$2796+2803 and Mg${}_{2}$ = Mg ii $\lambda$2796/Mg ii $\lambda$2803 flux ratios, which are less subject to uncertainties than fits of the Mg ii emission-line profiles. Mg ii emission is detected in most LyC leaking galaxies if it falls in the wavelength range of the SDSS spectra, as expected in the case of low neutral gas column densities. The two galaxies with very little (or no) Mg ii detection in Fig. 9 (J0130$-$0014 and J1157$+$5801) also do not have LyC detections, illustrating how a non-detection of Mg ii can accompany a non-detection of LyC. However, there is one possible exception. The galaxy J1121$+$3806 has $f_{\rm esc}$(LyC) $\sim$ 35 per cent and a strong and narrow Ly$\alpha$ emission line (Izotov et al., 2021a). On the other hand, Mg ii emission in this galaxy is barely seen (Fig. 9d). Thus, high LyC leakage may not always be associated with the presence of strong Mg ii emission. However, the SNR of the SDSS spectrum is low and this galaxy merits deeper observations (King et al. in preparation). Figs. 10a and 10b show the dependences of the Ly$\alpha$ escape fraction $f_{\rm esc}$(Ly$\alpha$) on O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$, respectively. It is seen that $f_{\rm esc}$(Ly$\alpha$) is almost independent of both the O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$ ratios. Mg${}_{2}$ in two galaxies, J1127$+$4610 and J1455$+$6107 (Fig. 9e, 9n, Izotov et al., 2021a), lies in Fig. 10b considerably above the value of 2 expected in the case of zero optical depth in the Mg ii lines, which is unlikely. However, we note that Mg${}_{2}$ in these two galaxies is measured with the largest errors, $\sim$ 2 times higher than the typical errors for the objects shown in Fig. 10b. 
Furthermore, the Mg ii lines in all galaxies were not corrected for interstellar or stellar photospheric Mg ii absorption. The equivalent widths of these absorption lines are somewhat uncertain. Guseva et al. (2019) adopted equal equivalent widths of $\sim$ 0.5Å for each of the Mg ii absorption lines, whereas Pérez-Ràfols et al. (2015) derived 2.33 Å for both lines, consistent with the value of $\sim$ 1Å for the Mg ii $\lambda$2796 absorption line in star-forming galaxies with stellar masses $\la$ 10${}^{9.5}$M${}_{\odot}$ (Martin et al., 2012) and with the values adopted by Prochaska, Kasen & Rubin (2011). All these values are lower than the equivalent widths of the Mg ii emission lines (Table 7). Assuming that the equivalent widths of the Mg ii $\lambda$2796 and $\lambda$2803 absorption lines are equal, correcting the emission lines by multiplying with (EW${}_{\rm em}$+EW${}_{\rm abs}$)/EW${}_{\rm em}$ results in a reduction of the Mg${}_{2}$ ratio whenever this ratio is above 1. This is because the equivalent width of the Mg ii $\lambda$2796 emission line is greater than that of the Mg ii $\lambda$2803 emission line. The effect is larger for higher values of Mg${}_{2}$, reducing the number of galaxies with Mg${}_{2}$ above 2. Using the analytic work of Chisholm et al. (2020), an Mg${}_{2}$ of 1.3 would correspond to an Mg ii 2803Å optical depth of 0.43 (or a 2796Å optical depth near 1). For the typical abundances of the sample, that would lead to H i column densities near 9.4$\times$10${}^{16}$ cm${}^{-2}$, which is very close to being optically thin for the LyC emission. It is notable that Mg${}_{2}$ in all five galaxies with high $f_{\rm esc}$(LyC) observed at high SNR with XShooter by Guseva et al. (2020) is very close to 2 (black symbols in Fig. 10b), in agreement with expectations for low optical depth (e.g. Chisholm et al., 2020). In Figs. 10c and 10d we show the relations of $f_{\rm esc}$(LyC) with the O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$ flux ratios, respectively. We note an interesting feature in Figs. 10b and 10d: the LyC leakers preferentially have Mg${}_{2}$ $\ga$ 1.3, as expected because high values of Mg${}_{2}$ indicate low optical depth (Chisholm et al., 2020). Similarly, Naidu et al. (2022) found that galaxies with low $f_{\rm esc}$(LyC) preferentially have low Mg${}_{2}$ $\sim$ 0.9. A tendency of increasing $f_{\rm esc}$(LyC) with increasing O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$ is possibly present, albeit with a large scatter of the data. The statistics in Fig. 10 are small and subject to large errors of the individual measurements. Therefore, for comparison we selected $\sim$ 6000 galaxies with $z$ $\geq$ 0.3 from the sample of compact star-forming galaxies by Izotov et al. (2021c) in which both the Mg ii $\lambda$2796 and 2803 emission lines were observed. The errors of the Mg ii emission-line fluxes in this sample are also large. However, the large statistics in each bin of the O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$ flux ratios considerably reduce the impact of uncertain individual values. These galaxies constitute 60 per cent of the total number of galaxies in the catalogue of Izotov et al. (2021c) with $z$ $\geq$ 0.3. Mg ii in the remaining galaxies is either in absorption or only one of the two lines is detected. The distribution of Mg${}_{2}$ for the selected galaxies is shown in Fig. 11a. This distribution is broad and approximately one third of the galaxies have Mg${}_{2}$ $>$ 2. The scatter is likely not caused by measurement errors alone. 
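Returning to the optical-depth estimate quoted above, one simple way to see how those numbers fit together is a pure foreground-screen picture in which the intrinsic doublet ratio is 2 and $\tau_{2796}=2\tau_{2803}$, so that the observed ratio is Mg${}_{2}=2\exp(-\tau_{2803})$. This toy relation is our own simplification, not the full Chisholm et al. (2020) treatment, but it reproduces $\tau_{2803}\simeq 0.43$ for Mg${}_{2}=1.3$. The Python sketch below encodes it, together with the (EW${}_{\rm em}$+EW${}_{\rm abs}$)/EW${}_{\rm em}$ correction for underlying absorption discussed above; the fluxes and equivalent widths used are hypothetical illustrative numbers.

import numpy as np

def tau_2803_from_Mg2(Mg2):
    """Toy foreground-screen estimate of the Mg II 2803 A optical depth:
    assuming an intrinsic doublet ratio of 2 and tau_2796 = 2*tau_2803,
    the observed ratio is Mg2 = 2*exp(-tau_2803)."""
    return np.log(2.0 / Mg2)

def absorption_corrected_ratio(F2796, F2803, EW2796, EW2803, EW_abs=0.5):
    """Correct both emission lines for underlying Mg II absorption of equal
    equivalent width EW_abs by multiplying each flux with (EW_em+EW_abs)/EW_em."""
    return (F2796 * (EW2796 + EW_abs) / EW2796) / (F2803 * (EW2803 + EW_abs) / EW2803)

print(round(tau_2803_from_Mg2(1.3), 2))     # ~0.43, i.e. near the optically thin regime
# hypothetical fluxes (arbitrary units) and emission equivalent widths (Angstrom):
print(round(absorption_corrected_ratio(2.2, 1.0, 8.0, 4.0), 2))   # 2.2 -> ~2.08, a reduced ratio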
This scatter remains even if only the brightest galaxies with well-measured Mg ii fluxes are considered (compare Fig. 11c and Fig. 11d). On the other hand, a correction for underlying absorption would make the distribution narrower and decrease the number of galaxies with Mg${}_{2}$ $>$ 2. We find that nearly 2/3 of the sample is characterised by Mg${}_{2}$ $>$ 1.3, implying that most of the selected compact star-forming galaxies could possibly be LyC leakers. The distribution of the ionizing photon production efficiency $\xi_{\rm ion}$ for the same galaxies is shown in Fig. 11b. Here $\xi_{\rm ion}$ = $N$(LyC)/$L_{\nu}$, where $N$(LyC) and $L_{\nu}$ are the production rate of the LyC radiation in photons s${}^{-1}$ and the intrinsic monochromatic luminosity at the rest-frame wavelength of 1500Å in erg s${}^{-1}$ Hz${}^{-1}$, respectively. It is seen that log $\xi_{\rm ion}$ in the sample galaxies is high. In most of the galaxies it is above the threshold of 25.2 adopted in models of reionization (e.g. Robertson et al., 2013). Finally, we show the relations between log $\xi_{\rm ion}$ and Mg${}_{2}$ for all selected SDSS galaxies (Fig. 11c) and for the brightest SDSS galaxies, defined as those with H$\beta$ fluxes above 5$\times$10${}^{-16}$ erg s${}^{-1}$ cm${}^{-2}$ and Mg ii $\lambda$2796 emission-line equivalent widths above 10 Å (Fig. 11d). The unshaded region in Figs. 11c and 11d is populated by the galaxies with Mg${}_{2}$ $\geq$ 1.3 and log $\xi_{\rm ion}$ $\geq$ 25.2, which constitute nearly half of the total sample and somewhat more among the brightest galaxies. Most of the low-$z$ LyC leakers (black filled circles) are located in this region. The few galaxies with log $\xi_{\rm ion}$ below 25.2 are only from the LzLCS sample by Flury et al. (2022a, b), which contains, in general, lower-excitation H ii regions compared e.g. with the galaxies from the Izotov et al. (2016a, b, 2018a, 2018b, 2021a) sample. Thus, the criterion Mg${}_{2}$ $\geq$ 1.3 can be a useful cut to select LyC leaker candidates at low and high redshifts, given that strong Mg ii emission is present in most LyC leaking galaxies. 9 Conclusions We present new HST COS low- and medium-resolution spectra of seven compact SFGs in the redshift range $z$ = 0.3161 – 0.4276, with various O${}_{3}$Mg${}_{2}$ = [O iii]$\lambda$5007/Mg ii $\lambda$2796+2803 and Mg${}_{2}$ = Mg ii $\lambda$2796/Mg ii $\lambda$2803 emission-line ratios. We aim to derive the properties of the leaking LyC and resolved Ly$\alpha$ emission and to study the dependence of the leaking LyC emission on the characteristics of Mg ii emission along with other indirect indicators of escaping ionizing radiation. This study is an extension of the work reported earlier in Izotov et al. (2016a, b, 2018a, 2018b, 2021a). Our main results are summarised as follows: 1. Lyman continuum emission is detected in four out of the seven galaxies, with the escape fraction $f_{\rm esc}$(LyC) in the range between 3.1 per cent (J1137+3605) and 4.6 per cent (J0844+5312). Only upper limits $f_{\rm esc}$(LyC) $\sim$ 1 – 3 per cent are obtained for the remaining three galaxies. 2. A Ly$\alpha$ emission line with two peaks is observed in the spectra of five galaxies. The Ly$\alpha$ emission line in two galaxies, J0130$-$0014 and J1157+5801, is very weak. Our new observations support a strong anti-correlation between $f_{\rm esc}$(LyC) and the peak velocity separation $V_{\rm sep}$ of the Ly$\alpha$ profile, confirming the finding of Izotov et al. 
(2018b, 2021a) and making $V_{\rm sep}$ the most robust indirect indicator of Lyman continuum leakage. 3. Other characteristics, such as the O${}_{32}$ ratio, the escape fraction of the Ly$\alpha$ emission line $f_{\rm esc}$(Ly$\alpha$) and the stellar mass $M_{\star}$, show weak or no correlations with $f_{\rm esc}$(LyC), with a large spread of values, in agreement with earlier studies by e.g. Izotov et al. (2016b, 2018a, 2021a) and Flury et al. (2022b). 4. We study the characteristics of the Mg ii $\lambda$2796+2803 emission, such as the O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$ ratios, as possible indirect indicators of escaping LyC emission. We find that galaxies with detected LyC emission preferentially have Mg${}_{2}$ $\geq$ 1.3, indicating low optical depths. A high Mg${}_{2}$ ratio of $\geq$ 1.3 can therefore be used to select LyC leaker candidates. A tendency for $f_{\rm esc}$(LyC) to increase with increasing O${}_{3}$Mg${}_{2}$ and Mg${}_{2}$ is possibly present. However, there is substantial scatter in these relations, due to the low signal-to-noise ratio in the blue part of the SDSS spectra near the observed Mg ii emission, which does not allow their use for a reliable prediction of $f_{\rm esc}$(LyC). 5. We find that galaxies with Mg${}_{2}$ $\geq$ 1.3 and an ionizing photon production efficiency $\xi_{\rm ion}$ greater than the value of 10${}^{25.2}$ erg${}^{-1}$ Hz used in modelling of the reionization of the Universe (e.g. Robertson et al., 2013) constitute $\sim$ 40 per cent of all compact star-forming galaxies at redshift $z$ $\geq$ 0.3 selected by Izotov et al. (2021c) from the Data Release 16 of the Sloan Digital Sky Survey. 6. A bright compact star-forming region superimposed on a low-surface-brightness component is seen in the COS near-ultraviolet (NUV) acquisition images of five galaxies (two images are missing due to technical problems). The surface brightness at the outskirts of our galaxies can be approximated by an exponential disc, with a scale length of $\sim$ 0.20 – 0.63 kpc. This is $\sim$ 4 times lower than the scale lengths of the LyC leakers observed by Izotov et al. (2016b, 2018a, 2018b), but is similar to that in the low-mass galaxies with $M_{\star}$ $<$ 10${}^{8}$ M${}_{\odot}$ of Izotov et al. (2021a). Part of this difference may be explained by acquisition exposure times that are $\sim$ 2 times shorter than those used by Izotov et al. (2016b, 2018a, 2018b), resulting in less deep images. 7. The star formation rates, in the range SFR $\sim$ 4 – 36 M${}_{\odot}$ yr${}^{-1}$, and the metallicities of our new galaxies, ranging from 12 + logO/H = 7.81 to 8.06, overlap with those of the LyC leakers studied by Izotov et al. (2016a, b, 2018a, 2018b, 2021a). Acknowledgements Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. Support for this work was provided by NASA through grant number HST-GO-15845 from the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. Y.I. and N.G. acknowledge support from the National Academy of Sciences of Ukraine through its priority project No. 0122U002259 “Fundamental properties of the matter and its manifestation in micro world, astrophysics and cosmology”. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. 
Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration. GALEX is a NASA mission managed by the Jet Propulsion Laboratory. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Data availability The data underlying this article will be shared on reasonable request to the corresponding author. References Ade et al. (2014) Ade P. A. R. et al., 2014, A&A, 571, A16 Aller (1984) Aller L. H., 1984, Physics of Thermal Gaseous Nebulae. Dordrecht: Reidel Bian et al. (2017) Bian F., Fan X., McGreer I., Cai Z., Jiang L., 2017, ApJ, 837, 12 Borthakur et al. (2014) Borthakur S., Heckman T. M., Leitherer C., Overzier R. A., 2014, Science, 346, 216 Bouwens et al. (2015) Bouwens R. J., Illingworth G. D., Oesch P. A., Caruana J., Holwerda B., Smit R., Wilkins S., 2015, ApJ, 811, 140 Bouwens et al. (2017) Bouwens R. J., Illingworth G. D., Oesch P. A., Atek H, Lam D, Stefanon M., 2017, ApJ, 843, 41 Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682 Cardamone et al. (2009) Cardamone C. et al., 2009, MNRAS, 399, 1191 Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245 Caruana et al. (2018) Caruana J. et al., 2018, MNRAS, 473, 30 Chisholm et al. (2017) Chisholm J., Orlitová I., Schaerer D., Verhamme A., Worseck G., Izotov Y. I., Thuan T. X., Guseva N. G., 2017, A&A, 605, A67 Chisholm et al. (2018) Chisholm J. et al., 2018, A&A, 616, 30 Chisholm et al. (2020) Chisholm J., Prochaska J. X., Schaerer D., Gazagnes S., Henry A., 2020, MNRAS, 498, 2554 Cowie et al. (2009) Cowie L. L., Barger A. J., Trouille L., 2009, ApJ, 692, 1476 Curtis-Lake et al. (2016) Curtis-Lake E. et al., 2016, MNRAS, 457, 440 de Barros et al. (2016) de Barros S. et al., 2016, A&A, 585, A51 de Barros et al. (2019) de Barros S., Oesch P. A., Labbé I., Stefanon M., González V., Smit R., Bouwens R. J., Illingworth G. D., 2019, MNRAS, 489, 2355 Dijkstra et al. (2016) Dijkstra M., Gronke M., Venkatesan A., 2016, ApJ, 828, 71 Dressler et al. (2015) Dressler A., Henry A., Martin C. L., Sawicki M., McCarthy P., Villaneuva E., 2015, ApJ, 806, 19 Endsley et al. (2021) Endsley R., Stark D. P., Chevallard J., Charlot S., 2021, MNRAS, 500, 5229 Erb et al. (2012) Erb D. K., Quider A. M., Henry A. L., Martin C. L., 2012, ApJ, 759, 26 Faisst (2016) Faisst A. L., 2016, ApJ, 829, 99 Finkelstein et al. (2019) Finkelstein S. L. et al., 2019, ApJ, 879, 36 Finley et al. (2017) Finley H. et al. 2017, A&A, 608, A7 Fletcher et al. (2019) Fletcher T. J., Tang M., Robertson B. E., Nakajima K., Ellis R. S., Stark D. P., Inoue A., 2019, ApJ, 878, 87 Flury et al. (2022a) Flury S. R. et al. 2022a, ApJS, 260, 1 Flury et al. (2022b) Flury S. R. et al. 2022b, ApJ, 930, 126 Gazagnes et al. (2018) Gazagnes S., Chisholm J., Schaerer D., Verhamme A., Rigby J. R., Bayliss M., 2018, A&A, 616, 29 Gazagnes et al. (2020) Gazagnes S., Chisholm J., Schaerer D., Verhamme A., Izotov Y., 2020, A&A, 639, 85 Girardi et al. (2000) Girardi L., Bressan A., Bertelli G., Chiosi C., 2000, A&AS, 141, 371 Grazian et al. (2016) Grazian A. et al., 2016, A&A, 585, A48 Gronke et al. (2021) Gronke M. et al., 2021, MNRAS, 508, 3697 Guseva et al. (2013) Guseva N. 
G., Izotov Y. I., Fricke K. J., Henkel C., 2013, A&A, 555, A90 Guseva et al. (2019) Guseva N. G., Izotov Y. I., Fricke K. J., Henkel C., 2019, A&A, 624, A21 Guseva et al. (2020) Guseva N. G. et al., 2020, MNRAS, 497, 4293 Henry et al. (2015) Henry A., Scarlata C., Martin C. S., Erb D., 2015, ApJ, 809, 19 Henry et al. (2018) Henry A., Berg D. A., Scarlata C., Verhamme A., Erb D., 2018, ApJ, 855, 96 Inoue et al. (2014) Inoue A. K., Shimizu I., Iwata I., Tanaka M., 2014, MNRAS, 442, 1805 Izotov et al. (1994) Izotov Y. I., Thuan T. X., Lipovetsky V. A., 1994, ApJ, 435, 647 Izotov et al. (2006) Izotov Y. I., Stasińska G., Meynet G., Guseva N. G., Thuan T. X., 2006, A&A, 448, 955 Izotov et al. (2011) Izotov Y. I., Guseva N. G., Thuan T. X., 2011, ApJ, 728, 161 Izotov et al. (2015) Izotov Y. I., Guseva N. G., Fricke K. J., Henkel C., 2015, MNRAS, 451, 2251 Izotov et al. (2016a) Izotov Y. I., Orlitová I., Schaerer D., Thuan T. X., Verhamme A., Guseva N. G., Worseck G., 2016a, Nature, 529, 178 Izotov et al. (2016b) Izotov Y. I., Schaerer D., Thuan, T. X., Worseck G., Guseva N. G., Orlitová I., Verhamme A., 2016b, MNRAS, 461, 3683 Izotov et al. (2017) Izotov Y. I., Guseva N. G., Fricke K. J., Henkel C., Schaerer D., 2017, MNRAS, 467, 4718 Izotov et al. (2018a) Izotov Y. I., Schaerer D., Worseck G., Guseva N. G., Thuan, T. X., Verhamme A., Orlitová I., Fricke K. J, 2018a, MNRAS, 474, 4514 Izotov et al. (2018b) Izotov Y. I., Worseck G., Schaerer D., Guseva N. G., Thuan, T. X., Fricke K. J, Verhamme A., Orlitová I., 2018b, MNRAS, 478, 4851 Izotov et al. (2018c) Izotov Y. I., Thuan T. X., Guseva N. G., Liss S. E., 2018c, MNRAS, 473, 1956 Izotov et al. (2020) Izotov Y. I., Schaerer D., Worseck G., Verhamme A., Guseva N. G., Thuan T. X., Orlitová I., Fricke K. J., 2020, MNRAS, 491, 468 Izotov et al. (2021a) Izotov Y. I., Worseck, G., Schaerer D., Guseva N. G., Chisholm J., Thuan T. X., Fricke K. J., Verhamme A., 2021a, MNRAS, 503, 1734 Izotov et al. (2021b) Izotov Y. I., Thuan T. X., Guseva N. G., 2021b, MNRAS, 504, 3996 Izotov et al. (2021c) Izotov Y. I., Guseva N. G., Fricke K. J., Henkel C., Schaerer D., Thuan T. X., 2021c, A&A, 646, A138 Jaskot & Oey (2013) Jaskot A. E., Oey M. S., 2013, ApJ, 766, 91 Jaskot & Oey (2014) Jaskot A. E., Oey M. S., 2014, ApJ, 791, L19 Katz et al. (2022) Katz H. et al., 2022, MNRAS, https://doi.org/10.1093/mnras/stac1437 Kennicutt (1998) Kennicutt R. C., Jr., 1998, ARA&A, 36, 189 Khaire et al. (2016) Khaire V., Srianand R., Choudhury T. R., Gaikwad P., 2016, MNRAS, 457, 4051 Kim et al. (2020) Kim K., Malhotra S., Rhoads J. E., Butler N. R., Yang H., 2020, ApJ, 893, 134 Kornei et al. (2013) Kornei K. A., Shapley A. E., Martin C. L., Coil A. L., Lotz J. M., Weiner B. J., 2013, ApJ, 774, 50 Kroupa (2001) Kroupa P., 2001, MNRAS, 322, 231 Labbé et al. (2013) Labbé I. et al., 2013, ApJ, 777, L19 Leitet et al. (2013) Leitet E., Bergvall N., Hayes M., Linné S., Zackrisson E., 2013, A&A, 553, A106 Leitherer et al. (2016) Leitherer C., Hernandez S., Lee J. C., Oey M. S., 2016, ApJ, 823, L64 Lejeune et al. (1997) Lejeune T., Buser R., Cuisinier F., 1997, A&AS, 125, 229 Lewis et al. (2020) Lewis J. S. W. et al., 2020, MNRAS, 496, 4342 Makan et al. (2021) Makan K., Worseck G., Davies F. B., Hennawi J. F., Prochaska J. X., Richter P., 2021, ApJ, 912, 38 Marchi et al. (2017) Marchi F. et al., 2017, A&A, 601, 73 Marchi et al. (2018) Marchi F. et al., 2018, A&A, 614, 11 Martin et al. (2012) Martin C. L., Shapley A. E., Coil A. L., Kornei K. A., Bundi K., Weiner B. J., Noeske K. 
G., Schiminovich D., 2012, ApJ, 760, 127 Mathis (1990) Mathis J. S., 1990, ARA&A, 28, 10 Matsuoka et al. (2018) Matsuoka Y. et al., 2018, ApJ, 869, 150 Meštric et al. (2020) Meštric U. et al., 2020, MNRAS, 494, 4986 Meyer et al. (2020) Meyer R. A. et al., 2020, 494, 1560 Mitra et al. (2013) Mitra S., Ferrara A., Choudhury T. R., 2013, MNRAS, 428, L1 Naidu et al. (2020) Naidu R. P., Tacchella S., Mason C. A., Bose S., Oesch P. A., Conroy C., 2020, ApJ, 892, 109 Naidu et al. (2022) Naidu R. P. et al., 2022, MNRAS, 510, 4582 Nakajima & Ouchi (2014) Nakajima K., Ouchi M., 2014, MNRAS, 442, 900 Nakajima et al. (2018) Nakajima K., Fletcher T., Ellis R. S., Robertson B. E., Iwata I., 2018, MNRAS, 477, 2098 Nakajima et al. (2020) Nakajima K., Ellis R. S., Robertson B. E., Tang M., Stark D. P., 2020, ApJ, 889, 161 Ouchi et al. (2009) Ouchi M. et al., 2009, ApJ, 706, 1136 Paulino-Afonso et al. (2018) Paulino-Afonso A. et al., 2018, MNRAS, 476, 5479 Pérez-Ràfols et al. (2015) Pérez-Ràfols I., Miralda-Escudé J., Lundgren B., Ge J., Petitjean P., Schneider D. P., York D. G., Weaver B. A., 2015, MNRAS, 447, 2784 Prochaska et al. (2011) Prochaska J. X., Kasen D., Rubin K., 2011, ApJ, 734, 24 Rivera-Thorsen et al. (2017) Rivera-Thorsen T. E. et al., 2017, A&A, 608, L4 Rivera-Thorsen et al. (2019) Rivera-Thorsen T. E. et al., 2019, Science, 366, 738 Robertson et al. (2013) Robertson B. E. et al., 2013, ApJ, 768, 71 Robertson et al. (2015) Robertson B. E., Ellis R. S., Furlanetto S. R., Dunlop J. S., 2015, ApJ, 802, L19 Saha et al. (2020) Saha K. et al., 2020, Nature Astronomy, 4, 1185 Saldana-Lopez et al. (2022) Saldana-Lopez A. et al., 2022, A&A, in press; preprint arXiv:2201.11800 Salpeter (1955) Salpeter E. E., 1955, ApJ, 121, 161 Schmutz et al. (1992) Schmutz W., Leitherer C., Gruenwald R., 1992, PASP, 104, 1164 Shapley et al. (2016) Shapley A. E., Steidel C. C., Strom A. L., Bogosavljević M., Reddy N. A., Siana B. Mostardi R. E., Rudie G. C., 2016, ApJ, 826, L24 Smit et al. (2014) Smit R. et al., 2014, ApJ, 784, 58 Stasińska et al. (2015) Stasińska G., Izotov Y., Morisset C., Guseva N., 2015, A&A, 576, A83 Steidel et al. (2018) Steidel C. C., Bogosavljević M., Shapley A.E., Reddy N. A., Rudie G. C., Pettini M., Trainor R. F., Strom A. L., 2018, ApJ, 869, 123 Thuan & Martin (1981) Thuan T. X., Martin G. E., 1981, ApJ, 247, 823 Trebitsch et al. (2017) Trebitsch M., Blaizot J., Rosdahl J., Devriendt J., Slyz A., 2017, MNRAS, 470, 224 Vanzella et al. (2010) Vanzella E. et al., 2010, ApJ, 725, 1011 Vanzella et al. (2012) Vanzella E. et al., 2012, ApJ, 751, 70 Vanzella et al. (2015) Vanzella E. et al., 2015, A&A, 576, A116 Vanzella et al. (2018) Vanzella E. et al., 2018, MNRAS, 476, L15 Vanzella et al. (2020) Vanzella E. et al., 2020, MNRAS, 491, 1093 Verhamme et al. (2015) Verhamme A., Orlitová I., Schaerer D., Hayes M., 2015, A&A, 578, A7 Verhamme et al. (2017) Verhamme A., Orlitová I., Schaerer D., Izotov Y., Worseck G., Thuan T. X., Guseva N., 2017, A&A, 597, A13 Vielfaure et al. (2020) Vielfaure J.-B., et al. 2020, A&A, 640, 30 Wang et al. (2021) Wang B. et al., 2021, ApJ, 916, 3 Weiner et al. (2009) Weiner B. J. et al., 2009, ApJ, 692, 187 Witstok et al. (2021) Witstok J., Smit R., Maiolino R., Curti M., Laporte N., Massey R., Richard J., Swinbank M., 2021, MNRAS, 508, 1686 Wise & Chen (2009) Wise J. H., Cen R., 2009, ApJ, 693, 984 Wise et al. (2014) Wise J. H., Demchenko V. G., Halicek M. T., Norman M. L., Turk M. J., Abel T., Smith B. D., 2014, MNRAS, 442, 2560 Worseck et al. 
(2016) Worseck G., Prochaska J. X., Hennawi J. F., McQuinn M., 2016, ApJ, 825, 144 Wright (2006) Wright E. L., 2006, PASP, 118, 1711 Xu et al. (2022) Xu X. et al., 2022, ApJ, in press; preprint arXiv:2205.11317 Yajima et al. (2011) Yajima H., Choi J.-H., Nagamine K., 2011, MNRAS, 412, 411 Yang et al. (2017a) Yang H. et al., 2017a, ApJ, 844, 171 Yang et al. (2017b) Yang H., Malhotra S., Rhoads J. E., Leitherer C., Wofford A., Jiang T., Wang J., 2017b, ApJ, 847, 38 Appendix A Apparent magnitudes Appendix B Emission line fluxes and chemical composition
Steady state solutions of hydrodynamic traffic models H. K. Lee,${}^{1}$ H.-W. Lee,${}^{2}$ and D. Kim${}^{1}$ ${}^{1}$School of Physics, Seoul National University, Seoul 151-747, Korea ${}^{2}$Department of Physics, Pohang University of Science and Technology, Pohang, Kyungbuk 790-784, Korea (December 8, 2020) Abstract We investigate steady state solutions of hydrodynamic traffic models in the absence of any intrinsic inhomogeneity on roads such as on-ramps. It is shown that typical hydrodynamic models possess seven different types of inhomogeneous steady state solutions. The seven solutions include those that have been reported previously only for microscopic models. The characteristic properties of the wide jam, such as the moving velocity of its spatiotemporal pattern and/or the out-flux from the wide jam, are shown to be uniquely determined and thus independent of the initial conditions of the dynamic evolution. Topological considerations suggest that all of the solutions should be common to a wide class of traffic models. The results are discussed in connection with the universality conjecture for traffic models. Also, the prevalence of the limit-cycle solution in a recent study of a microscopic model is explained in this approach. pacs: 05.45.-a, 89.40.-a, 45.70.Vn, 05.20.Dd I Introduction Vehicles on roads interact with each other, and various traffic phenomena can be regarded as collective behaviors of interacting vehicles Nagel92JP ; Kerner93PRE ; Bando95PRE . Analyses of highway traffic data revealed that there exist qualitatively different states of traffic flow Treiterer74Proceeding ; Kerner96PRE ; Neubert99PRE ; Treiber00PRE ; Lee00PRE . Transitions between different traffic states were also reported Kerner96PRE . Some empirical findings, such as the so-called synchronized traffic flow phase Kerner96PRE , have attracted considerable attention and ignited intense theoretical investigations of traffic flow. Comprehensive reviews can be found, for example, in Ref. Chowdhury00PR . Many traffic flow models have been put forward and analyzed Komatsu95PRE ; Nagatani97JPSJ ; Lee98PRL ; Helbing98PRL ; Helbing99PRL ; Lee99PRE ; Mitarai99JPSJ ; Knospe00JPA ; Tomer00PRL ; Nelson00PRE ; Lubashevsky00PRE ; Safanov02EP . Quite often, the analysis aims to find all possible steady states, or dynamic phases, of the models and to investigate their properties. A criterion for a good and reliable traffic model is close agreement between its steady states and the traffic states revealed by real traffic data analysis. Thus an important first step in the analysis of traffic flow models is to find all possible steady states. In this paper, we consider hydrodynamic models for traffic without bottlenecks such as ramps and present a systematic search for their steady state solutions that are time-independent in a properly moving reference frame, $$\rho(x,t)=\tilde{\rho}(x+v_{g}t),\ \ v(x,t)=\tilde{v}(x+v_{g}t),$$ (1) where $\rho$ and $v$ are the density and velocity fields, respectively, and $-v_{g}$ is the constant velocity of the moving reference frame with respect to the stationary reference frame. Two well-known steady solutions of this type are the free flow and traffic jam solutions Kerner94PRE ; Wada97CM ; Kerner97PRE . Surprisingly, we find that hydrodynamic models possess not only these two but also several other steady state solutions. 
Some of the newly recognized steady state solutions, including limit-cycle solutions, have been reported previously only for microscopic traffic models Mitarai99JPSJ ; Tomer00PRL ; Berg01PRE and not for hydrodynamic models, which led to the widespread view that free flow and traffic jam are the only possible steady state solutions of hydrodynamic models in the absence of bottlenecks such as ramps. Our result shows that such a view is incorrect and that the physics contained in hydrodynamic models may be much richer than previously recognized. In Sec. II, we first review the mapping to the single particle motion and introduce the concept of the flow diagram in the single particle phase space. It is demonstrated in Sec. III that the flow diagram can have various topologically different structures, which are directly linked to the existence of certain types of steady state solutions (Sec. IV). Section V discusses implications of our results. Section VI concludes the paper. II Mapping to single particle motion We consider a hydrodynamic model that consists of the following two equations, an equation for local vehicle number conservation, $${\partial\over{\partial t}}\rho(x,t)+{\partial\over{\partial x}}\left[\rho(x,t)v(x,t)\right]=0,$$ (2) and an equation of motion, $${{\partial v}\over{\partial t}}+v{{\partial v}\over{\partial x}}={\cal R}\left[V_{\rm op}\left(\rho^{-1}\right)-v\right]-{\cal A}{{\partial\rho}\over{\partial x}}+{\cal D}{{\partial^{2}v}\over{{\partial x}^{2}}},$$ (3) where $V_{\rm op}\left(\rho^{-1}\right)$ is the so-called optimal velocity function and the coefficients (${\cal R}$ for the relaxation term, ${\cal A}$ for the anticipation term, and ${\cal D}$ for the diffusion term) are positive definite and depend in general on the density and velocity fields, i.e., ${\cal R}(\rho,v),~{}{\cal A}(\rho,v)$, and ${\cal D}(\rho,v)>0$. To find steady state solutions of the type in Eq. (1), it is useful to map the problem onto a single particle motion problem by using the method in Refs. Kerner94PRE ; Kerner97PRE . For the mapping, one first integrates Eq. (2) once. The resulting constant of motion, $$q_{g}=\rho\cdot(v+v_{g}),$$ (4) relates the two dynamic fields $\rho$ and $v$, and can be used to reduce the number of independent dynamic fields from two to one (we choose $v$ in this work). Then Eq. (3) can be transformed into an ordinary differential equation for the single dynamic field $v$ that depends on a single parameter $z\equiv x+v_{g}t$, $${{d^{2}v}\over{dz^{2}}}+{\cal C}(v;v_{g},q_{g}){{dv}\over{dz}}+{\cal F}(v;v_{g},q_{g})=0,$$ (5) where $$\displaystyle{\cal C}(v;v_{g},q_{g})$$ $$\displaystyle\equiv$$ $$\displaystyle{1\over{\cal D}}\left[{{\cal A}q_{g}\over(v+v_{g})^{2}}-(v+v_{g})\right],$$ (6) $$\displaystyle{\cal F}(v;v_{g},q_{g})$$ $$\displaystyle\equiv$$ $$\displaystyle{{\cal R}\over{\cal D}}(V_{\rm op}-v).$$ Here the field $\rho$ in the arguments of ${\cal R},~{}{\cal A},~{}{\cal D}$, and $V_{\rm op}$ should be understood as $q_{g}/(v+v_{g})$. The search for steady state solutions then reduces to the analysis of Eq. (5) under the physically meaningful boundary condition that solutions remain bounded as $z\to\pm\infty$. To gain insight into the implications of Eq. (5), it is useful to make an analogy with the classical mechanics of a particle by regarding $z$ as a time variable and $v$ as the coordinate of a particle with unit mass moving in a one-dimensional system. 
Eq. (5) then describes the time evolution of a particle under the influence of a potential energy, $U(v;v_{g},q_{g})=\int^{v}dv^{\prime}{\cal F}(v^{\prime};v_{g},q_{g})$, and of a damping force with the coordinate-dependent damping coefficient ${\cal C}(v;v_{g},q_{g})$. For a physical choice of $V_{\rm op}(\rho^{-1})$, which decreases as $\rho$ increases, goes to zero for large $\rho$, and saturates at a finite value for small $\rho$, the potential energy $U$ becomes camelback-shaped for wide ranges of $v_{g}$ and $q_{g}$ (solid curve in Fig. 1). Thus this potential energy profile is of a very typical shape. Below we focus only on $U$ of this shape and ignore the possibility of more exotically shaped $U$’s, such as $U$’s with three peaks, since we do not know of any reason to expect such exotic possibilities. Furthermore, we will not address the trivial case occurring when the range of $v_{g}$ and $q_{g}$ allows fewer than two peaks in $U$. An unusual feature of this mechanical analogy is that the damping coefficient $\cal C$ is not necessarily positive; in those ranges of $v$ where $\cal C$ is negative, the particle may gain energy due to the damping. The possibility of negative damping is crucial for the existence of certain steady state solutions presented in the next section. Before we close this section, we remark that the stability of a solution in the single particle problem should not be identified with its stability in the real traffic problem. We demonstrate this point with the trivial $z$-independent solutions. For the camelback-shaped potential in Fig. 1, the extremal points of the potential correspond to solutions. Thus there are three $z$-independent solutions, $v=v_{i}$ ($i=0,1,2$ and $0\leq v_{1}<v_{0}<v_{2}$), where $v_{0}$ is the coordinate of the local minimum and $v_{1}$, $v_{2}$ are the coordinates of the two local maxima or saddle points. All $v_{i}$ satisfy $V_{\rm op}((v+v_{g})/q_{g})=v$ and thus depend on $v_{g}$ and $q_{g}$. These $z$-independent solutions correspond to the homogeneous traffic states $v(x,t)=v_{0,1,2}$. From the shape of the potential energy, it is clear that in the single particle problem the two solutions $v(z)=v_{1,2}$ are unstable with respect to small deviations and the other solution $v(z)=v_{0}$ is stable (if ${\cal C}$ is positive near $v_{0}$). For the real traffic problem [described by Eqs. (2) and (3)], however, the solutions $v(x,t)=v_{1,2}$ are (usually) stable with respect to small deviations, and $v(x,t)=v_{0}$ is linearly unstable. III Flow diagrams Besides the trivial static ($z$-independent) solutions, there also exist dynamic ($z$-dependent) solutions, some examples of which are shown in Fig. 1 (dotted, dashed, and dash-dotted lines). In the language of traffic flow, these dynamic solutions correspond to steady but inhomogeneous traffic flow states. For a systematic study of the dynamic solutions, it is useful to introduce a two-dimensional phase space $(v,w\equiv dv/dz)$, where each trajectory in the phase space corresponds to a dynamic solution. Finding all solutions for given $v_{g}$ and $q_{g}$ is then equivalent to constructing a flow diagram in the phase space for the given $v_{g}$ and $q_{g}$. To gain insight into the flow diagram structures, it is useful to examine the flow near the three fixed points $(v=v_{i},w=0)$. Figure 2 shows the flows near the three fixed points. The fixed points $(v_{1,2},0)$ are saddle points regardless of the sign of ${\cal C}$ near $v=v_{1,2}$, since the damping force alone [the second term in Eq. (5)] cannot reverse the direction of the particle motion (that is, the sign of $dv/dz$) even if the damping coefficient is negative. 
On the other hand, the fixed point $(v_{0},0)$ is a stable (unstable) fixed point of the flow diagram when ${\cal C}$ is positive (negative) near $v=v_{0}$ (${\cal C}$ is assumed to be positive in Fig. 2). Flows running out of or into the fixed points can be mutually interconnected, and the way they are interconnected will in general depend on the values of $v_{g}$ and $q_{g}$ and affect the structure of the flow diagram. To make our discussion concrete, we choose here a particular hydrodynamic model. We choose the coefficients in Eq. (3) as follows: $${\cal R}=\lambda,~{}~{}{\cal A}={{\lambda\eta}\over{2\rho^{3}}},~{}~{}{\cal D}={\lambda\over{6\rho^{2}}},$$ (7) where $\eta\equiv dV_{\rm op}/d(\rho^{-1})$. As demonstrated in Ref. Lee01PRE , this choice provides a macro-micro link between the hydrodynamic model [Eqs. (2) and (3)] and the microscopic optimal velocity model Bando95PRE , $$\ddot{y}_{n}=\lambda\left[V_{\rm op}(\Delta y_{n})-\dot{y}_{n}\right],$$ (8) where $y_{n}(t)$ represents the coordinate of the $n$-th vehicle on a one-dimensional road and $\Delta y_{n}$ is the distance to the preceding vehicle, $y_{n+1}-y_{n}$. We remark, however, that as far as steady state solutions are concerned, the choice (7) is just one particular option and most results presented below do not depend sensitively on it. Results that do depend on this choice will be stated as such. For the parameters, the values in Ref. Takaki98JPSJ are used: $\lambda=2$ (sec${}^{-1}$), $$V_{\rm op}(y)={v_{\rm mag}\over 2}\left[\tanh{2(y-y_{\rm ref})\over y_{\rm width}}+c_{\rm ref}\right],$$ (9) $v_{\rm mag}=33.6$ (m/sec), $y_{\rm ref}=25.0$ (m), $y_{\rm width}=23.3$ (m), and $c_{\rm ref}=0.913$. For this $c_{\rm ref}$, the maximum value of $V_{\rm op}$ is $[(1+0.913)/2]v_{\rm mag}$, which is slightly different from $v_{\rm mag}$. For this hydrodynamic model, the resulting flow diagram is shown for various values of $v_{g}$ and $q_{g}$ in Fig. 3. Note that depending on $v_{g}$ and $q_{g}$, the trajectories departing from the two saddle points $(v_{1,2},0)$ behave in different ways and thus the flow diagrams acquire topologically different structures. Since the structure of the flow diagram is closely linked to the character of the inhomogeneous steady state solutions, it is meaningful to divide the $(v_{g},q_{g})$ plane according to the flow diagram structures, as is done in Fig. 4. The $v_{g}$-$q_{g}$ plane is divided into six regions (regions I, II, $\cdots$, VI). The flow diagram within each region is labeled accordingly in Fig. 3. On the boundary between two neighboring regions (e.g. the boundary B${}_{\rm I,II}$ between regions I and II) the flow diagram acquires structures topologically different from those within the regions, and at the special point $(v_{g}^{*},q_{g}^{*})$, where all six boundaries join together, the flow diagram acquires a special structure different from all the others. The steady state solutions contained in the flow diagrams are presented in the next section. IV Steady state solutions Out of all the flow trajectories contained in the flow diagrams (Fig. 3), only those that remain bounded both for $z\rightarrow\infty$ and $z\rightarrow-\infty$ constitute physically meaningful steady state solutions. Below we focus on those bounded trajectories. 
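To illustrate how such flow diagrams can be generated in practice, the Python sketch below implements Eqs. (5)–(7) and (9) with the parameter values quoted above, locates the three fixed points by root finding and traces the branches leaving the two saddle points. It is a minimal sketch under stated assumptions, not the procedure used to produce Figs. 3 and 4: in particular, the pair $(v_{g},q_{g})=(5.0,\,0.6)$ is an illustrative guess chosen only so that three fixed points exist, not a value read off Fig. 4.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Parameters of Eq. (9) as quoted in the text (Takaki98JPSJ), lambda = 2 sec^-1
lam, v_mag, y_ref, y_wid, c_ref = 2.0, 33.6, 25.0, 23.3, 0.913
V_op = lambda y: 0.5 * v_mag * (np.tanh(2.0 * (y - y_ref) / y_wid) + c_ref)
eta  = lambda y: (v_mag / y_wid) / np.cosh(2.0 * (y - y_ref) / y_wid) ** 2   # dV_op/dy

def rhs(z, s, vg, qg):
    """Eq. (5) written as a first-order system for (v, w = dv/dz),
    with the coefficients (6) evaluated for the particular choice (7)."""
    v, w = s
    rho = qg / (v + vg)            # Eq. (4); requires v + vg > 0 along the trajectory
    y = 1.0 / rho                  # headway rho^-1
    A = lam * eta(y) / (2.0 * rho ** 3)
    D = lam / (6.0 * rho ** 2)
    C = (A * qg / (v + vg) ** 2 - (v + vg)) / D
    F = lam * (V_op(y) - v) / D
    return [w, -C * w - F]

def fixed_points(vg, qg, vmax=35.0, n=4000):
    """All roots of V_op((v+vg)/qg) = v, bracketed on a fine grid and refined."""
    vs = np.linspace(1e-3, vmax, n)
    g = V_op((vs + vg) / qg) - vs
    roots = []
    for a, b, ga, gb in zip(vs[:-1], vs[1:], g[:-1], g[1:]):
        if ga * gb < 0.0:
            roots.append(brentq(lambda v: V_op((v + vg) / qg) - v, a, b))
    return sorted(roots)

def stop(z, s, vg, qg):            # terminate runaway branches of the flow
    return min(s[0] + vg - 0.5, 60.0 - s[0])
stop.terminal = True

vg, qg = 5.0, 0.6                  # illustrative (assumed) values giving three fixed points
v1, v0, v2 = fixed_points(vg, qg)  # v1 < v0 < v2 as in the text (assumes three roots exist)
branches = []
for v_saddle, eps in [(v1, 1e-3), (v1, -1e-3), (v2, 1e-3), (v2, -1e-3)]:
    sol = solve_ivp(rhs, [0.0, 500.0], [v_saddle + eps, 0.0],
                    args=(vg, qg), events=stop, max_step=0.5)
    branches.append((sol.y[0], sol.y[1]))   # one branch leaving a saddle in the (v, w) plane
# Repeating this over a grid of (vg, qg) and inspecting how the saddle branches connect
# reproduces flow diagrams of the kind shown in Fig. 3 and the partitioning of Fig. 4.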
IV.1 Saddle–Minimum solution For each $(v_{g},q_{g})$ in regions I and II, there exists a single trajectory starting from the saddle point $(v_{2},0)$ and converging to the potential minimum point $(v_{0},0)$ [see Figs. 3(I,II)]. This trajectory represents the steady state solution in Fig. 5(a). Similarly, for each $(v_{g},q_{g})$ in regions II and III, there exists a single trajectory starting from the other saddle point $(v_{1},0)$ and converging to the potential minimum point $(v_{0},0)$ [see Figs. 3(II,III)]. The steady state solution for this trajectory is similar to that in Fig. 5(a) except that the spatial profile approaches $v_{1}$ instead of $v_{2}$ as $z\rightarrow-\infty$. We call this type of steady state solution a saddle–minimum solution. We remark that the oscillation near $v_{0}$ may or may not appear depending on the parameter choice, which can be easily understood in the particle analogy: if the particle motion near $v_{0}$ is underdamped/overdamped, the convergence to $v_{0}$ in the saddle–minimum solution is oscillatory/non-oscillatory. Expansion of Eq. (5) near $(v_{0},0)$ shows that the motion near $v_{0}$ is overdamped if $\lambda>2$ and underdamped if $\lambda<2$. A study Berg01PRE of the optimal velocity model revealed that the microscopic model possesses so-called oscillatory traveling wave solutions and a monotonic wave solution. These solutions are identical in character to the saddle–minimum solutions with underdamped and overdamped convergence to $v_{0}$, respectively. IV.2 Limit-cycle solution For each $(v_{g},q_{g})$ in region IV, V, or VI, there exists a single trajectory which encircles the potential minimum point $(v_{0},0)$ and forms a loop [see dashed curves in Figs. 3(IV,V,VI)]. In the language of nonlinear dynamics, this type of flow is usually referred to as a limit cycle. In the language of traffic flow, the limit cycle corresponds to a steady state solution with a periodic wave [Fig. 5(b)]. The “wavelength” of the limit-cycle solution increases and approaches infinity as $(v_{g},q_{g})$ approaches $(v_{g}^{*},q_{g}^{*})$. We remark that for the existence of the limit-cycle solution, it is crucial that ${\cal C}$ alternates in sign with $v$. If ${\cal C}$ were always positive (negative), the single particle motion analogy indicates that the total energy of the particle would monotonically decrease (increase) with $z$, and thus the trajectory would be attracted to (repelled away from) $v_{0}$, destroying the limit cycle. Note that since there is a limit-cycle solution for each $(v_{g},q_{g})$ within regions IV, V, and VI, there is an infinite number of limit-cycle solutions, each with a different $(v_{g},q_{g})$. This feature is very similar to the report of many stable nonhomogeneous states in a revised car-following model in Ref. Tomer00PRL . To our knowledge, it has not been realized previously that hydrodynamic models also possess infinitely many limit-cycle solutions. IV.3 Limit-cycle–Minimum solution The limit-cycle solutions are inevitably accompanied by still different types of solutions. According to the flow diagrams in Figs. 3(IV,V,VI), all trajectories inside the limit cycle approach the minimum point $(v_{0},0)$ as $z\rightarrow\infty$, and the limit cycle as $z\rightarrow-\infty$. We call this type of steady state solution a limit-cycle–minimum solution. Its profile is shown in Fig. 5(c). 
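Under the assumption that the chosen $(v_{g},q_{g})$ indeed falls in one of the regions IV–VI, the structure just described suggests a simple numerical way to expose the limit cycle: integrate Eq. (5) backward in $z$ from a point just off $(v_{0},0)$, so that the trajectory unwinds onto the closed orbit if one exists. The sketch below, reusing the rhs, fixed_points and stop helpers from the previous sketch, implements this crude check; the amplitude in $v$ and the spacing of successive $w=0$ crossings then estimate the wave amplitude and “wavelength” of the corresponding periodic traffic state of Fig. 5(b).

# Minimal sketch of a limit-cycle check; assumes rhs, fixed_points, stop, vg, qg
# from the previous sketch and that (vg, qg) lies in one of the regions IV-VI.
import numpy as np
from scipy.integrate import solve_ivp

v1, v0, v2 = fixed_points(vg, qg)
# Inside the cycle, trajectories approach (v0, 0) as z -> +inf and the limit cycle
# as z -> -inf, so backward integration from a point near (v0, 0) should wind onto
# the closed orbit if one exists for this (vg, qg).
sol = solve_ivp(rhs, [0.0, -800.0], [v0 + 0.2, 0.0],
                args=(vg, qg), events=stop, max_step=0.2)
v_tail, w_tail, z_tail = sol.y[0][-2000:], sol.y[1][-2000:], sol.t[-2000:]
up = np.where((w_tail[:-1] < 0.0) & (w_tail[1:] >= 0.0))[0]   # upward crossings of w = 0
# Crude acceptance test: several crossings and a tail that stayed between the saddles.
if len(up) > 2 and (v_tail.max() - v_tail.min()) < (v2 - v1):
    print("cycle amplitude in v :", v_tail.max() - v_tail.min())
    print("approx. wavelength   :", abs(np.mean(np.diff(z_tail[up]))))
else:
    print("no closed orbit found for this (vg, qg)")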
Since there are infinitely many limit-cycle solutions and the limit-cycle–minimum solution is possible whenever the limit cycle is possible, there also exist infinitely many limit-cycle–minimum solutions. This solution is closely related to an interesting phenomenon reported in previous studies of hydrodynamic models with on-ramps Helbing99PRL ; Lee99PRE : when a linearly unstable but convectively stable homogeneous flow is generated in the upstream part of an on-ramp, oscillatory flow is spontaneously generated from the homogeneous region and propagates in the upstream direction. The relation between this observation and the limit-cycle–minimum solution can be established by noting that the convectively stable homogeneous flow region corresponds to the trivial solution $v(z)=v_{0}$ and the spontaneously generated oscillatory flow to the limit-cycle solution. It is then clear that the phenomenon in Refs. Helbing99PRL ; Lee99PRE is nothing but a manifestation of the limit-cycle–minimum solution. Recent studies of a microscopic car-following model Mitarai99JPSJ also reported oscillatory flow growing out of a linearly unstable homogeneous region. IV.4 Limit-cycle–Saddle solution Each limit-cycle solution is accompanied by still another type of solution. For each $(v_{g},q_{g})$ in regions IV and V, there exists a single trajectory converging to the saddle point $(v_{1},0)$ as $z\rightarrow\infty$ and approaching the limit cycle as $z\rightarrow-\infty$ [Figs. 3(IV,V)]. Similarly, for each $(v_{g},q_{g})$ in regions V and VI, there exists a single trajectory converging to the other saddle point $(v_{2},0)$ as $z\rightarrow\infty$ and approaching the limit cycle as $z\rightarrow-\infty$ [Figs. 3(V,VI)]. We call this type of solution a limit-cycle–saddle solution. Its profile in $z$-space is shown in Fig. 5(d). Similarly to the limit-cycle–minimum solutions, there exist infinitely many limit-cycle–saddle solutions. IV.5 Hetero saddle–saddle solution Still other types of bounded trajectories appear on the boundary ${\mathbf{B}}_{ij}$ between regions $i$ and $j$. On the boundaries ${\mathbf{B}}_{\rm I,II}$ and ${\mathbf{B}}_{\rm IV,V}$, there is a trajectory running from the saddle point $(v_{1},0)$ and converging to the other saddle point $(v_{2},0)$ [Figs. 3(B${}_{\rm I,II}$),(B${}_{\rm IV,V}$)]. This trajectory amounts to the kink solution in Fig. 5(e). Also, on the boundaries ${\mathbf{B}}_{\rm II,III}$ and ${\mathbf{B}}_{\rm V,VI}$, there is a trajectory running from the second saddle point $(v_{2},0)$ and converging to the first saddle point $(v_{1},0)$ [Figs. 3(B${}_{\rm II,III}$),(B${}_{\rm V,VI}$)]. Since the roles of $(v_{1},0)$ and $(v_{2},0)$ have been swapped, this trajectory amounts to the anti-kink solution [not shown in Fig. 5]. In the language of nonlinear dynamics, such trajectories connecting two different fixed points are called heteroclinic orbits. In this paper, we name this type of solution a hetero saddle–saddle solution. To be more specific, we call the first (second) type of the hetero saddle–saddle solutions the “upper” (“lower”) hetero saddle–saddle solution, since the corresponding trajectories appear in the upper (lower) half of the flow diagram. Note that there exist infinitely many hetero saddle–saddle solutions since this solution is allowed for each point $(v_{g},q_{g})$ on the abovementioned boundaries. 
IV.6 Homo saddle–saddle solution For each $(v_{g},q_{g})$ on ${\mathbf{B}}_{\rm VI,I}$, there exists a trajectory starting from the saddle point $(v_{2},0)$ and returning back to the same saddle point [Fig. 3(B${}_{\rm VI,I}$)]. This trajectory represents the narrow cluster solution [Fig. 5(f)]. For each $(v_{g},q_{g})$ on ${\mathbf{B}}_{\rm III,IV}$, on the other hand, there exists a trajectory starting from the saddle point $(v_{1},0)$ and returning back to $(v_{1},0)$ [Fig. 3(B${}_{\rm III,IV}$)]. This trajectory represents the narrow anti-cluster solution (not shown in Fig. 5). In the language of nonlinear dynamics, such trajectories are called homoclinic orbits. In this paper, we will call solutions of this type homo saddle–saddle solutions. To be more specific, we will call the first (second) type of homo saddle–saddle solution the “right” (“left”) homo saddle–saddle solution, since the trajectories involve the saddle point in the right (left) half of the flow diagram. IV.7 Wide cluster solution Figure 4 shows that all six regions and six boundaries meet together at a single point $(v_{g}^{*},q_{g}^{*})$. The flow diagram at this point has a special structure [see Fig. 3$(v_{g}^{*},q_{g}^{*})$] that allows a smooth connection between flow diagrams of different topological structure in different regions or boundaries. This point is also special in the sense that the flow running out of the fixed point $(v_{1},0)$ reaches the other fixed point $(v_{2},0)$ and returns back to $(v_{1},0)$. The steady state solution corresponding to this flow is shown in Fig. 5(g). Note that since the particle dynamics becomes infinitely slow as a trajectory approaches the fixed points $(v_{1,2},0)$, the cluster size of the steady state solution is infinitely large. For this reason, we call this solution the wide cluster solution. The wide cluster solution is a limiting case of various solutions mentioned above: if $(v_{g}^{*},q_{g}^{*})$ is regarded as a limiting point of, for example, the boundary $\mathbf{B}_{\rm III,IV}$ or $\mathbf{B}_{\rm VI,I}$, the wide cluster solution is a homo saddle–saddle solution with an infinite cluster size. If $(v_{g}^{*},q_{g}^{*})$ is regarded as a limit point of the region IV, V, or VI, the wide cluster solution is a limit-cycle solution with an infinite period. Also, if $(v_{g}^{*},q_{g}^{*})$ is regarded as the point where the borderlines $\mathbf{B}_{\rm II,III}$ and $\mathbf{B}_{\rm I,II}$ join, the wide cluster solution is a combined object of the upper hetero saddle–saddle solution (kink) and the lower hetero saddle–saddle solution (anti-kink). The fact that the wide cluster solution is possible only at the single point $(v_{g}^{*},q_{g}^{*})$ implies that the wide cluster solution has so-called universal characteristics: when a given initial state of traffic evolves into the wide cluster solution following the real traffic dynamics [Eqs. (2) and (3)], characteristics such as the outflow and the speed of the final traffic state are independent of the initial state. Analyses Treiterer74Proceeding ; Kerner96PRE of empirical traffic data revealed that various characteristics of the wide jam are indeed universal. This empirical observation imposes a constraint on traffic models, and the universality of the characteristics has been tested for traffic models.
Although the universality has been verified for a number of traffic models Komatsu95PRE ; Kerner94PRE ; Wada97CM ; Helbing98PRL , the verification unfortunately has relied largely on repeated numerical simulations and thus the verification itself is also “empirical” in some sense. One exceptional case is the analysis in Ref. Kerner97PRE , where the singular perturbation theory is used. This analysis is however considerably model-dependent and thus it is not easy to perform the same analysis for other classes of traffic models. Contrarily, in our approach, the new discovery that the wide cluster solution can exist only at the single point $(v_{g}^{*},q_{g}^{*})$ is quite meaningful since it explains clearly and unambiguously why the characteristics of the wide cluster solution should be universal. We next address an interesting size dependence of the “wide cluster solution” reported in Ref. Kerner94PRE ; When numerical simulations are performed with periodic boundary conditions, the so-called universal characteristics are found Kerner94PRE ; Wada97CM to be not strictly universal but depend on the system size. This size dependence is not compatible with the statement of the universality given above. To resolve this conflict, we first note that periodic boundary conditions allow only those solutions whose periods are compatible with the imposed period. Then the wide cluster solution, whose period is infinite, can not be realized in such numerical simulations with finite system size, and the solutions, which are interpreted as the wide cluster solution in Ref. Kerner94PRE ; Wada97CM , are, strictly speaking, limit-cycle solutions whose periods are compatible with the imposed periodic boundary conditions. Then by recalling that there are infinitely many limit-cycle solutions, the size dependence is a very natural consequence and the conflict is resolved. We remark that the dependence however should become weaker as the period becomes longer since the limit cycle solutions approach the wide cluster solution as their period becomes longer. V Discussion V.1 Roles of topology: Generality As demonstrated in previous sections, each steady state solution exists only in restricted regions of the $v_{g}$-$q_{g}$ space where the flow diagram acquires certain topological structures. This relation between the steady state solutions and the topological structure of the flow diagram is not a mere coincidence; all steady state solutions presented in the preceding section are guaranteed to exist by the topologies of the flow diagrams. For example, when flows near the fixed point $(v_{0},0)$ is attracted towards $(v_{0},0)$ while flows running out of the other fixed points $(v_{1,2},0)$ are repelled away from $(v_{0},0)$, as in the regions (IV,V,VI), there should exist the limit-cycle solution in those regions (Poincaré-Bendixson theorem Jackson89Book ). This relation with the topology bears an interesting implication. As demonstrated in many branches of physics, physical objects, whose existence is closely related with a certain topological structure of systems, are not fragile and their existence does not depend on quantitative details of the systems. Vortices in type II superconductors are a well-known example. It is then expected that the steady state solutions presented in the preceding section are not specific to the particular model examined but common to many versions of hydrodynamic models. For example, our results are not sensitive to the values of the parameters in Eq. (9). 
The generality due to the topology can be argued in the following way as well. Let us consider an infinitesimal change in the traffic model [in Eqs. (7) and (9) for example]. At $(v_{g},q_{g})$ located sufficiently interior of a region in Fig. 4, the topological structure of the flow diagram will not be affected and thus all steady state solutions, which are originally allowed in the region, are still allowed for the modified traffic model. On the other hand, at $(v_{g},q_{g})$ located sufficiently close to a boundary in Fig. 4, the topological structure of the flow diagram may be modified to a new structure. However the only possible new structure is the one at just across the boundary. The net effect then amounts to a mere shift of the boundary. Thus as far as the topology is concerned, effects of the infinitesimal change in the traffic model is no more than shifting the boundaries of the six regions by infinitesimal amounts, and the existence of steady state solutions in each region is not affected. Here we remark that despite the shifts, all six boundaries should still meet at a single point, whose coordinate may be slightly different from the original $(v_{g}^{*},q_{g}^{*})$, though. Otherwise, continuous conversion of one topological structure to another near $(v_{g}^{*},q_{g}^{*})$ is not possible. Lastly we discuss briefly one special kind of modifications that shows negative-valued coefficient ${\cal C}$ [Eq. (6)] near the fixed point $(v_{0},0)$. In the hydrodynamic model proposed by Kerner et al. Kerner93PRE , ${\cal C}$ is indeed negative when the parameters in the model are set to the values suggested in Ref. Lee99PRE . For this case, the flow is repelled away from $(v_{0},0)$. Then the limit-cycle solution does not appear any more in the regions IV, V, VI but appears instead in the regions I, II, III. This shift is natural in view of the Pointcaré-Bendixson theorem. However this negative ${\cal C}$ does not affect the very existence of the six regions because these are determined according to the trajectories departing from the two saddle point $(v_{1,2},0)$ as mentioned already. Thus our approach focused on the bounded trajectories between the four limiting behaviors (one limit-cycle and three fixed points) is still valid. A more exotic kind of modifications are those, due to which ${\cal C}$ changes its sign multiple times with $v$ so that multiple limit-cycle solutions exist for given $v_{g}$ and $q_{g}$ in certain regions of parameter space $(v_{g},q_{g})$. However this possibility seems to be very unlikely since it requires considerable fluctuations of ${\cal A}$ [see Eq. (6)], which is unphysical. V.2 Implications on universality conjecture A conjecture has been put forward by Herrmann and Kerner Herrmann98PA ; many traffic models with different mathematical structures may belong to the same “universality” class in the sense that they predict same traffic phenomena. Although it is not clear yet to what extent the universality conjecture is valid, there are indications that there indeed exists close relationship between some traffic models. For example, macroscopic hydrodynamic models have been derived from a microscopic car-following model via certain approximation methods Berg00PRE ; Lee01PRE , and good agreement between two types of traffic models has been demonstrated via numerical simulations Helbing02MCM . In particular, an exact correspondence between two different types of microscopic models has been established Matsukidaira03PRL . 
We note in passing that the hydrodynamic model derived in Ref. Berg00PRE however has an instability that arises from short-wave length fluctuations that is not present in the original car-following model. Such instability disappears when the short-wave length fluctuations are properly regularized as in Ref. Lee01PRE . On the other hand, there exist reports which could not have been reconciled with the universality conjecture. For example, a recent study Tomer00PRL on a certain special type of car-following model reported the existence of many limit-cycle solutions (even without intrinsic inhomogeneities on roads such as on-ramps). To our knowledge, the limit-cycle solution had not been reported for any other traffic models and thus it had been inferred that the limit-cycle solution might be specific to the special model, in clear contrast with the universality conjecture. Our results however indicate that this inference is wrong and reopen a possibility that the special car-following model may also be closely related to other traffic models. An evidence for this will follow in the next subsection. By investigating topological structures of the flow diagrams, we found from a single traffic model seven inhomogeneous steady state solutions. Although many of these solutions were already reported by earlier studies, earlier reports were scattered over various different traffic models and thus it was not clear whether a certain solution is specific to certain traffic models or generic to a wide class of models. Our analysis in the preceding subsection indicates that the seven inhomogeneous steady state solutions are generic to a wide class of traffic models. Our results are thus valuable in view of the universality conjecture. V.3 Prevalence of limit-cycle solution in Ref. Tomer00PRL As remarked above, most studies of traffic models failed to capture the limit-cycle solution. On the other hand, in the special car-following model studied in Ref. Tomer00PRL , a wide class of initial conditions evolve to the limit-cycle solution. A question arises naturally: what is special about the model in Ref. Tomer00PRL ? The model is defined as follows, $$\ddot{y}_{n}=A\left(1-{\Delta y_{n}^{0}\over\Delta y_{n}}\right)-{Z^{2}(-% \Delta\dot{y}_{n})\over 2(\Delta y_{n}-\rho_{m}^{-1})}-kZ(\dot{y}_{n}-v_{\rm per% }),$$ (10) where $Z(x)=(x+|x|)/2$, $A$ is a sensitivity parameter, $\rho_{m}^{-1}$ is the minimal distance between consecutive vehicles, $v_{\rm per}$ is the permitted velocity, $k$ is a constant, and $\Delta y_{n}^{0}\equiv\dot{y}_{n}T+\rho_{m}^{-1}$. Here $T$ is the safety time gap. When interpreted in terms of the hydrodynamic model in Eq. (3), the first and third terms together define the effective optimal velocity $V_{\rm op}^{\rm eff}(\rho^{-1})$, which is shown in Fig. 6. The role of the second term is to strictly prevent the distance from being smaller than the minimum distance $\rho_{m}^{-1}$ by establishing additional strong deceleration when a vehicle is faster than its preceding one and their separation approaches $\rho_{m}^{-1}$. In view of the steady state analysis given in previous sections, the optimal velocity profile in Fig. 6 is very special: out of three solutions $v_{1},v_{2},v_{3}$ of $V_{\rm op}^{\rm eff}((v+v_{g})/q_{g})=v$, which amount to the extremal points of the potential $U^{\rm eff}$, $v_{1}$ is dynamically forbidden since the vehicle spacing $\rho_{1}^{-1}$ related to $v_{1}$ via Eq. 
(4) is shorter than $\rho_{m}^{-1}$, which is strongly prohibited by the second term in Eq. (10). Then steady state solutions, such as wide cluster solution, that assume the accessibility to $(v_{1},0)$ are not allowed any more and the number of possible solutions reduces to 5 [saddle-minimum, limit-cycle, limit-cycle–minimum, limit-cycle–saddle, and homo saddle-saddle solutions]. Further reduction occurs when the periodic boundary condition is imposed as in Ref. Tomer00PRL . Then homo saddle-saddle and limit-cycle solutions are the only possible solutions. Among these two, the former is possible only when the average density satisfies $\rho<(\rho_{m}^{-1}+Tv_{\rm per})^{-1}$ because the unique limiting behavior of that solution, of which density converges to $\rho_{2}$, will govern the average density and $\rho_{2}<\rho_{c}$. Thus for the density range of $(\rho_{m}^{-1}+Tv_{\rm per})^{-1}<\rho<\rho_{m}$, the limit-cycle solution is the only possible solution. Therefore the characteristics of the model in Ref. Tomer00PRL is explained within the framework of the hydrodynamic approach. V.4 Stability of solutions The rigorous stability analysis, whether each solution above is realized through the dynamics in Eqs. (2) and (3), goes beyond the scope of this work. In this section, we instead summarize what has been known and also discuss implications of existing results. The stability of the solution in Sec. IV G is well-established Kerner93PRE ; Kerner94PRE . For the solution in Sec. IV C, its relation with the oscillatory flow generated spontaneously out of a convectively stable homogeneous flow, which was reported in previous numerical simulations of hydrodynamic models with on-ramps Helbing99PRL ; Lee99PRE , seems to indicate that this solution can be stable. Also the solution in Sec. IV B have been maintained stably in our own numerical simulation of the hydrodynamic model that is derived from the following microscopic model via the mapping rule in Ref. Lee01PRE : $$\ddot{y}_{n}=\lambda\left[V_{\rm op}^{\rm eff}\left(\Delta y_{n}\right)-\dot{y% }_{n}\right]+k{{\Delta\dot{y}_{n}}\over{\Delta y_{n}-\rho_{m}^{-1}}}\ \Theta% \left(-\Delta\dot{y}_{n}\right),$$ (11) where $V_{\rm op}^{\rm eff}$ is the same one depicted in Fig. 6, $k$ is a constant, and $\Theta(x)$ is the Heavyside function (1 for $x>0$ and 0 for $x<0$). Note that the idea of the prevalence of the limit-cycle solution discussed in Sec. V C is simply reflected in Eq. (11). These observations suggest that at least some steady state solutions addressed in this work can be maintained stably. However more systematic analysis is necessary to clarify the issue of the stability. VI conclusion Hydrodynamic traffic models are investigated by mapping them to the problem of single particle motion. It is found that typical hydrodynamic models possess seven different types of inhomogeneous steady state solutions. Although these solutions were already reported by earlier studies, earlier reports were scattered over various different traffic models and it was not clear whether a certain solution is specific to certain traffic models only or generic to a wide class of models. Our result combined with the topology argument indicates that the seven inhomogeneous steady state solutions should be common to a wide class of traffic models. Also the origin of the universal characteristics for the wide cluster solution is clearly identified and the reason for the prevalence of the limit-cycle solution in a previous report Tomer00PRL is provided. 
Acknowledgments This work is supported by the Korea Research Foundation (KRF 2000-015-DP0138). References (1) K. Nagel and M. Schreckenberg, J. Phys. I 2, 2221 (1992); O. Biham, A. A. Middleton, and D. Levine, Phys. Rev. A 46, R6124 (1992). (2) B. S. Kerner and P. Konhäuser, Phys. Rev. E 48, R2335 (1993). (3) M. Bando, K. Hasebe, A. Nakayama, A. Shibata, and Y. Sugiyama, Phys. Rev. E 51, 1035 (1995). (4) I. Treiterer and J. A. Myers, in Proceedings of the 6th International Symposium on Transportation and Traffic Theory, edited by D. J. Buckley (Elsevier, New York, 1974); M. Koshi, M. Iwasaki, and I. Ohkura, in Proceedings of the Eighth International Symposium on Transportation and Traffic Flow, edited by V. F. Hurdle, E. Hauer, and G. N. Stewart (University of Toronto Press, Toronto, 1983). (5) B. S. Kerner and H. Rehborn, Phys. Rev. E 53, R4275 (1996); Phys. Rev. Lett. 79, 4030 (1997); B. S. Kerner, ibid. 81, 3797 (1998); J. Phys. A 33, L221 (2000); Phys. Rev. E 65, 046138 (2002). (6) L. Neubert, L. Santen, A. Schadschneider, and M. Schreckenberg, Phys. Rev. E 60, 6480 (1999); W. Knospe, L. Santen, A. Schadschneider, and M. Schreckenberg, ibid. 65, 056133 (2002). (7) M. Treiber, A. Hennecke, and D. Helbing, Phys. Rev. E 62, 1805 (2000). (8) H. Y. Lee, H.-W. Lee, and D. Kim, Phys. Rev. E 62, 4737 (2000). (9) D. Chowdhury, L. Santen, and A. Schadschneider, Phys. Rep. 329, 199 (2000); D. Helbing, Rev. Mod. Phys. 73, 1067 (2001). (10) T. S. Komatsu, S. I. Sasa, Phys. Rev. E 52, 5574 (1995). (11) T. Nagatani, J. Phys. Soc. Jpn. 66, 1928 (1997). (12) H. Y. Lee, H.-W. Lee, and D. Kim, Phys. Rev. Lett. 81, 1130 (1998). (13) D. Helbing and M. Treiber, Phys. Rev. Lett. 81, 3042 (1998); D. Helbing and M. Schreckenberg, Phys. Rev. E 59, R2505 (1999); V. Shvetsov and D. Helbing, ibid. 59, 6328 (1999). (14) D. Helbing, A. Hennecke, and M. Treiber, Phys. Rev. Lett. 82, 4360 (1999) (15) H. Y. Lee, H.-W. Lee, and D. Kim, Phys. Rev. E 59, 5101 (1999). (16) N. Mitarai and H. Nakanishi, J. Phys. Soc. Jpn. 68, 2475 (1999); Phys. Rev. Lett. 85, 1766 (2000); J. Phys. Soc. Jpn. 69, 3752 (2000). (17) W. Knospe, L. Santen, A. Schadschneider, and M. Schreckenberg, J. Phys. A: Math. Gen. 33, L477 (2000); Phys. Rev. E 65, 015101 (2002); J. Phys. A: Math. Gen. 35, 3369 (2002). (18) E. Tomer, L. Safonov, and S.Havlin, Phys. Rev. Lett. 84, 382 (2000). (19) P. Nelson, Phys. Rev. E 61, R6052 (2000). (20) I. A. Lubashevsky and R. Mahnke, Phys. Rev. E 62, 6082 (2000); I. Lubashevsky, S. Kalenkov, and R. Mahnke, ibid. 65, 036140 (2002); R. Kühne, R. Mahnke, I. Lubashevsky, and J. Kaupužs, ibid. 65, 066125 (2002); I. Lubashevsky, R. Mahnke, P. Wagner, and S. Kalenkov, ibid. 66, 016117 (2002). (21) L. A. Safonov, E. Tomer, V. V. Strygin, Y. Ashkenazy, S. Havlin, Europhys. Lett. 57, 151 (2002); E. Tomer, L. Safonov, N. Madar, and S. Havlin, Phys. Rev. E 65, 065101 (2002). (22) B. S. Kerner and P. Konhäuser, Phys. Rev. E 50, 54 (1994). (23) S. Wada and H. Hayakawa, J. Phys. Soc. Jpn. 67, 763 (1998). (24) B. S. Kerner, S. L. Klenov, and P. Konhäuser, Phys. Rev. E 56, 4200 (1997). (25) P. Berg and A. Woods, Phys. Rev. E 63, 036107 (2001). (26) H. K. Lee, H.-W. Lee, and D. Kim, Phys. Rev. E 64, 056126 (2001). (27) S.-I. Takaki, M. Kikuchi, Y. Sugiyama, and S. Yukawa, J. Phys. Soc. Jpn. 67, 2270 (1998). (28) E. A. Jackson, Perspectives of nonlinear dynamics (Cambridge University Press, Cambridge, England, 1989), Vol. 1. (29) M. Herrmann and B. S. Kerner, Physica A 255, 163 (1998). (30) P. Berg, A. Mason, and A. Woods, Phys. Rev. 
E 61, 1056 (2000). (31) D. Helbing, A. Hennecke, V. Shvetsov, and M. Treiber, Math. Comp. Modelling 35, 517 (2002). (32) J. Matsukidaira and K. Nishinari, Phys. Rev. Lett. 90, 088701 (2003).
Disentangling the magneto-optic Kerr effect of manganite epitaxial heterostructures Jörg Schöpf University of Cologne, II Physics Institute, Cologne, Germany    Paul H. M. van Loosdrecht University of Cologne, II Physics Institute, Cologne, Germany    Ionela Lindfors-Vrejoiu University of Cologne, II Physics Institute, Cologne, Germany Abstract Magneto-optic Kerr effect can probe the process of magnetization reversal in ferromagnetic thin films and thus be used as an alternative to magnetometry. Kerr effect is wavelength-dependent and the Kerr rotation can reverse sign, vanishing at particular wavelengths. We investigate epitaxial heterostructures of ferromagnetic manganite, La${}_{0.7}$Sr${}_{0.3}$Mn${}_{0.9}$Ru${}_{0.1}$O${}_{3}$, by polar Kerr effect and magnetometry. The manganite layers are separated by or interfaced with a layer of nickelate, NdNiO${}_{3}$. Kerr rotation hysteresis loops of trilayers, with two manganite layers of different thickness separated by a nickelate layer, have intriguing humplike features, when measured with light of 400 nm wavelength. By investigating additional reference samples we disentangle the contributions of the individual layers to the loops: we show that the humps originate from the opposite sense of the Kerr rotation of the two different ferromagnetic layers, combined with the additive behavior of the Kerr signal. ††preprint: AIP/123-QED I Introduction Magnetic properties of epitaxial ferromagnetic oxide heterostructures and multilayers are governed by the interplay between magnetocrystalline anisotropy, interface and shape anisotropy, and by the magnetic interlayer coupling.Bhattacharya and May (2014) In thin film systems these simultaneously affect the magnetic ordered phases as a function of temperature and applied magnetic field in an intricate way. Moreover, the evaluation of the magnetic interlayer coupling strength in such heterostructures is often problematic: one can assess it by performing minor magnetization hysteresis loops van der Heijden et al. (1997) or first order reversal curves (FORC) magnetometry or MOKE studies Gilbert et al. (2021); Gräfe et al. (2014). SQUID magnetometry of magnetic thin films is, however, often affected by spurious contributions from the bulk substrates.Wysocki et al. (2022) Magneto-optic Kerr effect (MOKE) measurements performed in reflection geometry can circumvent this difficulty. Concerning ferromagnetic oxide heterostructures, we investigated by SQUID magnetometry and MOKE the magnetic interlayer coupling between ferromagnetic SrRuO${}_{3}$ epitaxial layers with perpendicular magnetic anisotropy separated by various spacer layers, such as SrIrO${}_{3}$/SrZrO${}_{3}$ Wysocki et al. (2018), SrIrO${}_{3}$ Wysocki et al. (2022) or LaNiO${}_{3}$ Yang et al. (2021). The type of spacer layer and its physical properties influence strongly the strength of the coupling. In particular, a LaNiO${}_{3}$ spacer was a suitable choice for achieving strong ferromagnetic interlayer coupling. The coupling strength depended dramatically on the thickness of the LaNiO${}_{3}$, as the spacer undergoes a metal-insulator transition at about 4 pseudocubic monolayers thickness. It is thus interesting to study the interlayer coupling when the spacer layer has a metal-insulator transition (MIT) as a function of temperature, as it is the case for NdNiO${}_{3}$ (NNO), which as bulk has the MIT at about 200 K. Catalan, Bowman, and Gregg (2000); Catalano et al. 
(2018) For this purpose the ferromagnetic layers should have a Curie temperature (T${}_{C}$) significantly larger than 200 K, and therefore we chose La${}_{0.7}$Sr${}_{0.3}$Mn${}_{0.9}$Ru${}_{0.1}$O${}_{3}$ (LSMRO) layers (T${}_{C}$ is about 300 K).Wang et al. (2007); Nakamura et al. (2018) Additionally, epitaxial growth on (LaAlO${}_{3}$)${}_{0.3}$-(SrAl${}_{0.5}$Ta${}_{0.5}$O${}_{3}$)${}_{0.7}$ (LSAT(100)) substrates and Ru substitution endow the LSMRO layers with perpendicular magnetic anisotropy. Nakamura et al. (2018) The perpendicular magnetic anisotropy is of importance as our aim is to study the heterostructures by polar MOKE measurements, in which the magnetic field is applied perpendicular to the sample surface. First we need to understand the polar MOKE response of a heterostructure with two ferromagnetic layers that have individual magneto-optic properties, which can be strongly wavelength-dependent. In particular, we have to understand how the Kerr rotation hysteresis loops relate to each of the layers of the heterostructure and to their magnetization hysteresis loops measured by magnetometry. This will enable the comparison of minor and full loops, which can then be employed to assess the strength of the interlayer coupling van der Heijden et al. (1997). Therefore, here we focus on the polar MOKE response of trilayer samples with two LSMRO layers of different thickness separated by an NNO layer (see schematics in Fig. 1a). The Kerr loops of the trilayer measured with light of 400 nm wavelength have an unusual shape with symmetric humplike features, not shown by the magnetization loops obtained by magnetometry. We compare the Kerr rotation loops of the trilayer with the loops of reference samples (see Fig. 1a) and measure the wavelength dependence of the Kerr rotation of the trilayer and the reference samples. We demonstrate that the apparently anomalous behavior of the MOKE loops of the trilayer is the simple result of the wavelength dependence of the magneto-optic properties and of the additivity of the loops of the two different ferromagnetic layers. II Experimental results and discussion The layers of the heterostructures were made by pulsed-laser deposition (PLD). The as-received LSAT substrates of square shape and 4 mm size (CrysTec GmbH) were annealed at 1273 K for 2 hours in air prior to being used for PLD. Ceramic stoichiometric targets were used for laser ablation. The layers were grown with a laser fluence of 2.4 J/cm${}^{2}$ and a laser pulse repetition rate of 5 Hz for LSMRO and 2 Hz for NNO, at 923 K in 0.133 mbar O${}_{2}$ (LSMRO) and 0.3 mbar O${}_{2}$ (NNO). After growth the samples were cooled down in 100 mbar O${}_{2}$ at a rate of 10 K/min. A homemade setup was used for simultaneous polar MOKE and magnetotransport measurements. The MOKE measurements were done by employing a double modulation technique using an optical chopper and a photoelastic modulator (PEM100, Hinds Instruments). A Xe lamp was used as a light source in conjunction with a Jobin Yvon monochromator. Magnetotransport measurements were performed in a van der Pauw geometry, and electrical contacts were made by gluing copper wires with silver paint on the corners of the samples. SQUID magnetometry was performed with a commercially available SQUID magnetometer (MPMS-XL, Quantum Design Inc.).
II.1 SQUID magnetometry Figure 1 shows a summary of the SQUID magnetometry data (magnetization as a function of temperature and magnetic field, with the magnetic field applied perpendicular to the sample surface) for the three samples schematically depicted in Fig. 1a. The reference samples are used to mimic the top part and the bottom part of the sample of interest, the trilayer, and give us information about the magnetic properties, and later about the magneto-optic and magnetotransport properties, of the analogous parts of the trilayer. The field-cooled magnetization curves of Fig. 1b reveal a decrease of the Curie temperature of about 30 K for the 10 nm reference sample, for which the LSMRO layer is grown on top of the NNO layer. The 30 nm reference sample that mimics the bottom part of the trilayer, in which the 30 nm LSMRO layer grows directly on the LSAT substrate, has the same Curie temperature as the trilayer. This already hints that the epitaxial growth of LSMRO on top of NNO drastically influences its magnetic properties, most likely via interfacial structural accommodations that affect the structure of the thin LSMRO layer and result in important physical property changes. This is further reflected by the magnetization hysteresis loops plotted in Fig. 1c: the 10 nm reference sample has a massively increased coercive field at 10 K, while the coercive field of the 30 nm reference sample matches well the coercive field of the trilayer. We summed up the magnetization loops of the two references, and the comparison of this artificial loop to the loop of the trilayer is shown in the supplementary material (see Fig. S1): there is quite good agreement between the measured and the summed loops. We stress that the strongly slanted shape of the hysteresis loop of the 10 nm reference indicates a change of the magnetic anisotropy: the perpendicular direction (out-of-plane, OOP) is most likely not the easy axis for the 10 nm LSMRO layer grown on top of NNO. This also explains the shape of the hysteresis loop of the trilayer, where at higher fields a pronounced tail is observed before reaching saturation, corresponding to the gradual magnetization reversal in the top 10 nm LSMRO layer. We conclude that in the trilayer the two LSMRO layers have very different directions of the magnetization easy axis, predominantly OOP for the bottom 30 nm LSMRO and strongly tilted towards the in-plane direction for the 10 nm LSMRO, the latter being induced by the growth on the NNO spacer. This conclusion is supported by the magnetic properties of bare 10 nm and 30 nm thick LSMRO layers grown directly on LSAT substrates (with no interfaces to NNO): both show clear OOP magnetic anisotropy, with square hysteresis loops with large remanent magnetization and low coercive fields (see supplementary material, Fig. S2). II.2 Magneto-optic Kerr effect investigations The magneto-optic properties of the heterostructures were investigated by measuring the Kerr rotation hysteresis loops with a probe of 400 nm wavelength, at 10 K (Fig. 2). One can note that both the trilayer and the 30 nm reference have a negative sense of the Kerr rotation in saturation in positive applied magnetic fields, while the 10 nm reference shows an (opposite) positive sense of the Kerr rotation. The Kerr rotation loop of the trilayer (plotted in black in Fig.
2a and b) has a peculiar behavior: starting in saturation at +1 T at about -70 mdeg, the Kerr rotation stays constant down to an applied field of -100 mT, where a sharp reversal of the magnetization of the 30 nm bottom layer, with lower coercivity, occurs (in agreement with the magnetization hysteresis loops in Fig. 1). However, in contrast to the magnetization hysteresis loop of the trilayer, the Kerr rotation has a pronounced humplike feature, observed from -290 mT to -1 T, after which saturation occurs. Similar behavior was observed for the Kerr rotation loops of ferromagnetic SrRuO${}_{3}$ films with inhomogeneous strain distribution.Bartram et al. (2020) For the Kerr loops of the trilayer, the apparent “hump” results from the opposite sense of the Kerr rotation of the 10 nm LSMRO top layer with respect to the Kerr rotation of the 30 nm layer. The magnitude of the total magnetization starts to increase as the 10 nm LSMRO top layer reverses its magnetization to be (mostly) parallel to the magnetization of the 30 nm LSMRO bottom layer, but the total measured Kerr rotation of the trilayer decreases due to the opposite sense of the Kerr rotation of the different LSMRO layers in the trilayer. This can be readily proven by summing up the Kerr rotation loops of the 30 nm and 10 nm references and noting that $\theta_{\mathrm{total}}\approx\theta_{\mathrm{30nm\>reference}}+\theta_{\mathrm{10nm\>reference}}$, where $\theta$ denotes the measured Kerr rotation. Hemrle (2003) The summation of the Kerr rotation hysteresis loops qualitatively reproduces the measured Kerr rotation loop of the trilayer (see the magenta loop in Fig. 2a), although not perfectly. Slight differences are to be expected, for multiple reasons: for the two reference samples the overall thickness of the heterostructure is different from that of the trilayer; for the 10 nm reference the NNO layer is grown directly on top of the LSAT substrate, not on top of the 30 nm LSMRO as in the case of the trilayer, and thus the NNO spacer layer of the trilayer and the NNO bottom layer of the 10 nm reference may have different properties. To verify the difference in the sign of the Kerr rotation at 400 nm of the two reference samples, Kerr rotation spectra were measured between 400 nm and 600 nm (see Fig. 3) for the trilayer and reference samples at 10 K. For the trilayer and the 30 nm reference, a sign change of the Kerr rotation occurs around 450 nm, from negative at lower wavelengths to positive at higher wavelengths. This spectral behavior is consistent with previous studies of La${}_{2/3}$Sr${}_{1/3}$MnO${}_{3}$ on SrTiO${}_{3}$ and arises from magneto-optically active transitions: an intra-3d Mn crystal field transition around 460 nm and a charge-transfer transition from O 2p to Mn 3d e${}_{\mathrm{g}}$ states at 345 nm. The Kerr rotation of the 10 nm reference is positive at all wavelengths in the measurement range, which is consistent with the behavior of the measured Kerr rotation loops and indicates a change in the electronic structure of the 10 nm LSMRO film when grown epitaxially on NNO. The origin of this change in physical properties is most likely structural and requires further investigation. Spectra of 10 nm and 30 nm thick bare LSMRO films grown directly on LSAT and without NNO layers show behavior similar to that of the 30 nm reference and the trilayer, with the change of sign (see Fig. S3 in the supplementary material). We note that analysis of the ellipticity loops, measured simultaneously with the Kerr rotation loops (see Fig. S4 in the supplementary material), gives us further confidence in our conclusion concerning the origin of the hump features.
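The loop-addition argument above can be illustrated with a minimal numerical sketch. The two single-layer loops below are modelled as tanh-shaped descending branches with opposite Kerr-rotation signs; all amplitudes, coercive fields and switching widths are illustrative placeholders rather than values extracted from the measured loops.

```python
# Minimal sketch of the loop-addition argument theta_total ~ theta_30nm + theta_10nm.
# The two single-layer Kerr loops are modelled as tanh-shaped descending branches
# with opposite signs of the Kerr rotation; every amplitude, coercive field and
# switching width below is an illustrative placeholder, not a fitted value.
import numpy as np

H = np.linspace(1.0, -1.0, 2001)   # descending field branch, in tesla

def descending_branch(H, Hc, width, amplitude):
    """Descending branch of a hysteresis loop that switches around -Hc."""
    return amplitude * np.tanh((H + Hc) / width)

theta_30nm = descending_branch(H, Hc=0.10, width=0.01, amplitude=-60.0)  # mdeg, negative sense, sharp
theta_10nm = descending_branch(H, Hc=0.50, width=0.15, amplitude=+25.0)  # mdeg, positive sense, gradual
theta_total = theta_30nm + theta_10nm

# Between the two switching fields the 30 nm layer has already reversed while the
# 10 nm layer has not, so the oppositely signed contributions make the summed Kerr
# signal overshoot and then relax, i.e. a humplike feature appears, even though the
# total magnetization (a same-sign sum of the layer magnetizations) is monotonic.
print("Kerr rotation at +1 T              : %+.1f mdeg" % theta_total[0])
print("extremum after the first switching : %+.1f mdeg" % theta_total[H < -0.12].max())
print("Kerr rotation at -1 T (saturation) : %+.1f mdeg" % theta_total[-1])
```

The printed extremum exceeds the saturation value at -1 T, reproducing qualitatively the non-monotonic hump seen in the measured trilayer loop.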
III Summary In summary, we disentangled the behavior of the Kerr rotation loops of epitaxial trilayers with two ferromagnetic manganite layers separated by a nickelate spacer: the Kerr rotation loops measured at 400 nm wavelength showed intriguing humplike features in the magnetic field region where the top ferromagnetic layer reverses its magnetization. In order to unravel the origin of the humps, we compared the Kerr rotation loops with magnetization loops measured by SQUID magnetometry, and we saw that the hump of the Kerr loops corresponds to the tail of the SQUID loops before reaching saturation in high fields. We made reference samples that correspond to the lower part of the trilayer (with the 30 nm thick LSMRO and a top NNO) and to its upper part (with an NNO and a 10 nm thick LSMRO), and measured the SQUID and Kerr rotation loops of these two parts independently. This enabled us to probe the magnetic properties of the individual ferromagnetic layers of the trilayer and then understand the magnetization reversal in the trilayer: upon applying a perpendicular magnetic field, the 30 nm LSMRO bottom layer, which has predominantly OOP magnetic anisotropy and lower coercivity, reverses its magnetization first; the top 10 nm LSMRO, which has a more in-plane-tilted magnetic anisotropy and enhanced coercivity, reverses its magnetization at much larger fields, and its magnetization reversal results in the tail of the SQUID loop and in the hump of the Kerr loop. The hump is also the result of the different signs of the Kerr rotation of the two manganite layers at 400 nm wavelength: it is negative for the bottom layer and positive for the top layer. The change of sign and the magnetic anisotropy difference between the two manganite layers of the trilayer are consequences of the epitaxial growth of the 10 nm LSMRO on the nickelate spacer and require further structural and electronic structure investigations. Our findings stress how important the interfacial effects can be for the effective magnetic anisotropy and for the magneto-optic properties of epitaxial ferromagnetic oxide heterostructures. See the supplementary material for the magnetization and Kerr rotation loops and Kerr rotation spectra of bare 10 nm thick and 30 nm thick LSMRO films grown directly on LSAT substrates, and for a discussion of the Kerr ellipticity loops. Author declarations Conflict of Interest The authors have no conflicts to disclose. Author Contributions J. Schöpf performed the physical property measurements and data analyses and wrote the paper. P. van Loosdrecht participated in the MOKE data analyses and interpretation, and contributed to the paper writing. I. Lindfors-Vrejoiu conceived the research, made the samples, and wrote the paper. Acknowledgements. We wish to acknowledge the financial support from the German Research Foundation (DFG) within SFB1238, project A01 (Project No. 277146847). Data Availability Statement The data that support the findings of this study are available from the corresponding authors upon reasonable request. References Bhattacharya and May (2014) A. Bhattacharya and S. J. May, “Magnetic oxide heterostructures,” Annual Review of Materials Research 44, 65–90 (2014), https://doi.org/10.1146/annurev-matsci-070813-113447 . van der Heijden et al. (1997) P. A. A. van der Heijden, P. J. H. Bloemen, J. M. Metselaar, R. M. Wolf, J. M. Gaines, J. T. W. M. van Eemeren, P. J.
van der Zaag,  and W. J. M. de Jonge, “Interlayer coupling between fe3o4 layers separated by an insulating nonmagnetic mgo layer,” Phys. Rev. B 55, 11569–11575 (1997). Gilbert et al. (2021) D. A. Gilbert, P. D. Murray, J. De Rojas, R. K. Dumas, J. E. Davies,  and K. Liu, “Reconstructing phase-resolved hysteresis loops from first-order reversal curves,” Scientific Reports 11, 4018 (2021), https://doi.org/10.1063/5.0087098 . Gräfe et al. (2014) J. Gräfe, M. Schmidt, P. Audehm, G. Schütz,  and E. Goering, “Application of magneto-optical kerr effect to first-order reversal curve measurements,” Review of Scientific Instruments 85, 023901 (2014), https://doi.org/10.1063/1.4865135 . Wysocki et al. (2022) L. Wysocki, S. E. Ilse, L. Yang, E. Goering, F. Gunkel, R. Dittmann, P. H. M. van Loosdrecht,  and I. Lindfors-Vrejoiu, “Magnetic interlayer coupling between ferromagnetic srruo3 layers through a sriro3 spacer,” Journal of Applied Physics 131, 133902 (2022), https://doi.org/10.1063/5.0087098 . Wysocki et al. (2018) L. Wysocki, R. Mirzaaghayev, M. Ziese, L. Yang, J. Schöpf, R. B. Versteeg, A. Bliesener, J. Engelmayer, A. Kovács, L. Jin, F. Gunkel, R. Dittmann, P. H. M. van Loosdrecht,  and I. Lindfors-Vrejoiu, “Magnetic coupling of ferromagnetic srruo3 epitaxial layers separated by ultrathin non-magnetic srzro3/sriro3,” Applied Physics Letters 113, 192402 (2018), https://doi.org/10.1063/1.5050346 . Yang et al. (2021) L. Yang, L. Jin, L. Wysocki, J. Schöpf, D. Jansen, B. Das, L. Kornblum, P. H. M. van Loosdrecht,  and I. Lindfors-Vrejoiu, “Enhancing the ferromagnetic interlayer coupling between epitaxial ${\mathrm{srruo}}_{3}$ layers,” Phys. Rev. B 104, 064444 (2021). Catalan, Bowman, and Gregg (2000) G. Catalan, R. M. Bowman,  and J. M. Gregg, “Metal-insulator transitions in ndnio ${}_{3}$ thin films,” Phys. Rev. B 62, 7892–7900 (2000). Catalano et al. (2018) S. Catalano, M. Gibert, J. Fowlie, J. Íñiguez, J.-M. Triscone,  and J. Kreisel, “Rare-earth nickelates rnio3: thin films and heterostructures,” Reports on Progress in Physics 81, 046501 (2018). Wang et al. (2007) L.-M. Wang, J.-H. Lai, J.-I. Wu, Y.-K. Kuo,  and C. Chang, “Effects of Ru substitution for Mn on La${}_{0.7}$Sr${}_{0.3}$MnO${}_{3}$ perovskites,” Journal of Applied Physics 102, 023915 (2007). Nakamura et al. (2018) M. Nakamura, D. Morikawa, X. Yu, F. Kagawa, T. Arima, Y. Tokura,  and M. Kawasaki, “Emergence of topological Hall effect in half-metallic manganite thin films by tuning perpendicular magnetic anisotropy,” Journal of the Physical Society of Japan 87, 074704 (2018). Bartram et al. (2020) F. M. Bartram, S. Sorn, Z. Li, K. Hwangbo, S. Shen, F. Frontini, L. He, P. Yu, A. Paramekanti,  and L. Yang, “Anomalous kerr effect in ${\mathrm{srruo}}_{3}$ thin films,” Phys. Rev. B 102, 140408 (2020). Hemrle (2003) J. Hemrle, Determination of in-depth magnetization profile in magnetic multilayer structures by means of magneto-optics, PhD dissertation, Université Paris-Sud XI, Orsay, France and Charles University in Prague, Czech Republic (defence March 2003) (2003). Veis et al. (2014) M. Veis, M. Zahradnik, R. Antos, S. Visnovsky, P. Lecoeur, D. Esteve, S. Autier-Laurent, J. P. Renard,  and P. Beauvillain, “Interface effects and the evolution of ferromagnetism in La${}_{2/3}$Sr${}_{1/3}$MnO${}_{3}$ ultrathin films,” Science and Technology of Advanced Materials 15, 015001 (2014).
Solution of Maxwell’s equations on a de Sitter background Donato Bini Istituto per le Applicazioni del Calcolo “M. Picone,” CNR, Via del Policlinico 137, I-00161 Rome, Italy; ICRA, University of Rome “La Sapienza,” I–00185 Rome, Italy; Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, Polo Scientifico, Via Sansone 1, I–50019 Sesto Fiorentino (FI), Italy; email: binid@icra.it    Giampiero Esposito Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Complesso Universitario di Monte S. Angelo, Via Cintia, Edificio 6, 80126 Napoli, Italy; email: giampiero.esposito@na.infn.it    Roberto Valentino Montaquila Dipartimento di Scienze Fisiche, Università di Napoli Federico II, Complesso Universitario di Monte S. Angelo, Via Cintia, Edificio 6, 80126 Napoli, Italy; Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Complesso Universitario di Monte S. Angelo, Via Cintia, Edificio 6, 80126 Napoli, Italy; email: montaquila@na.infn.it (Received: date / Accepted: date / Version: December 6, 2020) Abstract The Maxwell equations for the electromagnetic potential, supplemented by the Lorenz gauge condition, are decoupled and solved exactly in de Sitter space-time studied in static spherical coordinates. There is no source besides the background. One component of the vector field is expressed, in its radial part, through the solution of a fourth-order ordinary differential equation obeying given initial conditions. The other components of the vector field are then found by acting with lower-order differential operators on the solution of the fourth-order equation (while the transverse part is decoupled and solved exactly from the beginning).
The whole four-vector potential is eventually expressed through hypergeometric functions and spherical harmonics. Its radial part is plotted for given choices of initial conditions. We have thus completely succeeded in solving the homogeneous vector wave equation for Maxwell theory in the Lorenz gauge when a de Sitter spacetime is considered, which is relevant both for inflationary cosmology and gravitational wave theory. The decoupling technique and analytic formulae and plots are completely original. This is an important step towards solving exactly the tensor wave equation in de Sitter space-time, which has important applications to the theory of gravitational waves about curved backgrounds. ††journal: General Relativity and Gravitation 1 Introduction It is by now well known that the problem of solving vector and tensor wave equations in curved spacetime, motivated by physical problems such as those occurring in gravitational wave theory and relativistic astrophysics, is in general a challenge even for modern computational resources. Within this framework, a striking problem is the coupled nature of the set of hyperbolic equations one arrives at. For example, on using the Maxwell action functional $$S=-{1\over 4}\int_{M}F_{ab}F^{ab}\sqrt{-g}\;d^{4}x$$ (1) jointly with the Lorenz PHMAA-34-287 gauge condition $$\nabla^{b}A_{b}=0,$$ (2) one gets, in vacuum, the coupled equations for the electromagnetic potential $$\left(-\delta_{a}^{\;b}\,\Box+R_{a}^{\;b}\right)A_{b}=0.$$ (3) It was necessary to wait until the mid-seventies to obtain a major breakthrough in the solution of coupled hyperbolic equations such as (3), thanks to the work of Cohen and Kegeles PHRVA-D10-1070 , who reduced the problem to the task of finding solutions of a complex scalar equation. Even on considering specific backgrounds such as de Sitter spacetime, only the Green functions of the wave operator have been obtained explicitly so far CMPHA-103-669 , to the best of our knowledge. Thus, in a recent paper 00436 , we have studied the vector and tensor wave equations in de Sitter space-time with static spherical coordinates, so that the line element reads as $$ds^{2}=-f\;dt^{2}+f^{-1}\;dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}),$$ (4) where $f\equiv 1-H^{2}r^{2}$, and $H$ is the Hubble constant related to the cosmological constant $\Lambda$ by $H^{2}={\Lambda\over 3}$. The vector field $X$ solving the vector wave equation can be expanded in spherical harmonics according to PHRVA $$X={\widetilde{Y}}_{lm}(\theta)e^{-i(\omega t-m\phi)}\Bigl[f_{0}(r)dt+f_{1}(r)dr\Bigr]+e^{-i(\omega t-m\phi)}\left[-{mr\over\sin\theta}f_{2}(r){\widetilde{Y}}_{lm}(\theta)+f_{3}(r){d{\widetilde{Y}}_{lm}\over d\theta}\right]d\theta+ie^{-i(\omega t-m\phi)}\left[-r\sin\theta f_{2}(r){d{\widetilde{Y}}_{lm}\over d\theta}+mf_{3}(r){\widetilde{Y}}_{lm}(\theta)\right]d\phi,$$ (5) where ${\widetilde{Y}}_{lm}(\theta)$ is the $\theta$-dependent part of the spherical harmonics $Y_{lm}(\theta,\phi)$. As we have shown in 00436 , the function $f_{2}$ is decoupled and obeys a differential equation solved by a combination of hypergeometric functions, i.e.
$$f_{2}(r)=f^{-i\Omega/2}\biggr{[}U_{1}r^{l}F\left(a_{-},a_{+};{3\over 2}+l;H^{2% }r^{2}\right)+U_{2}r^{-l-1}F\left(a_{+},a_{-};{1\over 2}-l;H^{2}r^{2}\right)% \biggr{]},$$ (6) where $$\Omega\equiv{\omega\over H},\;a_{\pm}\equiv-{1\over 4}\left({2i\Omega}-3-2l\pm 1% \right).$$ (7) At this stage, however, the problem remained of solving explicitly also for $f_{0}(r),f_{1}(r),f_{3}(r)$ in the expansion (5). For this purpose, Sec. II derives the decoupling procedure for such modes in de Sitter, and Sec. III writes explicitly the decoupled equations. Section IV solves explicitly for $f_{0},f_{1},f_{3}$ in terms of hypergeometric functions, while Sec. V plots such solutions for suitable initial conditions. Relevant details are presented in the Appendices. It now remains to be seen whether a technique similar to Secs. II and III can be used to solve completely also the tensor wave equation obtained in 00436 . This technical step would have far reaching consequences for the theory of gravitational waves in cosmological backgrounds, as is stressed in 00436 , and we hope to be able to perform it in a separate paper. 2 Coupled modes Unlike $f_{2}$, the functions $f_{0},f_{1}$ and $f_{3}$ obey instead a coupled set, given by Eqs. (54), (55), (57) of 00436 , which are here written, more conveniently, in matrix form as (our $L\equiv l(l+1)$, and we set $\epsilon=1$ in the Eqs. of 00436 , which corresponds to studying the vector wave equation (3)) $$\left(\begin{array}[]{ccc}P_{00}&A_{3}&0\\ f^{-2}A_{3}&P_{11}&r^{-2}f^{-1}LC_{3}\\ 0&C_{3}&P_{33}\end{array}\right)\,\left(\begin{array}[]{l}f_{0}\\ f_{1}\\ f_{3}\end{array}\right)=0,$$ (8) having defined $$A_{3}\equiv{2i\Omega H^{3}r\over f},$$ (9) $$C_{3}\equiv{2\over r},$$ (10) $$P_{00}\equiv{d^{2}\over dr^{2}}+Q_{1}{d\over dr}+Q_{2},$$ (11) $$P_{11}\equiv{d^{2}\over dr^{2}}+Q_{3}{d\over dr}+Q_{4},$$ (12) $$P_{33}\equiv{d^{2}\over dr^{2}}+Q_{5}{d\over dr}+Q_{6},$$ (13) $$Q_{1}\equiv C_{3}={2\over r},$$ (14) $$Q_{2}\equiv{\Omega^{2}H^{2}\over f^{2}}-{L\over r^{2}f},$$ (15) $$Q_{3}\equiv{6\over r}\left(1-{2\over 3}{1\over f}\right),$$ (16) $$Q_{4}\equiv{\Omega^{2}H^{2}\over f^{2}}-\left(4H^{2}+{(L+2)\over r^{2}}\right)% {1\over f},$$ (17) $$Q_{5}\equiv{2\over r}\left(1-{1\over f}\right),$$ (18) $$Q_{6}\equiv{\Omega^{2}H^{2}\over f^{2}}-{L\over r^{2}}{1\over f},$$ (19) where we have corrected misprints on the right-hand side of Eqs. (54) and (55) of 00436 . With our notation, the three equations resulting from (8) can be written as $$\displaystyle P_{00}f_{0}$$ $$\displaystyle=$$ $$\displaystyle-A_{3}f_{1},$$ (20) $$\displaystyle P_{11}f_{1}$$ $$\displaystyle=$$ $$\displaystyle-{A_{3}\over f^{2}}f_{0}-{LC_{3}\over r^{2}f}f_{3},$$ (21) $$\displaystyle P_{33}f_{3}$$ $$\displaystyle=$$ $$\displaystyle-C_{3}f_{1}.$$ (22) 3 Decoupled equations We now express $f_{1}$ from Eq. (20) and we insert it into Eq. (21), i.e. $$P_{11}\left(-{1\over A_{3}}P_{00}f_{0}\right)=-{A_{3}\over f^{2}}f_{0}-{LC_{3}% \over r^{2}f}f_{3}.$$ (23) Next, we exploit the Lorenz gauge condition (2), i.e. 00436 $$Lf_{3}=r^{2}f{d\over dr}\left(-{1\over A_{3}}P_{00}f_{0}\right)-2r(1-2f)\left(% -{1\over A_{3}}P_{00}f_{0}\right)+i{\Omega Hr^{2}\over f}f_{0},$$ (24) and from Eqs. 
(23) and (24) we obtain, on defining the new independent variable $x=rH$, the following fourth-order equation for $f_{0}$: $$\left[{d^{4}\over dx^{4}}+B_{3}(x){d^{3}\over dx^{3}}+B_{2}(x){d^{2}\over dx^{% 2}}+B_{1}(x){d\over dx}+B_{0}(x)\right]f_{0}(x)=0,$$ (25) where $$B_{0}(x)\equiv{b_{0}(x)\over x^{4}(x^{2}-1)^{4}},$$ (26) $$b_{0}(x)\equiv L(L-2)+2L(2-L-\Omega^{2})x^{2}+\Bigr{[}\Omega^{4}+4\Omega^{2}+L% (L+2(\Omega^{2}-1))\Bigr{]}x^{4},$$ (27) $$B_{1}(x)\equiv{4(\Omega^{2}+L-2+6x^{2})\over x(x^{2}-1)^{2}},$$ (28) $$B_{2}(x)\equiv{2\Bigr{[}-L+(\Omega^{2}+L-14)x^{2}+18x^{4}\Bigr{]}\over x^{2}(x% ^{2}-1)^{2}},$$ (29) $$B_{3}(x)\equiv{4(-1+3x^{2})\over x(x^{2}-1)}.$$ (30) Eventually, $f_{1}$ and $F_{3}\equiv Hf_{3}$ can be obtained from Eqs. (20) and (24), i.e. $$f_{1}(x)={i\over 2\Omega}{(1-x^{2})\over x}\left({d^{2}\over dx^{2}}+{2\over x% }{d\over dx}+{\Omega^{2}\over(1-x^{2})^{2}}-{L\over x^{2}(1-x^{2})}\right)f_{0% }(x),$$ (31) $$LF_{3}(x)=\left[x^{2}(1-x^{2}){d\over dx}-2x(2x^{2}-1)\right]f_{1}(x)+i\Omega{% x^{2}\over(1-x^{2})}f_{0}(x).$$ (32) Our $f_{1}$ and $f_{3}$ are purely imaginary, which means we are eventually going to take their imaginary part only. Moreover, as a consistency check, Eqs. (31) and (32) have been found to agree with Eq. (22), i.e. (22) is then identically satisfied. 4 Exact solutions Equation (25) has four linearly independent integrals, so that its general solution involves four coefficients of linear combination $C_{1},C_{2},C_{3},C_{4}$, according to (hereafter, $F$ is the hypergeometric function already used in (6)) $$\displaystyle f_{0}(x)$$ $$\displaystyle=$$ $$\displaystyle C_{1}x^{-1-l}(1-x^{2})^{-{i\over 2}\Omega}F\left(-{i\over 2}% \Omega-{l\over 2},-{i\over 2}\Omega+{1\over 2}-{l\over 2};{1\over 2}-l;x^{2}\right)$$ (33) $$\displaystyle+$$ $$\displaystyle C_{2}x^{-1-l}(1-x^{2})^{-{i\over 2}\Omega}F\left(-{i\over 2}% \Omega+1-{l\over 2},-{i\over 2}\Omega-{1\over 2}-{l\over 2};{1\over 2}-l;x^{2}\right)$$ $$\displaystyle+$$ $$\displaystyle C_{3}x^{l}(1-x^{2})^{-{i\over 2}\Omega}F\left(-{i\over 2}\Omega+% {l\over 2},-{i\over 2}\Omega+{3\over 2}+{l\over 2};{3\over 2}+l;x^{2}\right)$$ $$\displaystyle+$$ $$\displaystyle C_{4}x^{l}(1-x^{2})^{-{i\over 2}\Omega}F\left(-{i\over 2}\Omega+% 1+{l\over 2},-{i\over 2}\Omega+{1\over 2}+{l\over 2};{3\over 2}+l;x^{2}\right).$$ Regularity at the origin ($x=0$ should be included, and we recall that the event horizon for an observer situated at $x=0$ is given by $x=1$ BOUCHER ) implies that $C_{1}=C_{2}=0$, and hence, on defining $$a_{1}\equiv-{i\over 2}\Omega+{l\over 2},\;b_{1}\equiv-{i\over 2}\Omega+{3\over 2% }+{l\over 2},\;d_{1}\equiv{3\over 2}+l,$$ (34) we now re-express the regular solution in the form (the points $x=0,1$ being regular singular points of the equation (25) satisfied by $f_{0}$) $$f_{0}(x)=x^{l}(1-x^{2})^{-{i\over 2}\Omega}\Bigr{[}C_{3}F(a_{1},b_{1};d_{1};x^% {2})+C_{4}F(a_{1}+1,b_{1}-1;d_{1};x^{2})\Bigr{]},$$ (35) where the second term on the right-hand side of (35) can be obtained from the first through the replacements $$C_{3}\rightarrow C_{4},\;a_{1}\rightarrow a_{1}+1,\;b_{1}\rightarrow b_{1}-1,$$ and the series expressing the two hypergeometric functions are conditionally convergent, because they satisfy ${\it Re}(c-a-b)=i\Omega$, with $$a=a_{1},a_{1}+1;\;b=b_{1},b_{1}-1;\;c=d_{1}.$$ Last, we exploit the identity $${d\over dz}F(a,b;c;z)={ab\over c}F(a+1,b+1;c+1;z)$$ (36) to find, in the formula (31) for $f_{1}(x)$, $$\displaystyle{d\over dx}f_{0}(x)=C_{3}\biggr{\{}x^{l-1}(1-x^{2})^{-{i\over 2}% 
\Omega-1}\Bigr{[}l(1-x^{2})+i\Omega x^{2}\Bigr{]}F(a_{1},b_{1};d_{1};x^{2})$$ (37) $$\displaystyle+$$ $$\displaystyle{2a_{1}b_{1}\over d_{1}}x^{l+1}(1-x^{2})^{-{i\over 2}\Omega}F(a_{% 1}+1,b_{1}+1;d_{1}+1;x^{2})\biggr{\}}$$ $$\displaystyle+$$ $$\displaystyle\Bigr{\{}C_{3}\rightarrow C_{4},\;a_{1}\rightarrow a_{1}+1,\;b_{1% }\rightarrow b_{1}-1\Bigr{\}}.$$ It is then straightforward, although tedious, to obtain the second derivative of $f_{0}$ (see Eq. (38) of Appendix A) in the equation for $f_{1}$, and the third derivative of $f_{0}$ in the formula (32) for $Hf_{3}$. The results are exploited to plot the solutions in Sec. V. In general, for given initial conditions at $\alpha\in[0,1[$, one can evaluate $C_{3}$ and $C_{4}$ from $$f_{0}(\alpha)=\beta,\;f_{0}^{\prime}(\alpha)=\gamma,$$ i.e. $C_{3}=C_{3}(\beta,\gamma),C_{4}=C_{4}(\beta,\gamma)$. 5 Plot of the solutions To plot the solutions, we begin with $f_{0}$ as given by (35), which is real-valued despite the many $i$ factors occurring therein. Figures 1 to 3 describe the solutions for the two choices $C_{3}=0,C_{4}=1$ or the other way around, and various values of $l$ and $\Omega$. We next plot $f_{1}/i$ and $F_{3}/i\equiv Hf_{3}/i$ by relying upon (31) and (32). As far as we can see, all solutions blow up at the event horizon, corresponding to $x=1$, since there are no static solutions of the wave equation which are regular inside and on the event horizon other than the constant one BOUCHER . Appendix A Derivatives of $f_{0}$ The higher-order derivatives of $f_{0}$ in sections III and IV get increasingly cumbersome, but for completeness we write hereafter the result for $f_{0}^{\prime\prime}(x)$, i.e. $$\displaystyle{d^{2}\over dx^{2}}f_{0}(x)$$ $$\displaystyle=$$ $$\displaystyle C_{3}\biggr{\{}{4a_{1}(a_{1}+1)b_{1}(b_{1}+1)\over d_{1}(d_{1}+1% )}x^{l+1}(1-x^{2})^{-{i\over 2}\Omega}F(a_{1}+2,b_{1}+2;d_{1}+2;x^{2})$$ (38) $$\displaystyle+$$ $$\displaystyle x^{l-1}(1-x^{2})^{-{i\over 2}\Omega-1}{2a_{1}b_{1}\over d_{1}}x$$ $$\displaystyle\times$$ $$\displaystyle[(2l+1)(1-x^{2})+2i\Omega x^{2}]F(a_{1}+1,b_{1}+1;d_{1}+1;x^{2})$$ $$\displaystyle+$$ $$\displaystyle x^{l-2}(1-x^{2})^{-{i\over 2}\Omega-2}\biggr{[}l(l-1)(x^{2}-1)^{% 2}-{i\Omega\over 2}(x^{2}-1)(lx+2(l+1)x^{2})$$ $$\displaystyle+$$ $$\displaystyle(2i\Omega-\Omega^{2})x^{4}\biggr{]}F(a_{1},b_{1};d_{1};x^{2})% \biggr{\}}$$ $$\displaystyle+$$ $$\displaystyle\biggr{\{}C_{3}\rightarrow C_{4},\;a_{1}\rightarrow a_{1}+1,\;b_{% 1}\rightarrow b_{1}-1\biggr{\}}.$$ Appendix B Special cases: $l=0$ and $l=1$ We list here the main equations for $l=0,1$ for completeness. These equations do not add too much to the above discussion and hence we have decided to include them in the appendix. B.1 The case $l=0,m=0$ In this case we have $Y(\theta)=(4\pi)^{-1/2}=$ constant and the only surviving functions are $f_{0}$ and $f_{1}$. The main equations in 00436 reduce then to (recalling that our Eq. (3) requires setting $\epsilon=1$ in 00436 ) $$\frac{d^{2}f_{0}}{dx^{2}}=-\frac{2}{x}\frac{df_{0}}{dx}-\frac{\Omega^{2}}{(x^{% 2}-1)^{2}}f_{0}+\frac{2i\Omega x}{(x^{2}-1)}f_{1}$$ (39) and the Lorenz gauge condition (24) which now becomes $$\Omega f_{0}=i(x^{2}-1)^{2}\frac{df_{1}}{dx}+\frac{2i(x^{2}-1)(2x^{2}-1)}{x}f_% {1}\,.$$ (40) These equations can be easily separated and explicitly solved in terms of hypergeometric functions. The details are not very illuminating and are therefore omitted. 
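Before turning to the $l=1$ case, we note that the closed form (35) and the reality of $f_{0}$ claimed in Sec. 5 are easy to check numerically. The following short Python sketch sums the Gauss hypergeometric series directly (convergent for $x^{2}<1$) for sample values of $l$, $\Omega$, $C_{3}$, $C_{4}$; these particular parameter values are illustrative only and are not taken from the figures.

```python
# Direct series evaluation of the regular solution (35),
#   f0(x) = x^l (1 - x^2)^(-i*Omega/2) [ C3 F(a1, b1; d1; x^2) + C4 F(a1+1, b1-1; d1; x^2) ],
# with a1 = -i*Omega/2 + l/2, b1 = -i*Omega/2 + 3/2 + l/2, d1 = 3/2 + l.
# The hypergeometric series is summed term by term (convergent for x^2 < 1).
# The sample values of l, Omega, C3, C4 below are illustrative choices only.
import numpy as np

def hyp2f1_series(a, b, c, z, nterms=3000):
    """Gauss series  sum_n (a)_n (b)_n / ((c)_n n!) z^n  for |z| < 1."""
    term = 1.0 + 0.0j
    total = 1.0 + 0.0j
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
        total += term
    return total

def f0(x, l, Omega, C3, C4):
    a1 = -0.5j * Omega + 0.5 * l
    b1 = -0.5j * Omega + 1.5 + 0.5 * l
    d1 = 1.5 + l
    prefactor = x**l * (1.0 - x**2) ** (-0.5j * Omega)
    return prefactor * (C3 * hyp2f1_series(a1, b1, d1, x**2)
                        + C4 * hyp2f1_series(a1 + 1.0, b1 - 1.0, d1, x**2))

l, Omega = 2, 1.0                      # sample multipole and frequency
xs = np.linspace(0.05, 0.90, 5)        # stay inside the horizon x = 1
vals = np.array([f0(x, l, Omega, C3=0.0, C4=1.0) for x in xs])
print("max |Im f0| on the grid:", np.max(np.abs(vals.imag)))   # ~ 1e-15: f0 is real-valued
print("f0(x) samples          :", np.round(vals.real, 6))
```

The same routine, combined with Eq. (31) (for example through finite differences of $f_{0}$), can be used to reproduce the behavior of $f_{1}$ and $F_{3}$ discussed in Sec. 5.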
B.2 The case $l=1,m=0,1$ In this case we have $$\displaystyle Y$$ $$\displaystyle=$$ $$\displaystyle\sqrt{\frac{3}{4\pi}}\cos\theta\,\quad l=1,m=0$$ $$\displaystyle Y$$ $$\displaystyle=$$ $$\displaystyle-\sqrt{\frac{3}{8\pi}}\sin\theta\,\quad l=1,m=1\ .$$ (41) However, by virtue of the spherical symmetry of the background, the equations for both cases $l=1,m=0$ and $l=1,m=1$ do coincide. We have $$\displaystyle\frac{d^{2}f_{0}}{dx^{2}}$$ $$\displaystyle=$$ $$\displaystyle-\frac{2}{x}\frac{df_{0}}{dx}-\frac{(\Omega^{2}x^{2}+2x^{2}-2)}{x% ^{2}(x^{2}-1)^{2}}f_{0}+\frac{2i\Omega x}{(x^{2}-1)}f_{1},$$ $$\displaystyle\frac{d^{2}f_{1}}{dx^{2}}$$ $$\displaystyle=$$ $$\displaystyle-\frac{2(3x^{2}-1)}{x(x^{2}-1)}\frac{df_{1}}{dx}-\frac{(\Omega^{2% }x^{2}+4x^{4}-4)}{x^{2}(x^{2}-1)^{2}}f_{1}+\frac{2i\Omega x}{(x^{2}-1)^{3}}f_{0}$$ $$\displaystyle+\frac{4F_{3}}{x^{3}(x^{2}-1)},$$ $$\displaystyle\frac{d^{2}f_{2}}{dx^{2}}$$ $$\displaystyle=$$ $$\displaystyle-\frac{2(2x^{2}-1)}{x(x^{2}-1)}\frac{df_{2}}{dx}-\frac{(\Omega^{2% }x^{2}+2x^{4}-2)}{x^{2}(x^{2}-1)^{2}}f_{2},$$ $$\displaystyle\frac{d^{2}F_{3}}{dx^{2}}$$ $$\displaystyle=$$ $$\displaystyle-\frac{2x}{(x^{2}-1)}\frac{dF_{3}}{dx}-\frac{(\Omega^{2}x^{2}+2x^% {2}-2)}{x^{2}(x^{2}-1)^{2}}F_{3}-\frac{2}{x}f_{1}.$$ (42) To this set one has to add the Lorenz gauge condition (24), which now reads $$\Omega f_{0}=i(x^{2}-1)^{2}\frac{df_{1}}{dx}+\frac{2i(x^{2}-1)}{x^{2}}[F_{3}+f% _{1}x(2x^{2}-1)]\,.$$ (43) Once more, the detailed discussion of this case can be performed by repeating exactly the same steps as in the general case, and is hence omitted. Acknowledgments G. Esposito is grateful to the Dipartimento di Scienze Fisiche of Federico II University, Naples, for hospitality and support. R. V. Montaquila thanks CNR for partial support. D. Bini thanks ICRANet for support. References (1) Lorenz, L.: Phil. Mag. 34, 287 (1867). (2) Cohen, J.M., Kegeles, L.S.: Phys. Rev. D 10, 1070 (1974). (3) Allen, B., Jacobson, T.: Commun. Math. Phys. 103, 669 (1986). (4) Bini, D., Capozziello, S., Esposito, G.: Int. J. Geom. Meth. Mod. Phys. 5, 1069 (2008). (5) Zerilli, F.J.: Phys. Rev. D 9, 860 (1974). (6) Boucher, W., Gibbons, G.W.: in The Very Early Universe, eds. S.W. Hawking, G.W. Gibbons, S.T.C. Siklos, Cambridge University Press, Cambridge, 1983.